WorldWideScience

Sample records for empirical lcao parameters

  1. Quantum Chemistry of Solids LCAO Treatment of Crystals and Nanostructures

    CERN Document Server

    Evarestov, Robert A

    2012-01-01

Quantum Chemistry of Solids delivers a comprehensive account of the main features and possibilities of LCAO methods for first-principles calculations of the electronic structure of periodic systems. The first part describes the basic theory underlying the LCAO methods applied to periodic systems and the use of Hartree-Fock (HF), density functional theory (DFT) and hybrid Hamiltonians. Translation and site symmetry considerations are included to establish the connection between k-space solid-state physics and real-space quantum chemistry. The inclusion of electron correlation effects for periodic systems is considered on the basis of localized crystalline orbitals. The possibilities of LCAO methods for chemical bonding analysis in periodic systems are discussed. The second part deals with the applications of LCAO methods for calculations of bulk crystal properties, including magnetic ordering and crystal structure optimization. In the second edition two new chapters are added in the application part II of t...

  2. DFT LCAO and plane wave calculations of SrZrO3

    International Nuclear Information System (INIS)

    Evarestov, R.A.; Bandura, A.V.; Alexandrov, V.E.; Kotomin, E.A.

    2005-01-01

The results of the density functional (DFT) LCAO and plane wave (PW) calculations of the electronic and structural properties of four known SrZrO3 phases (Pm3m, I4/mcm, Cmcm and Pbnm) are presented and discussed. The calculated unit cell energies and relative stability of these phases agree well with the experimental sequence of SrZrO3 phases as the temperature increases. The lattice structure parameters optimized in the PW calculations for all four phases are in good agreement with the experimental neutron diffraction data. The LCAO and PW results for the electronic structure, density of states and chemical bonding in the cubic phase (Pm3m) are discussed in detail and compared with the results of previous PW calculations. (copyright 2005 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  3. DFT LCAO and plane wave calculations of SrZrO{sub 3}

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R.A.; Bandura, A.V.; Alexandrov, V.E. [Department of Quantum Chemistry, St. Petersburg State University, 26 Universitetskiy Prospekt, Stary Peterhof 198504 (Russian Federation); Kotomin, E.A. [Max-Planck-Institut fuer Festkoerperforschung, Heisenbergstr. 1, 70569, Stuttgart (Germany)

    2005-02-01

    The results of the density functional (DFT) LCAO and plane wave (PW) calculations of the electronic and structural properties of four known SrZrO{sub 3} phases (Pm3m, I4/mcm, Cmcm and Pbnm) are presented and discussed. The calculated unit cell energies and relative stability of these phases agree well with the experimental sequence of SrZrO{sub 3} phases as the temperature increases. The lattice structure parameters optimized in the PW calculations for all four phases are in good agreement with the experimental neutron diffraction data. The LCAO and PW results for the electronic structure, density of states and chemical bonding in the cubic phase (Pm3m) are discussed in detail and compared with the results of previous PW calculations. (copyright 2005 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Electronic structure of crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2}: LCAO calculations with the basis set optimization

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu

    2008-06-01

The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2} are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the relativistic small-core effective potential of the Stuttgart-Cologne group for the uranium atom (60 electrons in the core). The calculations include basis set optimization for the U atom. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change of the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both UN and U{sub 2}N{sub 3} crystals; the UN{sub 2} crystal is semiconducting in nature.

  5. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    International Nuclear Information System (INIS)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V

    2008-01-01

The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U2N3 and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the relativistic small-core effective potential of the Stuttgart-Cologne group for the uranium atom (60 electrons in the core). The calculations include basis set optimization for the U atom. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change of the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both UN and U2N3 crystals; the UN2 crystal is semiconducting in nature.

  6. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residual analysis techniques. (author)

  7. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

The objectives of this project were: (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and empirical distributions of the various flow parameters, providing a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  8. Quantum Chemistry of Solids The LCAO First Principles Treatment of Crystals

    CERN Document Server

    Evarestov, Robert A

    2007-01-01

Quantum Chemistry of Solids delivers a comprehensive account of the main features and possibilities of LCAO methods for first-principles calculations of the electronic structure of periodic systems. The first part describes the basic theory underlying the LCAO methods applied to periodic systems and the use of wave-function-based (Hartree-Fock), density-based (DFT) and hybrid Hamiltonians. Translation and site symmetry considerations are included to establish the connection between k-space solid-state physics and real-space quantum chemistry methods in the framework of the cyclic model of an infinite crystal. The inclusion of electron correlation effects for periodic systems is considered on the basis of localized crystalline orbitals. The possibilities of LCAO methods for chemical bonding analysis in periodic systems are discussed. The second part deals with the applications of LCAO methods for calculations of bulk crystal properties, including magnetic ordering and crystal structure optimization. The discussion o...

  9. LCAO calculations of SrTiO{sub 3} nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, Robert; Bandura, Andrei, E-mail: re1973@re1973.spb.edu [Department of Quantum Chemistry, St. Petersburg State University, 26 Universitetsky Ave., 198504, Petrodvorets (Russian Federation)

    2011-06-23

The large-scale first-principles simulation of the structure and stability of SrTiO{sub 3} nanotubes is performed for the first time using the periodic PBE0 LCAO method. The initial structures of the nanotubes were obtained by rolling up stoichiometric SrTiO{sub 3} slabs consisting of two or four alternating (001) SrO and TiO{sub 2} atomic planes. Nanotubes (NTs) with chiralities (n,0) and (n,n) have been studied. Two different NTs were constructed for each chirality: (I) with an SrO outer shell, and (II) with a TiO{sub 2} outer shell. The positions of all atoms were optimized to obtain the most stable NT structure. In the majority of the considered cases the inner or outer TiO{sub 2} shells of the NT undergo considerable reconstruction due to shrinkage or stretching of the interatomic distances of the initial cubic perovskite structure. Two types of surface reconstruction were found: (1) breaking of Ti-O bonds with the creation of Ti=O titanyl groups on the outer surface; (2) inner-surface folding due to Ti-O-Ti bending. Based on strain-energy calculations, the greatest stability was found for (n,0) NTs with a TiO{sub 2} outer shell.

  10. LCAO calculations of SrTiO3 nanotubes

    International Nuclear Information System (INIS)

    Evarestov, Robert; Bandura, Andrei

    2011-01-01

The large-scale first-principles simulation of the structure and stability of SrTiO3 nanotubes is performed for the first time using the periodic PBE0 LCAO method. The initial structures of the nanotubes were obtained by rolling up stoichiometric SrTiO3 slabs consisting of two or four alternating (001) SrO and TiO2 atomic planes. Nanotubes (NTs) with chiralities (n,0) and (n,n) have been studied. Two different NTs were constructed for each chirality: (I) with an SrO outer shell, and (II) with a TiO2 outer shell. The positions of all atoms were optimized to obtain the most stable NT structure. In the majority of the considered cases the inner or outer TiO2 shells of the NT undergo considerable reconstruction due to shrinkage or stretching of the interatomic distances of the initial cubic perovskite structure. Two types of surface reconstruction were found: (1) breaking of Ti-O bonds with the creation of Ti=O titanyl groups on the outer surface; (2) inner-surface folding due to Ti-O-Ti bending. Based on strain-energy calculations, the greatest stability was found for (n,0) NTs with a TiO2 outer shell.

  11. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

Full Text Available It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3-4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF conditions to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to the existing upstream wave theory: (1) pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; (2) weakening of the bow shock, which implies less efficient reflection; (3) the SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream at the Alfvén speed; and (4) the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through
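The multiple-linear-regression step described above can be sketched with synthetic data. The feature names and coefficients below are illustrative assumptions, not values from the paper; real inputs would come from OMNI solar-wind records and MM100 Pc3 indices.

```python
import numpy as np

# Hypothetical sketch: regress a Pc3 activity index on SW speed, SW density
# and IMF cone angle. All data here are synthetic stand-ins.
rng = np.random.default_rng(0)
n = 200
sw_speed = rng.uniform(300, 700, n)    # km/s
sw_density = rng.uniform(0.5, 20, n)   # cm^-3
cone_angle = rng.uniform(0, 90, n)     # degrees

# Synthetic "true" relation: activity grows with speed and density,
# falls with cone angle, plus observational noise.
activity = (0.01 * sw_speed + 0.2 * sw_density
            - 0.05 * cone_angle + rng.normal(0, 0.5, n))

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), sw_speed, sw_density, cone_angle])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)
print(coef)  # slopes recovered close to (0.01, 0.2, -0.05)
```

A neural-network variant of the same input set would replace the linear solve with a nonlinear regressor; the point of the sketch is only the input-selection framing.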

  12. Extended Fenske-Hall LCAO MO Calculations for Mixed Methylene Dihalides

    Science.gov (United States)

    Ziemann, Hartmut; Paulun, Manfred

    1988-10-01

The electronic structure of mixed methylene dihalides CH2XY (X, Y = F, Cl, Br, I) has been studied using the extended Fenske-Hall LCAO MO method. The comparison with available photoelectron spectra confirms previous assignments of all bands with binding energies <100 eV. The electronic structure changes occurring upon varying the halogen substituents are discussed.

  13. LCAO fitting of positron 2D-ACAR momentum densities of non-metallic solids

    International Nuclear Information System (INIS)

    Chiba, T.

    2001-01-01

    We present a least-squares fitting method to fit and analyze momentum densities obtained by 2D-ACAR. The method uses an LCAO-MO as a fitting basis and thus is applicable to non-metals. Here we illustrate the method by taking MgO as an example. (orig.)
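The linear least-squares idea behind such a fit can be illustrated compactly: expand the measured 2D momentum density in a set of basis functions and solve for the coefficients. Gaussians stand in here for the actual LCAO-MO momentum-space basis of the paper; the data are synthetic.

```python
import numpy as np

# Sketch of fitting a 2D momentum density by linear least squares.
# Three isotropic Gaussians serve as an assumed stand-in basis.
p = np.linspace(-5, 5, 101)
PX, PY = np.meshgrid(p, p)

def gaussian(px, py, width):
    return np.exp(-(px**2 + py**2) / (2 * width**2))

widths = [0.5, 1.0, 2.0]
basis = np.column_stack([gaussian(PX, PY, w).ravel() for w in widths])

# Synthetic "measured" density: a known combination plus noise.
rng = np.random.default_rng(1)
true_coef = np.array([1.0, 0.5, 0.2])
measured = basis @ true_coef + rng.normal(0, 0.01, basis.shape[0])

coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
print(coef)  # recovers roughly (1.0, 0.5, 0.2)
```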

  14. LCAO fitting of positron 2D-ACAR momentum densities of non-metallic solids

    Energy Technology Data Exchange (ETDEWEB)

    Chiba, T. [National Inst. for Research in Inorganic Materials, Tsukuba, Ibaraki (Japan)

    2001-07-01

    We present a least-squares fitting method to fit and analyze momentum densities obtained by 2D-ACAR. The method uses an LCAO-MO as a fitting basis and thus is applicable to non-metals. Here we illustrate the method by taking MgO as an example. (orig.)

  15. Empirical estimation of school siting parameter towards improving children's safety

    Science.gov (United States)

    Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.

    2014-02-01

Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. School siting parameters are made by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and reputation of the school, not to mention the perception of the pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with this ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income who live in an urban area, specifically in Johor Bahru, Malaysia. To achieve that, this study uses two methods: on site and off site. The on-site method is to give questionnaires to people, and the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to home. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.

  16. An extension of the fenske-hall LCAO method for approximate calculations of inner-shell binding energies of molecules

    Science.gov (United States)

    Zwanziger, Ch.; Reinhold, J.

    1980-02-01

The approximate LCAO MO method of Fenske and Hall has been extended to an all-electron method allowing the calculation of inner-shell binding energies of molecules and their chemical shifts. Preliminary results are given.

  17. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    Science.gov (United States)

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  18. Empirical tight-binding parameters for solid C60

    International Nuclear Information System (INIS)

    Tit, N.; Kumar, V.

    1993-01-01

We present a tight-binding model for the electronic structure of C60 using four (one s and three p) orbitals per carbon atom. The model has been developed by fitting the tight-binding parameters to the ab-initio pseudopotential calculation of Troullier and Martins (Phys. Rev. B46, 1754 (1992)) in the face-centered cubic (Fm3-bar) phase. Following this, calculations of the energy bands and the density of electronic states have been carried out as a function of the lattice constant. Good agreement has been obtained with the observed lattice-constant dependence of Tc using McMillan's formula. Furthermore, calculations of the electronic structure are presented in the simple cubic (Pa3-bar) phase. (author). 43 refs, 3 figs, 1 tab
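The fitting step, adjusting empirical tight-binding parameters so that the model bands match reference ab-initio bands, can be illustrated on a toy model. The 1D chain below, with dispersion E(k) = eps0 - 2t cos(k), is an assumption for illustration only and is far simpler than the paper's C60 Hamiltonian.

```python
import numpy as np

# Illustrative sketch: fit tight-binding parameters (on-site energy eps0,
# hopping t) by least squares against reference band energies.
k = np.linspace(-np.pi, np.pi, 50)
eps0_true, t_true = 0.3, 1.2
reference = eps0_true - 2 * t_true * np.cos(k)  # stand-in for ab-initio bands

# E(k) is linear in (eps0, t), so the fit reduces to linear least squares.
A = np.column_stack([np.ones_like(k), -2 * np.cos(k)])
(eps0_fit, t_fit), *_ = np.linalg.lstsq(A, reference, rcond=None)
print(eps0_fit, t_fit)  # 0.3 and 1.2 (exact, since the data are noise-free)
```

For a real multi-band fit the dispersion is no longer linear in the parameters, and a nonlinear optimizer replaces the single linear solve.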

  19. Comparison of nuisance parameters in pediatric versus adult randomized trials: a meta-epidemiologic empirical evaluation

    NARCIS (Netherlands)

    Vandermeer, Ben; van der Tweel, Ingeborg; Jansen-van der Weide, Marijke C.; Weinreich, Stephanie S.; Contopoulos-Ioannidis, Despina G.; Bassler, Dirk; Fernandes, Ricardo M.; Askie, Lisa; Saloojee, Haroon; Baiardi, Paola; Ellenberg, Susan S.; van der Lee, Johanna H.

    2018-01-01

Background: We wished to compare the nuisance parameters of pediatric vs. adult randomized trials (RCTs) and to determine if the latter can be used in sample size computations of the former. Methods: In this meta-epidemiologic empirical evaluation we examined meta-analyses from the Cochrane Database of

  20. Relationships between moment magnitude and fault parameters: theoretical and semi-empirical relationships

    Science.gov (United States)

    Wang, Haiyun; Tao, Xiaxin

    2003-12-01

Fault parameters are important in earthquake hazard analysis. In this paper, theoretical relationships between moment magnitude and fault parameters, including subsurface rupture length, downdip rupture width, rupture area, and average slip over the fault surface, are deduced based on seismological theory. These theoretical relationships are further simplified by applying similarity conditions, and a unique form is established. Then, combining the simplified theoretical relationships between moment magnitude and fault parameters with the seismic source data selected in this study, a practical semi-empirical relationship is established. The selected seismic source data are also used to derive empirical relationships between moment magnitude and fault parameters by the ordinary least squares regression method. Comparisons between the semi-empirical and the empirical relationships show that the former depict the distribution trends of the data better than the latter. It is also observed that the downdip rupture widths of strike-slip faults saturate when the moment magnitude exceeds 7.0, but the downdip rupture widths of dip-slip faults are not saturated in the moment magnitude range of this study.
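The ordinary-least-squares step mentioned above amounts to a simple regression of moment magnitude on the logarithm of a fault parameter. The coefficients and data below are synthetic placeholders, not the paper's fitted values.

```python
import numpy as np

# Sketch: regress moment magnitude Mw on log10 of subsurface rupture
# length L (km). Data are synthetic; the relation Mw = a + b*log10(L)
# is the standard empirical form, but a and b here are assumed.
rng = np.random.default_rng(2)
log_L = rng.uniform(0.5, 2.5, 80)                  # log10(km)
Mw = 5.0 + 1.2 * log_L + rng.normal(0, 0.2, 80)    # synthetic relation

A = np.column_stack([np.ones_like(log_L), log_L])
(a, b), *_ = np.linalg.lstsq(A, Mw, rcond=None)
print(a, b)  # intercept near 5.0, slope near 1.2
```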

  1. Electronic properties of mixed molybdenum dichalcogenide MoTeSe: LCAO calculations and Compton spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ahuja, Ushma [Department of Electrical Engineering, Veermata Jijabai Technological Institute, H. R. Mahajani Marg, Matunga (East), Mumbai 400019, Maharashtra (India); Kumar, Kishor; Joshi, Ritu [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Bhavsar, D.N. [Department of Physics, Bhavan' s Seth R.A. College of Science, Khanpur, Ahmedabad 380001, Gujarat (India); Heda, N.L., E-mail: nlheda@yahoo.co.in [Department of Pure and Applied Physics, University of Kota, Kota 324007, Rajasthan (India)

    2016-07-01

We have employed the linear combination of atomic orbitals (LCAO) method to compute the Mulliken populations (MP), energy bands, density of states (DOS) and Compton profiles for hexagonal MoTeSe. Density functional theory (DFT) and the hybridization of Hartree-Fock with DFT (B3LYP) have been used within the LCAO approximation. The performance of the theoretical models has been tested by comparing the theoretical momentum densities with the experimental Compton profile of MoTeSe measured using a {sup 137}Cs Compton spectrometer. It is seen that the B3LYP prescription gives better agreement with the experimental data than the other DFT-based approximations. The energy bands and DOS depict an indirect band gap character in MoTeSe. In addition, the relative nature of bonding in MoTeSe and its isovalent MoTe{sub 2} is discussed in terms of equal-valence-electron-density (EVED) profiles. On the basis of the EVED profiles it is seen that MoTeSe is more covalent than MoTe{sub 2}.

  2. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

A simplified biokinetic model for 137Cs has six parameters representing the transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with substantial bioassay data. The distribution is found to be multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random sampling, stratified sampling, Latin hypercube sampling with a uniform distribution of parameters, and importance sampling using a log-normal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling-method-dependent differences are perceptible for the uniform distribution methods. (authors)
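Two of the sampling schemes named above can be sketched in a few lines. The six-dimensional log-normal space below is an assumption for illustration; the parameters are independent standard log-normals, whereas the paper's posterior is correlated.

```python
import numpy as np
from statistics import NormalDist

# Sketch: simple random sampling vs a basic Latin hypercube sampler
# over a 6-dimensional log-normal parameter space (illustrative values,
# not the paper's 137Cs biokinetic parameters).
rng = np.random.default_rng(3)
dim, n = 6, 100

def latin_hypercube(n, dim, rng):
    """One sample per equal-probability stratum in each dimension,
    with the strata independently shuffled per column."""
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]
    return u

u_lhs = latin_hypercube(n, dim, rng)
u_srs = rng.random((n, dim))  # plain random sampling for comparison

# Map uniforms to independent standard log-normal draws via the normal
# inverse CDF (clipped away from 0 and 1 for numerical safety).
inv_cdf = np.vectorize(NormalDist().inv_cdf)
params_lhs = np.exp(inv_cdf(np.clip(u_lhs, 1e-12, 1 - 1e-12)))
print(params_lhs.shape)  # (100, 6)
```

The stratification guarantees every marginal decile-like bin is hit exactly once, which is what reduces sampling variance relative to plain random draws.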

  3. Extended Fenske-Hall LCAO MO calculations of core-level shifts in solid P compounds

    Science.gov (United States)

    Franke, R.; Chassé, T.; Reinhold, J.; Streubel, P.; Szargan, R.

    1997-08-01

Extended Fenske-Hall LCAO-MO ΔSCF calculations on solids modelled as H-pseudoatom saturated clusters are reported. The computational results verify the experimentally obtained initial-state (effective atomic charges, Madelung potential) and relaxation-energy contributions to the XPS phosphorus core-level binding energy shifts measured in Na3PO3S, Na3PO4, Na2PO3F and NH4PF6 in reference to red phosphorus. It is shown that the different initial-state contributions observed in the studied phosphates are determined by local and nonlocal terms, while the relaxation-energy contributions are mainly dependent on the nature of the nearest neighbors of the phosphorus atom.

  4. RHFPPP, SCF-LCAO-MO Calculation for Closed Shell and Open Shell Organic Molecules

    International Nuclear Information System (INIS)

    Bieber, A.; Andre, J.J.

    1987-01-01

1 - Nature of physical problem solved: Complete program performs SCF-LCAO-MO calculations for both closed- and open-shell organic pi-molecules. The Pariser-Parr-Pople approximations are used within the framework of the restricted Hartree-Fock method. The SCF calculation is followed, if desired, by a variational configuration interaction (CI) calculation including singly excited configurations. 2 - Method of solution: A standard procedure is used; at each step a real symmetric matrix has to be diagonalized. Self-consistency is checked by comparing the eigenvectors between two consecutive steps. 3 - Restrictions on the complexity of the problem: i) The calculations are restricted to planar molecules. ii) In order to avoid accumulation of round-off errors in the iterative procedure, double precision arithmetic is used. iii) The program is restricted to systems of up to about 16 atoms; however, the size of the systems can easily be modified if required.

  5. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

Full Text Available The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.

  6. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
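The response-surface step can be sketched as a second-order polynomial fit of power consumption in the machining parameters. The parameter ranges, coefficients and noise level below are synthetic assumptions, not the paper's measurements.

```python
import numpy as np

# Sketch: fit a full quadratic response surface of power (W) in cutting
# speed v, feed f and depth of cut d. All data are synthetic stand-ins.
rng = np.random.default_rng(4)
n = 60
v = rng.uniform(50, 200, n)     # cutting speed, m/min
f = rng.uniform(0.05, 0.3, n)   # feed, mm/rev
d = rng.uniform(0.5, 2.0, n)    # depth of cut, mm

# Synthetic "measured" power with a mild quadratic term and noise.
power = 100 + 2.0 * v + 300 * f + 50 * d + 0.01 * v**2 + rng.normal(0, 5, n)

# Design matrix: intercept, linear, square and interaction terms.
X = np.column_stack([np.ones(n), v, f, d, v**2, f**2, d**2, v*f, v*d, f*d])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
pred = X @ coef
rms = np.sqrt(np.mean((pred - power) ** 2))
print(rms)  # residual RMS near the noise level
```

Minimizing the fitted surface under parameter bounds (the desirability step) would then be a small constrained optimization over the three variables.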

  7. Empirical relations between instrumental and seismic parameters of some strong earthquakes of Colombia

    International Nuclear Information System (INIS)

    Marin Arias, Juan Pablo; Salcedo Hurtado, Elkin de Jesus; Castillo Gonzalez, Hardany

    2008-01-01

In order to establish relationships between macroseismic and instrumental parameters, the macroseismic fields of 28 historical earthquakes that produced great effects in the Colombian territory were studied. The integration of the parameters was carried out using the methodology of Kaussel and Ramirez (1992) for great Chilean earthquakes, and of Kanamori and Anderson (1975) and Wells and Coppersmith (1994) for worldwide earthquakes. Once the macroseismic and instrumental parameters were determined, the source model of each earthquake was established, completing the database of these parameters. For each earthquake, parameters related to the local and normal macroseismic epicenters were added: depth of the local and normal centers, horizontal extension of both centers, vertical extension of the normal center, source model, and rupture area. The empirical relations obtained from linear equations show behavior very similar to that found by other authors for other regions of the world and at the worldwide level. The results of this work allow establishing that a certain mutual incompatibility exists between the rupture area and rupture length determined by macroseismic methods and the parameters found with instrumental data, such as the seismic moment, Ms magnitude and Mw magnitude.

  8. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For the treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work is to develop an efficient algorithm for FMO based on alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple-to-implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked with the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had a similar plan quality with slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
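The ADMM scheme described above can be illustrated on a toy version of the same problem class: least squares with non-negativity constraints (min ||Ax - b||^2 subject to x >= 0). The dimensions, penalty parameter rho and iteration count below are illustrative choices, not the paper's FMO solver or its tuned parameters.

```python
import numpy as np

# Toy ADMM for non-negative least squares. Splitting: x carries the
# quadratic term, z carries the non-negativity constraint, u is the
# scaled dual variable.
rng = np.random.default_rng(5)
A = rng.normal(size=(40, 10))
x_true = np.maximum(rng.normal(size=10), 0)   # non-negative ground truth
b = A @ x_true + rng.normal(0, 0.01, 40)

rho = 1.0
x = z = u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(10))  # factor once, reuse each iteration
for _ in range(200):
    rhs = Atb + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: quadratic solve
    z = np.maximum(x + u, 0)                           # z-update: projection onto x >= 0
    u = u + x - z                                      # scaled dual update
print(np.linalg.norm(z - x_true))  # small: the non-negative solution is recovered
```

Caching the Cholesky factor is what makes the per-iteration cost low; the empirical rho-selection discussed in the abstract would wrap a loop like this one.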

  9. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and on estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard estimators.
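
    The correction described above can be sketched as follows: build codon frequencies as products of per-position nucleotide frequencies, then drop the stop codons and renormalize over the 61 sense codons so no probability mass is wasted on stops. This shows only the renormalization idea; the paper's corrected estimator additionally adjusts the underlying nucleotide frequencies themselves.

```python
import itertools

STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code
CODONS = ["".join(c) for c in itertools.product("ACGT", repeat=3)]

def empirical_codon_freqs(pos_freqs):
    """Codon frequencies from per-position nucleotide frequencies,
    renormalized over the 61 sense codons.
    pos_freqs: list of three dicts mapping A/C/G/T -> frequency."""
    raw = {c: pos_freqs[0][c[0]] * pos_freqs[1][c[1]] * pos_freqs[2][c[2]]
           for c in CODONS if c not in STOPS}
    total = sum(raw.values())
    return {c: f / total for c, f in raw.items()}

# With uniform nucleotide frequencies, each of the 61 sense codons
# gets probability 1/61 instead of 1/64.
uniform = [{n: 0.25 for n in "ACGT"}] * 3
freqs = empirical_codon_freqs(uniform)
```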

  10. An Empirical Study of Parameter Estimation for Stated Preference Experimental Design

    Directory of Open Access Journals (Sweden)

    Fei Yang

    2014-01-01

    Full Text Available The stated preference experimental design can affect the reliability of parameter estimation in discrete choice models. Scholars have proposed new experimental designs, such as D-efficient and Bayesian D-efficient designs, but insufficient empirical research has been conducted on the effectiveness of these new designs, and there has been little comparative analysis of the new designs against traditional ones. In this paper, a new metro connecting Chengdu and its satellite cities is taken as the research subject to demonstrate the validity of the D-efficient and Bayesian D-efficient designs. Comparisons between these new designs and an orthogonal design were made in terms of model fit and the standard deviation of the parameter estimates; the best model was then used to analyze travel choice behavior. The results indicate that the Bayesian D-efficient design works better than the D-efficient design. Some variables, including waiting time and arrival time, significantly affect people's choice behavior. The D-efficient and Bayesian D-efficient designs for the MNL model can yield reliable results in the ML model, but the ML model cannot exploit the theoretical advantages of these two designs. Finally, the metro will be able to handle over 40% of passenger flow once it is in operation.

  11. Evaluation of Empirical and Machine Learning Algorithms for Estimation of Coastal Water Quality Parameters

    Directory of Open Access Journals (Sweden)

    Majid Nazeer

    2017-11-01

    Full Text Available Coastal waters are one of the most vulnerable resources that require effective monitoring programs. One of the key factors for effective coastal monitoring is the use of remote sensing technologies that significantly capture the spatiotemporal variability of coastal waters. Optical properties of coastal waters are strongly linked to components, such as colored dissolved organic matter (CDOM), chlorophyll-a (Chl-a), and suspended solids (SS) concentrations, which are essential for the survival of a coastal ecosystem and usually independent of each other. Thus, developing effective remote sensing models to estimate these important water components based on optical properties of coastal waters is mandatory for a successful coastal monitoring program. This study attempted to evaluate the performance of empirical predictive models (EPM) and neural network (NN)-based algorithms to estimate Chl-a and SS concentrations in the coastal area of Hong Kong. Remotely-sensed data over a 13-year period was used to develop regional and local models to estimate Chl-a and SS over the entire Hong Kong waters and for each water class within the study area, respectively. The accuracy of regional models derived from EPM and NN in estimating Chl-a and SS was 83%, 93%, 78%, and 97%, respectively, whereas the accuracy of local models in estimating Chl-a and SS ranged from 60-94% and 81-94%, respectively. Both the regional and local NN models exhibited a higher performance than those models derived from empirical analysis. Thus, this study suggests using machine learning methods (i.e., NN) for the more accurate and efficient routine monitoring of coastal water quality parameters (i.e., Chl-a and SS concentrations) over the complex coastal area of Hong Kong and other similar coastal environments.

  12. AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Mann, Andrew W., E-mail: enewton@cfa.harvard.edu [Astronomy Department, University of Texas at Austin, Austin, TX 78712 (United States)

    2015-02-20

    Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R☉, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid K to mid M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler

  13. The softness of an atom in a molecule and a functional group softness definition; an LCAO scale

    International Nuclear Information System (INIS)

    Giambiagi, M.; Giambiagi, M.S. de; Pires, J.M.; Pitanga, P.

    1987-01-01

    We introduce a scale for the softness of an atom in different molecules and similarly define a functional group softness. These definitions, unlike previous ones, are not tied to the finite difference approximation nor, hence, to valence-state ionization potentials and electron affinities; they result from the LCAO calculation itself. We conclude that (a) the softness of an atom in a molecule shows wide variations; (b) the geometric average of the softnesses of the atoms in a molecule gives the most consistent results for the molecular softness; (c) the functional group softness is transferable within a homologous series. (Author) [pt

  14. Assessment of radiological parameters and patient dose audit using semi-empirical model

    International Nuclear Information System (INIS)

    Olowookere, C.J.; Onabiyi, B.; Ajumobi, S. A.; Obed, R.I.; Babalola, I. A.; Bamidele, L.

    2011-01-01

    Risk is associated with all human activities, and medical imaging is no exception. Risk in medical imaging is quantified using effective dose. However, measurement of effective dose is difficult and time consuming; therefore, energy imparted and entrance surface dose are obtained and converted into effective dose using the appropriate conversion factors. In this study, data on exposure parameters and patient characteristics were obtained during routine diagnostic examinations for four common types of X-ray procedures. A semi-empirical model involving the computer software Xcomp5 was used to determine the energy imparted per unit exposure-area product, entrance skin exposure (ESE) and incident air kerma, which are radiation dose indices. The energy imparted per unit exposure-area product ranges between 0.60×10⁻³ and 1.21×10⁻³ J R⁻¹ cm⁻², entrance skin exposure ranges from 5.07±1.25 to 36.62±27.79 mR, and incident air kerma ranges between 43.93 μGy and 265.5 μGy. The filtration of two of the three machines investigated was lower than the CEC standard requirement for machines used in conventional radiography. The values of energy imparted and ESE obtained in this study were relatively low compared to published data, indicating that patients irradiated during these routine examinations are at lower health risk. The energy imparted per unit exposure-area product can be used to determine the energy delivered to the patient during diagnostic examinations, and it is an approximate indicator of patient risk.

  15. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  16. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  17. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    Science.gov (United States)

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. Solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions: solution pH 8.0, current density 6.0 mA/cm², initial boron concentration 100 mg/L and solution temperature 293 K. Current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thus decreased energy consumption. Consequently, energy consumption for boron removal via electrocoagulation can be minimized at optimum conditions. An empirical model was fitted statistically; the experimentally obtained values agreed with those predicted from the empirical model [ECB] = 7.6×10⁶ × [OH]^0.11 × [CD]^0.62 × [IBC]^-0.57 × [DSE]^-0.04 × [T]^-2.98 × [t]. Unfortunately, the conditions for optimum boron removal were not the conditions for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  18. Empirical correlation between mechanical and physical parameters of irradiated pressure vessel steels

    International Nuclear Information System (INIS)

    Tipping, P.; Solt, G.; Waeber, W.

    1991-02-01

    Neutron irradiation embrittlement of nuclear reactor pressure vessel (PV) steels is one of the best known ageing factors of nuclear power plants. If the safety limits set by the regulators for the PV steel are no longer satisfied, and other measures are too expensive for the economics of the plant, this embrittlement can lead to closure of the plant. Despite this, the fundamental mechanisms of neutron embrittlement are not yet fully understood, and usually only empirical mathematical models exist to assess neutron fluence effects on embrittlement, as given by the Charpy test for example. In this report, results of a systematic study of a French forging (1.2 MD 07 B) irradiated to several fluences are reported. Mechanical property measurements (Charpy, tensile and Vickers microhardness) and physical property measurements (small-angle neutron scattering, SANS) have been performed on specimens with the same irradiation or irradiation-annealing-reirradiation treatment histories. Empirical correlations have been established between, on the one hand, the temperature shift and the decrease in upper-shelf energy measured on Charpy specimens together with the increases in tensile stress and hardness, and, on the other hand, the size of the copper-rich precipitates formed by irradiation. The effect of copper (as an impurity element) in enhancing the degradation of mechanical properties has been demonstrated; the SANS measurements have shown that both the size and amount of precipitates are important. These correlations represent the first step in an effort to develop a description of neutron-irradiation-induced embrittlement based on physical models. (author) 6 figs., 27 refs

  19. Empirical estimation of school siting parameter towards improving children's safety

    International Nuclear Information System (INIS)

    Aziz, I S; Yusoff, Z M; Rasam, A R A; Rahman, A N N A; Omar, D

    2014-01-01

    Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are intended to ensure that a particular school is located in a safe environment. They are set by the Department of Town and Country Planning Malaysia (DTCP), with the latest review in June 2012. These school siting parameters are crucially important, as they can affect safety and school reputation, not to mention the perception of the school held by pupils and parents. There have been many studies reviewing school siting parameters, since these change along with this ever-changing world. This study focuses on the impact of school siting parameters on low-income people living in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on-site and off-site. The on-site method is to administer questionnaires to people; the off-site method is to use Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) software to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to house. The results of this study will be useful to low-income people, as their children tend to walk to school rather than use transportation.

  20. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    Full Text Available This paper considers tests of parameter instability and structural change with known, unknown or multiple breakpoints. The results apply to a wide class of parametric models suitable for estimation by strong rules for detecting the number of breaks in a time series. We use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests; each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (an ARMA model and a simple linear regression model).
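
    Of the tests listed above, the Chow test for a known breakpoint is the simplest to state: fit the regression on the pooled sample and on the two subsamples, and compare residual sums of squares via an F-statistic. A small illustrative sketch on synthetic data; the variable names, break location, and noise level are made up for the example:

```python
import numpy as np

def chow_stat(X, y, split):
    """Chow F-statistic for a known breakpoint `split`:
    F = ((RSS_pooled - RSS1 - RSS2)/k) / ((RSS1 + RSS2)/(n - 2k))."""
    def rss(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r = ys - Xs @ beta
        return float(r @ r)
    n, k = X.shape
    rss_p = rss(X, y)
    rss_1 = rss(X[:split], y[:split])
    rss_2 = rss(X[split:], y[split:])
    return ((rss_p - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
X = np.column_stack([np.ones(200), x])
# The slope changes from 1.0 to 3.0 at observation 100: a genuine break.
y = np.where(np.arange(200) < 100, 1.0 * x, 3.0 * x) + 0.1 * rng.standard_normal(200)
F_break = chow_stat(X, y, 100)  # far above any conventional F critical value
```

    Under the null of no break, F_break follows an F(k, n-2k) distribution; a value this large rejects parameter stability decisively.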

  1. X-ray spectrum analysis of multi-component samples by a method of fundamental parameters using empirical ratios

    International Nuclear Information System (INIS)

    Karmanov, V.I.

    1986-01-01

    A variant of the fundamental parameter method based on empirical relations of the corrections for absorption and additional excitation with the absorbing characteristics of samples is suggested. The method is used for X-ray fluorescence analysis of multi-component samples of charges of welding electrodes. It is shown that application of the method is justified only for determination of titanium, calcium and silicon content in charges, taking into account only corrections for absorption. Iron and manganese content can be calculated by the simple external standard method.

  2. Medium-resolution Isaac Newton Telescope library of empirical spectra - II. The stellar atmospheric parameters

    NARCIS (Netherlands)

    Cenarro, A. J.; Peletier, R. F.; Sanchez-Blazquez, P.; Selam, S. O.; Toloba, E.; Cardiel, N.; Falcon-Barroso, J.; Gorgas, J.; Jimenez-Vicente, J.; Vazdekis, A.

    2007-01-01

    We present a homogeneous set of stellar atmospheric parameters (Teff, log g, [Fe/H]) for MILES, a new spectral stellar library covering the range λλ 3525-7500 Å at 2.3 Å (FWHM) spectral resolution. The library consists of 985 stars spanning a large range in atmospheric

  3. Identifying mechanisms that structure ecological communities by snapping model parameters to empirically observed tradeoffs.

    Science.gov (United States)

    Thomas Clark, Adam; Lehman, Clarence; Tilman, David

    2018-04-01

    Theory predicts that interspecific tradeoffs are primary determinants of coexistence and community composition. Using information from empirically observed tradeoffs to augment the parametrisation of mechanism-based models should therefore improve model predictions, provided that tradeoffs and mechanisms are chosen correctly. We developed and tested such a model for 35 grassland plant species using monoculture measurements of three species characteristics related to nitrogen uptake and retention, which previous experiments indicate as important at our site. Matching classical theoretical expectations, these characteristics defined a distinct tradeoff surface, and models parameterised with these characteristics closely matched observations from experimental multi-species mixtures. Importantly, predictions improved significantly when we incorporated information from tradeoffs by 'snapping' characteristics to the nearest location on the tradeoff surface, suggesting that the tradeoffs and mechanisms we identify are important determinants of local community structure. This 'snapping' method could therefore constitute a broadly applicable test for identifying influential tradeoffs and mechanisms. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
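
    The 'snapping' step above can be illustrated geometrically with a stand-in surface: here a plane fitted by least squares plays the role of the empirically observed tradeoff surface, and each characteristic vector is orthogonally projected onto it. This is only a sketch of the projection idea under that simplifying assumption, not the authors' model:

```python
import numpy as np

def snap_to_plane(points):
    """Fit a plane z = a*x + b*y + c to the points by least squares,
    then orthogonally project ('snap') each point onto that plane."""
    X = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(X, points[:, 2], rcond=None)
    normal = np.array([a, b, -1.0])          # plane: a*x + b*y - z + c = 0
    dist = (points @ normal + c) / (normal @ normal)
    return points - np.outer(dist, normal)   # move each point along the normal

# Synthetic 'species characteristics' scattered around a tradeoff plane.
rng = np.random.default_rng(2)
xy = rng.standard_normal((50, 2))
z = 0.5 * xy[:, 0] - 2.0 * xy[:, 1] + 1.0 + 0.05 * rng.standard_normal(50)
pts = np.column_stack([xy, z])
snapped = snap_to_plane(pts)
```

    After snapping, every point satisfies the fitted plane equation exactly, so model parameterisations built from the snapped characteristics respect the tradeoff by construction.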

  4. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    Directory of Open Access Journals (Sweden)

    Wałęga Andrzej

    2017-12-01

    Full Text Available The aim of the study was to assess the applicability of asymptotic functions for determining the value of CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, left tributary of the Vistula, and the Kamienica, right tributary of the Dunajec. The input material included data on precipitation and flows for a multi-year period 1980–2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of CNobs parameter as a function of precipitation depth: standard Hawkins model and 2-CN model allowing for a heterogeneous nature of a catchment area.

  5. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    International Nuclear Information System (INIS)

    Yilmaz, A. Erdem; Boncukcuoglu, Recep; Kocakerim, M. Muhtar

    2007-01-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. Solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions: solution pH 8.0, current density 6.0 mA/cm², initial boron concentration 100 mg/L and solution temperature 293 K. Current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thus decreased energy consumption. Consequently, energy consumption for boron removal via electrocoagulation can be minimized at optimum conditions. An empirical model was fitted statistically; the experimentally obtained values agreed with those predicted from the empirical model [ECB] = 7.6×10⁶ × [OH]^0.11 × [CD]^0.62 × [IBC]^-0.57 × [DSE]^-0.04 × [T]^-2.98 × [t]. Unfortunately, the conditions for optimum boron removal were not the conditions for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.
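
    As a sketch, the fitted power-law model can be evaluated directly. The function and argument names below are illustrative, and the abstract does not state the units of [OH], [DSE], [t] or of ECB itself, so the numeric inputs are assumptions used only to check the reported trends:

```python
def energy_consumption(oh, cd, ibc, dse, temp, t):
    """Empirical power-law model quoted in the abstract:
    [ECB] = 7.6e6 * [OH]^0.11 * [CD]^0.62 * [IBC]^-0.57
            * [DSE]^-0.04 * [T]^-2.98 * [t]
    cd in mA/cm^2, ibc in mg/L, temp in K per the abstract;
    the units of the remaining quantities are not stated there."""
    return (7.6e6 * oh**0.11 * cd**0.62 * ibc**-0.57
            * dse**-0.04 * temp**-2.98 * t)

# Sanity checks of the reported trends: energy consumption rises with
# current density and falls as temperature increases.
low_cd = energy_consumption(1e-6, 3.0, 100.0, 1.0, 293.0, 1.0)
high_cd = energy_consumption(1e-6, 6.0, 100.0, 1.0, 293.0, 1.0)
cold = energy_consumption(1e-6, 6.0, 100.0, 1.0, 283.0, 1.0)
```

    The signs of the fitted exponents encode the qualitative findings: positive for current density (more energy), negative for boron concentration, electrolyte dose and temperature (less energy).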

  6. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, A. Erdem [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey)]. E-mail: aerdemy@atauni.edu.tr; Boncukcuoglu, Recep [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey); Kocakerim, M. Muhtar [Atatuerk University, Faculty of Engineering, Department of Chemical Engineering, 25240 Erzurum (Turkey)

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. Solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions: solution pH 8.0, current density 6.0 mA/cm², initial boron concentration 100 mg/L and solution temperature 293 K. Current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thus decreased energy consumption. Consequently, energy consumption for boron removal via electrocoagulation can be minimized at optimum conditions. An empirical model was fitted statistically; the experimentally obtained values agreed with those predicted from the empirical model [ECB] = 7.6×10⁶ × [OH]^0.11 × [CD]^0.62 × [IBC]^-0.57 × [DSE]^-0.04 × [T]^-2.98 × [t]. Unfortunately, the conditions for optimum boron removal were not the conditions for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  7. Sensitivity of traffic input parameters on rutting performance of a flexible pavement using Mechanistic Empirical Pavement Design Guide

    Directory of Open Access Journals (Sweden)

    Nur Hossain

    2016-11-01

    Full Text Available The traffic input parameters in the Mechanistic Empirical Pavement Design Guide (MEPDG) are: (a) general traffic inputs, (b) traffic volume adjustment factors, and (c) axle load spectra (ALS). Of these three traffic inputs, the traffic volume adjustment factors, specifically the monthly adjustment factor (MAF), and the ALS are widely considered to be important and sensitive factors that can significantly affect the design of, and the prediction of distress in, flexible pavements. The present study was therefore undertaken to assess the sensitivity of the ALS and MAF traffic inputs on rutting distress of a flexible pavement. Traffic data for four years (2008 to 2012) were collected from an instrumented test section on I-35 in Oklahoma, and site-specific traffic input parameters were developed. Significant differences were observed between the MEPDG default and the developed site-specific traffic input values. However, the differences among the yearly ALS and MAF data developed for these four years were not as significant when compared to one another. In addition, quarterly field rut data were measured on the test section and compared with the MEPDG-predicted rut values using the default and the developed traffic input values for different years. Significant differences were found between the measured rut and the MEPDG (AASHTOWare-ME) predicted rut when default values were used. Keywords: MEPDG, Rut, Level 1 inputs, Axle load spectra, Traffic input parameters, Sensitivity

  8. Investigation of hydrodynamic parameters in a novel expanded bed configuration: local axial dispersion characterization and an empirical correlation study

    Directory of Open Access Journals (Sweden)

    E. S. Taheri

    2012-12-01

    Full Text Available Study of liquid behavior in an expanded bed adsorption (EBA) system is important for understanding, modeling and predicting nanobioproduct/biomolecule adsorption performance in such processes. In this work, in order to analyze the local axial dispersion parameters, simple custom NBG (Nano Biotechnology Group) expanded bed columns with 10 and 26 mm inner diameter were modified by the insertion of sampling holes. With this configuration, particles and liquid can be withdrawn directly from various axial positions in the columns. Streamline DEAE particles were used as the solid phase. The effects of factors such as liquid velocity, viscosity, settled bed height and column diameter on the hydrodynamic parameters were investigated. Local bed voidages at different axial positions were measured by a direct procedure within the 26 mm diameter column. The voidage increased with velocity at a given bed position, and with bed height at a given degree of expansion. Residence time distribution (RTD) analysis at various bed points showed approximately uniform hydrodynamic behavior in the 10 mm diameter column, while a decreasing trend of mixing/dispersion along the bed height at a given degree of expansion was seen in the 26 mm diameter column. Lower mixing/dispersion also occurred in the smaller diameter column. Finally, a combination of two empirical correlations proposed by Richardson-Zaki and Tong-Sun was successfully employed to identify the bed voidage at various bed heights (RSSE = 99.9%). Among the empirical correlations presented in the literature for the variation of the axial dispersion coefficient, the Yun correlation gave good agreement with our experimental data in this column (RSSE = 87%).

  9. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    Science.gov (United States)

    Wałęga, Andrzej; Młyński, Dariusz; Wachulec, Katarzyna

    2017-12-01

The aim of the study was to assess the applicability of asymptotic functions for determining the value of the CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, a left tributary of the Vistula, and the Kamienica, a right tributary of the Dunajec. The input material included data on precipitation and flows for the multi-year period 1980-2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of the CNobs parameter as a function of precipitation depth: the standard Hawkins model and the 2-CN model allowing for the heterogeneous nature of a catchment area. The study analyses confirmed that asymptotic functions properly described the P-CNobs relationship for the entire range of precipitation variability. In the case of high rainfalls, CNobs remained above or below the commonly accepted average antecedent moisture conditions AMCII. The study calculations indicated that the runoff amount calculated according to the original SCS-CN method might be underestimated, and this could adversely affect the values of design flows required for the design of hydraulic engineering projects. In catchments with heterogeneous land cover, the results for CNobs were more accurate when the 2-CN model was used instead of the standard Hawkins model. The 2-CN model is more precise in accounting for differences in runoff formation depending on the retention capacity of the substrate. It was also demonstrated that the commonly accepted initial abstraction coefficient λ = 0.20 yielded too large an initial loss of precipitation in the analyzed catchments and, therefore, the computed direct runoff was underestimated. The best results were obtained for λ = 0.05.
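The SCS-CN runoff calculation discussed above follows the standard curve-number equations, with initial abstraction Ia = λS. A short sketch showing why lowering λ from 0.20 to 0.05 increases computed direct runoff (the CN value and rainfall depth are illustrative, not from the study):

```python
def scs_cn_runoff(P, CN, lam=0.20):
    """Direct runoff Q (mm) from rainfall depth P (mm) by the SCS-CN method.
    S is the potential maximum retention; Ia = lam * S is the initial abstraction."""
    S = 25400.0 / CN - 254.0        # S in mm for CN on the standard 0-100 scale
    Ia = lam * S
    if P <= Ia:
        return 0.0                  # all rainfall lost to initial abstraction
    return (P - Ia) ** 2 / (P + S - Ia)

# Lowering lambda reduces the initial loss, so computed runoff rises
q20 = scs_cn_runoff(50.0, 70.0, lam=0.20)
q05 = scs_cn_runoff(50.0, 70.0, lam=0.05)
```

This reproduces, in miniature, the paper's point: with λ = 0.20 a larger share of the storm is written off as initial loss, so the computed direct runoff is smaller than with λ = 0.05.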

  10. Organizational Flexibility for Hypercompetitive Markets : Empirical Evidence of the Composition and Context Specificity of Dynamic Capabilities and Organization Design Parameters

    NARCIS (Netherlands)

    N.P. van der Weerdt (Niels)

    2009-01-01

    textabstractThis research project, which builds on the conceptual work of Henk Volberda on the flexible firm, empirically investigates four aspects of organizational flexibility. Our analysis of data of over 1900 firms and over 3000 respondents shows (1) that several increasing levels of

  11. An empirical study on the utility of BRDF model parameters and topographic parameters for mapping vegetation in a semi-arid region with MISR imagery

    Science.gov (United States)

    Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...

  12. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    Science.gov (United States)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  13. Empirical parameters for solvent acidity, basicity, dipolarity, and polarizability of the ionic liquids [BMIM][BF4] and [BMIM][PF6].

    Science.gov (United States)

    del Valle, J C; García Blanco, F; Catalán, J

    2015-04-02

The empirical solvent scales for polarizability (SP), dipolarity (SdP), acidity (SA), and basicity (SB) have been successfully used to interpret the solvatochromism of compounds dissolved in organic solvents and their solvent mixtures. Given that the published solvatochromic parameters for the ionic liquids 1-(1-butyl)-3-methylimidazolium tetrafluoroborate, [BMIM][BF4], and 1-(1-butyl)-3-methylimidazolium hexafluorophosphate, [BMIM][PF6], are excessively scattered, their SP, SdP, SA, and SB values are measured herein at temperatures from 293 to 353 K. Four key points are emphasized herein: (i) the origin of the solvatochromic solvent scales--the gas phase, that is, the absence of any medium perturbation; (ii) the separation of polarizability and dipolarity effects; (iii) the simplification of the probing process in order to obtain the solvatochromic parameters; and (iv) the fact that the SP, SdP, SA, and SB solvent scales can probe the polarizability, dipolarity, acidity, and basicity of ionic liquids as well as of organic solvents and water-organic solvent mixtures. From the multiparameter approach using the four pure solvent scales one can draw the conclusions that (a) the solvent influence of [BMIM][BF4] parallels that of formamide at 293 K, both of them being miscible with water; (b) [BMIM][PF6] shows a set of solvatochromic parameters similar to that of chloroacetonitrile, both of them being water insoluble; and (c) the corresponding solvent acidity and basicity of the ionic liquids can be explained to a great extent from the cation species by comparing the empirical parameters of [BMIM](+) with those of the solvent 1-methylimidazole. The insolubility of [BMIM][PF6] in water as compared to [BMIM][BF4] is tentatively connected to some extent to the larger molar volume of the anion [PF6](-) and to the difference in basicity of [PF6](-) and [BF4](-).

  14. Empirical Scaling Relations of Source Parameters For The Earthquake Swarm 2000 At Novy Kostel (vogtland/nw-bohemia)

    Science.gov (United States)

    Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.

Investigations on the interdependence of different source parameters are an important task to get more insight into the mechanics and dynamics of earthquake rupture, to model source processes and to make predictions for ground motion at the surface. The interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm-earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time r, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also observations in the near-field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine by the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters as well as statistical interpretation of the results will be presented. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.
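Two of the source-parameter relations underlying such scaling studies can be written down directly: the standard moment-magnitude definition and the Brune estimate of source radius from corner frequency. A hedged sketch (the S-wave speed and the constant k = 0.37 are common textbook choices, not values fitted in this study):

```python
import math

def moment_magnitude(M0):
    """Moment magnitude Mw from seismic moment M0 in N*m
    (standard IASPEI formula: Mw = (2/3)*(log10 M0 - 9.1))."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

def brune_radius(fc, beta, k=0.37):
    """Brune-type circular source radius r = k * beta / fc,
    with S-wave speed beta (m/s) and corner frequency fc (Hz)."""
    return k * beta / fc

# Illustrative micro-earthquake: M0 = 1e14 N*m, fc = 10 Hz, beta = 3500 m/s
mw = moment_magnitude(1e14)
r = brune_radius(10.0, 3500.0)   # source radius in metres
```

These two relations are the backbone of the M0-ML and M0-fc scaling plots such swarm studies build.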

  15. Monte Carlo semi-empirical model for Si(Li) x-ray detector: Differences between nominal and fitted parameters

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)

    2013-05-06

A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.

  16. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    Science.gov (United States)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial guess / adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of recharges and parameters are adjusted and the iterative procedures repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd in 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. It represents that the iterative approach that
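The EOF machinery described above can be sketched generically with an SVD of the storage-hydrograph matrix; this is a plain single-class EOF decomposition, assumed here as a stand-in for the study's multi-class scheme:

```python
import numpy as np

def eof_decompose(X):
    """EOF analysis of a (time x station) matrix of storage hydrographs.
    Returns spatial EOF patterns, temporal expansion coefficients and
    the fraction of variance explained by each mode."""
    anomalies = X - X.mean(axis=0)             # remove the time mean at each station
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt                                   # rows: spatial patterns
    pcs = U * s                                 # columns: expansion coefficients
    variance_fraction = s**2 / np.sum(s**2)
    return eofs, pcs, variance_fraction
```

The leading modes (largest variance fractions) are the natural candidates for building correction vectors, since pcs @ eofs plus the time mean reconstructs the original hydrographs exactly when all modes are kept.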

  17. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases

  18. A LCAO-LDF study of Brønsted acids chemisorption on ZnO(0001)

    Science.gov (United States)

    Casarin, Maurizio; Maccato, Chiara; Tabacchi, Gloria; Vittadini, Andrea

    1996-05-01

The local density functional theory coupled to the molecular cluster approach has been used to study the chemisorption of Brønsted acids (H2O, H2S, HCN, CH3OH and CH3SH) on the ZnO(0001) polar surface. Geometrical parameters and vibrational frequencies for selected molecularly and dissociatively chemisorbed species have been computed. The agreement with literature experimental data, when available, has been found to be good. The nature of the interaction between the conjugate base of the examined Brønsted acids and the Lewis acid site available on the surface has been elucidated, confirming its leading role in determining the actual relative acidity scale obtained by titration displacement reactions. The strength of this interaction follows the order OH- ≈ CN- > CH3O- > SH- > CH3S-.

  19. An Empirical Fitting Method to Type Ia Supernova Light Curves. III. A Three-parameter Relationship: Peak Magnitude, Rise Time, and Photospheric Velocity

    Science.gov (United States)

    Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.

    2018-05-01

    We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (0.005 Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s‑1 of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
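The expanding-fireball argument in this abstract implies a simple scaling: with a fixed photospheric temperature at peak, luminosity goes as the photospheric area, L ∝ (v·t_rise)², so the peak magnitude depends on the product of photospheric velocity and rise time. A sketch under those two assumptions (the zero-point M0 is a hypothetical calibration constant, not a value from the paper):

```python
import math

def fireball_peak_magnitude(v_ph, t_rise, M0=0.0):
    """Expanding-fireball estimate of the peak magnitude.
    Assumptions: blackbody emission and identical photospheric
    temperature at peak, so L ~ (v_ph * t_rise)**2 and
    M_peak = M0 - 5*log10(v_ph * t_rise) up to the calibration
    constant M0 (hypothetical here).
    v_ph: photospheric velocity (km/s), t_rise: rise time (days)."""
    return M0 - 5.0 * math.log10(v_ph * t_rise)

# Larger v * t_rise -> larger photosphere at peak -> brighter (more negative) magnitude
m_fast = fireball_peak_magnitude(12000.0, 18.0)
m_slow = fireball_peak_magnitude(10000.0, 18.0)
```

This is only the geometric core of the three-parameter relation; the paper's actual fit also folds in extinction corrections and the velocity dependence of the intrinsic brightness.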

  20. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1986-08-01

As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. At the same time, in order to more carefully optimize the higher cost of the accelerators, they must return more accurate results, even in the presence of a longer list of realistic effects, such as magnet errors and misalignments. For these reasons conventional tracking programs continue to be computationally bound, despite the continually increasing computing power available. This limitation is especially severe for a class of problems in which some lattice parameter is slowly varying, when a faithful description is only obtained by tracking for an exceedingly large number of turns. Examples are synchrotron oscillations in which the energy varies slowly with a period of, say, hundreds of turns, or magnet ripple or noise on a comparably slow time scale. In these cases one may wish to track for hundreds of periods of the slowly varying parameter. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. Similar programs have already been written in which successive elements are "concatenated" with truncation to linear, sextupole, or octupole order, et cetera, using Lie algebraic techniques to preserve symplecticity. The method described here is rather more empirical than this but, in principle, contains information to all orders and is able to handle resonances in a more straightforward fashion

  1. Inglorious Empire

    DEFF Research Database (Denmark)

    Khair, Tabish

    2017-01-01

Review of 'Inglorious Empire: What the British did to India' by Shashi Tharoor, London, Hurst Publishers, 2017, 296 pp., £20.00.

  2. Assessment of brittleness and empirical correlations between physical and mechanical parameters of the Asmari limestone in Khersan 2 dam site, in southwest of Iran

    Science.gov (United States)

    Lashkaripour, Gholam Reza; Rastegarnia, Ahmad; Ghafoori, Mohammad

    2018-02-01

The determination of brittleness and geomechanical parameters, especially uniaxial compressive strength (UCS) and Young's modulus (ES), of rocks is needed for the design of different rock engineering applications. Evaluation of these parameters is time-consuming, tedious and expensive, and requires well-prepared rock cores. Therefore, compressional wave velocity (Vp) and index parameters such as point load index and porosity are often used to predict the properties of rocks. In this paper, brittleness and other properties, physical and mechanical in type, of 56 Asmari limestones in dry and saturated conditions were analyzed. The rock samples were collected from the Khersan 2 dam site. This 240 m high dam is under construction in the Zagros Mountains, in the southwest of Iran. The bedrock and abutments of the dam site consist of the Asmari and Gachsaran Formations. In this paper, a practical relation for predicting brittleness and some relations between mechanical and index parameters of the Asmari limestone were established. The presented equation for predicting brittleness based on UCS, Brazilian tensile strength and Vp had high accuracy. Moreover, results showed that the brittleness estimation based on the B3 concept (half the product of compressive and tensile strength) was more accurate than the B2 (the ratio of compressive strength minus tensile strength to compressive strength plus tensile strength) and B1 (the ratio of compressive strength to tensile strength) concepts.
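The three brittleness concepts compared in the abstract follow directly from the strength pair. A sketch using the definitions as stated there (strength values in the example are illustrative):

```python
def brittleness_indices(sigma_c, sigma_t):
    """Three common brittleness concepts from uniaxial compressive
    strength sigma_c and Brazilian tensile strength sigma_t (both MPa):
    B1 = sigma_c / sigma_t
    B2 = (sigma_c - sigma_t) / (sigma_c + sigma_t)
    B3 = sigma_c * sigma_t / 2   (half the product)"""
    B1 = sigma_c / sigma_t
    B2 = (sigma_c - sigma_t) / (sigma_c + sigma_t)
    B3 = sigma_c * sigma_t / 2.0
    return B1, B2, B3

# Illustrative limestone values: UCS = 100 MPa, tensile strength = 8 MPa
b1, b2, b3 = brittleness_indices(100.0, 8.0)
```

Note that B1 and B2 are dimensionless ratios, whereas B3 carries units of stress squared, which is one reason the three concepts can rank samples differently.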

  3. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
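For the simplest case mentioned above, a univariate mean under IID sampling, the empirical likelihood ratio can be profiled with a one-dimensional Lagrange multiplier. A minimal sketch (bisection on the multiplier; this is an illustrative implementation, not code from the book):

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for the mean of a univariate sample.
    Solves sum_i (x_i - mu) / (1 + lam*(x_i - mu)) = 0 for the Lagrange
    multiplier lam by bisection, then returns 2 * sum_i log(1 + lam*(x_i - mu))."""
    z = np.asarray(x, dtype=float) - mu
    if not (z.min() < 0.0 < z.max()):
        return np.inf                     # mu lies outside the convex hull of the data
    # lam must keep every implied weight positive: 1 + lam*z_i > 0 for all i
    lo = -1.0 / z.max() + tol
    hi = -1.0 / z.min() - tol
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # decreasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```

The statistic is zero at the sample mean, grows as the candidate mean moves away from it, and is infinite outside the data's convex hull; thresholding it at a chi-squared quantile gives the data-shaped confidence regions the abstract describes.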

  4. Semi-empirical correlation for binary interaction parameters of the Peng–Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor–liquid equilibrium

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2013-03-01

    Full Text Available Peng–Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron–Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.

  5. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    Science.gov (United States)

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij . In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
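The classical van der Waals one-fluid mixing rules referred to in both abstracts combine pure-component EOS parameters with the binary interaction parameter kij. A minimal sketch (the parameter values in the example are dimensionless placeholders, not fitted data):

```python
import math

def vdw_mixing(x, a, b, kij):
    """Classical van der Waals one-fluid mixing rules for a cubic EOS:
    a_mix = sum_i sum_j x_i x_j sqrt(a_i a_j) (1 - k_ij)
    b_mix = sum_i x_i b_i
    x: mole fractions, a/b: pure-component parameters, kij: symmetric matrix."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

# Binary example with kij = 0: a_mix collapses to (sum_i x_i * sqrt(a_i))**2
a_mix, b_mix = vdw_mixing([0.3, 0.7], [1.0, 4.0], [0.05, 0.10],
                          [[0.0, 0.0], [0.0, 0.0]])
```

A correlation for kij, such as the semi-empirical one developed in these papers, simply supplies the off-diagonal entries of the kij matrix before this mixing step.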

  6. A comparison of the performance of a fundamental parameter method for analysis of total reflection X-ray fluorescence spectra and determination of trace elements, versus an empirical quantification procedure

    Science.gov (United States)

Węgrzynek, Dariusz; Hołyńska, Barbara; Ostachowicz, Beata

    1998-01-01

The performance of two different quantification methods — namely, the commonly used empirical quantification procedure and a fundamental parameter approach — has been compared for determination of the mass fractions of elements in particulate-like sample residues on a quartz reflector measured in the total reflection geometry. In the empirical quantification procedure, the spectrometer system needs to be calibrated with the use of samples containing known concentrations of the elements. On the basis of the intensities of the X-ray peaks and the known concentration or mass fraction of an internal standard element, the concentrations or mass fractions of the elements are calculated using the relative sensitivities of the spectrometer system. The fundamental parameter approach does not require any calibration of the spectrometer system to be carried out. However, in order to account for an unknown mass per unit area of a sample and sample nonuniformity, an internal standard element is added. The concentrations/mass fractions of the elements to be determined are calculated by fitting a modelled X-ray spectrum to the measured one. The two quantification methods were applied to determine the mass fractions of elements in the cross-sections of a peat core and in biological standard reference materials, and to determine the concentrations of elements in samples prepared from an aqueous multi-element standard solution.
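The internal-standard step of the empirical quantification procedure reduces to one ratio formula. A sketch under the usual assumptions (relative sensitivities come from calibration; the element names and numbers below are purely illustrative):

```python
def internal_standard_quant(intensities, sensitivities, is_element, c_is):
    """Empirical TXRF quantification with an internal standard:
    C_x = C_is * (I_x / S_x) / (I_is / S_is)
    intensities: net peak intensities per element
    sensitivities: relative sensitivities from calibration
    is_element: name of the internal standard element
    c_is: known mass fraction (or concentration) of the internal standard."""
    ref = intensities[is_element] / sensitivities[is_element]
    return {el: c_is * (I / sensitivities[el]) / ref
            for el, I in intensities.items() if el != is_element}

# Hypothetical example: Ga spiked as internal standard at 10 ug/g
result = internal_standard_quant(
    intensities={"Ga": 1000.0, "Fe": 500.0},
    sensitivities={"Ga": 2.0, "Fe": 1.0},
    is_element="Ga", c_is=10.0)
```

Because only intensity ratios enter, the unknown deposited mass per unit area cancels out, which is exactly why the internal standard is added in both quantification methods.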

  7. Polymeric membrane materials: new aspects of empirical approaches to prediction of gas permeability parameters in relation to permanent gases, linear lower hydrocarbons and some toxic gases.

    Science.gov (United States)

    Malykh, O V; Golub, A Yu; Teplyakov, V V

    2011-05-11

Membrane gas separation technologies (air separation, hydrogen recovery from dehydrogenation processes, etc.) traditionally use glassy polymer membranes with dominating permeability of "small" gas molecules. For this purpose, membranes based on low-free-volume glassy polymers (e.g., polysulfone, tetrabromopolycarbonate and polyimides) are used. On the other hand, the application of membrane methods for VOC and toxic gas recovery from air, and for separation of mixtures containing lower hydrocarbons (in petrochemistry and oil refining), needs membranes with preferential permeation of components with relatively larger molecular sizes. In general, this kind of permeability is characteristic of rubbers and of high-free-volume glassy polymers. The data files accumulated (more than 1500 polymeric materials) represent the region of parameters "inside" these "boundaries." Two main approaches to the prediction of gas permeability of polymers are considered in this paper: (1) the statistical treatment of published transport parameters of polymers and (2) prediction using a model of "diffusion jump" with consideration of the key properties of the diffusing molecule and polymeric matrix. In the frame of (1), the paper presents N-dimensional methods for the gas permeability estimation of polymers using the "selectivity/permeability" correlations. It is found that the optimal accuracy of prediction is provided at n=4. In the frame of the solution-diffusion mechanism (2), the key properties include the effective molecular cross-section of the penetrating species, responsible for molecular transport in the polymeric matrix, and the well-known force constant (ε/k)(eff i) of the {6-12} potential for gas-gas interaction. A set of corrected effective molecular cross-sections of penetrants including noble gases (He, Ne, Ar, Kr, Xe), permanent gases (H(2), O(2), N(2), CO), ballast and toxic gases (CO(2), NO, NO(2), SO(2), H(2)S) and linear lower hydrocarbons (CH(4

  8. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal-time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  9. Alternative Approaches to Evaluation in Empirical Microeconomics

    Science.gov (United States)

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  10. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  11. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular 'feeling with...'

  12. Life Writing After Empire

    DEFF Research Database (Denmark)

    A watershed moment of the twentieth century, the end of empire saw upheavals to global power structures and national identities. However, decolonisation profoundly affected individual subjectivities too. Life Writing After Empire examines how people around the globe have made sense of the post...... in order to understand how individual life writing reflects broader societal changes. From far-flung corners of the former British Empire, people have turned to life writing to manage painful or nostalgic memories, as well as to think about the past and future of the nation anew through the personal...

  13. Theological reflections on empire

    Directory of Open Access Journals (Sweden)

    Allan A. Boesak

    2009-11-01

Full Text Available Since the meeting of the World Alliance of Reformed Churches in Accra, Ghana (2004), and the adoption of the Accra Declaration, a debate has been raging in the churches about globalisation, socio-economic justice, ecological responsibility, political and cultural domination and globalised war. Central to this debate is the concept of empire and the way the United States is increasingly becoming its embodiment. Is the United States a global empire? This article argues that the United States has indeed become the expression of a modern empire and that this reality has considerable consequences, not just for global economics and politics but for theological reflection as well.

  14. Empirical Evidence from Kenya

    African Journals Online (AJOL)

    FIRST LADY

    2011-01-18

Jan 18, 2011 ... Empirical results reveal that consumption of sugar in Kenya varies ... experiences in trade in different regions of the world. Some studies ... To assess the relationship between domestic sugar retail prices and sugar sales in ...

  15. Empirical philosophy of science

    DEFF Research Database (Denmark)

    Wagenknecht, Susann; Nersessian, Nancy J.; Andersen, Hanne

    2015-01-01

    A growing number of philosophers of science make use of qualitative empirical data, a development that may reconfigure the relations between philosophy and sociology of science and that is reminiscent of efforts to integrate history and philosophy of science. Therefore, the first part of this introduction to the volume Empirical Philosophy of Science outlines the history of relations between philosophy and sociology of science on the one hand, and philosophy and history of science on the other. The second part of this introduction offers an overview of the papers in the volume, each of which is giving its own answer to questions such as: Why does the use of qualitative empirical methods benefit philosophical accounts of science? And how should these methods be used by the philosopher?

  16. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR) [fr

  17. Empire vs. Federation

    DEFF Research Database (Denmark)

    Gravier, Magali

    2011-01-01

    The article discusses the concepts of federation and empire in the context of the European Union (EU). Even if these two concepts are not usually contrasted to one another, the article shows that they refer to related types of polities. Furthermore, they can be used at the same time because they shed light on different and complementary aspects of the European integration process. The article concludes that the EU is at the crossroads between federation and empire and may remain an ‘imperial federation’ for several decades. This could mean that the EU is on the verge of transforming itself to another type...

  18. Empirical comparison of theories

    International Nuclear Information System (INIS)

    Opp, K.D.; Wippler, R.

    1990-01-01

    The book represents the first, comprehensive attempt to take an empirical approach for comparative assessment of theories in sociology. The aims, problems, and advantages of the empirical approach are discussed in detail, and the three theories selected for the purpose of this work are explained. Their comparative assessment is performed within the framework of several research projects, which among other subjects also investigate the social aspects of the protest against nuclear power plants. The theories analysed in this context are the theory of mental incongruities and that of the benefit, and their efficiency in explaining protest behaviour is compared. (orig./HSCH) [de

  19. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    The toolbox for empirically exploring the ways that artistic endeavors convey and activate meaning on the part of performers and audiences continues to expand. Current work employing methods at the intersection of performance studies, philosophy, motion capture and neuroscience to better understand musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member.

  20. Empirical research through design

    NARCIS (Netherlands)

    Keyson, D.V.; Bruns, M.

    2009-01-01

    This paper describes the empirical research through design method (ERDM), which differs from current approaches to research through design by enforcing the need for the designer, after a series of pilot prototype-based studies, to a priori develop a number of testable interaction design hypotheses

  1. Essays in empirical microeconomics

    NARCIS (Netherlands)

    Péter, A.N.

    2016-01-01

    The empirical studies in this thesis investigate various factors that could affect individuals' labor market, family formation and educational outcomes. Chapter 2 focuses on scheduling as a potential determinant of individuals' productivity. Chapter 3 looks at the role of a family factor on

  2. Worship, Reflection, Empirical Research

    OpenAIRE

    Ding Dong,

    2012-01-01

    In my youth, I was a worshipper of Mao Zedong. From the latter stage of the Mao Era to the early years of Reform and Opening, I began to reflect on Mao and the Communist Revolution he launched. In recent years I’ve devoted myself to empirical historical research on Mao, seeking the truth about Mao and China’s modern history.

  3. Trade and Empire

    DEFF Research Database (Denmark)

    Bang, Peter Fibiger

    2007-01-01

    This article seeks to establish a new set of organizing concepts for the analysis of the Roman imperial economy from Republic to late antiquity: tributary empire, portfolio capitalism and protection costs. Together these concepts explain economic developments in the Roman world better than the...

  4. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  5. An Empirical Mass Function Distribution

    Science.gov (United States)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L⋆ galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10¹⁰–10¹³ h⁻¹ M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L⋆. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
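    The "purely empirical form" of this abstract is not reproduced here, but the family it belongs to is easy to illustrate. Below is a minimal sketch of a generic Schechter-type mass function; the function name and the parameter values (phi_star, m_star, alpha) are invented for illustration and are not the paper's four parameters.

```python
import numpy as np

# Generic Schechter-type form, dn/dlnM = phi* (M/M*)^(alpha+1) exp(-M/M*).
# Hypothetical stand-in: parameter values below are made up, not fitted.
def schechter_mass_function(m, phi_star=1e-3, m_star=1e13, alpha=-1.9):
    x = m / m_star
    return phi_star * x ** (alpha + 1) * np.exp(-x)

masses = np.logspace(10, 15, 6)           # halo masses in h^-1 Msun
dndlnm = schechter_mass_function(masses)  # comoving number density per ln M
```

    The power law dominates at low mass and the exponential cutoff above M* drives the density down steeply, which is the qualitative behaviour any Press–Schechter-type form shares.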

  6. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  7. Autobiography After Empire

    DEFF Research Database (Denmark)

    Rasch, Astrid

    Decolonisation was a major event of the twentieth century, redrawing maps and impacting on identity narratives around the globe. As new nations defined their place in the world, the national and imperial past was retold in new cultural memories. These developments have been studied at the level of the collective, but insufficient attention has been paid to how individuals respond to such narrative changes. This dissertation examines the relationship between individual and collective memory at the end of empire through analysis of 13 end of empire autobiographies by public intellectuals from Australia, the Anglophone Caribbean and Zimbabwe. I conceive of memory as reconstructive and social, with individual memory striving to make sense of the past in the present in dialogue with surrounding narratives. By examining recurring tropes in the autobiographies, like colonial education, journeys to the imperial...

  8. Gazprom: the new empire

    International Nuclear Information System (INIS)

    Guillemoles, A.; Lazareva, A.

    2008-01-01

    Gazprom is conquering the world. The Russian industrial giant owns the largest gas reserves and wields considerable power. Gazprom publishes journals, owns hospitals and airplanes, and has even built cities where most of the inhabitants work for it. With 400,000 workers, Gazprom represents 8% of Russia's GDP. This inquiry describes the history and operation of this empire and shows how it has become a centerpiece of the government's strategy to restore Russian influence on the world scale. Is it going to be a winning game? Are the corruption affairs and the expected depletion of resources going to weaken the empire? The authors shed light on the political and diplomatic strategies that are played around the crucial dossier of energy supply. (J.S.)

  9. Empirical formula for the parameters of metallic monovalent halides ...

    African Journals Online (AJOL)

    By collating the data on melting properties and transport coefficients obtained from various experiments and theories for certain halides of monovalent metals, an all-inclusive linear relationship has been fashioned out. This expression holds between the change in entropy and volume on melting; it is approximately obeyed by ...

  10. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability
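    As a simplified illustration of the empirical Bayes idea in this abstract, the sketch below fits a plain (uncontaminated) gamma prior to a set of Poisson counts by moment matching and then performs the conjugate update. The contaminated-gamma hierarchy and the posterior model probabilities of the paper are not implemented; the function name and data are hypothetical.

```python
import numpy as np

def eb_gamma_poisson(counts, exposures):
    """One-stage empirical Bayes for Poisson intensities.

    Simplified sketch: a plain gamma prior fitted by moment matching,
    not the contaminated-gamma hierarchy of the paper.
    """
    rates = counts / exposures
    mean, var = rates.mean(), rates.var(ddof=1)
    # Moment-matched gamma prior: mean = a/b, variance = a/b^2
    b = mean / max(var, 1e-12)
    a = mean * b
    # Conjugate update: posterior for unit i is Gamma(a + x_i, b + t_i)
    return (a + counts) / (b + exposures)

counts = np.array([0, 1, 3, 0, 2])                # hypothetical event counts
exposures = np.array([10.0, 12.0, 15.0, 8.0, 11.0])  # observation times
post_mean = eb_gamma_poisson(counts, exposures)
```

    Each posterior mean is a weighted average of the unit's raw rate and the pooled prior mean, so sparse units are shrunk toward the ensemble.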

  11. Surface Passivation in Empirical Tight Binding

    OpenAIRE

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2015-01-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) Method that explicitly includes passivation atoms is limited to passivation with atoms and small molecules only. 2) Method that implicitly incorporates passivation does not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameter...

  12. Epistemology and Empirical Investigation

    DEFF Research Database (Denmark)

    Ahlström, Kristoffer

    2008-01-01

    Recently, Hilary Kornblith has argued that epistemological investigation is substantially empirical. In the present paper, I will first show that his claim is not contingent upon the further and, admittedly, controversial assumption that all objects of epistemological investigation are natural kinds. Then, I will argue that, contrary to what Kornblith seems to assume, this methodological contention does not imply that there is no need for attending to our epistemic concepts in epistemology. Understanding the make-up of our concepts and, in particular, the purposes they fill, is necessary...

  13. What 'empirical turn in bioethics'?

    Science.gov (United States)

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  14. EGG: Empirical Galaxy Generator

    Science.gov (United States)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  15. Birds of the Mongol Empire

    OpenAIRE

    Eugene N. Anderson

    2016-01-01

    The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Ma...

  16. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
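    A minimal instance of the kind of estimation this chapter covers: for a first-order reaction C(t) = C0·exp(-k·t), taking logarithms gives a linear model, so the rate constant follows from ordinary least squares. This is an illustrative sketch on synthetic data, not one of the chapter's own examples.

```python
import numpy as np

# First-order kinetics: ln C = ln C0 - k t, a linear model in t,
# so k and C0 are recovered by ordinary least squares on log-concentrations.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # sampling times (arbitrary units)
k_true, c0_true = 0.5, 2.0                 # made-up "true" parameters
conc = c0_true * np.exp(-k_true * t)       # noise-free synthetic data

slope, intercept = np.polyfit(t, np.log(conc), 1)
k_hat, c0_hat = -slope, np.exp(intercept)
```

    With noisy data or stiff multi-reaction systems this log-linearisation breaks down, which is where the nonlinear optimisation coupled with dynamic model solution described above becomes necessary.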

  17. Empirical techniques in finance

    CERN Document Server

    Bhar, Ramaprasad

    2005-01-01

    This book offers the opportunity to study and experience advanced empirical techniques in finance and in general financial economics. It is not only suitable for students with an interest in the field, it is also highly recommended for academic researchers as well as researchers in the industry. The book focuses on the contemporary empirical techniques used in the analysis of financial markets and how these are implemented using actual market data. With an emphasis on implementation, this book helps focus on strategies for rigorously combining finance theory and modeling technology to extend extant considerations in the literature. The main aim of this book is to equip the readers with an array of tools and techniques that will allow them to explore financial market problems with a fresh perspective. In this sense it is not another volume in econometrics. Of course, the traditional econometric methods are still valid and important; the contents of this book will bring in other related modeling topics tha...

  18. Empirical scholarship in contract law: possibilities and pitfalls

    Directory of Open Access Journals (Sweden)

    Russell Korobkin

    2015-01-01

    Full Text Available Professor Korobkin examines and analyzes empirical contract law scholarship over the last fifteen years in an attempt to guide scholars concerning how empiricism can be used in and enhance the study of contract law. After defining the parameters of the study, Professor Korobkin categorizes empirical contract law scholarship by both the source of data and main purpose of the investigation. He then describes and analyzes three types of criticisms that can be made of empirical scholarship, explains how these criticisms pertain to contract law scholarship, and considers what steps researchers can take to minimize the force of such criticisms.

  19. Final Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification on the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one....

  20. Remembrances of Empires Past

    Directory of Open Access Journals (Sweden)

    Robert Aldrich

    2010-03-01

    Full Text Available This paper argues that the colonial legacy is ever present in contemporary Europe. For a generation, most Europeans largely tried, publicly, to forget the colonial past, or remembered it only through the rose-coloured lenses of nostalgia; now the pendulum has swung to memory of that past – even perhaps, in the views of some, to a surfeit of memory, where each group agitates for its own version of history, its own recognition in laws and ceremonies, its own commemoration in museums and monuments, the valorization or repatriation of its own art and artefacts. Words such as ‘invasion,’ ‘racism’ and ‘genocide’ are emotional terms that provoke emotional reactions. Whether leaders should apologize for wrongs of the past – and which wrongs – remains a highly sensitive issue. The ‘return of the colonial’ thus has to do with ethics and politics as well as with history, and can link to statements of apology or recognition, legislation about certain views of history, monetary compensation, repatriation of objects, and—perhaps most importantly—redefinition of national identity and policy. The colonial flags may have been lowered, but many barricades seem to have been raised. Private memories—of loss of land, of unacknowledged service, of political, economic, social and cultural disenfranchisement, but also on the other side of defeat, national castigation and self-flagellation—have been increasingly public. Monuments and museums act not only as sites of history but as venues for political agitation and forums for academic debate – differences of opinion that have spread to the streets. Empire has a long after-life.

  1. Empirical Support for Perceptual Conceptualism

    Directory of Open Access Journals (Sweden)

    Nicolás Alejandro Serrano

    2018-03-01

    Full Text Available The main objective of this paper is to show that perceptual conceptualism can be understood as an empirically meaningful position and, furthermore, that there is some degree of empirical support for its main theses. In order to do this, I will start by offering an empirical reading of the conceptualist position, and making three predictions from it. Then, I will consider recent experimental results from cognitive sciences that seem to point towards those predictions. I will conclude that, while the evidence offered by those experiments is far from decisive, it is enough not only to show that conceptualism is an empirically meaningful position but also that there is empirical support for it.

  2. Empire as a Geopolitical Figure

    DEFF Research Database (Denmark)

    Parker, Noel

    2010-01-01

    This article analyses the ingredients of empire as a pattern of order with geopolitical effects. Noting the imperial form's proclivity for expansion from a critical reading of historical sociology, the article argues that the principal manifestation of earlier geopolitics lay not in the nation but in empire. That in turn has been driven by a view of the world as disorderly and open to the ordering will of empires (emanating, at the time of geopolitics' inception, from Europe). One implication is that empires are likely to figure in the geopolitical ordering of the globe at all times, in particular after all that has happened in the late twentieth century to undermine nationalism and the national state. Empire is indeed a probable, even for some an attractive form of regime for extending order over the disorder produced by globalisation. Geopolitics articulated in imperial expansion is likely...

  3. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
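    The stochastic (Monte Carlo) propagation mentioned in this abstract can be sketched generically: sample the model parameters from their assumed uncertainties, evaluate the model at each sample, and form the sample covariance over the energy grid. The toy cross-section model, parameter values and uncertainties below are invented for illustration; EMPIRE itself is not involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_cross_section(energy, a, b):
    """Toy stand-in for a reaction-model calculation (not EMPIRE)."""
    return a / np.sqrt(energy) + b

energies = np.array([0.1, 1.0, 5.0, 10.0])  # MeV, illustrative grid
# Sample model parameters with assumed ~5% relative uncertainties
samples = rng.normal(loc=[2.0, 0.3], scale=[0.1, 0.015], size=(5000, 2))
sigma = np.array([toy_cross_section(energies, a, b) for a, b in samples])
cov = np.cov(sigma, rowvar=False)  # energy-energy covariance matrix
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
```

    Because every sampled parameter set moves the whole excitation curve coherently, the resulting off-diagonal correlations are strong, which is the typical signature of model-based covariances.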

  4. An Empirical Method for Particle Damping Design

    Directory of Open Access Journals (Sweden)

    Zhi Wei Xu

    2004-01-01

    Full Text Available Particle damping is an effective vibration suppression method. The purpose of this paper is to develop an empirical method for particle damping design based on extensive experiments on three structural objects – steel beam, bond arm and bond head stand. The relationships among several key parameters of structure/particles are obtained. Then the procedures with the use of particle damping are proposed to provide guidelines for practical applications. It is believed that the results presented in this paper would be helpful to effectively implement the particle damping for various structural systems for the purpose of vibration suppression.

  5. Relationship between process parameters and properties of multifunctional needlepunched geotextiles

    CSIR Research Space (South Africa)

    Rawal, A

    2006-04-01

    Full Text Available , and filtration. In this study, the effect of process parameters, namely, feed rate, stroke frequency, and depth of needle penetration has been investigated on various properties of needlepunched geotextiles. These process parameters are then empirically related...

  6. Birds of the Mongol Empire

    Directory of Open Access Journals (Sweden)

    Eugene N. Anderson

    2016-09-01

    Full Text Available The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Marco Polo also made notes on Mongol bird use. There are a few other records. This allows us to draw conclusions about Mongol ornithology, which apparently was sophisticated and detailed.

  7. Inventory parameters

    CERN Document Server

    Sharma, Sanjay

    2017-01-01

    This book provides a detailed overview of various parameters/factors involved in inventory analysis. It especially focuses on the assessment and modeling of basic inventory parameters, namely demand, procurement cost, cycle time, ordering cost, inventory carrying cost, inventory stock, stock out level, and stock out cost. In the context of economic lot size, it provides equations related to the optimum values. It also discusses why the optimum lot size and optimum total relevant cost are considered to be key decision variables, and uses numerous examples to explain each of these inventory parameters separately. Lastly, it provides detailed information on parameter estimation for different sectors/products. Written in a simple and lucid style, it offers a valuable resource for a broad readership, especially Master of Business Administration (MBA) students.

  8. Empirical Legality and Effective Reality

    Directory of Open Access Journals (Sweden)

    Hernán Pringe

    2015-08-01

    Full Text Available The conditions that Kant’s doctrine establishes for the predication of the effective reality of certain empirical objects are examined. It is maintained that (a) for such a predication, it is necessary to have not only perception but also a certain homogeneity of sensible data, and (b) the knowledge of the existence of certain empirical objects depends on the application of regulative principles of experience.

  9. Empirical logic and quantum mechanics

    International Nuclear Information System (INIS)

    Foulis, D.J.; Randall, C.H.

    1976-01-01

    This article discusses some of the basic notions of quantum physics within the more general framework of operational statistics and empirical logic (as developed in Foulis and Randall, 1972, and Randall and Foulis, 1973). Empirical logic is a formal mathematical system in which the notion of an operation is primitive and undefined; all other concepts are rigorously defined in terms of such operations (which are presumed to correspond to actual physical procedures). (Auth.)

  10. Empirical Research In Engineering Design

    DEFF Research Database (Denmark)

    Ahmed, Saeema

    2007-01-01

    Increasingly engineering design research involves the use of empirical studies that are conducted within an industrial environment [Ahmed, 2001; Court 1995; Hales 1987]. Research into the use of information by designers or understanding how engineers build up experience are examples of research...... of research issues. This paper describes case studies of empirical research carried out within industry in engineering design focusing upon information, knowledge and experience in engineering design. The paper describes the research methods employed, their suitability for the particular research aims...

  11. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  12. WIPP Compliance Certification Application calculations parameters. Part 1: Parameter development

    International Nuclear Information System (INIS)

    Howarth, S.M.

    1997-01-01

    The Waste Isolation Pilot Plant (WIPP) in southeast New Mexico has been studied as a transuranic waste repository for the past 23 years. During this time, an extensive site characterization, design, construction, and experimental program was completed, which provided in-depth understanding of the dominant processes that are most likely to influence the containment of radionuclides for 10,000 years. Nearly 1,500 parameters were developed using information gathered from this program; the parameters were input to numerical models for WIPP Compliance Certification Application (CCA) Performance Assessment (PA) calculations. The CCA probabilistic codes frequently require input values that define a statistical distribution for each parameter. Developing parameter distributions begins with the assignment of an appropriate distribution type, which is dependent on the type, magnitude, and volume of data or information available. The development of the parameter distribution values may require interpretation or statistical analysis of raw data, combining raw data with literature values, scaling of lab or field data to fit code grid mesh sizes, or other transformation. Parameter development and documentation of the development process were very complicated, especially for those parameters based on empirical data; they required the integration of information from Sandia National Laboratories (SNL) code sponsors, parameter task leaders (PTLs), performance assessment analysts (PAAs), and experimental principal investigators (PIs). This paper, Part 1 of two parts, contains a discussion of the parameter development process, roles and responsibilities, and lessons learned. Part 2 will discuss parameter documentation, traceability and retrievability, and lessons learned from related audits and reviews

  13. An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.

    Science.gov (United States)

    Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles

    1999-01-01

    Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
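
    The shrinkage step described above can be sketched numerically. The sketch assumes, as the abstract does, that each observed MH D-DIF statistic is normal around its underlying DIF parameter and that those parameters are themselves normal; the data, variable names and moment-based prior estimates are illustrative, not from the paper.

```python
import numpy as np

def eb_shrink(x, s2):
    """Posterior means of DIF parameters under a normal-normal model.

    x  : observed MH D-DIF statistics, x_i ~ N(theta_i, s2_i)
    s2 : their known sampling variances
    """
    mu = x.mean()                                # prior mean from the data
    tau2 = max(x.var(ddof=1) - s2.mean(), 0.0)   # prior variance (method of moments)
    w = tau2 / (tau2 + s2)                       # shrinkage weight per item
    return w * x + (1.0 - w) * mu

mh = np.array([-1.2, -0.3, 0.1, 0.4, 1.5])   # hypothetical MH D-DIF values
se2 = np.full(5, 0.25)                       # hypothetical sampling variances
post = eb_shrink(mh, se2)
print(post)  # every estimate is pulled toward the common mean
```

    Extreme items are shrunk hardest, which is the usual empirical Bayes guard against flagging DIF on the basis of noisy item statistics.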

  14. Determination of empirical relations between geoelectrical data and ...

    African Journals Online (AJOL)

    In order to establish empirical equations that relate layer resistivity values with geotechnical parameters for engineering site characterization, geotechnical tests comprising Standard Penetration Test (SPT), Atterberg limit, Triaxial Compression and Oedometer consolidation tests were conducted on soil samples collected ...

  15. Umayyad Relations with the Byzantine Empire

    Directory of Open Access Journals (Sweden)

    Mansoor Haidari

    2017-06-01

    Full Text Available This research investigates the political and military relations between the Umayyad caliphate and the Byzantine Empire, with the aim of clarifying their character. These relations were dominated by war and conflict: because of the persistent hostilities between Muslims and the Byzantine Empire, both sides had to maintain an active, continuous diplomacy to arrange truces and settle disputes. In line with the general policy of the Umayyad caliphs, Christians within the Islamic territories were severely marginalized, and this segregation was strongly shaped by the political relationship. It is worth mentioning that the Umayyad caliphs adopted the governing style of the Sassanid kings and the Roman Caesars in the Islamic caliphate system, but did not establish comparable civil institutions and administrative organizations.

  16. Bomb parameters

    International Nuclear Information System (INIS)

    Kerr, George D.; Young, Robert W.; Cullings, Harry M.; Christy, Robert F.

    2005-01-01

    The reconstruction of neutron and gamma-ray doses at Hiroshima and Nagasaki begins with a determination of the parameters describing the explosion. The calculations of the air transported radiation fields and survivor doses from the Hiroshima and Nagasaki bombs require knowledge of a variety of parameters related to the explosions. These various parameters include the heading of the bomber when the bomb was released, the epicenters of the explosions, the bomb yields, and the tilt of the bombs at time of explosion. The epicenter of a bomb is the explosion point in air that is specified in terms of a burst height and a hypocenter (or the point on the ground directly below the epicenter of the explosion). The current reassessment refines the energy yield and burst height for the Hiroshima bomb, as well as the locations of the Hiroshima and Nagasaki hypocenters on the modern city maps used in the analysis of the activation data for neutrons and TLD data for gamma rays. (J.P.N.)

  17. An empirical algorithm to estimate spectral average cosine of underwater light field from remote sensing data in coastal oceanic waters.

    Digital Repository Service at National Institute of Oceanography (India)

    Talaulika, M.; Suresh, T.; Desa, E.S.; Inamdar, A.

    parameters from the coastal waters off Goa, India, and eastern Arabian Sea and the optical parameters derived using the radiative transfer code using these measured data. The algorithm was compared with two earlier reported empirical algorithms of Haltrin...

  18. Gazprom: the new Russian empire

    International Nuclear Information System (INIS)

    Cosnard, D.

    2004-01-01

    The author analyzes the economic and political impact of the Gazprom group, the leader of the Russian energy sector, within Russia. Already number one in the world gas industry, the Group is becoming the right hand of the Kremlin. The author therefore examines the transparency and the limits of this empire. (A.L.B.)

  19. Phenomenology and the Empirical Turn

    NARCIS (Netherlands)

    Zwier, Jochem; Blok, Vincent; Lemmens, Pieter

    2016-01-01

    This paper provides a phenomenological analysis of postphenomenological philosophy of technology. While acknowledging that the results of its analyses are to be recognized as original, insightful, and valuable, we will argue that in its execution of the empirical turn, postphenomenology forfeits

  20. Empirical ethics as dialogical practice

    NARCIS (Netherlands)

    Widdershoven, G.A.M.; Abma, T.A.; Molewijk, A.C.

    2009-01-01

    In this article, we present a dialogical approach to empirical ethics, based upon hermeneutic ethics and responsive evaluation. Hermeneutic ethics regards experience as the concrete source of moral wisdom. In order to gain a good understanding of moral issues, concrete detailed experiences and

  1. Teaching "Empire of the Sun."

    Science.gov (United States)

    Riet, Fred H. van

    1990-01-01

    A Dutch teacher presents reading, film viewing, and writing activities for "Empire of the Sun," J. G. Ballard's autobiographical account of life as a boy in Shanghai and in a Japanese internment camp during World War II (the subject of Steven Spielberg's film of the same name). Includes objectives, procedures, and several literature,…

  2. Empirical Specification of Utility Functions.

    Science.gov (United States)

    Mellenbergh, Gideon J.

    Decision theory can be applied to four types of decision situations in education and psychology: (1) selection; (2) placement; (3) classification; and (4) mastery. For the application of the theory, a utility function must be specified. Usually the utility function is chosen on a priori grounds. In this paper methods for the empirical assessment…

  3. Empirical Productivity Indices and Indicators

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2016-01-01

    The empirical measurement of productivity change (or difference) by means of indices and indicators starts with the ex post profit/loss accounts of a production unit. Key concepts are profit, leading to indicators, and profitability, leading to indices. The main task for the productivity

  4. EMPIRICAL RESEARCH AND CONGREGATIONAL ANALYSIS ...

    African Journals Online (AJOL)

    empirical research has made to the process of congregational analysis. 1 Part of this ... contextual congregational analysis – meeting social and divine desires”) at the IAPT .... methodology of a congregational analysis should be regarded as a process. ... essential to create space for a qualitative and quantitative approach.

  5. Empirical processes: theory and applications

    OpenAIRE

    Venturini Sergio

    2005-01-01

    Proceedings of the 2003 Summer School in Statistics and Probability in Torgnon (Aosta, Italy) held by Prof. Jon A. Wellner and Prof. M. Banerjee. The topic presented was the theory of empirical processes with applications to statistics (m-estimation, bootstrap, semiparametric theory).

  6. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  7. Empirical analysis of consumer behavior

    NARCIS (Netherlands)

    Huang, Yufeng

    2015-01-01

    This thesis consists of three essays in quantitative marketing, focusing on structural empirical analysis of consumer behavior. In the first essay, he investigates the role of a consumer's skill of product usage, and its imperfect transferability across brands, in her product choice. It shows that

  8. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications and social media come to dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. A real-life example is also carried out to illustrate the new methodology.

  9. Surface Passivation in Empirical Tight Binding

    Science.gov (United States)

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2016-03-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. This method is applied to a Si quantum well and a Si ultra-thin-body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H-atom passivation overestimates the transistor performance.

  10. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
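
    For background, the classical one-sample empirical likelihood ratio for a mean (Owen's construction, which the paper generalizes to longitudinal data) can be computed in a few lines. This sketch is the textbook version, not the authors' method; the Lagrange-multiplier bracketing is a standard implementation detail.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 * log empirical likelihood ratio for H0: E[X] = mu (Owen's EL)."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                    # mu outside the convex hull of the data
    # Profile weights p_i = 1 / (n * (1 + lam * d_i)); find lam from the
    # estimating equation sum_i d_i / (1 + lam * d_i) = 0.
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
print(el_log_ratio(x, x.mean()))        # ~0 at the unconstrained maximum
print(el_log_ratio(x, x.mean() + 0.3))  # positive; compare to a chi-square(1)
```

    By the nonparametric Wilks theorem mentioned in the abstract, the second statistic is referred to a chi-square(1) quantile; the generalized longitudinal versions inherit this calibration.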

  11. Assessment of empirical formulae for local response of concrete structures to hard projectile impact

    International Nuclear Information System (INIS)

    Buzaud, E.; Cazaubon, Ch.; Chauvel, D.

    2007-01-01

    The outcome of the impact of a hard projectile on a reinforced concrete structure is affected by different parameters such as the configuration of the interaction, the projectile geometry, mass and velocity, and the target geometry, reinforcement and concrete mechanical properties. These parameters have been investigated experimentally during the last 30 years, providing a basis for simplified mathematical models such as empirical formulae. The aim of the authors is to assess the relative performance of classical and more recent empirical formulae. (authors)

  12. Empirically Testing Thematic Analysis (ETTA)

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Tingleff, Elllen B.

    2015-01-01

    Text analysis is not a question of a right or wrong way to go about it, but a question of different traditions. These tend not only to give answers on how to conduct an analysis, but also to provide the answer as to why it is conducted in the way that it is. The problem, however, may be that the link between tradition and tool is unclear. The main objective of this article is therefore to present Empirically Testing Thematic Analysis (ETTA), a step-by-step approach to thematic text analysis, discussing its strengths and weaknesses so that others might assess its potential as an approach that they might utilize or develop for themselves. The advantage of utilizing the presented analytic approach is argued to be the integral empirical testing, which should assure systematic development, interpretation and analysis of the source textual material.

  13. Essays in empirical industrial organization

    OpenAIRE

    Aguiar de Luque, Luis

    2013-01-01

    My PhD thesis consists of three chapters in Empirical Industrial Organization. The first two chapters focus on the relationship between firm performance and specific public policies. In particular, we analyze the cases of cooperative research and development (R&D) in the European Union and the regulation of public transport in France. The third chapter focuses on copyright protection in the digital era and analyzes the relationship between legal and illegal consumption of di...

  14. Empirical research on Waldorf education

    OpenAIRE

    Randoll, Dirk; Peters, Jürgen

    2015-01-01

    Waldorf education began in 1919 with the first Waldorf School in Stuttgart and nowadays is widespread in many countries all over the world. Empirical research, however, has been rare until the early nineties and Waldorf education has not been discussed within educational science so far. This has changed during the last decades. This article reviews the results of surveys during the last 20 years and is mainly focused on German Waldorf Schools, because most investigations have been done in thi...

  15. Empirical distribution function under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 45, č. 5 (2011), s. 497-508 ISSN 0233-1888 Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords : Robustness * Convergence * Empirical distribution * Heteroscedasticity Subject RIV: BB - Applied Statistics , Operational Research Impact factor: 0.724, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-0365534.pdf

  16. Expert opinion vs. empirical evidence

    OpenAIRE

    Herman, Rod A; Raybould, Alan

    2014-01-01

    Expert opinion is often sought by government regulatory agencies when there is insufficient empirical evidence to judge the safety implications of a course of action. However, it can be reckless to continue following expert opinion when a preponderance of evidence is amassed that conflicts with this opinion. Factual evidence should always trump opinion in prioritizing the information that is used to guide regulatory policy. Evidence-based medicine has seen a dramatic upturn in recent years sp...

  17. Physiological parameters

    International Nuclear Information System (INIS)

    Natera, E.S.

    1998-01-01

    The physiological characteristics of man depend on the intake, metabolism and excretion of stable elements from food, water, and air. The physiological behavior of natural radionuclides and of radionuclides from nuclear weapons testing and from the utilization of nuclear energy is believed to follow the pattern of stable elements. Hence information on the normal physiological processes occurring in the human body plays an important role in the assessment of the radiation dose received by man. Two important physiological parameters needed for internal dose determination are the pulmonary function and the water balance. In the Coordinated Research Programme on the characterization of Asian population, five participants submitted data on these physiological characteristics - China, India, Japan, Philippines and Viet Nam. During the CRP, data on other pertinent characteristics, such as physical and dietary characteristics, were simultaneously being collected. Hence, the information on the physiological characteristics alone, coming from the five participants, was not complete and is probably not sufficient to establish standard values for the Reference Asian Man. Nonetheless, the data collected is a valuable contribution to this research programme

  18. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 A, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)n-NH2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Out of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems most promising for identifying major secondary structure types: α-helix, β-strand, left-handed helix (αD), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein and thus should provide useful constraints for structure determinations of proteins

  19. Two concepts of empirical ethics.

    Science.gov (United States)

    Parker, Malcolm

    2009-05-01

    The turn to empirical ethics answers two calls. The first is for a richer account of morality than that afforded by bioethical principlism, which is cast as excessively abstract and thin on the facts. The second is for the facts in question to be those of human experience and not some other, unworldly realm. Empirical ethics therefore promises a richer naturalistic ethics, but in fulfilling the second call it often fails to heed the metaethical requirements related to the first. Empirical ethics risks losing the normative edge which necessarily characterizes the ethical, by failing to account for the nature and the logic of moral norms. I sketch a naturalistic theory, teleological expressivism (TE), which negotiates the naturalistic fallacy by providing a more satisfactory means of taking into account facts and research data with ethical implications. The examples of informed consent and the euthanasia debate are used to illustrate the superiority of this approach, and the problems consequent on including the facts in the wrong kind of way.

  20. An empirical formula for scattered neutron components in fast neutron radiography

    International Nuclear Information System (INIS)

    Dou Haifeng; Tang Bin

    2011-01-01

    Scattered neutrons are one of the key factors that may degrade the images of fast neutron radiography. In this paper, a mathematical model for scattered neutrons is developed for a cylindrical sample, and an empirical formula for the scattered component is obtained. The parameters in the empirical formula are determined by curve fitting to results given by Monte Carlo methods, which confirms the soundness of the empirical formula. The curve-fitted parameters of common materials such as 6LiD are given. (authors)
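
    The fitting step described above can be illustrated generically. The model form, numbers and Monte Carlo stand-in below are hypothetical, chosen only to show how empirical-formula parameters are obtained from simulated data by curve fitting; they are not the formula or values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def scatter_model(t, a, b):
    """Assumed empirical form for the scattered component vs. thickness."""
    return a * (1.0 - np.exp(-b * t))

thickness = np.linspace(0.5, 10.0, 20)   # sample thickness (cm), synthetic grid
rng = np.random.default_rng(0)
# Stand-in for Monte Carlo results: the model plus a little statistical noise
mc = scatter_model(thickness, 0.35, 0.4) + rng.normal(0.0, 0.005, 20)

popt, _ = curve_fit(scatter_model, thickness, mc, p0=(0.3, 0.3))
print(popt)  # fitted (a, b), close to the 0.35 and 0.4 used to generate the data
```

    In the paper's workflow the `mc` array would come from actual Monte Carlo transport runs, and the fitted parameters would be tabulated per material.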

  1. Empirical Phenomenology: A Qualitative Research Approach (The ...

    African Journals Online (AJOL)

    Empirical Phenomenology: A Qualitative Research Approach (The Cologne Seminars) ... and practical application of empirical phenomenology in social research. ... and considers its implications for qualitative methods such as interviewing ...

  2. Social opportunity cost of capital: empirical estimates

    Energy Technology Data Exchange (ETDEWEB)

    Townsend, S.

    1978-02-01

    This report develops estimates of the social-opportunity cost of public capital. The private and social costs of capital are found to diverge primarily because of the effects of corporate and personal income taxes. Following Harberger, the social-opportunity cost of capital is approximated by a weighted average of the returns to different classes of savers and investors where the weights are the flows of savings or investments in each class multiplied by the relevant elasticity. Estimates of these parameters are obtained and the social-opportunity cost of capital is determined to be in the range of 6.2 to 10.8%, depending upon the parameter values used. Uncertainty is found to affect the social-opportunity cost of capital in two ways. First, some allowance must be made for the chance of failure or at least of not realizing claims of a project's proponents. Second, a particular government project will change the expected variability of the returns to the government's entire portfolio of projects. In the absence of specific information about each project, the use of the economy-wide average default and risk adjustments is suggested. These are included in the empirical estimates reported. International capital markets make available private capital, the price of which is not distorted by the U.S. tax system. The inclusion of foreign sources slightly reduces the social-opportunity cost of capital. 21 references.
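
    The weighted average described above is simple enough to state concretely. All numbers below are invented for illustration; only the structure (weights equal to flows times elasticities) follows the report's description.

```python
# Harberger-style weighted average: the social opportunity cost of capital
# as a weighted mean of returns across saver/investor classes, weighted by
# displaced flows times the relevant elasticities. Figures are hypothetical.
returns = [0.04, 0.08, 0.12]       # pre-tax returns of three classes
flows = [100.0, 50.0, 30.0]        # savings/investment flows displaced
elasticities = [1.0, 0.8, 0.5]     # relevant supply/demand elasticities

weights = [f * e for f, e in zip(flows, elasticities)]
soc = sum(w * r for w, r in zip(weights, returns)) / sum(weights)
print(f"social opportunity cost of capital: {soc:.2%}")
```

    The report's 6.2 to 10.8% range comes from varying exactly these kinds of parameter values.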

  3. An empirical comparison of Item Response Theory and Classical Test Theory

    Directory of Open Access Journals (Sweden)

    Špela Progar

    2008-11-01

    Full Text Available Based on nonlinear models between the measured latent variable and the item response, item response theory (IRT) enables independent estimation of item and person parameters and local estimation of measurement error. These properties of IRT are also the main theoretical advantages of IRT over classical test theory (CTT). Empirical evidence, however, has often failed to discover consistent differences between IRT and CTT parameters and between invariance measures of CTT and IRT parameter estimates. In this empirical study a real data set from the Third International Mathematics and Science Study (TIMSS) 1995 was used to address the following questions: (1) How comparable are CTT- and IRT-based item and person parameters? (2) How invariant are CTT- and IRT-based item parameters across different participant groups? (3) How invariant are CTT- and IRT-based item and person parameters across different item sets? The findings indicate that the CTT and IRT item/person parameters are very comparable, that the CTT and IRT item parameters show similar invariance properties when estimated across different groups of participants, that the IRT person parameters are more invariant across different item sets, and that the CTT item parameters are at least as invariant across different item sets as the IRT item parameters. The results furthermore demonstrate that, with regard to the invariance property, IRT item/person parameters are in general empirically superior to CTT parameters, but only if the appropriate IRT model is used for modelling the data.
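
    The CTT/IRT contrast in the abstract can be made concrete with simulated responses: the CTT item difficulty is simply the proportion correct, while a crude Rasch-style (one-parameter IRT) difficulty is its negative logit. Everything below is simulated and does not reproduce the TIMSS analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0.0, 1.0, 500)          # person parameters (theta)
difficulty = np.array([-1.0, 0.0, 1.0])      # item parameters (b)

# Rasch response probabilities and simulated 0/1 responses
p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random(p.shape) < p).astype(int)

ctt_p = responses.mean(axis=0)               # CTT difficulty: proportion correct
rasch_b = -np.log(ctt_p / (1.0 - ctt_p))     # crude IRT difficulty estimate
print(ctt_p, rasch_b)  # harder items: lower proportion correct, larger b
```

    The two difficulty scales are monotonically related, which is one reason empirical comparisons of CTT and IRT item parameters, as in the study above, tend to find them highly comparable.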

  4. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extraction of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model, including noise, developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing the bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.

  5. Science and the British Empire.

    Science.gov (United States)

    Harrison, Mark

    2005-03-01

    The last few decades have witnessed a flowering of interest in the history of science in the British Empire. This essay aims to provide an overview of some of the most important work in this area, identifying interpretative shifts and emerging themes. In so doing, it raises some questions about the analytical framework in which colonial science has traditionally been viewed, highlighting interactions with indigenous scientific traditions and the use of network-based models to understand scientific relations within and beyond colonial contexts.

  6. Empirical logic and tensor products

    International Nuclear Information System (INIS)

    Foulis, D.J.; Randall, C.H.

    1981-01-01

    In our work we are developing a formalism called empirical logic to support a generalization of conventional statistics; the resulting generalization is called operational statistics. We are not attempting to develop or advocate any particular physical theory; rather we are formulating a precise 'language' in which such theories can be expressed, compared, evaluated, and related to laboratory experiments. We believe that only in such a language can the connections between real physical procedures (operations) and physical theories be made explicit and perspicuous. (orig./HSI)

  7. Gravitation theory - Empirical status from solar system experiments.

    Science.gov (United States)

    Nordtvedt, K. L., Jr.

    1972-01-01

    Review of historical and recent experiments which argue for a post-Newtonian relativistic theory of gravity. The topics include the foundational experiments, metric theories of gravity, experiments designed to differentiate among the metric theories, and tests of Machian concepts of gravity. It is shown that the metric field for any metric theory can be specified by a series of potential terms with several parameters. It is pointed out that the empirical results available to date yield values of the parameters which are consistent with the predictions of Einstein's general relativity.
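
    The parameterized-metric idea can be illustrated numerically: in the parameterized post-Newtonian (PPN) framework, light deflection at the solar limb scales as (1+γ)/2 times the general-relativity value, where γ is one of the metric parameters of the kind the review discusses. A minimal sketch with standard physical constants (not values from the article):

```python
import math

# PPN parameter gamma scales the light-bending prediction; gamma = 1
# recovers Einstein's value of about 1.75 arcsec at the solar limb.
GM_SUN = 1.32712440018e20   # m^3/s^2, solar gravitational parameter
C = 2.99792458e8            # m/s, speed of light
R_SUN = 6.957e8             # m, solar radius

def deflection_arcsec(gamma):
    """Deflection of a light ray grazing the Sun, in arcseconds."""
    rad = (1 + gamma) / 2 * 4 * GM_SUN / (C**2 * R_SUN)
    return math.degrees(rad) * 3600

print(deflection_arcsec(1.0))   # ~1.75 (general relativity)
print(deflection_arcsec(0.0))   # ~0.87 (half the GR value)
```

    Measuring the deflection therefore pins down γ directly, which is how the solar-system experiments surveyed above differentiate among metric theories.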

  8. Empirical reality, empirical causality, and the measurement problem

    International Nuclear Information System (INIS)

    d'Espagnat, B.

    1987-01-01

    Does physics describe anything that can meaningfully be called independent reality, or is it merely operational? Most physicists implicitly favor an intermediate standpoint, which takes quantum physics into account, but which nevertheless strongly holds fast to quite strictly realistic ideas about apparently obvious facts concerning the macro-objects. Part 1 of this article, which is a survey of recent measurement theories, shows that, when made explicit, the standpoint in question cannot be upheld. Part 2 brings forward a proposal for making minimal changes to this standpoint in such a way as to remove such objections. The empirical reality thus constructed is a notion that, to some extent, does ultimately refer to the human means of apprehension and of data processing. It nevertheless cannot be said that it reduces to a mere name just labelling a set of recipes that never fail. It is shown that their usual notion of macroscopic causality must be endowed with similar features

  9. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  10. Expert opinion vs. empirical evidence

    Science.gov (United States)

    Herman, Rod A; Raybould, Alan

    2014-01-01

    Expert opinion is often sought by government regulatory agencies when there is insufficient empirical evidence to judge the safety implications of a course of action. However, it can be reckless to continue following expert opinion when a preponderance of evidence is amassed that conflicts with this opinion. Factual evidence should always trump opinion in prioritizing the information that is used to guide regulatory policy. Evidence-based medicine has seen a dramatic upturn in recent years spurred by examples where evidence indicated that certain treatments recommended by expert opinions increased death rates. We suggest that scientific evidence should also take priority over expert opinion in the regulation of genetically modified crops (GM). Examples of regulatory data requirements that are not justified based on the mass of evidence are described, and it is suggested that expertise in risk assessment should guide evidence-based regulation of GM crops. PMID:24637724

  11. Empirical Scaling Laws of Neutral Beam Injection Power in HL-2A Tokamak

    International Nuclear Information System (INIS)

    Cao Jian-Yong; Wei Hui-Ling; Liu He; Yang Xian-Fu; Zou Gui-Qing; Yu Li-Ming; Li Qing; Luo Cui-Wen; Pan Yu-Dong; Jiang Shao-Feng; Lei Guang-Jiu; Li Bo; Rao Jun; Duan Xu-Ru

    2015-01-01

    We present an experimental method to obtain neutral beam injection (NBI) power scaling laws in terms of the operating parameters of the NBI system on HL-2A, including the beam divergence angle, the beam power transmission efficiency, and the neutralization efficiency. With the empirical scaling laws, the injected power can be estimated for every shot in real time, so that important parameters such as the energy confinement time can be obtained precisely. Simulation results from the tokamak simulation code (TSC) show that the evolution of the plasma parameters is in good agreement with the experimental results when the NBI power from the empirical scaling law is used. (paper)

  12. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution by the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained by the backward transformation of the standard normal ...
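
    As a hedged illustration of the forward step, the sketch below fits a Johnson SU curve to a synthetic skewed sample and applies the transformation z = gamma + delta * asinh((x - xi) / lambda). Note that it uses maximum likelihood via SciPy rather than the percentile-based estimation described in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.6, size=2000)  # skewed synthetic sample

# Fit Johnson SU by maximum likelihood; in classical Johnson notation the
# shape parameters are a = gamma, b = delta, with location xi and scale lambda
a, b, loc, scale = stats.johnsonsu.fit(x)

# Forward Johnson transformation: the result should be approximately N(0, 1)
z = a + b * np.arcsinh((x - loc) / scale)
print(round(z.mean(), 3), round(z.std(), 3))
```

    At the maximum likelihood solution the transformed sample has mean approximately 0 and standard deviation approximately 1, which is the normality property the article exploits.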

  13. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as computational terms in an embodied conversational agent system by employing an empirical and theoretical approach. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures in different social relationships. First, we introduce theories of social and cultural characteristics. Then, we carried out a corpus analysis of human interaction in two cultures in two different social situations and extracted empirical data. Finally, by integrating socio-cultural characteristics with empirical data, we establish a parameterized network model that generates culture-specific non-verbal expressions in different social relationships.

  14. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be classified as physical, mathematical, or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. Conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are ungauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so we sought a combination of rainfall-runoff models that makes it possible to generate our own data and use them to estimate the flow. The estimation of design floods illustrates the difficulties facing the hydrologist when constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was established to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method proved to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes, ...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes, and design floods for different return periods.
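
    A minimal sketch of the frequency-analysis step on which such a model rests, assuming a synthetic annual-maximum discharge series and a Gumbel (EV1) distribution; the abstract does not specify the distribution actually used, so this is an illustration, not the authors' method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical annual-maximum discharge series (m^3/s); a real study
# would use gauged or model-generated data
q_annual_max = stats.gumbel_r.rvs(loc=120, scale=40, size=60, random_state=rng)

# Fit the Gumbel (EV1) distribution commonly used in flood frequency analysis
loc, scale = stats.gumbel_r.fit(q_annual_max)

# Design flood for return period T = quantile at non-exceedance prob. 1 - 1/T
design_floods = {T: stats.gumbel_r.ppf(1 - 1 / T, loc, scale) for T in (10, 50, 100)}
for T, q in design_floods.items():
    print(f"T = {T:>3} yr: Q = {q:.0f} m^3/s")
```

    The same quantile construction yields design floods for any return period once a distribution has been fitted to the annual-maximum series.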

  15. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  16. Empirical training for conditional random fields

    NARCIS (Netherlands)

    Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas

    2013-01-01

    In this paper (Zhu et al., 2013), we present a practically scalable training method for CRFs called Empirical Training (EP). We show that the standard training with unregularized log likelihood can have many maximum likelihood estimations (MLEs). Empirical training has a unique closed form MLE

  17. Intermodal connectivity in Europe, an empirical exploration

    NARCIS (Netherlands)

    de Langen, P.W.; Lases Figueroa, D.M.; van Donselaar, K.H.; Bozuwa, J.

    2017-01-01

    In this paper we analyse the intermodal connectivity in Europe. The empirical analysis is to our knowledge the first empirical analysis of intermodal connections, and is based on a comprehensive database of intermodal connections in Europe. The paper focuses on rail and barge services, as they are

  18. Empirical Moral Philosophy and Teacher Education

    Science.gov (United States)

    Schjetne, Espen; Afdal, Hilde Wågsås; Anker, Trine; Johannesen, Nina; Afdal, Geir

    2016-01-01

    In this paper, we explore the possible contributions of empirical moral philosophy to professional ethics in teacher education. We argue that it is both possible and desirable to connect knowledge of how teachers empirically do and understand professional ethics with normative theories of teachers' professional ethics. Our argument is made in…

  19. Empirical ethics, context-sensitivity, and contextualism.

    Science.gov (United States)

    Musschenga, Albert W

    2005-10-01

    In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.

  20. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2011-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-a`-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and

  1. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2010-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-à-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and how

  2. The emerging empirics of evolutionary economic geography.

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2011-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-a`-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and

  3. The Role of Empirical Research in Bioethics

    Science.gov (United States)

    Kon, Alexander A.

    2010-01-01

    There has long been tension between bioethicists whose work focuses on classical philosophical inquiry and those who perform empirical studies on bioethical issues. While many have argued that empirical research merely illuminates current practices and cannot inform normative ethics, others assert that research-based work has significant implications for refining our ethical norms. In this essay, I present a novel construct for classifying empirical research in bioethics into four hierarchical categories: Lay of the Land, Ideal Versus Reality, Improving Care, and Changing Ethical Norms. Through explaining these four categories and providing examples of publications in each stratum, I define how empirical research informs normative ethics. I conclude by demonstrating how philosophical inquiry and empirical research can work cooperatively to further normative ethics. PMID:19998120

  4. A comparison between two powder compaction parameters of plasticity: the effective medium A parameter and the Heckel 1/K parameter.

    Science.gov (United States)

    Mahmoodi, Foad; Klevan, Ingvild; Nordström, Josefina; Alderborn, Göran; Frenning, Göran

    2013-09-10

    The purpose of the research was to introduce a procedure to derive a powder compression parameter (EM A) representing particle yield stress using an effective medium equation and to compare the EM A parameter with the Heckel compression parameter (1/K). 16 pharmaceutical powders, including drugs and excipients, were compressed in a materials testing instrument and powder compression profiles were derived using the EM and Heckel equations. The compression profiles thus obtained could be sub-divided into regions among which one region was approximately linear and from this region, the compression parameters EM A and 1/K were calculated. A linear relationship between the EM A parameter and the 1/K parameter was obtained with a strong correlation. The slope of the plot was close to 1 (0.84) and the intercept of the plot was small in comparison to the range of parameter values obtained. The relationship between the theoretical EM A parameter and the 1/K parameter supports the interpretation of the empirical Heckel parameter as being a measure of yield stress. It is concluded that the combination of Heckel and EM equations represents a suitable procedure to derive a value of particle plasticity from powder compression data. Copyright © 2013 Elsevier B.V. All rights reserved.
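
    The Heckel analysis itself is a linear fit: ln(1/(1 - D)) = K*P + A, where D is the relative density, P the applied pressure, and the 1/K parameter (the mean yield pressure, commonly taken as about three times the yield stress) is the reciprocal slope of the linear region. A sketch with synthetic data generated to be exactly Heckel-linear; the constants are assumptions for illustration, not values from the study.

```python
import numpy as np

# Synthetic compression profile generated to be exactly Heckel-linear:
# ln(1/(1 - D)) = K*P + A   (D = relative density, P = pressure in MPa)
K_true, A_true = 0.004, 0.40      # assumed constants, for illustration only
P = np.linspace(20, 200, 10)
D = 1.0 - np.exp(-(K_true * P + A_true))

# Heckel analysis: linear fit of ln(1/(1 - D)) against P over the linear region
y = np.log(1.0 / (1.0 - D))
K, A = np.polyfit(P, y, 1)        # slope K, intercept A

heckel_1_over_K = 1.0 / K         # the 1/K plasticity parameter (mean yield pressure)
print(round(heckel_1_over_K, 1))  # recovers 1/0.004 = 250.0 MPa
```

    With real powder data, only the approximately linear portion of the profile would be fitted, as the study describes.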

  5. Empirical essays on energy economics

    Energy Technology Data Exchange (ETDEWEB)

    Thoenes, Stefan

    2013-06-13

    The main part of this thesis consists of three distinct essays that empirically analyze economic issues related to energy markets in the United States and Europe. The first chapter provides an introduction and discusses the motivation for the different analyses pursued in this thesis. The second chapter examines attention effects in the market for hybrid vehicles. We show that local media coverage, gasoline price changes and unprecedented record gasoline prices have a significant impact on the consumers' attention. As attention is not directly observable, we analyze online search behavior as a proxy for the revealed consumer attention. Our study is based on a unique weekly panel dataset for 19 metropolitan areas in the US. Additionally, we use monthly state-level panel data to show that the adoption rate of hybrid vehicles is robustly related to our measure of attention. Our results show that the consumers' attention fluctuates strongly and systematically. The third chapter shows how the effect of fuel prices varies with the level of electricity demand. It analyzes the relationship between daily prices of electricity, natural gas and carbon emission allowances with a semiparametric varying smooth coefficient cointegration model. This model is used to analyze the market impact of the nuclear moratorium by the German Government in March 2011. Futures prices of electricity, natural gas and emission allowances are used to show that the market efficiently accounts for the suspended capacity and correctly expects that several nuclear plants will not be switched on after the moratorium. In the fourth chapter, we develop a structural vector autoregressive model (VAR) for the German natural gas market. In particular, we illustrate the usefulness of our approach by disentangling the effects of different fundamental influences during four specific events: The financial crisis starting in 2008, the Russian-Ukrainian gas dispute in January 2009, the Libyan civil war

  6. Empirical essays on energy economics

    International Nuclear Information System (INIS)

    Thoenes, Stefan

    2013-01-01

    The main part of this thesis consists of three distinct essays that empirically analyze economic issues related to energy markets in the United States and Europe. The first chapter provides an introduction and discusses the motivation for the different analyses pursued in this thesis. The second chapter examines attention effects in the market for hybrid vehicles. We show that local media coverage, gasoline price changes and unprecedented record gasoline prices have a significant impact on the consumers' attention. As attention is not directly observable, we analyze online search behavior as a proxy for the revealed consumer attention. Our study is based on a unique weekly panel dataset for 19 metropolitan areas in the US. Additionally, we use monthly state-level panel data to show that the adoption rate of hybrid vehicles is robustly related to our measure of attention. Our results show that the consumers' attention fluctuates strongly and systematically. The third chapter shows how the effect of fuel prices varies with the level of electricity demand. It analyzes the relationship between daily prices of electricity, natural gas and carbon emission allowances with a semiparametric varying smooth coefficient cointegration model. This model is used to analyze the market impact of the nuclear moratorium by the German Government in March 2011. Futures prices of electricity, natural gas and emission allowances are used to show that the market efficiently accounts for the suspended capacity and correctly expects that several nuclear plants will not be switched on after the moratorium. In the fourth chapter, we develop a structural vector autoregressive model (VAR) for the German natural gas market. 
In particular, we illustrate the usefulness of our approach by disentangling the effects of different fundamental influences during four specific events: The financial crisis starting in 2008, the Russian-Ukrainian gas dispute in January 2009, the Libyan civil war in 2011 as

  7. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate together continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdepence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  8. Empirical Investigation of Industrial Management

    Directory of Open Access Journals (Sweden)

    Elenko Zahariev

    2014-07-01

    Full Text Available The paper is devoted to an aspect in the sphere of management – business priorities of industrial management in the XXI century. In modern times the actuality of the treated problems is mainly rooted in the necessities of real management practice in industrial organizations and the need for theoretical and applied knowledge to be offered to that practice which would allow it, methodologically right and methodically correct, to implement the corresponding changes in the management of a concrete organization. Objects of analyses and evaluations are some fragmented approbations of theses using the corresponding instruments. The characteristic features of the organizations' profiles and of the persons interviewed who participated in the investigation are summarized. The determining approaches for Bulgarian organizations are considered too. On the basis of the critical analyses the fundamental tasks are drawn which are inherent to contemporary industrial managers. Attention is paid to key management functions for an effective managerial process. An analysis of managers reaching the best results in industrial management is presented, as well as when they are reached. Outlined are also specific peculiarities of industrial management in the Republic of Bulgaria and parameters of the level of productiveness in conditions of business globalization and priority forms in marketing of the ready product/service in the XXI century. The results of the launched idea for the necessity to create a new International management architecture (NIMA) are determined – structure and structure-defining parameters. The results of the investigation of main business priorities in industrial management are commented on, as well as expected problems in the process of functioning of industrial organizations in the XXI century. At the end the corresponding conclusions are made in respect to the techniques used in determining the effectiveness of industrial management in Bulgarian organizations.

  9. Statistical microeconomics and commodity prices: theory and empirical results.

    Science.gov (United States)

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated to exist in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices using a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied and their auto-correlation for up to 300 days is described by the model to an accuracy of R^2 > 0.90, using only six parameters. © 2015 The Author(s).

  10. Empirical complexities in the genetic foundations of lethal mutagenesis.

    Science.gov (United States)

    Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J

    2013-10-01

    From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias-mutagenesis of virions-but find that it is hampered by other problems. Finally, results that reveal difficulties in the mere interpretation of mutations assayed from double-strand genomes are derived. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than from a mismatch between the empirical setup and model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.

  11. Stellar Parameters for Trappist-1

    Science.gov (United States)

    Van Grootel, Valérie; Fernandes, Catarina S.; Gillon, Michael; Jehin, Emmanuel; Manfroid, Jean; Scuflaire, Richard; Burgasser, Adam J.; Barkaoui, Khalid; Benkhaldoun, Zouhair; Burdanov, Artem; Delrez, Laetitia; Demory, Brice-Olivier; de Wit, Julien; Queloz, Didier; Triaud, Amaury H. M. J.

    2018-01-01

    TRAPPIST-1 is an ultracool dwarf star transited by seven Earth-sized planets, for which thorough characterization of atmospheric properties, surface conditions encompassing habitability, and internal compositions is possible with current and next-generation telescopes. Accurate modeling of the star is essential to achieve this goal. We aim to obtain updated stellar parameters for TRAPPIST-1 based on new measurements and evolutionary models, compared to those used in discovery studies. We present a new measurement for the parallax of TRAPPIST-1, 82.4 ± 0.8 mas, based on 188 epochs of observations with the TRAPPIST and Liverpool Telescopes from 2013 to 2016. This revised parallax yields an updated luminosity of {L}* =(5.22+/- 0.19)× {10}-4 {L}ȯ , which is very close to the previous estimate but almost two times more precise. We next present an updated estimate for TRAPPIST-1 stellar mass, based on two approaches: mass from stellar evolution modeling, and empirical mass derived from dynamical masses of equivalently classified ultracool dwarfs in astrometric binaries. We combine them using a Monte-Carlo approach to derive a semi-empirical estimate for the mass of TRAPPIST-1. We also derive estimate for the radius by combining this mass with stellar density inferred from transits, as well as an estimate for the effective temperature from our revised luminosity and radius. Our final results are {M}* =0.089+/- 0.006 {M}ȯ , {R}* =0.121+/- 0.003 {R}ȯ , and {T}{eff} = 2516 ± 41 K. Considering the degree to which the TRAPPIST-1 system will be scrutinized in coming years, these revised and more precise stellar parameters should be considered when assessing the properties of TRAPPIST-1 planets.
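
    The reported effective temperature follows from the revised luminosity and radius via the Stefan-Boltzmann law, T_eff = T_Sun * (L/L_Sun)^(1/4) * (R/R_Sun)^(-1/2). A Monte Carlo propagation sketch using the paper's central values; the Gaussian-error assumption and sample size are ours, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Reported values in solar units (Gaussian errors assumed here):
# L = (5.22 +/- 0.19) x 10^-4 L_Sun, R = 0.121 +/- 0.003 R_Sun
L = rng.normal(5.22e-4, 0.19e-4, n)
R = rng.normal(0.121, 0.003, n)

# Stefan-Boltzmann law in solar units: Teff = T_Sun * L**(1/4) / sqrt(R)
T_SUN = 5772.0  # IAU nominal solar effective temperature, K
teff = T_SUN * L ** 0.25 / np.sqrt(R)

print(f"Teff = {teff.mean():.0f} +/- {teff.std():.0f} K")
```

    The propagated central value lands near the paper's 2516 ± 41 K; the small offset reflects the adopted solar constants rather than the method.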

  12. Blast wave parameters at diminished ambient pressure

    Science.gov (United States)

    Silnikov, M. V.; Chernyshov, M. V.; Mikhaylin, A. I.

    2015-04-01

    The relation between the blast wave parameters resulting from the detonation of a condensed high explosive (HE) charge and the surrounding gas (air) pressure has been studied. Blast wave pressure and impulse differences in the compression and rarefaction phases, which traditionally determine the damaging effect of an explosion, have been analyzed. The effect of the initial pressure on the post-explosion quasi-static component of the blast load has been investigated. The analysis is based on empirical relations between blast parameters and non-dimensional similarity criteria. The results can be directly applied to flying vehicle (aircraft or spacecraft) blast safety analysis.

  13. Introducing Empirical Exercises into Principles of Economics.

    Science.gov (United States)

    McGrath, Eileen L.; Tiemann, Thomas K.

    1985-01-01

    A rationale for requiring undergraduate students to become familiar with the empirical side of economics is presented, and seven exercises that can be used in an introductory course are provided. (Author/RM)

  14. Review essay: empires, ancient and modern.

    Science.gov (United States)

    Hall, John A

    2011-09-01

    This essay draws attention to two books on empires by historians that deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth-century empires, noting that their demise was caused most of all by the failure of balance-of-power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.

  15. Inland empire logistics GIS mapping project.

    Science.gov (United States)

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  16. Teaching Empirical Software Engineering Using Expert Teams

    DEFF Research Database (Denmark)

    Kuhrmann, Marco

    2017-01-01

    Empirical software engineering aims at making software engineering claims measurable, i.e., to analyze and understand phenomena in software engineering and to evaluate software engineering approaches and solutions. Due to the involvement of humans and the multitude of fields for which software is crucial, software engineering is considered hard to teach. Yet, empirical software engineering increases this difficulty by adding the scientific method as an extra dimension. In this paper, we present a Master-level course on empirical software engineering in which different empirical instruments...... an extra specific expertise that they offer as a service to other teams, thus fostering cross-team collaboration. The paper outlines the general course setup and topics addressed, and it provides initial lessons learned.

  17. Principles Involving Marketing Policies: An Empirical Assessment

    OpenAIRE

    JS Armstrong; Randall L. Schultz

    2005-01-01

    We examined nine marketing textbooks, published since 1927, to see if they contained useful marketing principles. Four doctoral students found 566 normative statements about pricing, product, place, or promotion in these texts. None of these statements were supported by empirical evidence. Four raters agreed on only twenty of these 566 statements as providing meaningful principles. Twenty marketing professors rated whether the twenty meaningful principles were correct, supported by empirical...

  18. Sources of Currency Crisis: An Empirical Analysis

    OpenAIRE

    Weber, Axel A.

    1997-01-01

    Two types of currency crisis models coexist in the literature: first generation models view speculative attacks as being caused by economic fundamentals which are inconsistent with a given parity. Second generation models claim self-fulfilling speculation as the main source of a currency crisis. Recent empirical research in international macroeconomics has attempted to distinguish between the sources of currency crises. This paper adds to this literature by proposing a new empirical approach ...

  19. Agency Theory and Franchising: Some Empirical Results

    OpenAIRE

    Francine Lafontaine

    1992-01-01

    This article provides an empirical assessment of various agency-theoretic explanations for franchising, including risk sharing, one-sided moral hazard, and two-sided moral hazard. The empirical models use proxies for factors such as risk, moral hazard, and franchisors' need for capital to explain both franchisors' decisions about the terms of their contracts (royalty rates and up-front franchise fees) and the extent to which they use franchising. In this article, I exploit several new sources...

  20. Gun Laws and Crime: An Empirical Assessment

    OpenAIRE

    Matti Viren

    2012-01-01

    This paper deals with the effect of gun laws on crime. Several empirical analyses are carried out to investigate the relationship between five different crime rates and alternative law variables. The tests are based on cross-section data from US states. Three different law variables are used in the analysis, together with a set of control variables for income, poverty, unemployment and ethnic background of the population. Empirical analysis does not lend support to the notion that crime laws would...

  1. Empirical direction in design and analysis

    CERN Document Server

    Anderson, Norman H

    2001-01-01

    The goal of Norman H. Anderson's new book is to help students develop skills of scientific inference. To accomplish this he organized the book around the "Experimental Pyramid"--six levels that represent a hierarchy of considerations in empirical investigation--conceptual framework, phenomena, behavior, measurement, design, and statistical inference. To facilitate conceptual and empirical understanding, Anderson de-emphasizes computational formulas and null hypothesis testing. Other features include: *emphasis on visual inspection as a basic skill in experimental analysis to help student

  2. An Empirical Taxonomy of Crowdfunding Intermediaries

    OpenAIRE

    Haas, Philipp; Blohm, Ivo; Leimeister, Jan Marco

    2014-01-01

    Due to the recent popularity of crowdfunding, a broad magnitude of crowdfunding intermediaries has emerged, while research on crowdfunding intermediaries has been largely neglected. As a consequence, existing classifications of crowdfunding intermediaries are conceptual, lack theoretical grounding, and are not empirically validated. Thus, we develop an empirical taxonomy of crowdfunding intermediaries, which is grounded in the theories of two-sided markets and financial intermediation. Integr...

  3. An empirical analysis of Diaspora bonds

    OpenAIRE

    AKKOYUNLU, Şule; STERN, Max

    2018-01-01

    Abstract. This study is the first to investigate theoretically and empirically the determinants of Diaspora Bonds for eight developing countries (Bangladesh, Ethiopia, Ghana, India, Lebanon, Pakistan, the Philippines, and Sri-Lanka) and one developed country - Israel for the period 1951 and 2008. Empirical results are consistent with the predictions of the theoretical model. The most robust variables are the closeness indicator and the sovereign rating, both on the demand-side. The spread is ...

  4. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi...
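    The log-ratio statistic mentioned in this record is easiest to see in the ordinary (non-blockwise) empirical likelihood for the mean of i.i.d. data, which the following sketch implements; BEL extends this idea to dependent data via data blocks, which are not shown here. The function name and the bisection tolerance are illustrative choices, not from the paper.

    ```python
    import math

    def el_log_ratio(x, mu):
        """-2 log empirical likelihood ratio for the mean (ordinary EL, not BEL).

        Solves sum(c_i / (1 + lam*c_i)) = 0 for the Lagrange multiplier lam,
        where c_i = x_i - mu, by bisection (the function is strictly decreasing
        in lam), then returns 2 * sum(log(1 + lam*c_i)).
        """
        c = [xi - mu for xi in x]
        if min(c) >= 0 or max(c) <= 0:
            return float("inf")  # mu lies outside the convex hull of the data
        # lam must keep every 1 + lam*c_i strictly positive
        lo = -1 / max(c) + 1e-9
        hi = -1 / min(c) - 1e-9

        def g(lam):
            return sum(ci / (1 + lam * ci) for ci in c)

        for _ in range(200):
            mid = (lo + hi) / 2
            if g(mid) > 0:
                lo = mid
            else:
                hi = mid
        lam = (lo + hi) / 2
        return 2 * sum(math.log(1 + lam * ci) for ci in c)

    # At the sample mean the statistic is ~0; it grows as mu moves away.
    print(round(el_log_ratio([1.0, 2.0, 3.0, 4.0, 5.0], 3.0), 6))
    print(round(el_log_ratio([1.0, 2.0, 3.0, 4.0, 5.0], 4.0), 3))
    ```

    Under standard conditions this statistic is asymptotically chi-square with 1 degree of freedom; the record's point is that the non-standard blocking rule changes that limit law.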

  5. Dielectric response of molecules in empirical tight-binding theory

    Science.gov (United States)

    Boykin, Timothy B.; Vogl, P.

    2002-01-01

    In this paper we generalize our previous approach to electromagnetic interactions within empirical tight-binding theory to encompass molecular solids and isolated molecules. In order to guarantee physically meaningful results, we rederive the expressions for relevant observables using commutation relations appropriate to the finite tight-binding Hilbert space. In carrying out this generalization, we examine in detail the consequences of various prescriptions for the position and momentum operators in tight binding. We show that attempting to fit parameters of the momentum matrix directly generally results in a momentum operator which is incompatible with the underlying tight-binding model, while adding extra position parameters results in numerous difficulties, including the loss of gauge invariance. We have applied our scheme, which we term the Peierls-coupling tight-binding method, to the optical dielectric function of the molecular solid PPP, showing that this approach successfully predicts its known optical properties even in the limit of isolated molecules.

  6. Physico-empirical approach for mapping soil hydraulic behaviour

    Directory of Open Access Journals (Sweden)

    G. D'Urso

    1997-01-01

    Full Text Available Abstract: Pedo-transfer functions are largely used in soil hydraulic characterisation of large areas. The use of physico-empirical approaches for the derivation of soil hydraulic parameters from disturbed samples data can be greatly enhanced if a characterisation performed on undisturbed cores of the same type of soil is available. In this study, an experimental procedure for deriving maps of soil hydraulic behaviour is discussed with reference to its application in an irrigation district (30 km2 in southern Italy. The main steps of the proposed procedure are: i the precise identification of soil hydraulic functions from undisturbed sampling of main horizons in representative profiles for each soil map unit; ii the determination of pore-size distribution curves from larger disturbed sampling data sets within the same soil map unit. iii the calibration of physical-empirical methods for retrieving soil hydraulic parameters from particle-size data and undisturbed soil sample analysis; iv the definition of functional hydraulic properties from water balance output; and v the delimitation of soil hydraulic map units based on functional properties.

  7. Booster parameter list

    International Nuclear Information System (INIS)

    Parsa, Z.

    1986-10-01

    The AGS Booster is designed to be an intermediate synchrotron injector for the AGS, capable of accelerating protons from 200 MeV to 1.5 GeV. The parameters listed include beam and operational parameters and lattice parameters, as well as parameters pertaining to the accelerator's magnets, vacuum system, radio frequency acceleration system, and the tunnel. 60 refs., 41 figs

  8. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  9. An update on the "empirical turn" in bioethics: analysis of empirical research in nine bioethics journals.

    Science.gov (United States)

    Wangmo, Tenzin; Hauri, Sirin; Gennet, Eloise; Anane-Sarpong, Evelyn; Provoost, Veerle; Elger, Bernice S

    2018-02-07

    A review of literature published a decade ago noted a significant increase in empirical papers across nine bioethics journals. This study provides an update on the presence of empirical papers in the same nine journals. It first evaluates whether the empirical trend is continuing as noted in the previous study, and second, how it is changing, that is, what are the characteristics of the empirical works published in these nine bioethics journals. A review of the same nine journals (Bioethics; Journal of Medical Ethics; Journal of Clinical Ethics; Nursing Ethics; Cambridge Quarterly of Healthcare Ethics; Hastings Center Report; Theoretical Medicine and Bioethics; Christian Bioethics; and Kennedy Institute of Ethics Journal) was conducted for a 12-year period from 2004 to 2015. The data obtained were analysed descriptively and using a non-parametric Chi-square test. Of the total number of original papers (N = 5567) published in the nine bioethics journals, 18.1% (n = 1007) collected and analysed empirical data. Journal of Medical Ethics and Nursing Ethics led the empirical publications, accounting for 89.4% of all empirical papers. The former published significantly more quantitative papers than qualitative, whereas the latter published more qualitative papers. Our analysis reveals no significant difference (χ2 = 2.857; p = 0.091) between the proportion of empirical papers published in 2004-2009 and 2010-2015. However, the increasing empirical trend has continued in these journals, with the proportion of empirical papers increasing from 14.9% in 2004 to 17.8% in 2015. This study presents the current state of affairs regarding empirical research published in nine bioethics journals. In the quarter century of data that is available about the nine bioethics journals studied in two reviews, the proportion of empirical publications continues to increase, signifying a trend towards empirical research in bioethics. The growing volume is mainly attributable to two
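    The two-period comparison above is a standard chi-square test on a 2x2 table (period x empirical/non-empirical). The sketch below implements Pearson's test for that design; the counts fed to it are hypothetical round numbers chosen only to illustrate the procedure, not the study's data (which gave χ2 = 2.857, p = 0.091).

    ```python
    import math

    def chi2_test_2x2(a, b, c, d):
        """Pearson chi-square statistic and p-value (1 df) for [[a, b], [c, d]]."""
        n = a + b + c + d
        observed = [a, b, c, d]
        expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                    (c + d) * (a + c) / n, (c + d) * (b + d) / n]
        chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
        # survival function of chi-square with 1 df: P(X > chi2) = erfc(sqrt(chi2/2))
        p = math.erfc(math.sqrt(chi2 / 2))
        return chi2, p

    # hypothetical counts per period: (empirical, non-empirical) papers
    chi2, p = chi2_test_2x2(150, 850,    # period 1
                            180, 820)    # period 2
    print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
    ```

    scipy.stats.chi2_contingency would give the same result (without continuity correction); the explicit version keeps the expected-count arithmetic visible.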

  10. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    Science.gov (United States)

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  11. Current State of History of Psychology Teaching and Education in Argentina: An Empirical Bibliometric Investigation

    Science.gov (United States)

    Fierro, Catriel; Ostrovsky, Ana Elisa; Di Doménico, María Cristina

    2018-01-01

    This study is an empirical analysis of the field's current state in Argentinian universities. Bibliometric parameters were used to retrieve the total listed texts (N = 797) of eight undergraduate history courses' syllabi from Argentina's most populated public university psychology programs. Then, professors in charge of the selected courses (N =…

  12. Fission in Empire-II version 2.19 beta1, Lodi

    International Nuclear Information System (INIS)

    Sin, M.

    2003-01-01

    This is a description of the fission model implemented presently in EMPIRE-II. This package offers two ways to calculate the fission probability selected by parameters in the optional input. Fission barriers, fission transmission coefficients, fission cross sections and fission files are calculated

  13. Extended Analysis of Empirical Citations with Skinner's "Verbal Behavior": 1984-2004

    Science.gov (United States)

    Dixon, Mark R.; Small, Stacey L.; Rosales, Rocio

    2007-01-01

    The present paper comments on and extends the citation analysis of verbal operant publications based on Skinner's "Verbal Behavior" (1957) by Dymond, O'Hora, Whelan, and O'Donovan (2006). Variations in population parameters were evaluated for only those studies that Dymond et al. categorized as empirical. Preliminary results indicate that the…

  14. DIF Testing with an Empirical-Histogram Approximation of the Latent Density for Each Group

    Science.gov (United States)

    Woods, Carol M.

    2011-01-01

    This research introduces, illustrates, and tests a variation of IRT-LR-DIF, called EH-DIF-2, in which the latent density for each group is estimated simultaneously with the item parameters as an empirical histogram (EH). IRT-LR-DIF is used to evaluate the degree to which items have different measurement properties for one group of people versus…

  15. Integrated empirical ethics: loss of normativity?

    Science.gov (United States)

    van der Scheer, Lieke; Widdershoven, Guy

    2004-01-01

    An important discussion in contemporary ethics concerns the relevance of empirical research for ethics. Specifically, two crucial questions pertain, respectively, to the possibility of inferring normative statements from descriptive statements, and to the danger of a loss of normativity if normative statements should be based on empirical research. Here we take part in the debate and defend integrated empirical ethical research: research in which normative guidelines are established on the basis of empirical research and in which the guidelines are empirically evaluated by focusing on observable consequences. We argue that in our concrete example normative statements are not derived from descriptive statements, but are developed within a process of reflection and dialogue that goes on within a specific praxis. Moreover, we show that the distinction in experience between the desirable and the undesirable precludes relativism. The normative guidelines so developed are both critical and normative: they help in choosing the right action and in evaluating that action. Finally, following Aristotle, we plead for a return to the view that morality and ethics are inherently related to one another, and for an acknowledgment of the fact that moral judgments have their origin in experience which is always related to historical and cultural circumstances.

  16. Pluvials, Droughts, Energetics, and the Mongol Empire

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    The success of the Mongol Empire, the largest contiguous land empire the world has ever known, is a historical enigma. At its peak in the late 13th century, the empire influenced areas from Hungary to southern Asia and Persia. Powered by domesticated herbivores, the Mongol Empire grew at the expense of agriculturalists in Eastern Europe, Persia, and China. What environmental factors contributed to the rise of the Mongols? What factors influenced the disintegration of the empire by 1300 CE? Until now, little high resolution environmental data have been available to address these questions. We use tree-ring records of past temperature and water to illuminate the role of energy and water in the evolution of the Mongol Empire. The study of energetics has long been applied to biological and ecological systems but has only recently become a theme in understanding modern coupled natural and human systems (CNH). Because water and energy are tightly linked in human and natural systems, studying their synergies and interactions make it possible to integrate knowledge across disciplines and human history, yielding important lessons for modern societies. We focus on the role of energy and water in the trajectory of an empire, including its rise, development, and demise. Our research is focused on the Orkhon Valley, seat of the Mongol Empire, where recent paleoenvironmental and archeological discoveries allow high resolution reconstructions of past human and environmental conditions for the first time. Our preliminary records indicate that the period 1210-1230 CE, the height of Chinggis Khan's reign, is one of the longest and most consistent pluvials in our tree ring reconstruction of interannual drought. Reconstructed temperature derived from five millennium-long records from subalpine forests in Mongolia document warm temperatures beginning in the early 1200's and ending with a plunge into cold temperatures in 1260. Abrupt cooling in central Mongolia at this time is

  17. Symbiotic empirical ethics: a practical methodology.

    Science.gov (United States)

    Frith, Lucy

    2012-05-01

    Like any discipline, bioethics is a developing field of academic inquiry; and recent trends in scholarship have been towards more engagement with empirical research. This 'empirical turn' has provoked extensive debate over how such 'descriptive' research carried out in the social sciences contributes to the distinctively normative aspect of bioethics. This paper will address this issue by developing a practical research methodology for the inclusion of data from social science studies into ethical deliberation. This methodology will be based on a naturalistic conception of ethical theory that sees practice as informing theory just as theory informs practice - the two are symbiotically related. From this engagement with practice, the ways that such theories need to be extended and developed can be determined. This is a practical methodology for integrating theory and practice that can be used in empirical studies, one that uses ethical theory both to explore the data and to draw normative conclusions. © 2010 Blackwell Publishing Ltd.

  18. Reframing Serial Murder Within Empirical Research.

    Science.gov (United States)

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  19. Wireless and empire geopolitics radio industry and ionosphere in the British Empire 1918-1939

    CERN Document Server

    Anduaga, Aitor

    2009-01-01

    Although the product of consensus politics, the British Empire was based on communications supremacy and the knowledge of the atmosphere. Focusing on science, industry, government, the military, and education, this book studies the relationship between wireless and Empire throughout the interwar period.

  20. Empirical psychology, common sense, and Kant's empirical markers for moral responsibility.

    Science.gov (United States)

    Frierson, Patrick

    2008-12-01

    This paper explains the empirical markers by which Kant thinks that one can identify moral responsibility. After explaining the problem of discerning such markers within a Kantian framework I briefly explain Kant's empirical psychology. I then argue that Kant's empirical markers for moral responsibility--linked to higher faculties of cognition--are not sufficient conditions for moral responsibility, primarily because they are empirical characteristics subject to natural laws. Next, I argue that these markers are not necessary conditions of moral responsibility. Given Kant's transcendental idealism, even an entity that lacks these markers could be free and morally responsible, although as a matter of fact Kant thinks that none are. Given that they are neither necessary nor sufficient conditions, I discuss the status of Kant's claim that higher faculties are empirical markers of moral responsibility. Drawing on connections between Kant's ethical theory and 'common rational cognition' (4:393), I suggest that Kant's theory of empirical markers can be traced to ordinary common sense beliefs about responsibility. This suggestion helps explain both why empirical markers are important and what the limits of empirical psychology are within Kant's account of moral responsibility.

  1. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  2. Evaluation of empirical atmospheric diffusion data

    International Nuclear Information System (INIS)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources

  3. Evaluation of empirical atmospheric diffusion data

    Energy Technology Data Exchange (ETDEWEB)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.

  4. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  5. Problems with Wiredu's Empiricalism (Martin Odei Ajei)

    African Journals Online (AJOL)

    In his “Empiricalism: The Empirical Character of an African Philosophy”, Kwasi Wiredu sets out ... others, that an empirical metaphysical system contains both empirical ... realms which multiple categories of existents inhabit and conduct their being in ... to a mode of reasoning that conceives categories polarized by formal ...

  6. Disorder parameter of confinement

    International Nuclear Information System (INIS)

    Nakamura, N.; Ejiri, S.; Matsubara, Y.; Suzuki, T.

    1996-01-01

    The disorder parameter of confinement-deconfinement phase transition based on the monopole action determined previously in SU(2) QCD are investigated. We construct an operator which corresponds to the order parameter defined in the abelian Higgs model. The operator shows proper behaviors as the disorder parameter in the numerical simulations of finite temperature QCD. (orig.)

  7. On the Empirical Estimation of Utility Distribution Damping Parameters Using Power Quality Waveform Data

    Directory of Open Access Journals (Sweden)

    Irene Y. H. Gu

    2007-01-01

    Full Text Available This paper describes an efficient yet accurate methodology for estimating system damping. The proposed technique is based on linear dynamic system theory and the Hilbert damping analysis. The proposed technique requires capacitor switching waveforms only. The detected envelope of the intrinsic transient portion of the voltage waveform after capacitor bank energizing and its decay rate, along with the damped resonant frequency, are used to quantify the effective X/R ratio of a system. Thus, the proposed method provides complete knowledge of system impedance characteristics. The estimated system damping can also be used to evaluate the system's vulnerability to various PQ disturbances, particularly resonance phenomena, so that a utility may take preventive measures and improve PQ of the system.
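    A minimal numerical sketch of the envelope-decay idea described above, run on a synthetic switching transient (all signal parameters are invented for the illustration). scipy.signal.hilbert would produce the same analytic signal; it is computed here directly with an FFT so the block stays self-contained.

    ```python
    import numpy as np

    def analytic_signal(x):
        """FFT-based analytic signal: zero negative frequencies, double positive ones."""
        n = len(x)
        spec = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        return np.fft.ifft(spec * h)

    fs = 10_000.0                        # sample rate, Hz (assumed)
    t = np.arange(0, 0.1, 1 / fs)
    sigma, f0 = 60.0, 600.0              # true decay rate (1/s) and damped frequency (Hz)
    x = np.exp(-sigma * t) * np.cos(2 * np.pi * f0 * t)   # synthetic transient

    a = analytic_signal(x)
    env = np.abs(a)                      # Hilbert envelope of the transient
    # Fit log-envelope vs. time on the early part of the record, away from
    # FFT edge effects; the slope gives the decay rate.
    sl = slice(50, 500)
    decay = -np.polyfit(t[sl], np.log(env[sl]), 1)[0]
    # Damped frequency from the derivative of the instantaneous phase
    f_est = np.mean(np.diff(np.unwrap(np.angle(a[sl])))) * fs / (2 * np.pi)
    print(f"decay ~ {decay:.1f} 1/s (true {sigma}), f_d ~ {f_est:.0f} Hz (true {f0})")
    ```

    For a series-resonant response with decay rate sigma and damped frequency omega_d, the effective X/R ratio follows as roughly omega_d / (2*sigma), which is how an estimated damping translates into system impedance characteristics.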

  8. An empirical method for calculating thermodynamic parameters for U(VI) phases, applications to performance assessment calculations

    International Nuclear Information System (INIS)

    Ewing, R.C.; Chen, F.; Clark, S.B.

    2002-01-01

    Uranyl minerals form by oxidation and alteration of uraninite, UO_2+x, and of the UO_2 in used nuclear fuels. The thermodynamic database for these phases is extremely limited. However, the Gibbs free energies and enthalpies for uranyl phases may be estimated based on a method that sums polyhedral contributions. The molar contributions of the structural components to Δ_fG_m^0 and Δ_fH_m^0 are derived by multiple regression using the thermodynamic data of phases for which the crystal structures are known. In comparison with experimentally determined values, the average residuals associated with the predicted Δ_fG_m^0 and Δ_fH_m^0 for the uranyl phases used in the model are 0.08 and 0.10%, respectively. There is also good agreement between the predicted mineral stability relations and field occurrences, thus providing confidence in this method for the estimation of Δ_fG_m^0 and Δ_fH_m^0 of U(VI) phases. This approach provides a means of generating estimated thermodynamic data for performance assessment calculations and a basis for making bounding calculations of phase stabilities and solubilities. (author)
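    The polyhedral-summation fit described above is an ordinary least-squares regression: each phase's formation energy is modeled as a count-weighted sum of molar contributions from its structural components. The sketch below illustrates the mechanics on invented numbers (the counts, contributions, and noise level are all hypothetical, not the paper's data).

    ```python
    import numpy as np

    # rows = phases, columns = counts of three structural components per formula unit
    counts = np.array([
        [1., 2., 0.],
        [2., 1., 1.],
        [1., 0., 3.],
        [3., 2., 2.],
        [0., 1., 2.],
    ])
    g_true = np.array([-1200.0, -950.0, -700.0])   # hypothetical contributions, kJ/mol

    rng = np.random.default_rng(1)
    # "measured" formation energies: count-weighted sums plus measurement noise
    dGf = counts @ g_true + rng.normal(0.0, 5.0, size=5)

    # Multiple regression recovers the molar contribution of each component
    g_fit, *_ = np.linalg.lstsq(counts, dGf, rcond=None)
    residual_pct = 100 * np.abs((counts @ g_fit - dGf) / dGf)
    print("fitted contributions:", np.round(g_fit, 1))
    print("mean |residual|: %.3f%%" % residual_pct.mean())
    ```

    With the fitted contributions in hand, an energy for a new phase of known structure is just another count-weighted sum, which is what makes the method usable for bounding calculations.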

  9. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models, and their underlying techniques, that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the Information value itself. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated value of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
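    The classical decile-based estimate the paper starts from can be sketched as follows, with an explicit check of the nonzero-count constraint. The synthetic scores (two normal distributions shifted by one standard deviation) are an assumption for the illustration, not data from the paper, and the supervised interval selection itself is not implemented here.

    ```python
    import numpy as np

    def information_value(scores, bad, bins=10):
        """Empirical Information Value via equal-frequency (decile) binning."""
        scores = np.asarray(scores, float)
        bad = np.asarray(bad, int)
        # interior quantile edges; digitize maps each score to a bin 0..bins-1
        edges = np.quantile(scores, np.linspace(0, 1, bins + 1))[1:-1]
        idx = np.digitize(scores, edges)
        n_good = np.sum(bad == 0)
        n_bad = np.sum(bad == 1)
        iv = 0.0
        for b in range(bins):
            good_b = np.sum((idx == b) & (bad == 0))
            bad_b = np.sum((idx == b) & (bad == 1))
            if good_b == 0 or bad_b == 0:
                raise ValueError(f"bin {b} has a zero count; merge bins or use "
                                 "supervised interval selection")
            pg, pb = good_b / n_good, bad_b / n_bad
            iv += (pg - pb) * np.log(pg / pb)   # weight-of-evidence contribution
        return iv

    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(1.0, 1.0, 5000),    # good clients
                             rng.normal(0.0, 1.0, 5000)])   # bad clients
    bad = np.concatenate([np.zeros(5000, int), np.ones(5000, int)])
    iv = information_value(scores, bad)
    print(f"IV = {iv:.2f}")
    ```

    For two unit-variance normals separated by one standard deviation the continuous divergence equals 1, so the binned estimate should land a little below that; the ValueError branch is exactly the failure mode that motivates the paper's supervised interval selection.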

  10. Downside Risk And Empirical Asset Pricing

    NARCIS (Netherlands)

    P. van Vliet (Pim)

    2004-01-01

    Currently, the Nobel prize winning Capital Asset Pricing Model (CAPM) celebrates its 40th birthday. Although widely applied in financial management, this model does not fully capture the empirical risk-return relation of stocks; witness the beta, size, value and momentum effects. These

  11. Empirical analysis of uranium spot prices

    International Nuclear Information System (INIS)

    Morman, M.R.

    1988-01-01

    The objective is to empirically test a market model of the uranium industry that incorporates the notion that, if the resource is viewed as an asset by economic agents, then its own rate of return, along with the own rate of return of a competing asset, would be a major factor in formulating the price of the resource. The model tested is based on a market model of supply and demand. The supply model incorporates the notion that the decision criterion used by uranium mine owners is to select the extraction rate that maximizes the net present value of their extraction receipts. The demand model uses a concept that allows for explicit recognition of the prospect of arbitrage between a natural-resource market and the market for other capital goods. The empirical approach used for estimation was a recursive or causal model. The empirical results were consistent with the theoretical models: the coefficients of the demand and supply equations had the appropriate signs. Tests for causality were conducted to validate the use of the causal model, and the results obtained were favorable. The implications of the findings for future studies of exhaustible resources are: (1) in some cases causal models are the appropriate specification for empirical analysis; (2) supply models should incorporate a measure to capture depletion effects

  12. Trade costs in empirical New Economic Geography

    NARCIS (Netherlands)

    Bosker, E.M.; Garretsen, J.H.

    Trade costs are a crucial element of New Economic Geography (NEG) models. Without trade costs there is no role for geography. In empirical NEG studies the unavailability of direct trade cost data calls for the need to approximate these trade costs by introducing a trade cost function. In doing so,

  13. Methods for Calculating Empires in Quasicrystals

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2017-10-01

    Full Text Available This paper reviews the empire problem for quasiperiodic tilings and the existing methods for generating the empires of the vertex configurations in quasicrystals, while introducing a new and more efficient method based on the cut-and-project technique. Using Penrose tiling as an example, this method finds the forced tiles with the restrictions in the high dimensional lattice (the mother lattice that can be cut-and-projected into the lower dimensional quasicrystal. We compare our method to the two existing methods, namely one method that uses the algorithm of the Fibonacci chain to force the Ammann bars in order to find the forced tiles of an empire and the method that follows the work of N.G. de Bruijn on constructing a Penrose tiling as the dual to a pentagrid. This new method is not only conceptually simple and clear, but it also allows us to calculate the empires of the vertex configurations in a defected quasicrystal by reversing the configuration of the quasicrystal to its higher dimensional lattice, where we then apply the restrictions. These advantages may provide a key guiding principle for phason dynamics and an important tool for self error-correction in quasicrystal growth.
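    The cut-and-project technique the paper builds on is easiest to see one dimension down. Selecting the integer points of Z^2 inside a strip of slope 1/phi and projecting them onto the line is equivalent to the floor formula in the toy sketch below, which emits the Fibonacci chain of long (L) and short (S) tiles; the Penrose case the paper treats is the same idea with a 5D mother lattice cut-and-projected to 2D, which is not implemented here.

    ```python
    import math

    PHI = (1 + math.sqrt(5)) / 2

    def fibonacci_chain(n):
        """First n tiles of the Fibonacci chain via the cut-and-project floor formula."""
        word = []
        for k in range(1, n + 1):
            # the k-th projected gap is long iff floor((k+1)/phi) - floor(k/phi) == 1
            word.append("L" if math.floor((k + 1) / PHI) - math.floor(k / PHI) else "S")
        return "".join(word)

    chain = fibonacci_chain(100)
    print(chain[:21])
    print("L:S ratio =", round(chain.count("L") / chain.count("S"), 3))  # tends to phi
    ```

    Two hallmarks of the quasiperiodic order fall out immediately: the chain never contains "SS" or "LLL", and the ratio of L to S tiles converges to the golden ratio.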

  14. Characterizing Student Expectations: A Small Empirical Study

    Science.gov (United States)

    Warwick, Jonathan

    2016-01-01

    This paper describes the results of a small empirical study (n = 130), in which undergraduate students in the Business Faculty of a UK university were asked to express views and expectations relating to the study of mathematics. Factor analysis is used to identify latent variables emerging from clusters of the measured variables and these are…

  15. EVOLVING AN EMPIRICAL METHODOLOGY FOR DETERMINING ...

    African Journals Online (AJOL)

    The uniqueness of this approach is that it can be applied to any forest or dynamic feature on the earth, and can enjoy universal application as well. KEY WORDS: Evolving empirical methodology, innovative mathematical model, appropriate interval, remote sensing, forest environment planning and management. Global Jnl ...

  16. Caught between Empires: Ambivalence in Australian Films ...

    African Journals Online (AJOL)

    Caught between Empires: Ambivalence in Australian Films. Greg McCarthy.

  17. Spitsbergen - Imperialists beyond the British Empire

    NARCIS (Netherlands)

    Kruse, Frigga; Hacquebord, Louwrens

    2012-01-01

    This paper looks at the relationship between Spitsbergen in the European High Arctic and the global British Empire in the first quarter of the twentieth century. Spitsbergen was an uninhabited no man's land and comprised an unknown quantity of natural resources. The concepts of geopolitics and New

  18. An Empirical Investigation into Nigerian ESL Learners ...

    African Journals Online (AJOL)

    General observations indicate that ESL learners in Nigeria tend to manifest fear and anxiety in grammar classes, which could influence their performance negatively or positively. This study examines empirically some of the reasons for some ESL learners' apprehension of grammar classes. The data for the study were ...

  19. Air pollutant taxation: an empirical survey

    International Nuclear Information System (INIS)

    Cansier, D.; Krumm, R.

    1997-01-01

    An empirical analysis of the current taxation of the air pollutants sulphur dioxide, nitrogen oxides and carbon dioxide in the Scandinavian countries, the Netherlands, France and Japan is presented. Political motivation and technical factors such as tax base, rate structure and revenue use are compared. The general concepts of the current policies are characterised

  20. Empirical research on constructing Taiwan's ecoenvironmental ...

    African Journals Online (AJOL)

    In this paper, the material flow indicators and ecological footprint approach structured are adopted to construct eco-environmental stress indicators. We use relevant data to proceed with the empirical analyses on environmental stress and ecological impacts in Taiwan between the years of 1998 and 2007. Analysis of ...

  1. Empirical Bayes Approaches to Multivariate Fuzzy Partitions.

    Science.gov (United States)

    Woodbury, Max A.; Manton, Kenneth G.

    1991-01-01

    An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)

  2. Empirically Exploring Higher Education Cultures of Assessment

    Science.gov (United States)

    Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.

    2016-01-01

    Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…

  3. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    Science.gov (United States)

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  4. Transition States from Empirical Force Fields

    DEFF Research Database (Denmark)

    Jensen, Frank; Norrby, Per-Ola

    2003-01-01

    This is an overview of the use of empirical force fields in the study of reaction mechanisms. EVB-type methods (including RFF and MCMM) produce full reaction surfaces by mixing, in the simplest case, known force fields describing reactants and products. The SEAM method instead locates approximate...

  5. Classification of Marital Relationships: An Empirical Approach.

    Science.gov (United States)

    Snyder, Douglas K.; Smith, Gregory T.

    1986-01-01

    Derives an empirically based classification system of marital relationships, employing a multidimensional self-report measure of marital interaction. Spouses' profiles on the Marital Satisfaction Inventory for samples of clinic and nonclinic couples were subjected to cluster analysis, resulting in separate five-group typologies for husbands and…

  6. Empirical scaling for present ohmic heated tokamaks

    International Nuclear Information System (INIS)

    Daughney, C.

    1975-06-01

    Empirical scaling laws are given for the average electron temperature and electron energy confinement time as functions of plasma current, average electron density, effective ion charge, toroidal magnetic field, and major and minor plasma radius. The ohmic heating is classical, and the electron energy transport is anomalous. The present scaling indicates that ohmic-heating becomes ineffective with larger experiments. (U.S.)

  7. Developing empirical relationship between interrill erosion, rainfall ...

    African Journals Online (AJOL)

    In order to develop an empirical relationship for interrill erosion based on rainfall intensity, slope steepness and soil types, an interrill erosion experiment was conducted using laboratory rainfall simulator on three soil types (Vertisols, Cambisols and Leptosols) for the highlands of North Shewa Zone of Oromia Region.

  8. Governance and Human Development: Empirical Evidence from ...

    African Journals Online (AJOL)

    This study empirically investigates the effects of governance on human development in Nigeria. Using annual time series data covering the period 1998 to 2010, obtained from various sources, and employing the classical least squares estimation technique, the study finds that corruption, foreign aid and government ...

  9. Software Development Management: Empirical and Analytical Perspectives

    Science.gov (United States)

    Kang, Keumseok

    2011-01-01

    Managing software development is a very complex activity because it must deal with people, organizations, technologies, and business processes. My dissertation consists of three studies that examine software development management from various perspectives. The first study empirically investigates the impacts of prior experience with similar…

  10. The Italian Footwear Industry: an Empirical Analysis

    OpenAIRE

    Pirolo, Luca; Giustiniano, Luca; Nenni, Maria Elena

    2013-01-01

    This paper aims to provide readers with a deep empirical analysis on the Italian footwear industry in order to investigate the evolution of its structure (trends in sales and production, number of firms and employees, main markets, etc.), together with the identification of the main drivers of competitiveness in order to explain the strategies implemented by local actors.

  11. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  12. Quantitative analyses of empirical fitness landscapes

    International Nuclear Information System (INIS)

    Szendro, Ivan G; Franke, Jasper; Krug, Joachim; Schenk, Martijn F; De Visser, J Arjan G M

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well. (paper)
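    The rough Mt Fuji (RMF) model mentioned at the end of the abstract is simple enough to sketch: fitness is an additive slope of strength c toward a reference genotype plus i.i.d. random noise of scale sigma, and the c/sigma ratio tunes ruggedness. The parameter values below are illustrative, not taken from the review; counting local optima is one crude ruggedness measure among those discussed.

```python
import itertools
import random

def rough_mt_fuji(L=4, c=0.5, sigma=1.0, seed=1):
    """Generate an RMF fitness landscape over all binary genotypes of
    length L: F(g) = -c * d(g, peak) + noise, with d the Hamming
    distance to a reference genotype and Gaussian noise."""
    rng = random.Random(seed)
    peak = (0,) * L
    landscape = {}
    for g in itertools.product((0, 1), repeat=L):
        d = sum(a != b for a, b in zip(g, peak))  # Hamming distance to peak
        landscape[g] = -c * d + rng.gauss(0.0, sigma)
    return landscape

def n_local_maxima(landscape):
    """Count genotypes fitter than all single-mutation neighbours --
    a simple measure of landscape ruggedness."""
    count = 0
    for g, f in landscape.items():
        neighbours = (g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g)))
        if all(f > landscape[nb] for nb in neighbours):
            count += 1
    return count
```

With sigma = 0 the landscape is a smooth Mt Fuji with a single optimum; increasing sigma relative to c adds epistatic ruggedness and extra local optima.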

  13. Qualitative Case Study Research as Empirical Inquiry

    Science.gov (United States)

    Ellinger, Andrea D.; McWhorter, Rochell

    2016-01-01

    This article introduces the concept of qualitative case study research as empirical inquiry. It defines and distinguishes what a case study is, the purposes, intentions, and types of case studies. It then describes how to determine if a qualitative case study is the preferred approach for conducting research. It overviews the essential steps in…

  14. The problem analysis for empirical studies

    NARCIS (Netherlands)

    Groenland, E.A.G.

    2014-01-01

    This article proposes a systematic methodology for the development of a problem analysis for cross-sectional, empirical research. This methodology is referred to as the 'Annabel approach'. It is suitable both for academic studies and applied (business) studies. In addition it can be used for both

  15. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  16. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical images (50) of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models to generate the synthetic images are included. The aim of the datasets is to

  17. An Empirical Investigation into Programming Language Syntax

    Science.gov (United States)

    Stefik, Andreas; Siebert, Susanna

    2013-01-01

    Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming…

  18. Self-Published Books: An Empirical "Snapshot"

    Science.gov (United States)

    Bradley, Jana; Fulton, Bruce; Helm, Marlene

    2012-01-01

    The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…

  19. Empirical Differential Balancing for Nonlinear Systems

    NARCIS (Netherlands)

    Kawano, Yu; Scherpen, Jacquelien M.A.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    In this paper, we consider empirical balancing of nonlinear systems by using its prolonged system, which consists of the original nonlinear system and its variational system. For the prolonged system, we define differential reachability and observability Gramians, which are matrix valued functions

  20. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  1. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
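    The Poisson-Gamma conjugate model described above has a closed-form posterior update: with a Gamma(α₀, β₀) prior on the rate λ and n events observed over a time interval T, the posterior is Gamma(α₀ + n, β₀ + T). A minimal sketch follows; the hyperparameter values are illustrative defaults, not those chosen in the study.

```python
def poisson_gamma_posterior(n_events, interval, alpha0=1.0, beta0=1.0):
    """Posterior mean and variance of a Poisson rate lambda given
    n_events observed over `interval` (e.g. in kyr), under a
    Gamma(alpha0, beta0) conjugate prior. Conjugacy gives
    Gamma(alpha0 + n, beta0 + T) in closed form."""
    alpha = alpha0 + n_events
    beta = beta0 + interval
    mean = alpha / beta          # posterior mean of the rate
    var = alpha / beta ** 2      # posterior variance of the rate
    return mean, var
```

The posterior variance is what quantifies the "range of uncertainty" in λ that the abstract emphasizes; handling the age-dating uncertainty of individual events requires additional machinery beyond this sketch.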

  2. Matrix effect studies with empirical formulations in maize saplings

    International Nuclear Information System (INIS)

    Bansal, Meenakshi; Deep, Kanan; Mittal, Raj

    2012-01-01

    In X-ray fluorescence, the earlier derived matrix effects from fundamental relations of intensities of analyte/matrix elements with basic atomic and experimental setup parameters and tested on synthetic known samples were found empirically related to analyte/matrix elemental amounts. The present study involves the application of these relations on potassium and calcium macronutrients of maize saplings treated with different fertilizers. The novelty of work involves a determination of an element in the presence of its secondary excitation rather than avoiding the secondary fluorescence. Therefore, the possible utility of this process is in studying the absorption for some intermediate samples in a lot of a category of samples with close Z interfering constituents (just like Ca and K). Once the absorption and enhancement terms are fitted to elemental amounts and fitted coefficients are determined, with the absorption terms from the fit and an enhancer element amount known from its selective excitation, the next iterative elemental amount can be directly evaluated from the relations. - Highlights: ► Empirical formulation for matrix corrections in terms of amounts of analyte and matrix element. ► The study applied on K and Ca nutrients of maize, rice and potato organic materials. ► The formulation provides matrix terms from amounts of analyte/matrix elements and vice versa.

  3. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
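    The core of empirical interpolation is selecting a few entries at which to sample the nonlinear term so that the rest can be reconstructed from a snapshot basis. The greedy point-selection step of the discrete variant (DEIM, Chaturantabut and Sorensen) is sketched below; this is a generic illustration, not the paper's GMsFEM-coupled implementation.

```python
import numpy as np

def deim_points(U):
    """Greedy DEIM point selection. Given an n x m matrix U whose columns
    are snapshot basis vectors of the nonlinear term, return m row
    indices at which sampling the nonlinearity suffices to recover the
    interpolation coefficients."""
    n, m = U.shape
    # First point: largest-magnitude entry of the first basis vector.
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector on the current points and
        # pick the location of the largest interpolation residual.
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return p
```

Online, the nonlinear function is then evaluated only at these m indices (grouped per coarse region in the paper's multiscale setting), and the full residual/Jacobian contribution is reconstructed through the basis, which is what keeps the cost proportional to the coarse problem size.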

  4. Hybrid empirical-theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth
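    Fitting the Freundlich isotherm q = K_f · C^n used above reduces to ordinary linear regression in log space, since log q = log K_f + n log C. A minimal sketch with synthetic numbers (not the INEEL data):

```python
import numpy as np

def fit_freundlich(C, q):
    """Fit the Freundlich isotherm q = Kf * C**n by least squares in
    log-log space. C: equilibrium concentrations, q: sorbed amounts.
    Returns (Kf, n); np.polyfit returns the slope first."""
    n, log_Kf = np.polyfit(np.log(C), np.log(q), 1)
    return np.exp(log_Kf), n
```

With n fixed across samples, as the abstract reports, only K_f varies, which is why a single surface-area correlation suffices to parameterize the spatially variable transport model.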

  5. Empirical study on social groups in pedestrian evacuation dynamics

    Science.gov (United States)

    von Krüchten, Cornelia; Schadschneider, Andreas

    2017-06-01

    Pedestrian crowds often include social groups, i.e. pedestrians that walk together because of social relationships. They show characteristic configurations and influence the dynamics of the entire crowd. In order to investigate the impact of social groups on evacuations we performed an empirical study with pupils. Several evacuation runs with groups of different sizes and different interactions were performed. New group parameters are introduced which allow the dynamics of the groups and the configuration of the group members to be described quantitatively. The analysis shows a possible decrease of evacuation times for large groups due to self-ordering effects. Social groups can be approximated as ellipses that orientate along their direction of motion. Furthermore, explicitly cooperative behaviour among group members leads to a stronger aggregation of group members and an intermittent way of evacuation.

  6. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
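    The D-optimality criterion itself (maximize the determinant of the information matrix X^T X over candidate designs) can be sketched for a toy first-order trend model; the exhaustive search below is illustrative only and does not reproduce the abstract's kriging-based Pareto-frontier machinery.

```python
import numpy as np
from itertools import combinations

def d_optimal_design(candidates, k):
    """Exhaustive D-optimal search for a k-point design under the linear
    trend model f(x) = (1, x1, x2): choose the k-subset of candidate
    points maximizing det(X^T X). Toy-sized problems only -- the search
    space grows combinatorially."""
    X_all = np.column_stack([np.ones(len(candidates)), np.asarray(candidates)])
    best_idx, best_det = None, -np.inf
    for idx in combinations(range(len(candidates)), k):
        X = X_all[list(idx)]
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best_det, best_idx = d, idx
    return list(best_idx), best_det
```

On a 3x3 grid in [-1, 1]² with k = 4, the criterion selects the four corners, the classic result that D-optimal trend-estimation designs push points to the extremes, which is exactly why they conflict with designs tuned for prediction variance.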

  7. Generalized empirical equation for the extrapolated range of electrons in elemental and compound materials

    International Nuclear Information System (INIS)

    Lima, W. de; Poli CR, D. de

    1999-01-01

    The extrapolated range R_ex of electrons is useful for various purposes in research and in the application of electrons, for example, in polymer modification, electron energy determination and estimation of effects associated with deep penetration of electrons. A number of works have used empirical equations to express the extrapolated range for some elements. In this work a generalized empirical equation, very simple and accurate, in the energy region 0.3 keV - 50 MeV is proposed. The extrapolated range for elements, in organic or inorganic molecules and compound materials, can be well expressed as a function of the atomic number Z, or of two empirical parameters Z_m for molecules and Z_c for compound materials instead of Z. (author)
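    Empirical range-energy relations of this kind are commonly power laws over a limited energy interval, R = a · E^b, which also supports the electron-energy-determination use mentioned above by inversion. The sketch below fits such a generic form to synthetic data; it does not reproduce the paper's generalized equation or its Z-dependent coefficients.

```python
import numpy as np

def fit_range_power_law(E, R):
    """Fit R = a * E**b by linear regression in log-log space.
    E: electron energies, R: extrapolated ranges. Returns (a, b)."""
    b, log_a = np.polyfit(np.log(E), np.log(R), 1)
    return np.exp(log_a), b

def energy_from_range(R, a, b):
    """Invert the fitted relation for energy determination:
    E = (R / a)**(1 / b)."""
    return (R / a) ** (1.0 / b)
```

Over the full 0.3 keV - 50 MeV interval treated in the paper, a single constant exponent is too crude; published relations let the exponent vary slowly with energy, which the generalized equation accommodates.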

  8. Porphyry of Russian Empires in Paris

    Science.gov (United States)

    Bulakh, Andrey

    2014-05-01

    Porphyry of Russian Empires in Paris. A. G. Bulakh (St Petersburg State University, Russia). The so-called "Schokhan porphyry" from Lake Onega, Russia, surely belongs to the stones of World cultural heritage. One can see this "porphyry" on the facades of the lovely palace of Pavel I and in the pedestal of the monument to Nicholas I in St Petersburg. There are many other cases of the use of this stone in Russia. In Paris, the sarcophagus of Napoleon I Bonaparte is constructed of blocks of this stone. In reality, it is a Proterozoic quartzite. The geological situation, petrography and mineralogical characteristics will be reported as well. A comparison with antique porphyry from the Egyptian province of the Roman Empire is given. References: 1) A.G. Bulakh, N.B. Abakumova, J.V. Romanovsky. St Petersburg: a History in Stone. 2010. Print House of St Petersburg State University. 173 p.

  9. Empirically Examining Prostitution through a Feminist Perspective

    OpenAIRE

    Child, Shyann

    2009-01-01

    The purpose of this thesis is to empirically explore prostitution through a feminist perspective. Several background factors are explored on a small sample of women in the northeastern United States. Some of these women have been involved in an act of prostitution in their lifetime; some have not. This research will add to the body of knowledge on prostitution, as well as highlight the unique experiences of women. The goal is to understand whether or not these life experiences have had a h...

  10. Theoretical and Empirical Descriptions of Thermospheric Density

    Science.gov (United States)

    Solomon, S. C.; Qian, L.

    2004-12-01

    The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of change in the ephemeris of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.

  11. Insurability of Cyber Risk: An Empirical Analysis

    OpenAIRE

    Biener, Christian; Eling, Martin; Wirfs, Jan Hendrik

    2015-01-01

    This paper discusses the adequacy of insurance for managing cyber risk. To this end, we extract 994 cases of cyber losses from an operational risk database and analyse their statistical properties. Based on the empirical results and recent literature, we investigate the insurability of cyber risk by systematically reviewing the set of criteria introduced by Berliner (1982). Our findings emphasise the distinct characteristics of cyber risks compared with other operational risks and bring to li...

  12. Conducting empirical research in virtual worlds

    OpenAIRE

    Minocha, Shailey

    2011-01-01

    We will focus on the following aspects of conducting empirical research in virtual worlds: the toolbox of techniques for data collection; selection of technique(s) for the research questions; tips on how the techniques need to be adapted for conducting research in virtual worlds; guidance for developing research materials such as the consent form, project summary sheet, and how to address the possible concerns of an institution’s ethics committee who may not be familiar with the avatar-based ...

  13. Empirical solar/stellar cycle simulations

    Directory of Open Access Journals (Sweden)

    Santos Ângela R. G.

    2015-01-01

    As a result of the magnetic cycle, the properties of the solar oscillations vary periodically. With the recent discovery of manifestations of activity cycles in the seismic data of other stars, the understanding of the different contributions to such variations becomes even more important. With this in mind, we built an empirical parameterised model able to reproduce the properties of the sunspot cycle. The resulting simulations can be used to estimate the magnetic-induced frequency shifts.

  14. The empirical turn in international legal scholarship

    Directory of Open Access Journals (Sweden)

    Gregory Shaffer

    2015-07-01

    This article presents and assesses a new wave of empirical research on international law. Recent scholarship has moved away from theoretical debates over whether international law “matters,” and focuses instead on exploring the conditions under which international law is created and produces effects. As this empirical research program has matured, it has allowed for new, midlevel theorizing that we call “conditional international law theory”.

  15. Compassion: An Evolutionary Analysis and Empirical Review

    OpenAIRE

    Goetz, Jennifer L.; Keltner, Dacher; Simon-Thomas, Emiliana

    2010-01-01

    What is compassion? And how did it evolve? In this review, we integrate three evolutionary arguments that converge on the hypothesis that compassion evolved as a distinct affective experience whose primary function is to facilitate cooperation and protection of the weak and those who suffer. Our empirical review reveals compassion to have distinct appraisal processes attuned to undeserved suffering, distinct signaling behavior related to caregiving patterns of touch, posture, and vocalization...

  16. Services outsourcing and innovation: an empirical investigation

    OpenAIRE

    Görg, Holger; Hanley, Aoife

    2009-01-01

    We provide a comprehensive empirical analysis of the links between international services outsourcing, domestic outsourcing, profits and innovation using plant level data. We find a positive effect of international outsourcing of services on innovative activity at the plant level. Such a positive effect can also be observed for domestic outsourcing of services, but the magnitude is smaller. This makes intuitive sense, as international outsourcing allows more scope for exploiting international...

  17. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow for a reliable estimate of soil surface geophysical parameters under all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as large a dataset as the one used in this study: it covers a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), is geographically well distributed over regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well across the different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). The model is easy to invert and could provide a way to improve the retrieval of soil parameters.

  18. Empirical methods for estimating future climatic conditions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from model simulations of the forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreement. The premise underlying this approach for predicting anthropogenic climate change is to associate the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that the relationships underlying the regional variations in air temperature and other meteorological elements can be deduced and interpreted from empirical data describing the climatic conditions of past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from factors contributing to past climate changes that differ from those driving future changes and, in other cases, from the possible influences of changes in orography and geography on regional climatic conditions over time.

  19. Cosmological Parameters 2000

    OpenAIRE

    Primack, Joel R.

    2000-01-01

    The cosmological parameters that I emphasize are the age of the universe $t_0$, the Hubble parameter $H_0 \\equiv 100 h$ km s$^{-1}$ Mpc$^{-1}$, the average matter density $\\Omega_m$, the baryonic matter density $\\Omega_b$, the neutrino density $\\Omega_\\nu$, ...

  20. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such a linkage between the empirical research and the normative analysis. In the first part of this paper, we outline two typical shortcomings of empirical studies in medical ethics with regard to the link between normative questions and empirical data: (1) the complete lack of normative analysis, and (2) cryptonormativity and the lack of an account of the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented, and it will be demonstrated how these concepts may contribute to improving the linkage between the normative and empirical aspects of empirical research in medical ethics. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics needs a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  1. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Barabási–Albert preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of the model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification that compares the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
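
    The pipeline the authors describe (generate a model realization, compute a centrality distribution, compare entropies) can be sketched in plain Python for the simplest case: degree centrality and an Erdős–Rényi generator. The graph size, edge probability and entropy-gap comparison below are illustrative choices, not the paper's settings:

```python
import math
import random
from collections import Counter

def erdos_renyi(n, p, rng):
    """Undirected Erdos-Renyi G(n, p) random graph as an adjacency list."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def degree_entropy(adj):
    """Shannon entropy (bits) of the degree distribution; the paper uses
    betweenness and closeness centrality in the same way."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

rng = random.Random(42)
empirical = erdos_renyi(200, 0.05, rng)   # stand-in for a real-world graph
model = erdos_renyi(200, 0.05, rng)       # one realization of a candidate model
gap = abs(degree_entropy(empirical) - degree_entropy(model))
```

    In the full method, the model whose realizations give the smallest entropy gap, across several centrality measures, would be declared the most similar generator.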

  2. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological-parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran in the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
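
    The four comparison indices used in the study are standard and easy to reproduce; a minimal implementation follows (the function name is mine, and R2 is computed here as 1 - SSres/SStot, whereas some studies report the squared correlation coefficient instead):

```python
import math

def error_metrics(obs, pred):
    """RMSE, MAE, MARE (%) and determination coefficient R2 between
    observed and predicted values; assumes observations are nonzero
    (true for daily radiation totals)."""
    n = len(obs)
    resid = [o - p for o, p in zip(obs, pred)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mae = sum(abs(r) for r in resid) / n
    mare = 100.0 * sum(abs(r) / o for r, o in zip(resid, obs)) / n
    mean_obs = sum(obs) / n
    ss_res = sum(r * r for r in resid)
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mare, r2
```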

  3. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  4. Magnetic S-parameter

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2010-01-01

    We propose a direct test of the existence of gauge duals for nonsupersymmetric asymptotically free gauge theories developing an infrared fixed point by computing the S-parameter in the electric and dual magnetic descriptions. In particular, we show that at the lower bound of the conformal window the magnetic S-parameter, i.e. the one determined via the dual magnetic gauge theory, assumes a simple expression in terms of the elementary magnetic degrees of freedom. The results further support our recent conjecture of the existence of a universal lower bound on the S parameter and indicates...

  5. Empirically characteristic analysis of chaotic PID controlling particle swarm optimization.

    Science.gov (United States)

    Yan, Danping; Lu, Yongzhong; Zhou, Min; Chen, Shiping; Levy, David

    2017-01-01

    Since chaotic systems generally have the intrinsic properties of sensitivity to initial conditions, topological mixing and density of periodic orbits, their chaotic ergodic orbits can be exploited to reach the global optimum of a given cost function, or a good approximation to it, with high probability. During the past decade, such methods have received increasing attention from the academic community and industry throughout the world. To improve the performance of particle swarm optimization (PSO), we herein propose a chaotic proportional integral derivative (PID) controlling PSO algorithm that hybridizes chaotic logistic dynamics and a hierarchical inertia weight. The hierarchical inertia weight coefficients are determined according to the current fitness values of the local best positions, so as to adaptively expand the particles' search space. Moreover, the chaotic logistic map is used both to substitute for the two random parameters affecting the convergence behavior and in the chaotic local search around the global best position, so as to help the particles avoid premature convergence while traversing the whole search space. The convergence of chaotic PID controlling PSO is then analyzed in depth. Empirical simulation results demonstrate that, compared with several other chaotic PSO algorithms (chaotic PSO with the logistic map, chaotic PSO with the tent map and chaotic catfish PSO with the logistic map), chaotic PID controlling PSO exhibits much better search efficiency and solution quality on the tested optimization problems. Additionally, the parameter estimation of a nonlinear dynamic system further clarifies its superiority to chaotic catfish PSO, genetic algorithm (GA) and PSO.
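
    The core substitution described above, replacing the two uniform random factors of the PSO velocity update with logistic-map iterates, can be sketched as follows. This is not the authors' full PID-controlled, hierarchical-inertia algorithm: the linear inertia schedule, acceleration coefficients, bound handling and test function below are simplifying assumptions.

```python
import random

def logistic(x, r=4.0):
    """Fully chaotic logistic map, used in place of uniform random draws."""
    return r * x * (1.0 - x)

def chaotic_pso(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimize f over a box with PSO whose two 'random' velocity factors
    are driven by the logistic map instead of a uniform RNG."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    c1 = c2 = 2.0
    x1, x2 = 0.31, 0.67              # chaotic states seeding the two factors
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters    # linearly decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                x1, x2 = logistic(x1), logistic(x2)
                vel[i][d] = (w * vel[i][d]
                             + c1 * x1 * (pbest[i][d] - pos[i][d])
                             + c2 * x2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)   # classic benchmark cost function
best, best_val = chaotic_pso(sphere, dim=3)
```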

  6. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. The Heavy Ion fusion cross section can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data; relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

  7. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  8. Development of an empirical correlation for combustion durations in spark ignition engines

    International Nuclear Information System (INIS)

    Bayraktar, Hakan; Durgun, Orhan

    2004-01-01

    Development of an empirical correlation for combustion duration is presented. For this purpose, the effects of variations in compression ratio, engine speed, fuel/air equivalence ratio and spark advance on combustion duration have been determined by means of a quasi-dimensional SI engine cycle model previously developed by the authors. Burn durations at several engine operating conditions were calculated from the turbulent combustion model. The variation of combustion duration with each operating parameter obtained from the theoretical results was expressed by a second-degree polynomial function. By using these functions, a general empirical correlation for the burn duration has been developed. In this correlation, the effects of the engine operating parameters on combustion duration are taken into account. Combustion durations predicted by means of this correlation are in good agreement with those obtained from experimental studies and a detailed combustion model.
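
    The fitting step described above, a second-degree polynomial of burn duration against one operating parameter at a time, reduces to ordinary least squares on three coefficients. A self-contained sketch (the sample points are hypothetical, not the model's output):

```python
def quad_fit(xs, ys):
    """Least-squares fit of y = a0 + a1*x + a2*x**2 via the normal
    equations, solved by Gaussian elimination with partial pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]                 # power sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            factor = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= factor * A[col][c]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        coef[r] = (A[r][3] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a0, a1, a2]

# hypothetical burn durations (deg CA) versus fuel/air equivalence ratio
phis = [0.8, 0.9, 1.0, 1.1, 1.2]
durs = [68.0, 61.5, 58.0, 58.5, 62.0]
a0, a1, a2 = quad_fit(phis, durs)
```

    Combining one such polynomial per operating parameter into a product or sum then yields a general correlation of the kind the paper describes.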

  9. SATELLITE CONSTELLATION DESIGN PARAMETER

    Indian Academy of Sciences (India)

    SATELLITE CONSTELLATION DESIGN PARAMETER. 1. ORBIT CHARACTERISTICS. ORBITAL HEIGHT >= 20,000 KM. LONGER VISIBILITY; ORBITAL PERIOD. PERTURBATIONS (MINIMUM). SOLAR RADIATION PRESSURE (IMPACTS ECCENTRICITY); LUNI ...

  10. Reassessment of safeguards parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A.; Richter, J.L.; Mullen, M.F.

    1994-07-01

    The International Atomic Energy Agency is reassessing the timeliness and goal quantity parameters that are used in defining safeguards approaches. This study reviews technology developments since the parameters were established in the 1970s and concludes that there is no reason to relax the goal quantity or conversion time for reactor-grade plutonium relative to weapons-grade plutonium. For low-enriched uranium, especially in countries with advanced enrichment capability, there may be an incentive to shorten the detection time.

  11. Empirical knowledge in legislation and regulation : A decision making perspective

    NARCIS (Netherlands)

    Trautmann, S.T.

    2013-01-01

    This commentary considers the pros and cons of the empirical approach to legislation from the vantage point of empirical decision making research. It focuses on methodological aspects that are typically not considered by legal scholars. It points out weaknesses in the empirical approach that are

  12. Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

    Science.gov (United States)

    Christie, Christina A.

    2011-01-01

    Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…

  13. Empirical studies on changes in oil governance

    Science.gov (United States)

    Kemal, Mohammad

    Regulation of the oil and gas sector is consequential to the economies of oil-producing countries. In the literature, there are two types of regulation: indirect regulation through taxes and tariffs, or direct regulation through the creation of a National Oil Company (NOC). In the 1970s, many oil-producing countries nationalized their oil and gas sectors by creating NOCs and giving them ownership rights over oil and gas resources. In light of the success of Norway in regulating its oil and gas resources, over the past two decades several countries have changed their oil governance by reducing the rights given to the NOC from ownership rights to mere access rights, like any other oil company. However, the empirical literature on these changes in oil governance is quite thin. Thus, this dissertation explores three research questions to investigate these changes empirically. First, I investigate empirically the impact of the changes in oil governance on aggregate domestic income. By employing a difference-in-differences method, I will show that a country which changed its oil governance increased its GDP per capita by 10%, although the impact differs across types of political institution. Second, by observing the changes in oil governance in Indonesia, I explore the impact of the changes on learning-by-doing and learning-spillover effects in offshore exploration drilling. By employing an econometric model which includes interaction terms between various experience variables and a change-in-oil-governance dummy, I will show that the change in oil governance in Indonesia enhanced learning-by-doing by the rigs and learning spillover within a basin. Lastly, the impact of the changes in oil governance on expropriation risk and the extraction path is explored. By employing a difference-in-differences method, this essay will show that the changes in oil governance reduce expropriation, and that the impact differs with the size of the resource stock.
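
    The difference-in-differences logic used in the first and third essays reduces, in its simplest 2x2 form, to comparing mean changes across treated and control groups. The numbers below are made up for illustration; the dissertation's estimates come from panel regressions with controls.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """2x2 difference-in-differences: the change in the treated group's
    mean outcome minus the change in the control group's."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# hypothetical GDP-per-capita outcomes for reforming vs non-reforming countries
effect = did_estimate([100.0, 102.0], [120.0, 122.0],   # treated: pre, post
                      [90.0, 92.0], [100.0, 102.0])     # control: pre, post
```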

  14. Dissociative identity disorder: An empirical overview.

    Science.gov (United States)

    Dorahy, Martin J; Brand, Bethany L; Sar, Vedat; Krüger, Christa; Stavropoulos, Pam; Martínez-Taboas, Alfonso; Lewis-Fernández, Roberto; Middleton, Warwick

    2014-05-01

    Despite its long and auspicious place in the history of psychiatry, dissociative identity disorder (DID) has been associated with controversy. This paper aims to examine the empirical data related to DID and outline the contextual challenges to its scientific investigation. The overview is limited to DID-specific research in which one or more of the following conditions are met: (i) a sample of participants with DID was systematically investigated, (ii) psychometrically-sound measures were utilised, (iii) comparisons were made with other samples, (iv) DID was differentiated from other disorders, including other dissociative disorders, (v) extraneous variables were controlled or (vi) DID diagnosis was confirmed. Following an examination of challenges to research, data are organised around the validity and phenomenology of DID, its aetiology and epidemiology, the neurobiological and cognitive correlates of the disorder, and finally its treatment. DID was found to be a complex yet valid disorder across a range of markers. It can be accurately discriminated from other disorders, especially when structured diagnostic interviews assess identity alterations and amnesia. DID is aetiologically associated with a complex combination of developmental and cultural factors, including severe childhood relational trauma. The prevalence of DID appears highest in emergency psychiatric settings and affects approximately 1% of the general population. Psychobiological studies are beginning to identify clear correlates of DID associated with diverse brain areas and cognitive functions. They are also providing an understanding of the potential metacognitive origins of amnesia. Phase-oriented empirically-guided treatments are emerging for DID. The empirical literature on DID is accumulating, although some areas remain under-investigated. Existing data show DID as a complex, valid and not uncommon disorder, associated with developmental and cultural variables, that is amenable to

  15. Empirical antimicrobial therapy of acute dentoalveolar abscess

    Directory of Open Access Journals (Sweden)

    Matijević Stevo

    2009-01-01

    Full Text Available Background/Aim. The most common causes of acute dental infections are oral streptococci and anaerobic bacteria. Acute dentoalveolar infections are usually treated surgically in combination with antibiotics. Empirical therapy in such infections usually requires the use of penicillin-based antibiotics. The aim of this study was to investigate the clinical efficacy of amoxicillin and cefalexin in the empirical treatment of acute odontogenic abscess and to assess the antimicrobial susceptibility of the isolated bacteria in the early phases of its development. Methods. This study included 90 patients with acute odontogenic abscess who received surgical treatment (extraction of a tooth and/or abscess incision) and were divided into three groups: two surgical-antibiotic groups (amoxicillin, cefalexin) and the surgical group. In order to evaluate the effects of the applied therapy, the following clinical symptoms were monitored: inflammatory swelling, trismus, regional lymphadenitis and febrility. In all the patients, before the beginning of antibiotic treatment, pus was aspirated from the abscess and the antibiotic susceptibility of the isolated bacteria was tested using the disk diffusion method. Results. The signs and symptoms of infection lasted on average 4.47 days, 4.67 days, and 6.17 days in the amoxicillin, cefalexin, and surgically-only treated groups, respectively. A total of 111 bacterial strains were isolated from the 90 patients. Most of the bacteria were Gram-positive facultative anaerobes (81.1%). The most commonly isolated bacteria were Viridans streptococci (68/111). The susceptibility of the isolated bacteria was 76.6% to amoxicillin and 89.2% to cefalexin. Conclusion. Empirical peroral use of amoxicillin or cefalexin after surgical treatment in the early phase of development of a dentoalveolar abscess significantly reduced the duration of clinical symptoms in acute odontogenic infections in comparison to surgical treatment alone. Bacterial strains

  16. Casual Empire: Video Games as Neocolonial Praxis

    Directory of Open Access Journals (Sweden)

    Sabine Harrer

    2018-01-01

    Full Text Available As a media form entwined in the U.S. military-industrial complex, video games continue to celebrate imperialist imagery and Western-centric narratives of the great white explorer (Breger, 2008; Dyer-Witheford & de Peuter, 2009; Geyser & Tshalabala, 2011; Mukherjee, 2016). While much ink has been spilt on the detrimental effects of colonial imagery on those it objectifies and dehumanises, the question is why these games still get made, and what mechanisms are at work in the enjoyment of empire-themed play experiences. To explore this question, this article develops the concept of ‘casual empire’, suggesting that the wish to play games as a casual pastime expedites the incidental circulation of imperialist ideology. Three examples – 'Resident Evil V' (2009), 'The Conquest: Colonization' (2015) and 'Playing History: Slave Trade' (2013) – are used to demonstrate the production and consumption of casual empire across multiple platforms, genres and player bases. Following a brief contextualisation of postcolonial (game studies, this article addresses casual design, by which I understand game designers’ casual reproduction of inferential racism (Hall, 1995) for the sake of entertainment. I then look at casual play, and players’ attitudes to games as rational commodities continuing a history of commodity racism (McClintock, 1995). Finally, the article investigates the casual involvement of formalist game studies in the construction of imperial values. These three dimensions of the casual – design, play and academia – make up the three pillars of the casual empire that must be challenged to undermine video games’ neocolonialist praxis.

  17. Safeguards systems parameters

    International Nuclear Information System (INIS)

    Avenhaus, R.; Heil, J.

    1979-01-01

    In this paper analyses are made of the values of those parameters that characterize the present safeguards system that is applied to a national fuel cycle; those values have to be fixed quantitatively so that all actions of the safeguards authority are specified precisely. The analysis starts by introducing three categories of quantities: The design parameters (number of MBAs, inventory frequency, variance of MUF, verification effort and false-alarm probability) describe those quantities whose values have to be specified before the safeguards system can be implemented. The performance criteria (probability of detection, expected detection time, goal quantity) measure the effectiveness of a safeguards system; and the standards (threshold amount and critical time) characterize the magnitude of the proliferation problem. The means by which the values of the individual design parameters can be determined with the help of the performance criteria; which qualitative arguments can narrow down the arbitrariness of the choice of values of the remaining parameters; and which parameter values have to be fixed more or less arbitrarily, are investigated. As a result of these considerations, which include the optimal allocation of a given inspection effort, the problem of analysing the structure of the safeguards system is reduced to an evaluation of the interplay of only a few parameters, essentially the quality of the measurement system (variance of MUF), verification effort, false-alarm probability, goal quantity and probability of detection

  18. Land of Addicts? An Empirical Investigation of Habit-Based Asset Pricing Behavior

    OpenAIRE

    Xiaohong Chen; Sydney C. Ludvigson

    2004-01-01

    This paper studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. We treat the functional form of the habit as unknown, and estimate it along with the rest of the model's finite dimensional parameters. Using quarterly data on consumption growth, asset returns and instruments, our empirical results indicate that the estimated habit function is nonlinear, the habit formation is better described...

  19. HEDL empirical correlation of fuel pin top failure thresholds, status 1976

    International Nuclear Information System (INIS)

    Baars, R.E.

    1976-01-01

    The Damage Parameter (DP) empirical correlation of fuel pin cladding failure thresholds for TOP events has been revised and recorrelated to the results of twelve TREAT tests. The revised correlation, called the Failure Potential (FP) correlation, predicts failure times for the tests in the data base with an average error of 35 ms for $3/s tests and of 150 ms for 50 cents/s tests

  20. Semi-empirical neutron tool calibration (one and two-group approximation)

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1988-01-01

    The physical principles of a new method of calibrating neutron tools for rock-porosity determination are given. A short description of the physics of neutron transport in matter is presented, together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections, etc.). The definitions of the main integral parameters characterizing neutron transport in rock media are given. The three main approaches to the calibration problem (empirical, theoretical and semi-empirical) are presented, with a more detailed description of the latter. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing-down or migration length for neutrons sensed by the neutron tool situated in real borehole-rock conditions. To calculate this apparent slowing-down or migration length, the ratio of the proper spatial moments of the neutron distribution along the borehole axis is used. Theoretical results are given for the one- and two-group diffusion approximations in rock-borehole geometries with the tool in the sidewall position. The physical and chemical parameters of the calibration blocks of the Logging Company in Zielona Gora are given, and from these data the neutron parameters of the calibration blocks have been calculated. An example of how to determine the calibration curve for the dual-detector tool by applying this new method, using the neutron parameters mentioned above together with measurements performed in the calibration blocks, is given. The most important advantage of the new semi-empirical calibration method is that all experimental calibration data obtained for a given neutron tool, for different porosities, lithologies and borehole diameters, can be placed on a single calibration curve. 52 refs., 21 figs., 21 tabs. (author)

  1. Salmonella typhi time to change empiric treatment

    DEFF Research Database (Denmark)

    Gade, C.; Engberg, J.; Weis, N.

    2008-01-01

    In the present case series report we describe seven recent cases of typhoid fever. All the patients were travellers returning from Pakistan, where typhoid is endemic. Salmonella typhi isolated from the patients by blood culture was reported as intermediately susceptible to fluoroquinolones in six out of seven cases. We recommend that empiric treatment of suspected cases of typhoid fever include a third-generation cephalosporin such as ceftriaxone. Furthermore, the present report stresses the importance of typhoid vaccination of travellers to areas where typhoid is endemic. Publication date: 2008/9/29...

  2. Calculation of Critical Temperatures by Empirical Formulae

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-06-01

    The paper presents formulae used to calculate the critical temperatures of structural steels. Equations for calculating the temperatures Ac1, Ac3, Ms and Bs were elaborated on the basis of the chemical composition of the steel, using the multiple regression method. Particular attention was paid to collecting the experimental data required to calculate the regression coefficients, including the preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and was prepared from information available in the literature on the subject.
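    As an illustration of the regression step described in this record, the sketch below recovers the coefficients of a linear composition-temperature formula by ordinary least squares from noisy synthetic data. The element set, coefficient values and concentration ranges are invented for demonstration and are not the paper's equations.

    ```python
    import numpy as np

    # Hypothetical linear formula T = b0 + sum_i b_i * x_i relating a critical
    # temperature to alloying-element contents (wt.%). All numbers are made up.
    rng = np.random.default_rng(0)
    true_b = np.array([720.0, -15.0, -30.0, 10.0])   # intercept, "C", "Mn", "Si"
    X = rng.uniform([0.1, 0.3, 0.1], [0.6, 1.5, 0.5], size=(500, 3))  # compositions
    y = true_b[0] + X @ true_b[1:] + rng.normal(0.0, 2.0, 500)        # noisy "data"

    A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
    b, *_ = np.linalg.lstsq(A, y, rcond=None)        # multiple-regression fit
    ```

    With roughly 500 observations, as in the paper's data set, the regression coefficients are recovered to well within the noise level.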

  3. International Joint Venture Termination: An Empirical Investigation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik B.; Rasmussen, Erik Stavnsager; Siersbæk, Nikolaj

    for the article stems from data from the project portfolio of a Danish Investment Fund for Developing Countries with a total of 773 investments. A number of hypotheses are established from the literature review and tested against the empirical data. The result indicates that the most important factor in successful IJV termination is the length of the investment and, to some extent, the size of the investment. The psychic distance plays a negative role for investments in the African region, while a general recession will lead to a lower success rate....

  4. Unemployment and Mental Disorders - An Empirical Analysis

    DEFF Research Database (Denmark)

    Agerbo, Esben; Eriksson, Tor Viking; Mortensen, Preben Bo

    1998-01-01

    The purpose of this paper is also to analyze the importance of unemployment and other social factors as risk factors for impaired mental health. It departs from previous studies in that we make use of information about first admissions to a psychiatric hospital or ward as our measure of mental...... from the Psychiatric case register. Secondly, we estimate conditional logistic regression models for case-control data on first admissions to a psychiatric hospital. The explanatory variables in the empirical analysis include age, gender, education, marital status, income, wealth, and unemployment (and...

  5. Empirical continuation of the differential cross section

    International Nuclear Information System (INIS)

    Borbely, I.

    1978-12-01

    The theoretical basis as well as the practical methods of empirical continuation of the differential cross section into the nonphysical region of the cos theta variable are discussed. The equivalence of the different methods is proved. A physical applicability condition is given and the published applications are reviewed. In many cases the correctly applied procedure turns out to provide nonsignificant or even incorrect structure information which points to the necessity for careful and statistically complete analysis of the experimental data with a physical understanding of the analysed process. (author)

  6. Is nondistributivity for microsystems empirically founded

    International Nuclear Information System (INIS)

    Selleri, F.; Tarozzi, G.

    1978-01-01

    Some authors have proposed nondistributive logic as a way out of the difficulties usually met in trying to describe typical quantum phenomena (e.g. the double-slit experiment). It is shown, however, that, if one takes seriously the wave-corpuscle dualism, which was after all the central fact around which quantum theory was developed, ordinary (distributive) logic can fully account for the empirical observations. Furthermore, it is pointed out that there are unavoidable physical difficulties connected with the adoption of a nondistributive corpuscular approach. (author)

  7. The empirical equilibrium structure of diacetylene

    OpenAIRE

    Thorwirth, S.; Harding, M. E.; Muders, D.; Gauss, J.

    2008-01-01

    High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, HCCCCH. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almloef and Taylor. An empirical equilibrium structure based on experimental rotational constants for thirteen isotopic species of diacety...

  8. 30. L’empire de la raison

    OpenAIRE

    2018-01-01

    The political thought of Stanislas Leszczynski (1677–1766), King of Poland and later Duke of Lorraine, is a blend of pragmatism and idealism: to live in peace with its neighbours, a state must know how to make itself feared; but it will exercise its dominion durably only through the wisdom of its laws and the virtue of its sovereign. In the Entretien d'un Européen avec un insulaire du Royaume de Dumocala, he stages a dialogue with a traveller whose ship was wrecked on an unknown southern land...

  9. Empirical Bayes conditional independence graphs for regulatory network recovery

    Science.gov (United States)

    Mahdi, Rami; Madduri, Abishek S.; Wang, Guoqing; Strulovici-Barel, Yael; Salit, Jacqueline; Hackett, Neil R.; Crystal, Ronald G.; Mezey, Jason G.

    2012-01-01

    Motivation: Computational inference methods that make use of graphical models to extract regulatory networks from gene expression data can have difficulty reconstructing dense regions of a network, a consequence of both computational complexity and unreliable parameter estimation when sample size is small. As a result, identification of hub genes is of special difficulty for these methods. Methods: We present a new algorithm, Empirical Light Mutual Min (ELMM), for large network reconstruction that has properties well suited for recovery of graphs with high-degree nodes. ELMM reconstructs the undirected graph of a regulatory network using empirical Bayes conditional independence testing with a heuristic relaxation of independence constraints in dense areas of the graph. This relaxation allows only one gene of a pair with a putative relation to be aware of the network connection, an approach that is aimed at easing multiple testing problems associated with recovering densely connected structures. Results: Using in silico data, we show that ELMM has better performance than commonly used network inference algorithms including GeneNet, ARACNE, FOCI, GENIE3 and GLASSO. We also apply ELMM to reconstruct a network among 5492 genes expressed in human lung airway epithelium of healthy non-smokers, healthy smokers and individuals with chronic obstructive pulmonary disease assayed using microarrays. The analysis identifies dense sub-networks that are consistent with known regulatory relationships in the lung airway and also suggests novel hub regulatory relationships among a number of genes that play roles in oxidative stress and secretion. Availability and implementation: Software for running ELMM is made available at http://mezeylab.cb.bscb.cornell.edu/Software.aspx. Contact: ramimahdi@yahoo.com or jgm45@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22685074
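    Not ELMM itself, but a sketch of the basic operation such graphical-model methods build on: a conditional independence test for a pair of variables given a candidate regulator, using the partial correlation and its Fisher z-statistic. ELMM adds empirical Bayes calibration and the relaxation of independence constraints described above; all data and function names here are illustrative.

    ```python
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation of x and y after linearly removing the control variable z."""
        rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residuals of x ~ z
        ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residuals of y ~ z
        return np.corrcoef(rx, ry)[0, 1]

    def fisher_z(r, n, k=1):
        """z-statistic for H0: the partial correlation (given k controls) is zero."""
        return np.sqrt(n - k - 3) * 0.5 * np.log((1 + r) / (1 - r))

    rng = np.random.default_rng(4)
    z = rng.normal(size=500)                  # a shared "regulator"
    x = z + 0.3 * rng.normal(size=500)        # x and y depend only on z,
    y = z + 0.3 * rng.normal(size=500)        # so their strong marginal
    r = partial_corr(x, y, z)                 # correlation vanishes given z
    ```

    Here x and y are strongly correlated marginally, yet the partial correlation given z is statistically indistinguishable from zero, so no direct edge x-y would be added to the graph.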

  10. Data on empirically estimated corporate survival rate in Russia.

    Science.gov (United States)

    Kuzmin, Evgeny A

    2018-02-01

    The article presents data on the corporate survival rate in Russia in 1991-2014. The empirical survey was based on a random sample with the average number of non-repeated observations (number of companies) for each survey year equal to 75,958 (24,236 minimum and 126,953 maximum). The actual limiting mean error Δp was 2.24% with 99% integrity. The survey methodology was based on a cross joining of various formal periods in the corporate life cycle (legal and business), which, under a number of assumptions, makes it possible to speak of a conventional active lifetime of a company. The empirical survey values were grouped by Russian regions and industries according to the classifier and consolidated into a single database for analysing the corporate life cycle and survival rate and for searching for deviation dependencies in the calculated parameters. Preliminary and incomplete figures were available in the paper entitled "Survival Rate and Lifecycle in Terms of Uncertainty: Review of Companies from Russia and Eastern Europe" (Kuzmin and Guseva, 2016) [3]. The further survey led to filtered, processed data with clerical errors excluded; these values are available in the article. The survey was intended to fill a fact-based gap in fundamental surveys of the corporate life cycle in Russia, where the statistical framework has been insufficient. The data are of interest for an analysis of Russian entrepreneurship and for assessment of market development and incorporation risks in the current business environment. Further heuristic potential lies in forecasting changes in business demography and in model building based on the representative data set.

  11. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when each comparison set of that cardinality occurs the same number of times in expectation, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
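    The simplest case discussed above, pair comparisons, can be sketched with the Bradley-Terry model (a Thurstone-type choice model with comparison sets of size two): strengths are recovered by maximum likelihood via gradient ascent. The strengths and data below are synthetic, and this is a generic illustration, not the paper's rank-breaking estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    theta = np.array([0.0, 0.5, 1.0, 1.5])        # true strengths (made up)
    n_items = len(theta)
    wins = np.zeros((n_items, n_items))           # wins[i, j]: times i beat j
    for _ in range(20000):
        i, j = rng.choice(n_items, size=2, replace=False)
        p_ij = 1.0 / (1.0 + np.exp(theta[j] - theta[i]))   # P(i beats j)
        if rng.random() < p_ij:
            wins[i, j] += 1
        else:
            wins[j, i] += 1

    # maximum-likelihood strengths via gradient ascent on the log-likelihood
    est = np.zeros(n_items)
    counts = (wins + wins.T).sum(axis=1)          # comparisons involving each item
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(est[None, :] - est[:, None]))  # P(row beats col)
        grad = (wins - (wins + wins.T) * p).sum(axis=1)
        est += 2.0 * grad / counts                # per-item scaled step
    est -= est[0]                                 # fix the gauge: strength of item 0 is 0
    ```

    Only strength differences are identified, hence the gauge-fixing step at the end; the estimation error of each strength shrinks with the number of comparisons, in line with the mean-squared-error characterization the paper develops.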

  12. Empirical Green's function analysis: Taking the next step

    Science.gov (United States)

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
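    A hedged sketch of the kind of spectral model involved: a Brune omega-square source spectrum combined with a whole-path attenuation term t*, fit here by a simple grid search to synthetic data. The model form is standard in this literature, but the numbers, noise level and fitting scheme below are invented and are not the paper's joint multiple-EGF inversion.

    ```python
    import numpy as np

    def brune(f, omega0, fc, tstar):
        """Omega-square source spectrum times a path attenuation factor."""
        return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * tstar)

    f = np.linspace(0.5, 25.0, 200)                # frequency band, Hz (made up)
    rng = np.random.default_rng(2)
    obs = brune(f, 1.0, 4.0, 0.04) * np.exp(rng.normal(0.0, 0.05, f.size))
    log_obs = np.log(obs)                          # fit in the log domain

    best = None                                    # (misfit, omega0, fc, t*)
    for fc in np.linspace(1.0, 10.0, 91):          # corner frequency grid
        for ts in np.linspace(0.0, 0.1, 51):       # attenuation t* grid
            shape = np.log(brune(f, 1.0, fc, ts))
            omega0 = np.exp((log_obs - shape).mean())   # closed-form scale term
            err = np.sum((log_obs - shape - np.log(omega0)) ** 2)
            if best is None or err < best[0]:
                best = (err, omega0, fc, ts)
    ```

    The trade-off between corner frequency and attenuation visible in such a fit is exactly what motivates the multiple-event approach: tying several colocated sources to one shared path term breaks the ambiguity.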

  13. Semi-empirical formulas for sputtering yield

    International Nuclear Information System (INIS)

    Yamamura, Yasumichi

    1994-01-01

    When solid surfaces are irradiated with charged particles, electrons, light and so on, material is lost from the surface; this phenomenon is called sputtering. To understand sputtering, one must know the bond energy of atoms on the surface, the energy deposited in the vicinity of the surface, and the process by which the deposited energy is converted into the energy that releases atoms. The theories of sputtering and the semi-empirical formulas for evaluating the dependence of sputtering yield on incident energy are explained. The mechanisms of sputtering are collision cascades in the case of heavy-ion incidence and surface-atom recoil in the case of light-ion incidence. The formulas for the sputtering yield of low-energy heavy-ion sputtering, high-energy light-ion sputtering and the general case between these extremes, as well as the Matsunami formula, are shown. At the stage of the publication of Atomic Data and Nuclear Data Tables in 1984, the data up to 1983 were collected; about 30 papers published thereafter were added, including experimental data for low-Z materials, for example Be, B and C, and light-ion sputtering data. The combinations of ions and target atoms in the collected sputtering data are shown. A new semi-empirical formula was settled on by slightly adjusting the Matsunami formula. (K.I.)

  14. Empirical seasonal forecasts of the NAO

    Science.gov (United States)

    Sanchezgomez, E.; Ortizbevia, M.

    2003-04-01

    We present here seasonal forecasts of the North Atlantic Oscillation (NAO) issued from ocean predictors with an empirical procedure. The Singular Value Decomposition (SVD) of the cross-correlation matrix between the predictor and predictand fields, at the lag used for the forecast lead, is at the core of the empirical model. The main predictor fields are sea surface temperature anomalies, although sea ice cover anomalies are also used. Forecasts are issued in probabilistic form. The model is an improvement over a previous version (1), where sea level pressure anomalies were first forecast and the NAO index was built from this forecast field. Both the correlation skill between the forecast and observed fields and the number of forecasts that hit the correct NAO sign are used to assess the forecast performance; these are usually above the values found for forecasts issued assuming persistence. For certain seasons and/or leads, values of the skill are above the 0.7 usefulness threshold. References (1) SanchezGomez, E. and Ortiz Bevia M., 2002, Estimacion de la evolucion pluviometrica de la Espana Seca atendiendo a diversos pronosticos empiricos de la NAO, in 'El Agua y el Clima', Publicaciones de la AEC, Serie A, N 3, pp 63-73, Palma de Mallorca, Spain
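    The SVD-based coupling step described above, often called maximum covariance analysis, can be sketched as follows: the SVD of the predictor-predictand cross-covariance matrix yields coupled patterns, and the predictand is regressed on the leading predictor expansion coefficients. All fields here are synthetic stand-ins for the SST and NAO data, and the deterministic regression stands in for the paper's probabilistic forecast.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t, nx, ny = 200, 30, 10                       # time steps, grid sizes (made up)
    signal = rng.normal(size=t)                   # shared "climate" signal
    X = np.outer(signal, rng.normal(size=nx)) + rng.normal(size=(t, nx))  # predictor
    Y = np.outer(signal, rng.normal(size=ny)) + rng.normal(size=(t, ny))  # predictand
    X -= X.mean(axis=0)
    Y -= Y.mean(axis=0)                           # work with anomalies

    C = X.T @ Y / (t - 1)                         # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    k = 1                                         # retain the leading coupled mode
    a = X @ U[:, :k]                              # predictor expansion coefficients
    B = np.linalg.lstsq(a, Y, rcond=None)[0]      # regress predictand on coefficients
    Y_hat = a @ B                                 # empirical "forecast" of Y
    ```

    In operational use the expansion coefficients would be computed from predictor anomalies observed at the chosen lead time, so the same projection-and-regression step produces the lagged forecast.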

  15. Practice management: observations, issues, and empirical evidence.

    Science.gov (United States)

    Wong, H M; Braithwaite, J

    2001-02-01

    The primary objective of this study is to provide objective, empirical, evidence-based practice management information. This is a hitherto under-researched area of considerable interest for both the practitioner and educator. A questionnaire eliciting a mix of structured and free text responses was administered to a random sample of 480 practitioners who are members of the American Academy of Periodontology. Potential respondents not in private practice were excluded and the next listed person substituted. The results provide demographic and descriptive information about some of the main issues and problems facing practice managers, central to which are information technology (IT), financial, people management, and marketing. Human resource and marketing management appear to represent the biggest challenges. Periodontists running practices would prefer more information, development, and support in dealing with IT, finance, marketing, and people management. The empirical evidence reported here suggests that although tailored educational programs on key management issues at both undergraduate and postgraduate levels have become ubiquitous, nevertheless some respondents seek further training opportunities. Evidence-based practice management information will be invaluable to the clinician considering strategic and marketing planning, and also for those responsible for the design and conduct of predoctoral and postdoctoral programs.

  16. An empirical framework for tropical cyclone climatology

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Nam-Young [Korea Meteorological Administration, Seoul (Korea, Republic of); Florida State University, Tallahassee, FL (United States); Elsner, James B. [Florida State University, Tallahassee, FL (United States)

    2012-08-15

    An empirical approach for analyzing tropical cyclone climate is presented. The approach uses lifetime-maximum wind speed and cyclone frequency to induce two orthogonal variables labeled "activity" and "efficiency of intensity". The paired variations of activity and efficiency of intensity along with the opponent variations of frequency and intensity configure a framework for evaluating tropical cyclone climate. Although cyclone activity as defined in this framework is highly correlated with the commonly used exponent indices like accumulated cyclone energy, it does not contain cyclone duration. Empirical quantiles are used to determine threshold intensity levels, and variant year ranges are used to find consistent trends in tropical cyclone climatology. In the western North Pacific, cyclone activity is decreasing despite increases in lifetime-maximum intensity. This is due to overwhelming decreases in cyclone frequency. These changes are also explained by an increasing efficiency of intensity. The North Atlantic shows different behavior. Cyclone activity is increasing due to increasing frequency and, to a lesser extent, increasing intensity. These changes are also explained by a decreasing efficiency of intensity. Tropical cyclone trends over the North Atlantic basin are more consistent over different year ranges than tropical cyclone trends over the western North Pacific. (orig.)

  17. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, so different models that use the climatological parameters of a location are required for estimation. Long-period solar radiation data are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar), which together almost encompass the country's different geographical features. For this reason, in this study the mean monthly global solar radiation has been estimated using the empirical models of Angstrom, FAO, Glover and McCulloch, and Sangeeta and Tiwari, chosen for their diversity of approach and use of climatic and geographical parameters. Empirical constants for these models have been estimated and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation for locations with similar conditions.
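    As a minimal illustration of an Angstrom-type model of the kind fitted in this study, the sketch below estimates the empirical constants a and b in H/H0 = a + b(S/S0), where H/H0 is the clearness index and S/S0 the relative sunshine duration, by least squares. The sample values are invented, not measured Pakistani data.

    ```python
    import numpy as np

    # invented monthly-mean relative sunshine (S/S0) and clearness (H/H0) values
    s_ratio = np.array([0.45, 0.55, 0.60, 0.70, 0.75, 0.80, 0.65, 0.50])
    h_ratio = np.array([0.42, 0.48, 0.51, 0.57, 0.60, 0.63, 0.54, 0.45])

    A = np.column_stack([np.ones_like(s_ratio), s_ratio])   # [1, S/S0] design matrix
    (a, b), *_ = np.linalg.lstsq(A, h_ratio, rcond=None)    # Angstrom constants
    # estimated clearness for any sunshine fraction: a + b * (S/S0)
    ```

    The fitted constants are site-specific, which is why the paper estimates them separately for each location and checks them statistically against measurements.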

  18. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    Directory of Open Access Journals (Sweden)

    Md Shamsul Arefin

    2012-12-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots.

  19. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    Science.gov (United States)

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots. PMID:28348319
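    A hedged sketch of the enumeration step in such an assignment: list chiral indices (n, m), keep semiconducting tubes (n − m not a multiple of 3), and compare each tube's diameter with the diameter implied by a measured radial-breathing-mode frequency. The relation ω_RBM ≈ 248/d_t (cm⁻¹, nm) used below is a common literature approximation, not the paper's empirical equations, which additionally use the optical transition energies to resolve ambiguities between tubes of nearly equal diameter.

    ```python
    import math

    A_CC = 0.142  # C-C bond length in nm

    def diameter(n, m):
        """Geometric diameter of an (n, m) nanotube in nm."""
        return math.sqrt(3) * A_CC * math.sqrt(n * n + n * m + m * m) / math.pi

    def candidates(w_rbm, tol=0.05):
        """Semiconducting (n, m) whose diameter matches w_RBM = 248/d within tol nm."""
        d_meas = 248.0 / w_rbm
        out = []
        for n in range(4, 20):
            for m in range(0, n + 1):
                if (n - m) % 3 == 0:
                    continue                      # metallic tube: skip
                if abs(diameter(n, m) - d_meas) < tol:
                    out.append((n, m))
        return out
    ```

    For a frequency near 332 cm⁻¹ this returns several candidates, including (6, 5) and (9, 1), which have identical diameters; this degeneracy is exactly why the optical transition energies are needed for a unique assignment.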

  20. Sea Surface Height Determination In The Arctic Using Cryosat-2 SAR Data From Primary Peak Empirical Retrackers

    DEFF Research Database (Denmark)

    Jain, Maulik; Andersen, Ole Baltazar; Dall, Jørgen

    2015-01-01

    SAR waveforms from Cryosat-2 are processed using primary peak empirical retrackers to determine the sea surface height in the Arctic. The empirical retrackers investigated are based on the combination of the traditional OCOG (Offset Center of Gravity) and threshold methods with primary peak extraction. The primary peak retrackers involve the application of retracking algorithms on just the primary peak of the waveform instead of the complete reflected waveform. These primary peak empirical retrackers are developed for Cryosat-2 SAR data. This is the first time SAR data in the Arctic...... and five parameter beta retrackers. In the case of SAR-lead data, it is concluded that the proposed primary peak retrackers work better than the traditional retrackers (OCOG, threshold, five parameter beta) as well as the ESA Retracker.
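    The two classical empirical retrackers named above are simple enough to sketch. The formulas below are the standard OCOG and threshold definitions applied to a synthetic single-peak waveform, not the primary-peak variants developed in the paper (which would first isolate the primary peak before applying these steps).

    ```python
    import numpy as np

    def ocog(p):
        """OCOG retracker: amplitude, width, and centre-of-gravity gate of a waveform."""
        p2, p4 = (p ** 2).sum(), (p ** 4).sum()
        amplitude = np.sqrt(p4 / p2)
        width = p2 ** 2 / p4
        cog = (np.arange(p.size) * p ** 2).sum() / p2
        return amplitude, width, cog

    def threshold_retrack(p, level=0.5):
        """Gate where power first crosses level * OCOG amplitude (linear interpolation)."""
        thr = level * ocog(p)[0]
        i = int(np.argmax(p >= thr))              # first gate at or above the threshold
        if i == 0:
            return 0.0
        return (i - 1) + (thr - p[i - 1]) / (p[i] - p[i - 1])

    gates = np.arange(128)
    wave = np.exp(-0.5 * ((gates - 64) / 3.0) ** 2)   # idealized single-peak echo
    ```

    The retracked gate is converted to a range correction via the tracker's gate spacing; restricting these sums to the primary peak is what makes the method robust to the trailing multi-peak clutter typical of Arctic lead waveforms.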

  1. Electroweak interaction parameters

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1984-01-01

    After a presentation of the experimentally determined parameters of the standard SU(3) x SU(2) x U(1) model, the author discusses the definition of the Weinberg angle. Then masses and widths of the intermediate vector bosons are considered in the framework of the Weinberg-Salam theory with radiative corrections. Furthermore, the radiative decays of these bosons are discussed. Then the relations between the masses of the Higgs boson and the top quark are considered. Thereafter, grand unification is briefly discussed, with special regard to the SU(5) prediction of some observable parameters. Finally, some speculations are made concerning the observation of radiative decays in the UA1 experiments. (HSI)

  2. Band parameters of phosphorene

    International Nuclear Information System (INIS)

    Lew Yan Voon, L C; Wang, J; Zhang, Y; Willatzen, M

    2015-01-01

    Phosphorene is a two-dimensional nanomaterial with a direct band-gap at the Brillouin zone center. In this paper, we present a recently derived effective-mass theory of the band structure in the presence of strain and electric field, based upon group theory. Band parameters for this theory are computed using a first-principles theory based upon the generalized-gradient approximation to the density-functional theory. These parameters and Hamiltonian will be useful for modeling physical properties of phosphorene. (paper)

  3. Band parameters of phosphorene

    DEFF Research Database (Denmark)

    Lew Yan Voon, L. C.; Wang, J.; Zhang, Y.

    2015-01-01

    Phosphorene is a two-dimensional nanomaterial with a direct band-gap at the Brillouin zone center. In this paper, we present a recently derived effective-mass theory of the band structure in the presence of strain and electric field, based upon group theory. Band parameters for this theory...... are computed using a first-principles theory based upon the generalized-gradient approximation to the density-functional theory. These parameters and Hamiltonian will be useful for modeling physical properties of phosphorene....

  4. Uncertainties of Molecular Structural Parameters

    International Nuclear Information System (INIS)

    Császár, Attila G.

    2014-01-01

    Full text: The most fundamental property of a molecule is its three-dimensional (3D) structure formed by its constituent atoms (see, e.g., the perfectly regular hexagon associated with benzene). It is generally accepted that knowledge of the detailed structure of a molecule is a prerequisite to determine most of its other properties. What nowadays is a seemingly simple concept, namely that molecules have a structure, was introduced into chemistry in the 19th century. Naturally, the word changed its meaning over the years. Elemental analysis, simple structural formulae, two-dimensional and then 3D structures mark the development of the concept to its modern meaning. When quantum physics and quantum chemistry emerged in the 1920s, the simple concept associating structure with a three-dimensional object seemingly gained a firm support. Nevertheless, what seems self-explanatory today is in fact not so straightforward to justify within quantum mechanics. In quantum chemistry the concept of an equilibrium structure of a molecule is tied to the Born-Oppenheimer approximation but beyond the adiabatic separation of the motions of the nuclei and the electrons the meaning of a structure is still slightly obscured. Putting the conceptual difficulties aside, there are several experimental, empirical, and theoretical techniques to determine structures of molecules. One particular problem, strongly related to the question of uncertainties of “measured” or “computed” structural parameters, is that all the different techniques correspond to different structure definitions and thus yield different structural parameters. Experiments probing the structure of molecules rely on a number of structure definitions, to name just a few: r_0, r_g, r_a, r_s, r_m, etc., and one should also consider the temperature dependence of most of these structural parameters which differ from each other in the way the rovibrational motions of the molecules are treated and how the averaging is

  5. Semi-empirical proton binding constants for natural organic matter

    Science.gov (United States)

    Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain

    2010-03-01

    Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R2 ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), gives faith in the proposed
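    The Hammett LFER step can be illustrated with textbook constants for substituted benzoic acids: the pKa of a unit is shifted from a parent value by additive substituent constants, pKa = pKa0 − ρ·Σσ. The σ values below are standard literature values for para substituents of benzoic acid, not the paper's RSU-specific parameters, and the names are invented for this sketch.

    ```python
    # standard para Hammett substituent constants (benzoic acid scale)
    SIGMA_PARA = {"H": 0.00, "NO2": 0.78, "OCH3": -0.27, "Cl": 0.23}
    PKA0 = 4.20   # parent benzoic acid pKa
    RHO = 1.00    # reaction constant; unity by definition for benzoic acid ionization

    def hammett_pka(substituents):
        """pKa of a substituted benzoic acid from para Hammett constants."""
        return PKA0 - RHO * sum(SIGMA_PARA[s] for s in substituents)
    ```

    For example, the electron-withdrawing NO2 group lowers the predicted pKa from 4.20 to 3.42, close to the measured value for p-nitrobenzoic acid; the paper applies the same additive logic to the carboxylic and phenolic RSUs of humic and fulvic acids.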

  6. Manifestation of interplanetary medium parameters in development of a geomagnetic storm initial phase

    International Nuclear Information System (INIS)

    Chkhetiya, A.M.

    1988-01-01

    The role of solar wind plasma parameters in formation of a geomagnetic storm initial phase is refined. On the basis of statistical analysis an empirical formula relating the interplanetary medium parameters (components of interplanetary magnetic field, proton velocity and concentration) and the Dst index during the geomagnetic storm initial phase is proposed

  7. Sea surface stability parameters

    International Nuclear Information System (INIS)

    Weber, A.H.; Suich, J.E.

    1978-01-01

    A number of studies dealing with climatology of the Northwest Atlantic Ocean have been published in the last ten years. These published studies have dealt with directly measured meteorological parameters, e.g., wind speed, temperature, etc. This information has been useful because of the increased focus on the near coastal zone where man's activities are increasing in magnitude and scope, e.g., offshore power plants, petroleum production, and the subsequent environmental impacts of these activities. Atmospheric transport of passive or nonpassive material is significantly influenced by the turbulence structure of the atmosphere in the region of the atmosphere-ocean interface. This research entails identification of the suitability of standard atmospheric stability parameters which can be used to determine turbulence structure; the calculation of these parameters for the near-shore and continental shelf regions of the U.S. east coast from Cape Hatteras to Miami, Florida; and the preparation of a climatology of these parameters. In addition, a climatology for average surface stress for the same geographical region is being prepared

  8. Measuring the chargino parameters

    Indian Academy of Sciences (India)

    by measuring the cross-sections with polarized beams at e+e- collider ... is given by the fundamental SUSY parameters: the SU(2) gaugino mass M₂, the higgsino .... two points in the plane which are symmetric under the interchange M₂ ↔ μ.

  9. General image acquisition parameters

    International Nuclear Information System (INIS)

    Teissier, J.M.; Lopez, F.M.; Langevin, J.F.

    1993-01-01

    The general parameters are of paramount importance in achieving image quality in terms of spatial resolution and contrast. They also play a role in the acquisition time for each sequence. We describe them separately, before combining them in a decision tree gathering the various options that are possible for diagnosis

  10. Quantization of physical parameters

    International Nuclear Information System (INIS)

    Jackiw, R.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge

    1984-01-01

    Dynamical models are described with parameters (mass, coupling strengths) which must be quantized for quantum mechanical consistency. These and related topological ideas have physical application to phenomenological descriptions of high temperature and low energy quantum chromodynamics, to the nonrelativistic dynamics of magnetic monopoles, and to the quantum Hall effect. (author)

  11. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A; Mert, M [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H A [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature. The resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
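The parameter-fitting idea above, minimising a tracking error between model output and wind tunnel reference data, can be sketched with a hypothetical one-parameter lag model in place of the full Beddoes-Leishman model; the model form, signal, and candidate values are all illustrative assumptions.

```python
import math

# Hypothetical first-order lag model of dynamic lift response: the time
# constant tau is tuned by grid search to minimise the squared "tracking
# error" against reference data, mirroring the optimisation in the paper.
def simulate(tau, alphas, dt=0.1):
    cl, out = 0.0, []
    for a in alphas:
        cl += dt / tau * (2 * math.pi * a - cl)  # relax toward quasi-steady CL
        out.append(cl)
    return out

alphas = [0.05 * math.sin(0.3 * i) for i in range(100)]   # pitching motion
reference = simulate(0.8, alphas)                         # synthetic "measurements"
best = min((sum((m - r) ** 2 for m, r in zip(simulate(t, alphas), reference)), t)
           for t in [0.4, 0.6, 0.8, 1.0, 1.2])
print(best[1])  # 0.8 — the grid search recovers the true time constant
```

In practice the error surface over several coupled parameters is far less benign, which is consistent with the large case-to-case variation of the optimum parameters reported above.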

  12. An Empirical Analysis of the Budget Deficit

    Directory of Open Access Journals (Sweden)

    Ioan Talpos

    2007-11-01

    Full Text Available Economic policies and, particularly, fiscal policies are not designed and implemented in an “empty space”: the structural characteristics of the economic systems, the institutional architecture of societies, the cultural paradigm and the power relations between different social groups define the borders of these policies. This paper tries to deal with these borders, to describe their nature and the implications of their existence for the fiscal policies’ quality and impact, at a theoretical level as well as at an empirical one. The main results of the proposed analysis support the idea that the mentioned variables matter both for the social mandate entrusted by the society to the state, and thus for the role and functions of the state, and for economic growth as supported by the resources collected and distributed by the public authorities.

  13. Mobile Systems Development: An Empirical Study

    DEFF Research Database (Denmark)

    Hosbond, J. H.

    As part of an ongoing study on mobile systems development (MSD), this paper presents preliminary findings of research-in-progress. The debate on mobility in research has so far been dominated by mobile HCI, technological innovations, and socio-technical issues related to new and emerging mobile...... work patterns. This paper is about the development of mobile systems.Based on an on-going empirical study I present four case studies of companies each with different products or services to offer and diverging ways of establishing and sustaining a successful business in the mobile industry. From...... the case studies I propose a five-layered framework for understanding the structure and segmentation of the industry. This leads to an analysis of the different modes of operation within the mobile industry, exemplified by the four case studies.The contribution of this paper is therefore two-fold: (1) I...

  14. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atom radii of component elements will change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B so that the atom with larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B respectively are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for signs of deviations according to the electron density at Wigner–Seitz cell from Thomas–Fermi–Dirac–Cheng model
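The linear mixing rule behind Vegard's law, and the deviation from it discussed above, can be written in one line; the quadratic bowing form and the Cu-Ni lattice parameters below are illustrative assumptions, not values from the paper's model.

```python
# Vegard's law sketch: linear interpolation of the lattice parameter of a
# binary solid solution, with an optional deviation ("bowing") term whose
# sign reflects the unequal expansion/contraction of the component atoms.
def vegard(a_A, a_B, x, bowing=0.0):
    """a(x) = x*a_A + (1-x)*a_B - bowing*x*(1-x)."""
    return x * a_A + (1 - x) * a_B - bowing * x * (1 - x)

# Cu (3.615 Angstrom) - Ni (3.524 Angstrom), ideal Vegard behaviour at x = 0.5:
print(round(vegard(3.615, 3.524, 0.5), 4))  # 3.5695
```

A positive or negative bowing parameter then encodes the sign of the deviation that the empirical atom model attributes to the mismatch in electron density and cohesive energy.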

  15. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  16. Chemistry and metallurgy in the Portuguese Empire

    Energy Technology Data Exchange (ETDEWEB)

    Habashi, F. [Laval Univ., Sainte-Foy, Quebec City, PQ (Canada)

    2000-10-01

    The foundation and expansion of the Portuguese Empire is sketched, with emphasis on the development of a new type of ship by Prince Henrique the Navigator (AD 1385-1460), known as the caravel. By virtue of its advanced design, it was capable of sailing the stormy seas at high speeds, and thereby was instrumental in extending Portuguese influence over vast territories in South America, Asia and Africa, extending Portuguese know-how in mining, metallurgy, chemistry and trade along with Christianity. The role played by the University of Coimbra, founded in 1306, and the contribution of the Brazilian Geological Survey, established in 1875, and of the School of Mines in Ouro Preto in Brazil in 1876, in the exploitation of the mineral wealth of the Portuguese colonies is chronicled.

  17. Architecture Between Mind and Empirical Experience

    Directory of Open Access Journals (Sweden)

    Shatha Abbas Hassan

    2016-10-01

    Full Text Available The research aims to identify the level of balance in architectural thought between the rational, in which human consciousness and the mind are the source of knowledge, and the empirical and moral, in which material human experience is the source of knowledge. This was reflected in architecture in the specialized view that the mind is the source of knowledge which explains the phenomena of life. The rational approach is based on objectivity and methodology in form production (Form Production); the other approach is based on subjectivity in form production (Form Inspiration). The research problem is that there is an imbalance in the relationship between the rational side and human experience in architecture, which has led to an imbalance between theory and application in architecture across architectural movements.

  18. An empirical assessment of the SERVQUAL scale

    Directory of Open Access Journals (Sweden)

    Mahla Zargar

    2015-11-01

    Full Text Available During the past few years, many people have used points of sale for purchasing goods and services. Points of sale tend to provide a reliable method for making purchases in stores. Implementation of points of sale may reduce the depreciation cost of automated teller machines and help banks increase their productivity. Therefore, for bank managers, it is important to provide high quality services. This paper presents an empirical investigation to measure service quality using the SERVQUAL scale. The study first extracts six factors: trust, responsiveness, reliability, empathy, tangibles, and getting insight for future development. It then applies structural equation modeling and finds that all components have positive impacts on customer satisfaction.

  19. Empirical scaling for present Ohmically heated tokamaks

    International Nuclear Information System (INIS)

    Daughney, C.

    1975-01-01

    Experimental results from the Adiabatic Toroidal Compressor (ATC) tokamak are used to obtain empirical scaling laws for the average electron temperature and electron energy confinement time as functions of the average electron density, the effective ion charge, and the plasma current. These scaling laws are extended to include dependence upon minor and major plasma radius and toroidal field strength through a comparison of the various tokamaks described in the literature. Electron thermal conductivity is the dominant loss process for the ATC tokamak. The parametric dependences of the observed electron thermal conductivity are not explained by present theoretical considerations. The electron temperature obtained with Ohmic heating is shown to be a function of current density - which will not be increased in the next generation of large tokamaks. However, the temperature dependence of the electron energy confinement time suggests that significant improvement in confinement time will be obtained with supplementary electron heating. (author)

  20. Improvement of electrocardiogram by empirical wavelet transform

    Science.gov (United States)

    Chanchang, Vikanda; Kumchaiseemak, Nakorn; Sutthiopad, Malee; Luengviriya, Chaiya

    2017-09-01

    Electrocardiogram (ECG) is a crucial tool in the detection of cardiac arrhythmia. It is also often used in a routine physical exam, especially for elderly people. This graphical representation of the electrical activity of the heart is obtained by a measurement of voltage at the skin; therefore, the signal is always contaminated by noise from various sources. For a proper interpretation, the quality of the ECG should be improved by noise reduction. In this article, we present a study of noise filtration in the ECG by using an empirical wavelet transform (EWT). Unlike the traditional wavelet method, EWT is adaptive since the frequency spectrum of the ECG is taken into account in the construction of the wavelet basis. We show that the signal-to-noise ratio increases after the noise filtration for different noise artefacts.
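The signal-to-noise assessment described above can be sketched as follows. A clean ECG-like wave is contaminated with Gaussian noise and then smoothed, and the SNR in dB is compared before and after. The moving-average filter is only a stand-in for illustration: the paper's method instead builds an adaptive wavelet basis from the signal's own spectrum (EWT).

```python
import math, random

random.seed(0)
clean = [math.sin(2 * math.pi * i / 50) for i in range(500)]   # toy "ECG" wave
noisy = [c + random.gauss(0, 0.3) for c in clean]              # contaminated signal

def snr_db(sig, ref):
    """SNR in dB of sig relative to the known clean reference ref."""
    p_sig = sum(r * r for r in ref)
    p_err = sum((s - r) ** 2 for s, r in zip(sig, ref))
    return 10 * math.log10(p_sig / p_err)

# Naive 5-point moving average as a placeholder denoiser:
smooth = [sum(noisy[max(0, i - 2):i + 3]) / len(noisy[max(0, i - 2):i + 3])
          for i in range(len(noisy))]
print(snr_db(smooth, clean) > snr_db(noisy, clean))  # True: filtration raised SNR
```

The advantage claimed for EWT is precisely that its filter bank adapts to the measured spectrum, so it preserves ECG morphology better than a fixed filter like the one above.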

  1. An empirical examination of restructured electricity prices

    International Nuclear Information System (INIS)

    Knittel, C.R.; Roberts, M.R.

    2005-01-01

    We present an empirical analysis of restructured electricity prices. We study the distributional and temporal properties of the price process in a non-parametric framework, after which we parametrically model the price process using several common asset price specifications from the asset-pricing literature, as well as several less conventional models motivated by the peculiarities of electricity prices. The findings reveal several characteristics unique to electricity prices including several deterministic components of the price series at different frequencies. An 'inverse leverage effect' is also found, where positive shocks to the price series result in larger increases in volatility than negative shocks. We find that forecasting performance is dramatically improved when we incorporate features of electricity prices not commonly modelled in other asset prices. Our findings have implications for how empiricists model electricity prices, as well as how theorists specify models of energy pricing. (author)
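The "inverse leverage effect" described above can be sketched with an asymmetric GARCH-style variance recursion. The functional form and all parameter values are illustrative assumptions, not the paper's estimated specification; the sign of the asymmetry term is simply flipped relative to the usual equity-market convention.

```python
# Asymmetric volatility sketch: with gamma > 0, POSITIVE price shocks
# raise next-period variance more than negative shocks of equal size,
# i.e. an inverse leverage effect.
def next_variance(var, shock, omega=0.01, alpha=0.1, gamma=0.05, beta=0.85):
    return omega + alpha * shock ** 2 + gamma * shock * abs(shock) + beta * var

up = next_variance(0.04, +0.5)    # positive shock
down = next_variance(0.04, -0.5)  # negative shock of equal magnitude
print(up > down)  # True: a positive shock raises volatility more
```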

  2. An empirical method for dynamic camouflage assessment

    Science.gov (United States)

    Blitch, John G.

    2011-06-01

    As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof of concept experiment conducted in November 2006. The conclusion drawn from these results, related literature and the author's personal experience suggest that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory and embrace the challenges posed by high levels of cognitive processing.

  3. Empirical Design Considerations for Industrial Centrifugal Compressors

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2012-01-01

    Full Text Available Computational Fluid Dynamics (CFD) has been extensively used in centrifugal compressor design. CFD provides further optimisation opportunities for an existing compressor design rather than producing the design itself. The experience-based design process still plays an important role in new compressor developments. The wide variety of design subjects represents a very complex design world for centrifugal compressor designers; therefore, some basic information for centrifugal design is still very important. The impeller is the key part of the centrifugal stage. Designing a highly efficient impeller with a wide operating range can ensure overall stage design success. This paper provides some empirical information for designing industrial centrifugal compressors with a focus on the impeller. A ported shroud compressor basic design guideline is also discussed for improving the compressor range.

  4. Empirical and dynamic primary energy factors

    International Nuclear Information System (INIS)

    Wilby, Mark Richard; Rodríguez González, Ana Belén; Vinagre Díaz, Juan José

    2014-01-01

    Current legislation, standards, and scientific research in the field of energy efficiency often make use of PEFs (primary energy factors). The measures employed are usually fixed and based on theoretical calculations. However given the intrinsically variable nature of energy systems, these PEFs should rely on empirical data and evolve in time. Otherwise the obtained efficiencies may not be representative of the actual energy system. In addition, incorrect PEFs may cause a negative effect on the energy efficiency measures. For instance, imposing a high value on the PEF of electricity may discourage the use of renewable energy sources, which have an actual value close to 1. In order to provide a solution to this issue, we propose an application of the Energy Networks (ENs), described in a previous work, to calculate dynamic PEFs based on empirical data. An EN represents an entire energy system both numerically and graphically, from its primary energy sources to their final energy forms, and consuming sectors. Using ENs we can calculate the PEF of any energy form and depict it in a simple and meaningful graph that shows the details of the contribution of each primary energy and the efficiency of the associated process. The analysis of these PEFs leads to significant conclusions regarding the energy models adopted among countries, their evolution in time, the selection of viable ways to improve efficiency, and the detection of best practices that could contribute to the overall energy efficiency targets. - Highlights: • Primary Energy Factors (PEFs) are foundation of much energy legislation and research. • Traditionally, they have been treated as geotemporally invariant. • This work provides a systematic and transparent methodology for adding variability. • It also shows the variability between regions due to market, policy, and technology. • Finally it demonstrates the utility of extended PEFs as a tool in their own right
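The empirical primary energy factor discussed above is, at its core, a ratio of primary energy consumed to final energy delivered, aggregated over the conversion paths of an energy network. The sketch below illustrates that arithmetic; the mix and efficiencies are invented numbers, not data from the paper's Energy Networks.

```python
# Empirical PEF sketch: total primary energy in divided by total final
# energy out, summed over each conversion path of the (toy) energy mix.
def pef(paths):
    """paths: list of (primary_energy_in, final_energy_out) per source."""
    total_in = sum(p for p, f in paths)
    total_out = sum(f for p, f in paths)
    return total_in / total_out

# Illustrative electricity mix: a fossil plant at 40% efficiency and a
# renewable source counted with PEF close to 1, as noted in the abstract.
mix = [(250.0, 100.0),   # fossil: 250 units primary -> 100 units electricity
       (100.0, 100.0)]   # renewable: roughly 1:1 by convention
print(round(pef(mix), 2))  # 1.75
```

Recomputing this ratio from measured flows over time, rather than fixing it by theory, is exactly what makes the resulting PEF "empirical and dynamic".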

  5. A research program in empirical computer science

    Science.gov (United States)

    Knight, J. C.

    1991-01-01

    During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.

  6. Equifinality in empirical studies of cultural transmission.

    Science.gov (United States)

    Barrett, Brendan J

    2018-01-31

    Cultural systems exhibit equifinal behavior - a single final state may be arrived at via different mechanisms and/or from different initial states. Potential for equifinality exists in all empirical studies of cultural transmission, including controlled experiments, observational field research, and computational simulations. Acknowledging and anticipating the existence of equifinality is important in empirical studies of social learning and cultural evolution; it helps us understand the limitations of analytical approaches and can improve our ability to predict the dynamics of cultural transmission. Here, I illustrate and discuss examples of equifinality in studies of social learning, and how certain experimental designs might be prone to it. I then review examples of equifinality discussed in the social learning literature, namely the use of s-shaped diffusion curves to discern individual from social learning, and the operational definitions and analytical approaches used in studies of conformist transmission. While equifinality exists to some extent in all studies of social learning, I make suggestions for how to address instances of it, with an emphasis on using data simulation and methodological verification alongside modern statistical approaches that emphasize prediction and model comparison. In cases where evaluated learning mechanisms are equifinal due to non-methodological factors, I suggest that this is not always a problem if it helps us predict cultural change. In some cases, equifinal learning mechanisms might offer insight into how individual learning, social learning strategies and other endogenous social factors may be important in structuring cultural dynamics and within- and between-group heterogeneity. Copyright © 2018 Elsevier B.V. All rights reserved.
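The s-shaped diffusion curve problem mentioned above can be demonstrated directly: two different toy learning mechanisms, one social (adoption via contact with adopters) and one asocial (independent discovery with a learning-curve hazard), both produce saturating s-shaped trajectories. Both models are deterministic simplifications chosen for clarity, not models from the paper.

```python
import math

def social(n, N=100, beta=0.1, steps=120):
    """Logistic contact process: adoption rate proportional to n*(N-n)."""
    out = []
    for _ in range(steps):
        n += beta * n * (N - n) / N
        out.append(n)
    return out

def individual(N=100, rate=0.05, steps=120):
    """Independent asocial discovery with a sigmoidal cumulative curve."""
    return [N / (1 + math.exp(-rate * (t - 60))) for t in range(steps)]

s, i = social(1.0), individual()
# Both trajectories increase monotonically and saturate near N - the curve
# shape alone cannot tell the two mechanisms apart (equifinality):
print(all(b >= a for a, b in zip(s, s[1:])) and s[-1] > 95 and i[-1] > 90)  # True
```

This is why the abstract recommends data simulation and model comparison rather than reading the transmission mechanism off the curve's shape.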

  7. The Japanese Medical Empire and Its Iterations

    Directory of Open Access Journals (Sweden)

    John DiMoia

    2015-06-01

    Full Text Available Hoi-eun Kim. Doctors of Empire: Medical and Cultural Encounters between Imperial Germany and Meiji Japan. Toronto: University of Toronto Press, 2014. 272 pp. $55 (cloth/ebook). As recently as the early 1980s, the literature in English concerning the broader transformation of East Asia as a space for emerging developments in science, technology, and medicine (STM) was dominated almost exclusively by works on imperial China. This is not surprising, given its considerable historical legacy as the dominant cultural force in the region. It was perfectly acceptable within the field, moreover, to treat neighboring countries within this Sinocentric framework, or at least to regard their cultural and historical indebtedness to China as one of their central features of interest. If I exaggerate the hegemonic force of China studies in the recent past to make a rhetorical point, I do so to mark the arrival of a great deal of newer scholarship concerning the transformation of the East Asian region since the nineteenth century, and arguably since at least the seventeenth century, particularly within the field of medicine—whether Western, “traditional” (a problematic term, admittedly), or even, in more complex cases, those practices embedded within a dense nexus of religious worship and healing. The work under review here, Hoi-eun Kim’s Doctors of Empire, provides a new and welcome addition to the growing literature on Meiji Japan, following in the tradition of a substantial body of previous work on scientific and technological accomplishments, including studies by James Bartholomew (1993), Tessa Morris-Suzuki (1994), and Morris Low (2005), among many others....

  8. Empirical study of supervised gene screening

    Directory of Open Access Journals (Sweden)

    Ma Shuangge

    2006-12-01

    Full Text Available Abstract Background Microarray studies provide a way of linking variations of phenotypes with their genetic causations. Constructing predictive models using high dimensional microarray measurements usually consists of three steps: (1) unsupervised gene screening; (2) supervised gene screening; and (3) statistical model building. Supervised gene screening based on marginal gene ranking is commonly used to reduce the number of genes in the model building. Various simple statistics, such as the t-statistic or signal to noise ratio, have been used to rank genes in the supervised screening. Despite its extensive usage, statistical study of supervised gene screening remains scarce. Our study is partly motivated by the differences in gene discovery results caused by using different supervised gene screening methods. Results We investigate concordance and reproducibility of supervised gene screening based on eight commonly used marginal statistics. Concordance is assessed by the relative fractions of overlaps between top ranked genes screened using different marginal statistics. We propose a Bootstrap Reproducibility Index, which measures reproducibility of individual genes under the supervised screening. Empirical studies are based on four public microarray data sets. We consider the cases where the top 20%, 40% and 60% of genes are screened. Conclusion From a gene discovery point of view, the effect of supervised gene screening based on different marginal statistics cannot be ignored. Empirical studies show that (1) genes passed different supervised screenings may be considerably different; (2) concordance may vary, depending on the underlying data structure and percentage of selected genes; (3) evaluated with the Bootstrap Reproducibility Index, genes passed supervised screenings are only moderately reproducible; and (4) concordance cannot be improved by supervised screening based on reproducibility.
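The concordance measure described above can be sketched by ranking simulated genes with two marginal statistics, the t-statistic and a signal-to-noise ratio, and computing the overlap of the top-ranked lists. The data are simulated, and the overlap threshold in the check is an illustrative assumption.

```python
import random, statistics as st

random.seed(1)
# 200 simulated genes, 10 samples per class; the first 20 genes carry signal.
genes = [([random.gauss(1.0 if g < 20 else 0.0, 1) for _ in range(10)],
          [random.gauss(0.0, 1) for _ in range(10)]) for g in range(200)]

def t_stat(a, b):
    """Two-sample (Welch-style) t-statistic."""
    return (st.mean(a) - st.mean(b)) / (
        (st.variance(a) / len(a) + st.variance(b) / len(b)) ** 0.5)

def snr(a, b):
    """Signal-to-noise ratio as used in marginal gene ranking."""
    return (st.mean(a) - st.mean(b)) / (st.stdev(a) + st.stdev(b))

k = 40  # screen the top 20% of genes, as in the paper's first scenario
top_t = set(sorted(range(200), key=lambda g: -abs(t_stat(*genes[g])))[:k])
top_s = set(sorted(range(200), key=lambda g: -abs(snr(*genes[g])))[:k])
concordance = len(top_t & top_s) / k
print(concordance > 0.5)  # the two rankings overlap substantially but not fully
```

Even on clean simulated data the two lists typically disagree on some genes, which is the phenomenon the abstract quantifies on real microarray data.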

  9. Spillover effects in epidemiology: parameters, study designs and methodological considerations

    Science.gov (United States)

    Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E

    2018-01-01

    Abstract Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568

  10. Verification of supersonic and hypersonic semi-empirical predictions using CFD

    International Nuclear Information System (INIS)

    McIlwain, S.; Khalid, M.

    2004-01-01

    CFD was used to verify the accuracy of the axial force, normal force, and pitching moment predictions of two semi-empirical codes. This analysis considered the flow around the forebody of four different aerodynamic shapes. These included geometries with equal-volume straight or tapered bodies, with either standard or double-angle nose cones. The flow was tested at freestream Mach numbers of M = 1.5, 4.0, and 7.0. The CFD results gave the expected flow pressure contours for each geometry. The geometries with straight bodies produced larger axial forces, smaller normal forces, and larger pitching moments compared to the geometries with tapered bodies. The double-angle nose cones introduced a shock into the flow, but affected the straight-body geometries more than the tapered-body geometries. Both semi-empirical codes predicted axial forces that were consistent with the CFD data. The agreement between the normal forces and pitching moments was not as good, particularly for the straight-body geometries. But even though the semi-empirical results were not exactly the same as the CFD data, the semi-empirical codes provided rough estimates of the aerodynamic parameters in a fraction of the time required to perform a CFD analysis. (author)

  11. EbayesThresh: R Programs for Empirical Bayes Thresholding

    Directory of Open Access Journals (Sweden)

    Iain Johnstone

    2005-04-01

    Full Text Available Suppose that a sequence of unknown parameters is observed subject to independent Gaussian noise. The EbayesThresh package in the S language implements a class of Empirical Bayes thresholding methods that can take advantage of possible sparsity in the sequence, to improve the quality of estimation. The prior for each parameter in the sequence is a mixture of an atom of probability at zero and a heavy-tailed density. Within the package, this can be either a Laplace (double exponential) density or else a mixture of normal distributions with tail behavior similar to the Cauchy distribution. The mixing weight, or sparsity parameter, is chosen automatically by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold, and the package provides the posterior mean, and hard and soft thresholding, as additional options. This paper reviews the method, and gives details (far beyond those previously published) of the calculations needed for implementing the procedures. It explains and motivates both the general methodology, and the use of the EbayesThresh package, through simulated and real data examples. When estimating the wavelet transform of an unknown function, it is appropriate to apply the method level by level to the transform of the observed data. The package can carry out these calculations for wavelet transforms obtained using various packages in R and S-PLUS. Details, including a motivating example, are presented, and the application of the method to image estimation is also explored. The final topic considered is the estimation of a single sequence that may become progressively sparser along the sequence. An iterated least squares isotone regression method allows for the choice of a threshold that depends monotonically on the order in which the observations are made. An alternative
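The hard and soft thresholding rules mentioned above are easy to state explicitly. In EbayesThresh the threshold follows from the marginal-maximum-likelihood sparsity weight; the fixed threshold value used below is an illustrative assumption, not the package's estimate.

```python
# Thresholding rules applied elementwise to noisy observations: soft
# thresholding shrinks survivors toward zero by t, hard thresholding
# keeps them unchanged.
def soft(x, t):
    return 0.0 if abs(x) <= t else (x - t if x > 0 else x + t)

def hard(x, t):
    return 0.0 if abs(x) <= t else x

obs = [0.3, -1.8, 4.2, 0.9, -3.1]   # hypothetical noisy sequence
print([round(soft(x, 2.0), 1) for x in obs])  # [0.0, 0.0, 2.2, 0.0, -1.1]
print([hard(x, 2.0) for x in obs])            # [0.0, 0.0, 4.2, 0.0, -3.1]
```

Both rules exploit sparsity in the same way: small coefficients, presumed to be pure noise, are zeroed, while large ones are retained (hard) or shrunk (soft).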

  12. Segmentation-free empirical beam hardening correction for CT

    Energy Technology Data Exchange (ETDEWEB)

    Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. 
This step is essential for the

  13. Segmentation-free empirical beam hardening correction for CT.

    Science.gov (United States)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed

  14. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    Science.gov (United States)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion, and a satisfactory agreement between these expressions and experimental results is obtained. The parameters of the empirical models can depend on the strain rate, whereas the characteristics of the incubation time criterion of yield are independent of the loading history. This independence, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and yield an effective and convenient equation for determining the yield strength over a wider range of strain rates.
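For reference, the two empirical models named above have standard closed forms. The sketch below uses the usual Cowper-Symonds overstress expression and the Johnson-Cook strain-rate factor; the mild-steel constants are illustrative textbook values, not parameters from this paper:

```python
import numpy as np

def cowper_symonds(sigma_s, strain_rate, D, q):
    """Cowper-Symonds: sigma_d = sigma_s * (1 + (rate/D)^(1/q))."""
    return sigma_s * (1.0 + (strain_rate / D) ** (1.0 / q))

def johnson_cook_rate_factor(C, strain_rate, ref_rate=1.0):
    """Strain-rate multiplier of the Johnson-Cook flow stress: 1 + C*ln(rate/ref)."""
    return 1.0 + C * np.log(strain_rate / ref_rate)

# illustrative constants often quoted for mild steel (assumption, not from the paper)
sigma_s = 250.0          # quasi-static yield strength, MPa
D, q = 40.4, 5.0         # Cowper-Symonds constants

rates = np.array([1.0, 100.0, 1000.0])   # strain rates, 1/s
dynamic = cowper_symonds(sigma_s, rates, D, q)
# the dynamic yield strength grows monotonically with strain rate
```

Both factors reduce to the quasi-static value at the reference condition, which is the behavior the incubation-time parameters must reproduce.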

  15. Process health management using success tree and empirical model

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Suyoung [BNF Technology, Daejeon (Korea, Republic of); Sung, Wounkyoung [Korea South-East Power Co. Ltd., Seoul (Korea, Republic of)

    2012-03-15

    Interest in predictive or condition-based maintenance is growing in the power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels such as a component, a system, or a plant. In order to achieve this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter, which is the result of kernel regression using the accumulated operation data and the current plant boundary conditions (referred to as an empirical model), is statistically meaningful. This deviation is converted into an index considering the margin to set points which are associated with safety. This index is referred to as a PHI, and the PHIs can be monitored for an individual parameter as well as at a component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree is the plant-level PHI; the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.
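The core of such a monitoring scheme — a kernel-regression estimate of the expected parameter value plus a margin-based index — can be sketched as follows. This is a toy illustration with assumed data and an assumed index scaling, not the PHI system's actual algorithm:

```python
import numpy as np

def kernel_estimate(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression: the 'expected' parameter value
    given historical operating data and a current boundary condition."""
    w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    return float(np.sum(w * y_train) / np.sum(w))

def health_index(measured, estimated, safety_margin):
    """Map the deviation to a 0-100 score: 100 at zero deviation,
    0 when the deviation consumes the whole margin to the set point."""
    frac = min(abs(measured - estimated) / safety_margin, 1.0)
    return 100.0 * (1.0 - frac)

# toy history: a parameter assumed to depend linearly on load
load = np.linspace(50, 100, 200)
temp = 300.0 + 0.8 * load

expected = kernel_estimate(load, temp, x_query=80.0, bandwidth=2.0)
phi_ok = health_index(measured=364.5, estimated=expected, safety_margin=10.0)
phi_bad = health_index(measured=378.0, estimated=expected, safety_margin=10.0)
```

Component- or system-level indices would then aggregate such per-parameter scores up the success tree.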

  16. Process health management using success tree and empirical model

    International Nuclear Information System (INIS)

    Heo, Gyunyoung; Kim, Suyoung; Sung, Wounkyoung

    2012-01-01

    Interest in predictive or condition-based maintenance is growing in the power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels such as a component, a system, or a plant. In order to achieve this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter, which is the result of kernel regression using the accumulated operation data and the current plant boundary conditions (referred to as an empirical model), is statistically meaningful. This deviation is converted into an index considering the margin to set points which are associated with safety. This index is referred to as a PHI, and the PHIs can be monitored for an individual parameter as well as at a component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree is the plant-level PHI; the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.

  17. Empirically characteristic analysis of chaotic PID controlling particle swarm optimization

    Science.gov (United States)

    Yan, Danping; Lu, Yongzhong; Zhou, Min; Chen, Shiping; Levy, David

    2017-01-01

    Since chaos systems generally have the intrinsic properties of sensitivity to initial conditions, topological mixing and density of periodic orbits, they may tactfully use the chaotic ergodic orbits to achieve the global optimum or their better approximation to given cost functions with high probability. During the past decade, they have increasingly received much attention from academic community and industry society throughout the world. To improve the performance of particle swarm optimization (PSO), we herein propose a chaotic proportional integral derivative (PID) controlling PSO algorithm by the hybridization of chaotic logistic dynamics and hierarchical inertia weight. The hierarchical inertia weight coefficients are determined in accordance with the present fitness values of the local best positions so as to adaptively expand the particles’ search space. Moreover, the chaotic logistic map is not only used in the substitution of the two random parameters affecting the convergence behavior, but also used in the chaotic local search for the global best position so as to easily avoid the particles’ premature behaviors via the whole search space. Thereafter, the convergent analysis of chaotic PID controlling PSO is under deep investigation. Empirical simulation results demonstrate that compared with other several chaotic PSO algorithms like chaotic PSO with the logistic map, chaotic PSO with the tent map and chaotic catfish PSO with the logistic map, chaotic PID controlling PSO exhibits much better search efficiency and quality when solving the optimization problems. Additionally, the parameter estimation of a nonlinear dynamic system also further clarifies its superiority to chaotic catfish PSO, genetic algorithm (GA) and PSO. PMID:28472050

  18. Empirically characteristic analysis of chaotic PID controlling particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Danping Yan

    Full Text Available Since chaos systems generally have the intrinsic properties of sensitivity to initial conditions, topological mixing and density of periodic orbits, they may tactfully use the chaotic ergodic orbits to achieve the global optimum or their better approximation to given cost functions with high probability. During the past decade, they have increasingly received much attention from academic community and industry society throughout the world. To improve the performance of particle swarm optimization (PSO), we herein propose a chaotic proportional integral derivative (PID) controlling PSO algorithm by the hybridization of chaotic logistic dynamics and hierarchical inertia weight. The hierarchical inertia weight coefficients are determined in accordance with the present fitness values of the local best positions so as to adaptively expand the particles' search space. Moreover, the chaotic logistic map is not only used in the substitution of the two random parameters affecting the convergence behavior, but also used in the chaotic local search for the global best position so as to easily avoid the particles' premature behaviors via the whole search space. Thereafter, the convergent analysis of chaotic PID controlling PSO is under deep investigation. Empirical simulation results demonstrate that compared with other several chaotic PSO algorithms like chaotic PSO with the logistic map, chaotic PSO with the tent map and chaotic catfish PSO with the logistic map, chaotic PID controlling PSO exhibits much better search efficiency and quality when solving the optimization problems. Additionally, the parameter estimation of a nonlinear dynamic system also further clarifies its superiority to chaotic catfish PSO, genetic algorithm (GA) and PSO.
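The substitution described above — chaotic logistic dynamics in place of the two uniform random draws of the velocity update — can be sketched with a minimal PSO on a sphere cost function. This is an illustrative reconstruction, not the authors' algorithm: the hierarchical inertia weight is simplified here to a linear decay, and the chaotic local search is omitted:

```python
import numpy as np

def logistic_map(x):
    """Fully chaotic logistic map at r = 4, mapping (0, 1) into [0, 1]."""
    return 4.0 * x * (1.0 - x)

def chaotic_pso(f, dim=5, n_particles=20, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(f, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    c1 = c2 = 2.0
    chaos = rng.uniform(0.01, 0.99, (n_particles, dim, 2))  # chaotic state
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters            # simplified decreasing inertia
        chaos = logistic_map(chaos)          # replaces the two random draws
        r1, r2 = chaos[..., 0], chaos[..., 1]
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.apply_along_axis(f, 1, pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

sphere = lambda x: float(np.sum(x * x))
best, best_val = chaotic_pso(sphere)
```

The chaotic sequences stay in the unit interval like the uniform draws they replace, but their ergodic orbits are what the paper credits with reducing premature convergence.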

  19. Health monitoring of pipeline girth weld using empirical mode decomposition

    Science.gov (United States)

    Rezaei, Davood; Taheri, Farid

    2010-05-01

    In the present paper the Hilbert-Huang transform (HHT), as a time-series analysis technique, has been combined with a local diagnostic approach in an effort to identify flaws in pipeline girth welds. This method is based on monitoring the free vibration signals of the pipe at its healthy and flawed states, and processing the signals through the HHT and its associated signal decomposition technique, known as empirical mode decomposition (EMD). The EMD method decomposes the vibration signals into a collection of intrinsic mode functions (IMFs). The deviations in structural integrity, measured from a healthy-state baseline, are subsequently evaluated by two damage sensitive parameters. The first is a damage index, referred to as the EM-EDI, which is established based on an energy comparison of the first or second IMF of the vibration signals, before and after occurrence of damage. The second parameter is the evaluation of the lag in instantaneous phase, a quantity derived from the HHT. In the developed methodologies, the pipe's free vibration is monitored by piezoceramic sensors and a laser Doppler vibrometer. The effectiveness of the proposed techniques is demonstrated through a set of numerical and experimental studies on a steel pipe with a mid-span girth weld, for both pressurized and nonpressurized conditions. To simulate a crack, a narrow notch is cut on one side of the girth weld. Several damage scenarios, including notches of different depths and at various locations on the pipe, are investigated. Results from both numerical and experimental studies reveal that in all damage cases the sensor located at the notch vicinity could successfully detect the notch and qualitatively predict its severity. The effect of internal pressure on the damage identification method is also monitored. Overall, the results are encouraging and promise the effectiveness of the proposed approaches as inexpensive systems for structural health monitoring purposes.
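Given two IMFs extracted from the healthy and flawed states, an energy-comparison damage index of the kind described can be sketched as below. This is a simplified illustration with synthetic decay signals; the paper's exact EM-EDI definition may differ:

```python
import numpy as np

def signal_energy(x):
    """Total energy of a discrete signal: the sum of squared samples."""
    return float(np.sum(x ** 2))

def energy_damage_index(imf_healthy, imf_damaged):
    """Relative change in IMF energy between the baseline and current state.
    Zero for an unchanged structure; grows with deviation from the baseline."""
    e0 = signal_energy(imf_healthy)
    return abs(e0 - signal_energy(imf_damaged)) / e0

t = np.linspace(0, 1, 2000)
# healthy baseline: a damped free-vibration mode
healthy = np.exp(-3 * t) * np.sin(2 * np.pi * 50 * t)
# assumed notch effect: slightly lower frequency and faster decay
damaged = np.exp(-4 * t) * np.sin(2 * np.pi * 47 * t)

edi = energy_damage_index(healthy, damaged)
```

In the paper the IMFs come from empirical mode decomposition of measured pipe vibrations; here plain damped sinusoids stand in for the first IMF.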

  20. Empirical molecular-dynamics study of diffusion in liquid semiconductors

    Science.gov (United States)

    Yu, W.; Wang, Z. Q.; Stroud, D.

    1996-11-01

    We report the results of an extensive molecular-dynamics study of diffusion in liquid Si and Ge (l-Si and l-Ge) and of impurities in l-Ge, using empirical Stillinger-Weber (SW) potentials with several choices of parameters. We use a numerical algorithm in which the three-body part of the SW potential is decomposed into products of two-body potentials, thereby permitting the study of large systems. One choice of SW parameters agrees very well with the observed l-Ge structure factors. The diffusion coefficients D(T) at melting are found to be approximately 6.4×10⁻⁵ cm²/s for l-Si, in good agreement with previous calculations, and about 4.2×10⁻⁵ and 4.6×10⁻⁵ cm²/s for two models of l-Ge. In all cases, D(T) can be fitted to an activated temperature dependence, with activation energies Ed of about 0.42 eV for l-Si, and 0.32 or 0.26 eV for two models of l-Ge, as calculated from either the Einstein relation or from a Green-Kubo-type integration of the velocity autocorrelation function. D(T) for Si impurities in l-Ge is found to be very similar to the self-diffusion coefficient of l-Ge. We briefly discuss possible reasons why the SW potentials give D(T)'s substantially lower than ab initio predictions.
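The Einstein relation mentioned above extracts D from the slope of the mean-square displacement, MSD(t) = 6Dt in three dimensions. A minimal sketch on synthetic Brownian trajectories with a known diffusion coefficient (an illustration of the estimator, not actual Stillinger-Weber data):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Einstein relation in 3D: MSD(t) = 6 D t, so D = (MSD slope) / 6.
    positions: array (n_steps, n_atoms, 3) of unwrapped coordinates."""
    disp = positions - positions[0]
    msd = np.mean(np.sum(disp ** 2, axis=2), axis=1)   # average over atoms
    times = dt * np.arange(len(msd))
    slope = np.polyfit(times, msd, 1)[0]               # linear fit of MSD vs t
    return slope / 6.0

# synthetic Brownian motion with known D (arbitrary units)
rng = np.random.default_rng(2)
D_true, dt, n_steps, n_atoms = 1.0, 1e-3, 5000, 200
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_steps, n_atoms, 3))
traj = np.cumsum(steps, axis=0)

D_est = diffusion_coefficient(traj, dt)
```

In a real MD run the early ballistic regime would be excluded from the fit; the synthetic walk here is diffusive from the start.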

  1. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  2. Critical parameters for ammonia

    International Nuclear Information System (INIS)

    Sato, M.; Masui, G.; Uematsu, M.

    2005-01-01

    (p, ρ, T) measurements and visual observations of the meniscus for ammonia were carried out carefully in the critical region over the range of temperatures -1 K ≤ (T - T c ) ≤ 0.04 K and of densities -19 kg . m -3 ≤ (ρ - ρ c ) ≤ 19 kg . m -3 by a metal-bellows volumometer with an optical cell. Vapor pressures were also measured at T = (310, 350, and 400) K. The critical parameters T c and ρ c were determined based on the results of observation of the critical opalescence. The critical pressure p c was determined from the present measurements at T c on the vapor pressure curve. Comparisons of the critical parameters with values given in the literature are presented.

  3. Critical parameters for ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Sato, M. [Center for Mechanical Engineering and Applied Mechanics, Keio University, Hiyoshi 3-14-1, Kohoku-ku, Yokohama 223-8522 (Japan); Masui, G. [Center for Mechanical Engineering and Applied Mechanics, Keio University, Hiyoshi 3-14-1, Kohoku-ku, Yokohama 223-8522 (Japan); Uematsu, M. [Center for Mechanical Engineering and Applied Mechanics, Keio University, Hiyoshi 3-14-1, Kohoku-ku, Yokohama 223-8522 (Japan)]. E-mail: uematsu@mech.keio.ac.jp

    2005-09-15

    (p, {rho}, T) measurements and visual observations of the meniscus for ammonia were carried out carefully in the critical region over the range of temperatures -1 K ≤ (T - T {sub c}) ≤ 0.04 K and of densities -19 kg . m{sup -3} ≤ ({rho} - {rho} {sub c}) ≤ 19 kg . m{sup -3} by a metal-bellows volumometer with an optical cell. Vapor pressures were also measured at T = (310, 350, and 400) K. The critical parameters T {sub c} and {rho} {sub c} were determined based on the results of observation of the critical opalescence. The critical pressure p {sub c} was determined from the present measurements at T {sub c} on the vapor pressure curve. Comparisons of the critical parameters with values given in the literature are presented.

  4. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
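The ERA step is compact enough to sketch: stack the Markov parameters into a Hankel matrix, truncate its SVD, and read off a reduced state-space realization. The SISO example below recovers a known first-order system exactly; it illustrates the standard algorithm, not the paper's flow model:

```python
import numpy as np

def era(markov, r):
    """Eigensystem Realization Algorithm for a SISO impulse response.
    markov[k] = C A^k B. Returns a reduced realization (A, B, C) of order r."""
    n = (len(markov) - 1) // 2
    H0 = np.array([[markov[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[markov[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :r], Vt[:r, :].T
    Si = np.diag(s[:r] ** -0.5)          # S^(-1/2)
    Sq = np.diag(s[:r] ** 0.5)           # S^(+1/2)
    A = Si @ Ur.T @ H1 @ Vr @ Si         # shifted Hankel gives the dynamics
    B = (Sq @ Vr.T)[:, 0:1]              # first column of controllability part
    C = (Ur @ Sq)[0:1, :]                # first row of observability part
    return A, B, C

# impulse response of a known first-order system: h_k = 0.8**k
h = [0.8 ** k for k in range(41)]
A, B, C = era(h, r=1)
# the order-1 realization reproduces the Markov parameters: C A^k B = h[k]
```

Balanced truncation can then be applied to the (A, B, C) triple exactly as the abstract describes for the higher-order case.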

  5. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
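A two-level CES production function of the kind estimated here can be written down directly. The sketch below uses the (KL)E nesting that the paper finds fits the data best; all parameter values are illustrative assumptions, not estimates from the paper:

```python
def ces(x1, x2, alpha, sigma, gamma=1.0):
    """CES aggregate of two inputs with substitution elasticity sigma.
    rho = (sigma - 1) / sigma; sigma -> 1 recovers Cobb-Douglas."""
    rho = (sigma - 1.0) / sigma
    if abs(rho) < 1e-9:  # Cobb-Douglas limit
        return gamma * x1 ** alpha * x2 ** (1.0 - alpha)
    return gamma * (alpha * x1 ** rho + (1.0 - alpha) * x2 ** rho) ** (1.0 / rho)

def two_level_ces_klem(K, L, E, alpha_kl, alpha_top, sigma_kl, sigma_top):
    """(KL)E nesting: capital and labour are combined first, then energy."""
    KL = ces(K, L, alpha_kl, sigma_kl)
    return ces(KL, E, alpha_top, sigma_top)

# elasticities well below one, in the spirit of the paper's finding
Y = two_level_ces_klem(K=10.0, L=20.0, E=5.0,
                       alpha_kl=0.4, alpha_top=0.7,
                       sigma_kl=0.5, sigma_top=0.4)
```

Setting both elasticities to 1.0 would reproduce the Cobb-Douglas specification that the paper rejects; the function is homogeneous of degree one in the three inputs.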

  6. LMFBR plant parameters

    International Nuclear Information System (INIS)

    1979-03-01

    This document contains up-to-date data on existing or firmly decided prototype or demonstration LMFBR reactors (Table I), on planned commercial-size LMFBRs according to the present status of design (Table II), and on experimental fast reactors such as BOR-60, DFR, EBR-II, FERMI, FFTF, JOYO, KNK-II, PEC, RAPSODIE-FORTISSIMO (Table III). Only corrected and revised parameters submitted by the countries participating in the IWGFR are included in this document.

  7. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  8. Calculation of shielding parameters

    International Nuclear Information System (INIS)

    Montoya Z, J.

    1994-01-01

    To reduce radiation hazards, three basic factors exist: (a) time, since for an area with a given dose rate, the dose received by a worker is proportional to the time of permanence; (b) distance, since the dose decreases with the inverse square of the distance to the exposure point; (c) shielding, which consists of interposing material between the source and the exposure point. The main part of this work develops the analysis of the distance and shielding parameters. The analysis consists of developing the mathematics implicit in the model of a radioactive source, beginning with the geometry of the source, the distance to the exposure point, and the shielding configuration. In the final part, a comparative study of shielding-parameter calculations was carried out employing two codes, CPBGAM and MICROSHIELD, the first written as part of this thesis work. A point source is a good approximation to any real source, but in most cases the spatial distribution of the source must be treated explicitly for the proposed analysis. Shielding calculations for volumetric sources can be approximated starting from planar-source adaptations. It is important to keep in mind that shielding is not the only protection against exposure; the parameters time and distance also play an important role. (Author)
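The distance and shielding factors combine in the familiar point-source kernel: the dose rate scales as S·e^(-μx)/(4πd²). A minimal sketch (uncollided flux only, buildup factor neglected; all numbers are illustrative):

```python
import math

def dose_rate(S, mu, x, d):
    """Uncollided point-source kernel (buildup neglected).
    S: source strength, mu: linear attenuation coefficient of the shield (1/cm),
    x: shield thickness along the ray (cm), d: source-to-point distance (cm)."""
    return S * math.exp(-mu * x) / (4.0 * math.pi * d * d)

# doubling the distance quarters the dose; each 1/mu of shield divides it by e
base     = dose_rate(S=1e6, mu=0.5, x=0.0, d=100.0)
far      = dose_rate(S=1e6, mu=0.5, x=0.0, d=200.0)
shielded = dose_rate(S=1e6, mu=0.5, x=2.0, d=100.0)
```

Volumetric sources are handled by integrating this kernel over the source region, which is what codes like MICROSHIELD do numerically.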

  9. LMFBR plant parameters 1991

    International Nuclear Information System (INIS)

    1991-03-01

    The document has been prepared on the basis of information provided by the members of the IAEA International Working Group on Fast Reactors (IWGFR). It contains updated parameters of 27 experimental, prototype and commercial size liquid metal fast breeder reactors (LMFBRs). Most of the reactors are currently in operation, under construction or in an advanced planning stage. Parameters of the Clinch River Breeder Reactor (USA), PEC (Italy), RAPSODIE (France), DFR (UK) and EFFBR (USA) are included in the report because of their important role in the development of LMFBR technology from first LMFBRs to the prototype size fast reactors. Two more reactors appeared in the list: European Fast Reactor (EFR) and PRISM (USA). Parameters of these reactors included in this publication are based on the data from the papers presented at the 23rd Annual Meeting of the IWGFR. All in all more than four hundred corrections and additions have been made to update the document. The report is intended for specialists and institutions in industrialized and developing countries who are responsible for the design and operation of liquid metal fast breeder reactors

  10. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Full Text Available Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been solely based on long-term climatological data with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated a positive and significant linear correspondence (R2 = 0.75) between the two approaches, with EEMT expressed in MJ m-2 yr-1. Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  11. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs based on the apparent heat capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt® coupled to the building simulation code made it possible to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
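The apparent heat capacity method referenced above folds the latent heat into a temperature-dependent capacity. A common sketch spreads the latent heat L as a Gaussian peak around the melting point, so that integrating the peak over temperature recovers L exactly (an assumed peak shape for illustration, not necessarily the paper's formulation):

```python
import math

def apparent_heat_capacity(T, c_solid, c_liquid, L, T_m, dT):
    """Apparent-capacity method: the latent heat L is spread as a Gaussian
    peak of half-width dT around the melting temperature T_m, on top of the
    sensible capacity of the solid or liquid phase."""
    c_base = c_solid if T < T_m else c_liquid
    peak = (L / (dT * math.sqrt(math.pi))) * math.exp(-((T - T_m) / dT) ** 2)
    return c_base + peak
```

A transient solver can then treat the PCM layer with the ordinary heat equation, paying the latent heat automatically as the node temperature crosses the melting range.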

  12. Estimation of the value-at-risk parameter: Econometric analysis and the extreme value theory approach

    Directory of Open Access Journals (Sweden)

    Mladenović Zorica

    2006-01-01

    Full Text Available In this paper different aspects of value-at-risk estimation are considered. Daily returns of CISCO, INTEL and NASDAQ stock indices are analyzed for the period September 1996 - September 2006. Methods that incorporate time-varying variability and heavy tails of the empirical distributions of returns are implemented. The main finding of the paper is that standard econometric methods underestimate the value-at-risk parameter if heavy tails of the empirical distribution are not explicitly taken into account.
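The underestimation the paper reports is easy to reproduce: fit a normal distribution to heavy-tailed returns and compare its loss quantile with the empirical one. A hedged sketch on synthetic Student-t returns (a stand-in for daily stock returns, not the CISCO/INTEL/NASDAQ data):

```python
import numpy as np
from statistics import NormalDist

def var_normal(returns, alpha=0.99):
    """Gaussian VaR: the loss quantile of a fitted normal distribution."""
    z = NormalDist().inv_cdf(alpha)
    return z * returns.std(ddof=1) - returns.mean()

def var_historical(returns, alpha=0.99):
    """Historical-simulation VaR: the empirical loss quantile of the returns."""
    return -np.quantile(returns, 1.0 - alpha)

# heavy-tailed returns: scaled Student-t with 5 degrees of freedom
rng = np.random.default_rng(3)
r = 0.01 * rng.standard_t(5, size=100_000)

v_norm = var_normal(r)
v_hist = var_historical(r)
# with heavy tails the Gaussian fit understates the 99% loss quantile
```

The gap widens further at higher confidence levels and for smaller tail indices, which is the paper's motivation for extreme value methods.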

  13. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological processes given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
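The Gelman-Rubin potential scale reduction factor used as the convergence criterion can be sketched directly. This is the standard formula comparing within-chain and between-chain variance; the paper's exact variant (applied to model predictions rather than parameters) may differ in detail:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n.
    chains: array of shape (m, n); values near 1 indicate convergence."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled posterior-variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(4)
converged = rng.normal(0.0, 1.0, size=(4, 2000))            # same target
stuck = converged + np.array([[0.0], [0.0], [0.0], [5.0]])  # one shifted chain
```

Chains sampling the same target give R-hat near 1, while a chain stuck in a different region inflates the between-chain variance and pushes R-hat well above 1.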

  14. ATLAS parameter study

    International Nuclear Information System (INIS)

    Adler, R.J.

    1994-01-01

    The purpose of this study is to make an independent assessment of the parameters chosen for the ATLAS capacitor bank at LANL. The contractor will perform a study of the basic pulsed power parameters of the ATLAS device with baseline functional parameters of >25 MA implosion current and <2.5 microsecond current risetime. Nominal circuit parameters held fixed will be the 14 nH from the vacuum interface to the load, and the nominal load impedances of 1 milliohm for slow loads and 10 milliohms for fast loads. Single-ended designs, as opposed to bipolar designs, will be studied in detail. The ATLAS pulsed power design problem is about inductance. The reason that a 36 MJ bank is required is that such a bank has enough individual capacitors that the parallel inductance is acceptably low. Since about half the inductance is in the bank, and the inductance and time constant of the submodules are fixed, the variation of output with a given parameter will generally be weak. In general, the dI/dt calculation demonstrates that for the real system inductances, 700 kV is the optimum voltage for the bank to drive X-ray loads. The optimum is broad, and there is little reduction in performance at voltages as low as 450 kV. The direct-drive velocity analysis also shows that the optimum voltage is between 480 and 800 kV for a variety of assumptions, and that there is less than a 10% variation in velocity over this range. Voltages in the 120-600 kV range are desirable for driving heavy liners. A compromise optimum operating point might be 480 kV, at which all X-ray operation scenarios are within 10% of their velocity optimum, and heavy liners can be configured to be near optimum if small enough. Based on very preliminary studies, the author believes that the choice of a single operating voltage point (say, 480 kV) is unnecessary, and that a bank engineered for dual operation at 480 and 240 kV will be the best solution to the ATLAS problem.

  15. The Apocalyptic Empire of America L’Empire apocalyptique américain

    Directory of Open Access Journals (Sweden)

    Akça Ataç

    2009-10-01

    Full Text Available Studies of the American "Empire" generally tend to understand it through concrete terms such as the frontier, military intervention, or international trade. Yet empires are above all the product of deep, intangible intellectual traditions that encourage and justify the actions undertaken in the name of imperial policies. In the American case, the intellectual foundations of the new imperial ideal are rooted in the apocalyptic vision carried in the baggage of the first Puritan colonists. Without taking this apocalyptic grounding into account, the founding principles of the American "Empire" cannot be fully grasped. Terms that resonate with imperial discourse, such as "mission" and "destiny", as well as the explicit commitment in presidential rhetoric to "improving" the world at any cost, should be examined in the light of this enduring apocalyptic belief. This article seeks to elucidate the origin and essence of the American apocalyptic vision, with particular attention to its influence on the genesis of the concept of an American Empire.

  16. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with greater pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
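A minimal numerical sketch of the idea follows, with invented counts, exposures, and homogenisation factors (none taken from the paper). It assumes each pooled event's rate is the target rate scaled by its factor, and applies a standard gamma-Poisson empirical Bayes estimate to the homogenised pool:

```python
import numpy as np

# Hypothetical pool of rare events: counts k_i, exposures T_i (years), and
# expert-elicited homogenisation factors f_i, assuming lambda_i = f_i * lambda.
# All numbers are illustrative, not from the paper.
k = np.array([0, 2, 1, 3, 1])
T = np.array([12.0, 20.0, 20.0, 24.0, 21.0])
f = np.array([1.0, 0.5, 0.8, 0.4, 0.7])

# Homogenised frequency estimates on the target event's scale
rates = k / (f * T)

# Fit a gamma prior to the homogenised pool by the method of moments
m, v = rates.mean(), rates.var(ddof=1)
a, b = m * m / v, m / v          # shape and rate of the gamma prior

# Gamma-Poisson empirical Bayes estimate for the target event (index 0):
# a weighted average of its own frequency and the pooled frequency.
eb_rate = (k[0] + a) / (T[0] + b)
print(f"pool mean {m:.4f}, EB estimate for event 0: {eb_rate:.4f}")
```

The target event alone shows zero failures, so its raw rate estimate is zero; the EB estimate borrows strength from the homogenised pool instead.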

  17. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification.
The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
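The isomorphism property claimed for the empirical kernel mapping, that dot products in the mapped space reproduce the kernel, can be checked directly. The sketch below builds the standard EKM from an eigendecomposition of the Gram matrix; it is not the RMEKLM algorithm itself (which uses Gauss elimination for the reduction), only the property it relies on:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))          # 20 toy samples, 3 features

# RBF Gram matrix
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)

# Empirical kernel mapping: Phi = K U L^{-1/2} for eigenpairs (L, U) of K
w, U = np.linalg.eigh(K)
keep = w > 1e-10                          # drop numerically null directions
Phi = K @ U[:, keep] / np.sqrt(w[keep])

# Isomorphism check: dot products in the mapped space reproduce the kernel
print("Phi Phi^T == K:", np.allclose(Phi @ Phi.T, K))
```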

  18. How rational should bioethics be? The value of empirical approaches.

    Science.gov (United States)

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim rests on an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  19. Determination of beam characteristic parameters for a linear accelerator

    International Nuclear Information System (INIS)

    Lima, D.A. de.

    1978-01-01

    A device to determine the electron beam characteristic parameters of a linear accelerator was constructed, consisting of an electro-calorimeter and an accurate optical densitometer. The following parameters were determined while operating the 2 MeV linear accelerator of CBPF (Brazilian Center for Physics Research): mean power, mean current, mean energy per particle, pulse width, pulse amplitude dispersion, and pulse frequency. Optical isodensity curves of irradiated glass lamellae were obtained, providing information on focus degradation, the penetration direction in the material, and the particle range. The point-to-point dose distribution in the material was obtained from the optical density curves using a semi-empirical approximate model. (M.C.K.) [pt
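The relation between the calorimetric and charge measurements and the derived beam parameters can be sketched with illustrative numbers (chosen to be consistent with a 2 MeV machine; they are not the paper's measurements). The mean energy per particle in eV equals mean power divided by mean current, and the peak current follows from the duty cycle:

```python
# Illustrative numbers, consistent with a 2 MeV machine (not the paper's data)
mean_power = 50.0      # W, from the electro-calorimeter
mean_current = 25e-6   # A (time-averaged), from collected charge
pulse_width = 2e-6     # s
pulse_freq = 250.0     # Hz

# For electrons, mean energy per particle in eV = mean power / mean current
mean_energy_eV = mean_power / mean_current
# Peak (in-pulse) current follows from the duty cycle
peak_current = mean_current / (pulse_width * pulse_freq)
print(f"mean energy: {mean_energy_eV / 1e6:.1f} MeV, peak current: {peak_current * 1e3:.1f} mA")
```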

  20. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wider project to establish a methodology for determining the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling to determine the stress state at Aespoe. All the Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report considers only the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of its rock mechanics properties, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved by the
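As a hedged illustration of how such rating schemes work, the sketch below sums illustrative RMR component ratings (Bieniawski's scheme; the numbers are invented, not Aespoe data) and applies the widely used Serafim-Pereira correlation for the deformation modulus:

```python
# Illustrative RMR (Bieniawski) rating sum; the component values are invented,
# not Aespoe data. Each parameter gets a rating, summed with an adjustment
# for discontinuity orientation.
ratings = {
    "uniaxial compressive strength": 12,
    "RQD": 17,
    "discontinuity spacing": 15,
    "discontinuity condition": 22,
    "groundwater": 10,
}
orientation_adjustment = -5
rmr = sum(ratings.values()) + orientation_adjustment
print("RMR =", rmr)

# Widely used empirical correlation (Serafim & Pereira) between the rating
# and the rock mass deformation modulus:
E_m = 10 ** ((rmr - 10) / 40)   # GPa
print(f"deformation modulus ~ {E_m:.1f} GPa")
```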

  1. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden)]; Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology]

    2002-05-01

    This report presents the results of one part of a wider project to establish a methodology for determining the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling to determine the stress state at Aespoe. All the Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report considers only the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of its rock mechanics properties, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved

  2. Application of Generalized Student’s T-Distribution In Modeling The Distribution of Empirical Return Rates on Selected Stock Exchange Indexes

    Directory of Open Access Journals (Sweden)

    Purczyńskiz Jan

    2014-07-01

    Full Text Available This paper examines the application of the so-called generalized Student's t-distribution in modeling the distribution of empirical return rates on selected Warsaw Stock Exchange indexes. Distribution parameters are estimated by means of the method of logarithmic moments, the maximum likelihood method and the method of moments. The generalized Student's t-distribution ensures a better fit to empirical data than the classical Student's t-distribution.
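As an illustration of the maximum likelihood route, the sketch below fits the classical Student's t-distribution (the paper's generalized form adds a further shape parameter not available in scipy) to simulated heavy-tailed returns and compares it with a normal fit:

```python
import numpy as np
from scipy import stats

# Simulated heavy-tailed daily returns standing in for a stock-index series
data = stats.t.rvs(df=5, loc=0.0003, scale=0.012, size=3000, random_state=3)

# Maximum likelihood fit of the classical Student's t
df_, loc, scale = stats.t.fit(data)

# Compare log-likelihoods: Student's t vs a normal fit
ll_t = stats.t.logpdf(data, df_, loc, scale).sum()
mu, sigma = stats.norm.fit(data)
ll_n = stats.norm.logpdf(data, mu, sigma).sum()
print(f"fitted dof: {df_:.1f};  logL(t) - logL(normal) = {ll_t - ll_n:.1f}")
```

The t fit attains a markedly higher likelihood on heavy-tailed data, mirroring the paper's finding for the generalized form.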

  3. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.

  4. Proposed empirical gas geothermometer using multidimensional approach

    Energy Technology Data Exchange (ETDEWEB)

    Supranto; Sudjatmiko; Toha, Budianto; Wintolo, Djoko; Alhamid, Idrus

    1996-01-24

    Several formulas of surface gas geothermometers have been developed for use in geothermal exploration, e.g. by D'Amore and Panichi (1980) and by Darling and Talbot (1992). This paper presents an empirical gas geothermometer formula using a multidimensional approach. The formula was derived from 37 selected chemical data of the 5 production wells from the Awibengkok Geothermal Volcanic Field in West Java. Seven components from these data, i.e., gas volume percentage, CO2, H2S, CH4, H2, N2, and NH3, are utilized to develop three model equations which represent the relationship between temperature and gas compositions. These formulas are then tested against several fumarolic chemical data from the Sibual-buali Area (North Sumatera) and from the Ringgit Area (South Sumatera). Preliminary results indicated that the gas volume percentage and the H2S and CO2 concentrations play a significant role in the gas geothermometer. Further verification is currently in progress.
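A multidimensional geothermometer of this kind amounts to a multiple linear regression of temperature on gas compositions. The sketch below uses synthetic well data; the predictors, coefficients, and noise level are invented, not the Awibengkok measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic well data (illustrative only): columns are log-concentrations of
# CO2, H2S, H2 and the gas volume percentage; y is downhole temperature (degC).
n = 37
X = np.column_stack([
    rng.normal(4.0, 0.5, n),   # log CO2
    rng.normal(1.5, 0.4, n),   # log H2S
    rng.normal(0.5, 0.3, n),   # log H2
    rng.normal(2.0, 0.6, n),   # gas volume %
])
true_beta = np.array([20.0, 35.0, 25.0, 10.0])
y = 80.0 + X @ true_beta + rng.normal(0, 5.0, n)

# Multidimensional (multiple linear) regression: T = b0 + X b
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((y - pred) ** 2))
print("intercept and coefficients:", np.round(coef, 1), " RMSE:", round(rmse, 1))
```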

  5. An empirical study on entrepreneurs' personal characteristics

    Directory of Open Access Journals (Sweden)

    Ahmad Ahmadkhani

    2012-04-01

    Full Text Available The personality of an entrepreneur is one of the most important factors in reaching success by creating jobs and opportunities. In this paper, we present an empirical study on the personal characteristics of students who are expected to act as entrepreneurs and create jobs in seven fields: accounting, computer science, mechanical engineering, civil engineering, metallurgy engineering, electrical engineering and drawing. Our study measures the level of entrepreneurship along seven aspects: accepting reasonable risk, locus of control, the need for success, mental health conditions, being pragmatic, tolerating ambiguity, and dreaming and the sense of challenge. We uniformly distributed 133 questionnaires among undergraduate students in all seven groups and analyzed the results based on Student's t-test. Our investigation indicates that the students accept a reasonable amount of risk, maintain a sufficient locus of control and are eager for success. In addition, our tests indicate that the students believe they maintain a sufficient level of mental health, have a strong sense of being pragmatic, and can handle ambiguity and challenges.
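The analysis step, a Student's t-test of questionnaire scores against a reference value, can be sketched as follows. The Likert scores are simulated (not the study's data), and testing against the scale midpoint is an assumed formulation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical 1-5 Likert scores for one trait (e.g. accepting reasonable risk)
# from 133 questionnaires; simulated, not the study's data.
scores = np.clip(rng.normal(3.6, 0.8, 133).round(), 1, 5)

# One-sample Student's t-test against the scale midpoint (3 = neutral)
t_stat, p_value = stats.ttest_1samp(scores, popmean=3.0)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```

A small p-value with positive t would support a conclusion of the form "students accept a reasonable amount of risk".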

  6. Empirical analysis of online human dynamics

    Science.gov (United States)

    Zhao, Zhi-Dan; Zhou, Tao

    2012-06-01

    Patterns of human activities have attracted increasing academic interest, since the quantitative understanding of human behavior is helpful to uncover the origins of many socioeconomic phenomena. This paper focuses on the behaviors of Internet users. Six large-scale systems are studied in our experiments, including movie-watching in Netflix and MovieLens, transactions in Ebay, bookmark-collecting in Delicious, and posting in FriendFeed and Twitter. Empirical analysis reveals some common statistical features of online human behavior: (1) The total number of a user's actions, the user's activity, and the interevent time all follow heavy-tailed distributions. (2) There exists a strongly positive correlation between a user's activity and the total number of the user's actions, and a significantly negative correlation between the user's activity and the width of the interevent time distribution. We further study the rescaling method and show that this method can to some extent eliminate the differences in statistics among users caused by their different activities, yet its effectiveness depends on the data sets.
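Heavy-tailed quantities such as the interevent time are commonly characterized by a power-law tail exponent. Below is a minimal sketch of the standard continuous maximum likelihood (Hill-type) estimator on simulated data; the paper's exact fitting procedure is not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated interevent times with a power-law tail P(x) ~ x^(-alpha), alpha = 2.5
alpha_true, x_min = 2.5, 1.0
u = 1.0 - rng.random(10_000)                  # uniform on (0, 1]
x = x_min * u ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

# Continuous power-law maximum likelihood (Hill-type) tail-exponent estimator
alpha_hat = 1.0 + len(x) / np.log(x / x_min).sum()
print(f"estimated tail exponent: {alpha_hat:.2f}")
```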

  7. Empirical validation of directed functional connectivity.

    Science.gov (United States)

    Mill, Ravi D; Bagic, Anto; Bostan, Andreea; Schneider, Walter; Cole, Michael W

    2017-02-01

    Mapping directions of influence in the human brain connectome represents the next phase in understanding its functional architecture. However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via "ground truth" connectivity patterns embedded in simulated functional MRI (fMRI) and magneto-/electro-encephalography (MEG/EEG) datasets. Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with confidence. Specifically, we exploited the established "sensory reactivation" effect in episodic memory, in which retrieval of sensory information reactivates regions involved in perceiving that sensory modality. Subjects performed a paired associate task in separate fMRI and MEG sessions, in which a ground truth reversal in directed connectivity between auditory and visual sensory regions was instantiated across task conditions. This directed connectivity reversal was successfully recovered across different algorithms, including Granger causality and Bayes network (IMAGES) approaches, and across fMRI ("raw" and deconvolved) and source-modeled MEG. These results extend simulation studies of directed connectivity, and offer practical guidelines for the use of such methods in clarifying causal mechanisms of neural processing. Copyright © 2016 Elsevier Inc. All rights reserved.
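One of the algorithms named, Granger causality, reduces to comparing nested autoregressions with an F-test. The sketch below recovers a simulated one-way influence from x to y; the time series are illustrative, not the paper's fMRI/MEG data, and only one lag is used:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "ground truth" directed influence: x drives y with one lag
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.standard_normal()

def rss(A, b):
    # Residual sum of squares of the least-squares fit b ~ A
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ beta
    return r @ r

# Restricted model: y_t on its own lag; full model adds the lag of x
ones = np.ones(n - 1)
A_r = np.column_stack([ones, y[:-1]])
A_f = np.column_stack([ones, y[:-1], x[:-1]])
b = y[1:]

# Granger F-statistic for the one extra regressor
F = (rss(A_r, b) - rss(A_f, b)) / (rss(A_f, b) / (len(b) - A_f.shape[1]))
print(f"F = {F:.1f}  (large F: past x helps predict y, i.e. x 'Granger-causes' y)")
```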

  8. Empirical stopping powers for ions in solids

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Biersack, J.P.; Littmark, U.

    1983-01-01

    The work of Brandt and collaborators on low energy ion stopping powers has been extended to create an empirical formulation for the stopping of ions in solids. The result is a simple computer program (about 60 lines of code) which calculates stopping powers from zero to 100 MeV/amu for all ions in all elemental solids. This code has been compared to the data in about 2000 papers, and has a standard error of 9% for energies above keV/amu. This approach includes high energy relativistic effects and shell-corrections. In the medium energy range it uses stopping theory based on the local-density approximation and Lindhard stopping in a free electron gas. This is applied to realistic Hartree-Fock charge distributions for crystalline solids. In the low energy range it uses the Brandt concepts of ion stripping relative to the Fermi velocity of solids, and also his formalism for the relation of projectile ionization to its effective charge. The details of the calculation are presented, and a broad comparison is shown with experiment. Special comparative examples are shown of both the low energy stopping power oscillations which depend on the atomic number of the ion, and also of the target

  9. Empire and the Ambiguities of Love

    Directory of Open Access Journals (Sweden)

    Linnell Secomb

    2013-09-01

    Full Text Available Colonialism is not only enforced through violence but facilitated also by economic, religious and social strategies and inducements. Amongst these, love has been exploited as a tool of empire to construct alliances, procure compliance and disguise the conquest of peoples and territories. Love has also, however, been the basis for an ethics and politics that contests imperialism. Friendship, affinity and amorous relations between coloniser and colonised enables a resistance to the ambitions of colonial occupation and rule. In this article, the work of the London-based artist, Yinka Shonibare, is used to examine the operation of love in the colonial context. Focusing especially on his 2007 installation Jardin d’Amour the bifurcation of love into an imperialist strategy on the one hand and an anti-colonial ethics on the other is challenged. Instead, love is conceived as a process of exposure and hybridisation that transforms lover and beloved by introducing otherness into the heart of the subject. Drawing on French philosopher Jean-Luc Nancy’s analysis of shattered love, it is suggested that each instance of love involves both violence and caress: each performance of love is an intrusion of otherness that inaugurates the subject as an always multiple and heterogeneous being.

  10. Evolution of viral virulence: empirical studies

    Science.gov (United States)

    Kurath, Gael; Wargo, Andrew R.

    2016-01-01

    The concept of virulence as a pathogen trait that can evolve in response to selection has led to a large body of virulence evolution theory developed in the 1980-1990s. Various aspects of this theory predict increased or decreased virulence in response to a complex array of selection pressures including mode of transmission, changes in host, mixed infection, vector-borne transmission, environmental changes, host vaccination, host resistance, and co-evolution of virus and host. A fundamental concept is prediction of trade-offs between the costs and benefits associated with higher virulence, leading to selection of optimal virulence levels. Through a combination of observational and experimental studies, including experimental evolution of viruses during serial passage, many of these predictions have now been explored in systems ranging from bacteriophage to viruses of plants, invertebrates, and vertebrate hosts. This chapter summarizes empirical studies of viral virulence evolution in numerous diverse systems, including the classic models myxomavirus in rabbits, Marek's disease virus in chickens, and HIV in humans. Collectively these studies support some aspects of virulence evolution theory, suggest modifications for other aspects, and show that predictions may apply in some virus:host interactions but not in others. Finally, we consider how virulence evolution theory applies to disease management in the field.

  11. Demystification of empirical concepts on radioactivity

    International Nuclear Information System (INIS)

    Júnior, Cláudio L.R.; Silva, Islane C.S.

    2017-01-01

    Ionizing radiation has been used for clinical diagnostic purposes since the last century, with the advancement of nuclear physics, which enabled the determination and control of doses. Nuclear Medicine is a medical specialty that uses safe, painless, and non-invasive methods to provide information that other diagnostic and therapeutic exams cannot, through the use of open radionuclide sources. Its basic principle is the acquisition of scintigraphic images, which rely on the ability to detect the gamma radiation emitted by radioactive material. This paper aims to demystify the empirical concepts about radioactivity established by society. The knowledge of 300 people, including non-radiological professionals and people who live and work in the region surrounding the nuclear medicine services (NMS) of the metropolitan region of Recife, was surveyed and evaluated. For the evaluation, a questionnaire was developed containing questions about the main doubts and fears regarding professionals working with ionizing radiation. The questionnaire also raised questions about the activity performed in the NMS, in order to learn whether respondents know about the procedures performed there. It is possible to conclude that, despite being present in daily life and responsible for numerous benefits to society, ionizing radiation causes considerable fear and its use still raises doubts. It was also observed that the vast majority of people had information about the activities carried out in the evaluated services and that, even when mistaken, they held notions about the issues highlighted

  12. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5)χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e·χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but, nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles
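The model's central assumption can be sketched in one dimension: take D = χ_total/5 for an assumed diffusivity profile, suppress transport over the last few centimeters, and integrate the steady-state particle balance inward. The edge gradient then steepens into an H-mode-like pedestal. All profiles below are invented for illustration, not taken from the paper:

```python
import numpy as np

# 1-D slab sketch of the model's premise (all profiles invented):
# particle flux Gamma(r) from an edge-localized source, density from
# integrating dn/dr = -Gamma/D inward with n(edge) = 0.
r = np.linspace(0.0, 1.0, 200)        # normalized minor radius
dr = r[1] - r[0]
chi = 1.0 + 3.0 * r**2                # assumed thermal diffusivity profile
source = np.exp(-(1.0 - r) / 0.05)    # edge-localized fueling
gamma = np.cumsum(source) * dr        # steady-state particle flux

def density(D):
    n = np.zeros_like(r)
    for i in range(len(r) - 2, -1, -1):
        n[i] = n[i + 1] + gamma[i] / D[i] * dr
    return n

D_L = chi / 5.0                       # L-mode assumption: D = chi_total / 5
D_H = D_L.copy()
D_H[r > 0.95] *= 0.1                  # "H-mode": transport suppressed near the edge
n_L, n_H = density(D_L), density(D_H)
print(f"core density ratio H/L: {n_H[0] / n_L[0]:.2f}")
```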

  13. An empirical approach to wisdom processes

    Directory of Open Access Journals (Sweden)

    Ioana Laura Dumbravă

    2017-12-01

    Full Text Available In psychology, wisdom has gradually received more and more interest from researchers. The first to direct their attention to this area were the Greek philosophers, but theoretical models were gradually developed based on empirical data and on components that explain the development of wisdom. In this paper, we used Ardelt's approach, which takes into account both the eastern and western approaches to wisdom. Within a sample of students (N = 100, mean age = 24, SD = 6.44), wisdom was investigated in relation to several phenomena that could be involved in the development of wisdom and wisdom processes. Thus, wisdom is studied in relation to general metacognition, moral metacognition, cognitive empathy, emotional empathy, irrationality and cognitive flexibility. The results were found to be in accordance with the affective empathy hypothesis, identifying a significant positive relationship between the two variables. One must consider that the rest of the correlations were negative and significant, thus drawing attention to other possible factors that might be important in describing wisdom. The variables evaluated were found to explain the variance of wisdom.

  14. Successful intelligence and giftedness: an empirical study

    Directory of Open Access Journals (Sweden)

    Mercedes Ferrando

    Full Text Available The aim of our research is to look into the diversity within gifted and talented students. This is important to better understand their complexity and thus offer more appropriate educational programs. There are rather few empirical works which attempt to identify the high-ability profiles (giftedness and talent) that actually exist beyond the theoretical level. The present work intends to single out the different patterns or profiles resulting from the combination of the successful intelligence abilities (analytical, synthetic and practical) as defined by Sternberg. A total of 431 students from the Region of Murcia participated in this study. These students performed the Aurora Battery tasks (Chart, Grigorenko, & Sternberg, 2008), designed to measure analytical, practical and creative intelligence. Analytically gifted students (n=27), practically gifted (n=33) and creatively gifted (n=34) students were identified, taking as criteria scores equal to or higher than an IQ of 120 on each intelligence. Different Q-factor analyses were carried out for the three groups of students, in such a way that students were grouped according to their similarities. A total of 10 profiles showing how the successful intelligence abilities combine were obtained, which has made it possible to support the theory put forward by Sternberg (2000): the analytical, practical and creative talent profiles, as well as the resulting combinations, the analytical-practical, analytical-creative and practical-creative profiles, along with the consummate balance talent (high performance in all three types of intelligence).

  15. The empirical equilibrium structure of diacetylene

    Science.gov (United States)

    Thorwirth, Sven; Harding, Michael E.; Muders, Dirk; Gauss, Jürgen

    2008-09-01

    High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, H–C≡C–C≡C–H. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almlöf and Taylor. An empirical equilibrium structure based on experimental rotational constants for 13 isotopic species of diacetylene and computed zero-point vibrational corrections is determined (re(emp): r(C–H) = 1.0615 Å, r(C≡C) = 1.2085 Å, r(C–C) = 1.3727 Å) and is in good agreement with the best theoretical structure (CCSD(T)/cc-pCV5Z: r(C–H) = 1.0617 Å, r(C≡C) = 1.2083 Å, r(C–C) = 1.3737 Å). In addition, the computed fundamental vibrational frequencies are compared with the available experimental data and found to be in satisfactory agreement.

  16. Mental disorder ethics: theory and empirical investigation

    Science.gov (United States)

    Eastman, N; Starling, B

    2006-01-01

    Mental disorders and their care present unusual problems within biomedical ethics. The disorders themselves invite an ethical critique, as does society's attitude to them; researching the diagnosis and treatment of mental disorders also presents special ethical issues. The current high profile of mental disorder ethics, emphasised by recent political and legal developments, makes this a field of research that is not only important but also highly topical. For these reasons, the Wellcome Trust's biomedical ethics programme convened a meeting, “Investigating Ethics and Mental Disorders”, in order to review some current research, and to stimulate topics and methods of future research in the field. The meeting was attended by policy makers, regulators, research funders, and researchers, including social scientists, psychiatrists, psychologists, lawyers, philosophers, criminologists, and others. As well as aiming to inspire a stronger research endeavour, the meeting also sought to stimulate an improved understanding of the methods and interactions that can contribute to “empirical ethics” generally. This paper reports on the meeting by describing contributions from individual speakers and discussion sections of the meeting. At the end we describe and discuss the conclusions of the meeting. As a result, the text is referenced less than would normally be expected in a review. Also, in summarising contributions from named presenters at the meeting it is possible that we have created inaccuracies; however, the definitive version of each paper, as provided directly by the presenter, is available at http://www.wellcome.ac.uk/doc.WTX025116.html. PMID:16446414

  17. Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes

    Science.gov (United States)

    Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.

    2018-03-01

    A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and in the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable tradeoff between energy and power density.
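The core of the concept, comparing diffusion time constants and reading the rate capability off a single limiting curve, can be sketched as follows. All numerical values and the master-curve form `relative_capacity` are illustrative assumptions, not the paper's fitted expression:

```python
def diffusion_time_constant(length_m: float, diffusivity_m2_s: float) -> float:
    """Characteristic diffusion time tau = L^2 / D."""
    return length_m ** 2 / diffusivity_m2_s

# Assumed order-of-magnitude values for a porous electrode.
tau_solid = diffusion_time_constant(5e-6, 1e-14)         # particle radius, solid-state diffusivity
tau_electrolyte = diffusion_time_constant(70e-6, 1e-11)  # electrode thickness, effective liquid diffusivity

tau_limiting = max(tau_solid, tau_electrolyte)  # slowest process limits rate capability

def relative_capacity(c_rate: float, tau: float) -> float:
    """Hypothetical master curve: capacity falls off once the discharge
    time 3600/C approaches the limiting time constant tau."""
    t_discharge = 3600.0 / c_rate
    return 1.0 / (1.0 + tau / t_discharge)

for c in (0.1, 1.0, 10.0):
    print(f"C-rate {c:>5}: relative capacity ~ {relative_capacity(c, tau_limiting):.2f}")
```

The design-criteria use of the curve follows the same logic: pick a target rate capability, invert the curve for the admissible time constant, and back out the maximum particle size or electrode thickness.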

  18. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.

  19. Water coning. An empirical formula for the critical oil-production rate

    Energy Technology Data Exchange (ETDEWEB)

    Schols, R S

    1972-01-01

    The production of oil through a well that partly penetrates an oil layer underlain by water causes the oil/water interface to deform into a bell shape, usually referred to as water coning. To prevent water breakthrough as a result of water coning, a knowledge of critical rates is necessary. Experiments are described in which critical rates were measured as a function of the relevant parameters. The experiments were conducted in Hele-Shaw models suitable for radial flow. From the experimental data, an empirical formula for critical rates was derived in dimensionless form. Approximate theoretical solutions for the critical rate appear in the literature. A comparison of critical rates calculated according to these solutions with those from the empirical formula shows that these literature data give either too high or too low values for the critical rates.

  20. Radiation portal evaluation parameters

    International Nuclear Information System (INIS)

    York, R.L.

    1998-01-01

    The detection of the unauthorized movement of radioactive materials is one of the most effective nonproliferation measures. Automatic special nuclear material (SNM) portal monitors are designed to detect this unauthorized movement and are an important part of the safeguards systems at US nuclear facilities. SNM portals differ from contamination monitors because they are designed to have high sensitivity for the low-energy gamma-rays associated with highly enriched uranium (HEU) and plutonium. These instruments are now being installed at international borders to prevent the spread of radioactive contamination and SNM. In this paper the parameters important to evaluating radiation portal monitors are discussed. (author)

  1. Buncher system parameter optimization

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1981-01-01

    A least-squares algorithm is presented to calculate the RF amplitudes and cavity spacings for a series of buncher cavities each resonating at a frequency that is a multiple of a fundamental frequency of interest. The longitudinal phase-space distribution, obtained by particle tracing through the bunching system, is compared to a desired distribution function of energy and phase. The buncher cavity parameters are adjusted to minimize the difference between these two distributions. Examples are given for zero space charge. The manner in which the method can be extended to include space charge using the 3-D space-charge calculation procedure is indicated
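The fitting step can be sketched as a one-dimensional least-squares problem, choosing harmonic cavity amplitudes so the summed energy modulation approximates an ideal linear (sawtooth-like) bunching ramp. The harmonics, phase grid, and target waveform are illustrative assumptions, not the paper's particle-tracing objective, and space charge is ignored:

```python
import numpy as np
from scipy.optimize import least_squares

phases = np.linspace(-np.pi, np.pi, 201)[1:-1]  # particle phases over one RF period
target = -phases / np.pi                        # ideal linear ramp (normalized)

def residuals(amplitudes):
    # Cavities resonate at harmonics h = 1, 2, 3 of the fundamental frequency.
    modulation = sum(v * np.sin((h + 1) * phases) for h, v in enumerate(amplitudes))
    return modulation - target

fit = least_squares(residuals, x0=np.zeros(3))
print("fitted harmonic amplitudes:", np.round(fit.x, 3))
```

With an orthogonal sine basis the fitted amplitudes approach the Fourier coefficients of the ramp, decreasing in magnitude with harmonic number; the full algorithm replaces this analytic target with the traced longitudinal phase-space distribution and adds cavity spacings as free parameters.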

  2. Infrared Drying Parameter Optimization

    Science.gov (United States)

    Jackson, Matthew R.

    In recent years, much research has been done to explore direct printing methods, such as screen and inkjet printing, as alternatives to the traditional lithographic process. The primary motivation is reduction of the material costs associated with producing common electronic devices. Much of this research has focused on developing inkjet or screen paste formulations that can be printed on a variety of substrates, and which have similar conductivity performance to the materials currently used in the manufacturing of circuit boards and other electronic devices. Very little research has been done to develop a process that would use direct printing methods to manufacture electronic devices in high volumes. This study focuses on developing and optimizing a drying process for conductive copper ink in a high-volume manufacturing setting. Using an infrared (IR) dryer, it was determined that conductive copper prints could be dried in seconds or minutes, as opposed to the tens of minutes or hours required by other drying devices, such as a vacuum oven. In addition, this study also identifies significant parameters that can affect the conductivity of IR-dried prints. Using designed experiments and statistical analysis, the dryer parameters were optimized to produce the best conductivity performance for a specific ink formulation and substrate combination. It was determined that for an ethylene glycol, butanol, 1-methoxy-2-propanol ink formulation printed on Kapton, the optimal drying parameters consisted of a dryer height of 4 inches, a temperature setting between 190 and 200°C, and a dry time of 50-65 seconds depending on the printed film thickness as determined by the number of print passes. It is important to note that these parameters are optimized specifically for the ink formulation and substrate used in this study. There is still much research that needs to be done into optimizing the IR dryer for different ink substrate combinations, as well as developing a

  3. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.

  4. Constitutive Equation with Varying Parameters for Superplastic Flow Behavior

    Science.gov (United States)

    Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui

    2014-03-01

    In this study, constitutive equations for superplastic materials with extra-large elongation were investigated through mechanical analysis. From the viewpoint of phenomenology, firstly, some traditional empirical constitutive relations were standardized by restricting strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. Superplastic tension test data of Zn-5%Al alloy at 340 °C under strain-rate, velocity, and load control were employed for building the new constitutive equation and examining its validity. Analysis results indicated that the constitutive equation with varying parameters could characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction of strain path or deformation condition, making it of clear industrial and scientific interest. On the contrary, the traditional empirical equations have low predictive capability due to their constant parameters, and poor applicability because they are limited to the special strain paths or parameter conditions imposed by strict phenomenology.
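For illustration, one of the traditional empirical relations of the kind the paper standardizes, the power law sigma = K * (strain rate)^m with constant parameters, can be fitted by linearizing in log space. The data below are synthetic, not the Zn-5%Al measurements:

```python
import numpy as np

edot = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])   # strain rates, 1/s
sigma = 30.0 * edot ** 0.5                         # synthetic flow stress, MPa

# Linearize: log(sigma) = log(K) + m * log(edot), then solve by least squares.
m, logK = np.polyfit(np.log(edot), np.log(sigma), 1)
print(f"m = {m:.3f}, K = {np.exp(logK):.1f} MPa")
```

The paper's point is precisely that constant K and m only hold along a restricted strain path; the proposed equation lets these parameters vary with the deformation state.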

  5. LMFBR plant parameters

    International Nuclear Information System (INIS)

    1985-07-01

    This document has been prepared on the basis of information compiled by the members of the IAEA International Working Group on Fast Reactors (IWGFR). It contains parameters of 25 experimental, prototype and commercial size liquid metal fast breeder reactors (LMFBR). Most of the reactors are currently in operation, under construction or in an advanced planning stage. Parameters of the Clinch River Breeder Reactor (USA) are presented because its design was nearly finished and most of the components were fabricated at the time when the project was terminated. Three reactors (RAPSODIE (France), DFR (UK) and EFFBR (USA)) have been shut down. However, they are included in the report because of their important role in the development of LMFBR technology from first LMFBRs to the prototype size fast reactors. The first LMFBRs (CLEMENTINE (USA), EBR-1 (USA), BR-2 (USSR), BR-5 (USSR)) and very special reactors (LAMPRE (USA), SEFOR (USA)) were not recommended by the members of the IWGFR to be included in the report

  6. AI-guided parameter optimization in inverse treatment planning

    International Nuclear Information System (INIS)

    Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho

    2003-01-01

    An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated as fuzzy if-then rules, which then guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromises between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach.
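A single fuzzy if-then rule of the kind described, increasing a penalty weight when an organ-at-risk dose excess is "large", might be sketched as follows. The membership function, update gain, and dose values are illustrative assumptions, not the published FIS:

```python
def membership_overdose(excess_gy: float) -> float:
    """Degree (0..1) to which an organ-at-risk dose excess counts as 'large'."""
    return min(max(excess_gy / 10.0, 0.0), 1.0)

def update_weight(weight: float, excess_gy: float, gain: float = 0.5) -> float:
    """Rule: IF the OAR overdose is large THEN increase its penalty weight."""
    return weight * (1.0 + gain * membership_overdose(excess_gy))

w = 1.0
for excess in (12.0, 6.0, 0.0):   # dose excess after each re-optimization, Gy
    w = update_weight(w, excess)
    print(f"excess {excess:4.1f} Gy -> weight {w:.2f}")
```

A real FIS aggregates many such rules over the weighting factors, dose specifications, and dose prescriptions, re-running the dose calculation after each update; the sketch shows only the rule-firing mechanics.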

  7. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between bond strength and reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement-a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, and later modified by Coronelli [Coronelli, D., 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model has been proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are proposed by considering a wide range of published experimental investigations related to bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models are capable of providing estimates of the predicted bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with those of the other reported published data on analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  8. Seven-parameter statistical model for BRDF in the UV band.

    Science.gov (United States)

    Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua

    2012-05-21

    A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band.

  9. Display Parameters and Requirements

    Science.gov (United States)

    Bahadur, Birendra

    The following sections are included: * INTRODUCTION * HUMAN FACTORS * Anthropometry * Sensory * Cognitive * Discussions * THE HUMAN VISUAL SYSTEM - CAPABILITIES AND LIMITATIONS * Cornea * Pupil and Iris * Lens * Vitreous Humor * Retina * RODS - NIGHT VISION * CONES - DAY VISION * RODS AND CONES - TWILIGHT VISION * VISUAL PIGMENTS * MACULA * BLOOD * CHOROID COAT * Visual Signal Processing * Pathways to the Brain * Spatial Vision * Temporal Vision * Colour Vision * Colour Blindness * DICHROMATISM * Protanopia * Deuteranopia * Tritanopia * ANOMALOUS TRICHROMATISM * Protanomaly * Deuteranomaly * Tritanomaly * CONE MONOCHROMATISM * ROD MONOCHROMATISM * Using Colour Effectively * COLOUR MIXTURES AND THE CHROMATICITY DIAGRAM * Colour Matching Functions and Chromaticity Co-ordinates * CIE 1931 Colour Space * CIE PRIMARIES * CIE COLOUR MATCHING FUNCTIONS AND CHROMATICITY CO-ORDINATES * METHODS FOR DETERMINING TRISTIMULUS VALUES AND COLOUR CO-ORDINATES * Spectral Power Distribution Method * Filter Method * CIE 1931 CHROMATICITY DIAGRAM * ADDITIVE COLOUR MIXTURE * CIE 1976 Chromaticity Diagram * CIE Uniform Colour Spaces and Colour Difference Formulae * CIELUV OR L*u*v* * CIELAB OR L*a*b* * CIE COLOUR DIFFERENCE FORMULAE * Colour Temperature and CIE Standard Illuminants and Sources * RADIOMETRIC AND PHOTOMETRIC QUANTITIES * Photopic (Vλ) and Scotopic (Vλ') Luminous Efficiency Functions * Photometric and Radiometric Flux * Luminous and Radiant Intensities * Incidence: Illuminance and Irradiance * Exitance or Emittance (M) * Luminance and Radiance * ERGONOMIC REQUIREMENTS OF DISPLAYS * ELECTRO-OPTICAL PARAMETERS AND REQUIREMENTS * Contrast and Contrast Ratio * Luminance and Brightness * Colour Contrast and Chromaticity * Glare * Other Aspects of Legibility * SHAPE AND SIZE OF CHARACTERS * DEFECTS AND BLEMISHES * FLICKER AND DISTORTION * ANGLE OF VIEW * Switching Speed * Threshold and Threshold Characteristic * Measurement Techniques For Electro-optical Parameters * RADIOMETRIC

  10. Managerial Career Patterns: A Review of the Empirical Evidence

    Science.gov (United States)

    Vinkenburg, Claartje J.; Weber, Torsten

    2012-01-01

    Despite the ubiquitous presence of the term "career patterns" in the discourse about careers, the existing empirical evidence on (managerial) career patterns is rather limited. From this literature review of 33 published empirical studies of managerial and similar professional career patterns found in electronic bibliographic databases, it is…

  11. Managerial Career Patterns: A Review of the Empirical Evidence

    NARCIS (Netherlands)

    Vinkenburg, C.J.; Weber, T.

    2012-01-01

    Despite the ubiquitous presence of the term "career patterns" in the discourse about careers, the existing empirical evidence on (managerial) career patterns is rather limited. From this literature review of 33 published empirical studies of managerial and similar professional career patterns found

  12. Goodness! The empirical turn in health care ethics

    NARCIS (Netherlands)

    Willems, D.; Pols, J.

    2010-01-01

    This paper is intended to encourage scholars to submit papers for a symposium and the next special issue of Medische Antropologie which will be on empirical studies of normative questions. We describe the ‘empirical turn’ in medical ethics. Medical ethics and bioethics in general have witnessed a

  13. Who supported the Deutsche Bundesbank? An empirical investigation

    NARCIS (Netherlands)

    Maier, P; Knaap, T

    2002-01-01

    The relevance of public support for monetary policy has largely been over-looked in the empirical Central Bank literature. We have constructed a new indicator for the support of the German Bundesbank and present descriptive and empirical evidence. We find that major German interest groups were quite

  14. Poisson and Gaussian approximation of weighted local empirical processes

    NARCIS (Netherlands)

    Einmahl, J.H.J.

    1995-01-01

    We consider the local empirical process indexed by sets, a greatly generalized version of the well-studied uniform tail empirical process. We show that the weak limit of weighted versions of this process is Poisson under certain conditions, whereas it is Gaussian in other situations. Our main

  15. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  16. Empirical pseudo-potential studies on electronic structure

    Indian Academy of Sciences (India)

    Theoretical investigations of electronic structure of quantum dots is of current interest in nanophase materials. Empirical theories such as effective mass approximation, tight binding methods and empirical pseudo-potential method are capable of explaining the experimentally observed optical properties. We employ the ...

  17. Learning to Read Empirical Articles in General Psychology

    Science.gov (United States)

    Sego, Sandra A.; Stuart, Anne E.

    2016-01-01

    Many students, particularly underprepared students, struggle to identify the essential information in empirical articles. We describe a set of assignments for instructing general psychology students to dissect the structure of such articles. Students in General Psychology I read empirical articles and answered a set of general, factual questions…

  18. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    Science.gov (United States)

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  19. On the Empirical Evidence of Mutual Fund Strategic Risk Taking

    NARCIS (Netherlands)

    Goriaev, A.P.; Nijman, T.E.; Werker, B.J.M.

    2001-01-01

    We reexamine empirical evidence on strategic risk-taking behavior by mutual fund managers.Several studies suggest that fund performance in the first semester of a year influences risk-taking in the second semester.However, we show that previous empirical studies implicitly assume that idiosyncratic

  20. Collective Labour Supply, Taxes, and Intrahousehold Allocation: An Empirical Approach

    NARCIS (Netherlands)

    Bloemen, H.G.

    2017-01-01

    Most empirical studies of the impact of labour income taxation on the labour supply behaviour of households use a unitary modelling approach. In this paper we empirically analyze income taxation and the choice of working hours by combining the collective approach for household behaviour and the

  1. Empirical studies of regulatory restructuring and incentives

    Science.gov (United States)

    Knittel, Christopher Roland

    This dissertation examines the actions of firms when faced with regulatory restructuring. Chapter I examines the equilibrium pricing behavior of local exchange telephone companies under a variety of market structures. In particular, the pricing behavior of three services are analyzed: residential local service, business local service, and intraLATA toll service. Beginning in 1984, a variety of market structure changes have taken place in the local telecommunications industry. I analyze differences in the method of price-setting regulation and the restrictions on entry. Specifically, the relative pricing behavior under rate of return and price cap regulation is analyzed, as well as the impact of entry in the local exchange and intraLATA toll service markets. In doing so, I estimate an empirical model that accounts for the stickiness of rates in regulated industries that is based on firm and regulator decision processes in the presence of adjustment costs. I find that, faced with competitive pressures that reduce rates in one service, incumbent firm rates increase in other services, thereby reducing the benefits from competition. In addition, the findings suggest that price cap regulation leads to higher rates relative to rate-of-return regulation. Chapter 2 analyzes the pricing and investment behavior of electricity firms. Electricity and natural gas markets have traditionally been serviced by one of two market structures. In some markets, electricity and natural gas are sold by a dual-product regulated monopolist, while in other markets, electricity and natural gas are sold by separate single-product regulated monopolies. This paper analyzes the relative pricing and investment decisions of electricity firms operating in the two market structures. The unique relationship between these two products imply that the relative incentives of single and dual-product firms are likely to differ. 
Namely electricity and natural gas are substitutes in consumption while natural

  2. Strategic Orientation of SMEs: Empirical Research

    Directory of Open Access Journals (Sweden)

    Jelena Minović

    2016-04-01

    Full Text Available The main objective of the paper is to identify the sources of competitive advantage of small and medium-sized enterprises in Serbia. Gaining a competitive advantage is the key priority of market-oriented enterprises regardless of their size and sector. Since business environment in Serbia is not stimulating enough for enterprises’ growth and development, the paper highlights the role of strategic orientation in business promotion and development. In order to identify the sources of competitive advantage, the empirical research is conducted by using the survey method. The research sample is created by using a selective approach, namely, the sample includes enterprises with more than ten employees, and enterprises identified to have the potential for growth and development. The research results indicate that small and medium-sized enterprises in Serbia are generally focused on costs as a source of competitive advantage, i.e., they gain competitive advantage in a selected market segment by offering low price and average quality products/services. In addition, the results of the research point out that the Serbian small and medium-sized enterprises are innovation-oriented. Organizations qualifying as middle-sized enterprises are predominantly focused on process innovations, while small businesses are primarily oriented towards product innovations. One of the limitations of the research refers to the small presence of the research sample within the category of middle-sized enterprises. The smaller sample presence than it was previously planned is mostly due to the lack of managers’ willingness to participate in the research, as well as to the fact that these enterprises account for the smaller share in the total number of enterprises in the small-and medium-sized enterprises’ sector. Taking into account that the sector of small and medium-sized enterprises generates around 30% of the country’s GDP, we consider the research results to be

  3. Computational mate choice: theory and empirical evidence.

    Science.gov (United States)

    Castellano, Sergio; Cadeddu, Giorgia; Cermelli, Paolo

    2012-06-01

    The present review is based on the thesis that mate choice results from information-processing mechanisms governed by computational rules and that, to understand how females choose their mates, we should identify which are the sources of information and how they are used to make decisions. We describe mate choice as a three-step computational process and for each step we present theories and review empirical evidence. The first step is a perceptual process. It describes the acquisition of evidence, that is, how females use multiple cues and signals to assign an attractiveness value to prospective mates (the preference function hypothesis). The second step is a decisional process. It describes the construction of the decision variable (DV), which integrates evidence (private information by direct assessment), priors (public information), and value (perceived utility) of prospective mates into a quantity that is used by a decision rule (DR) to produce a choice. We make the assumption that females are optimal Bayesian decision makers and we derive a formal model of DV that can explain the effects of preference functions, mate copying, social context, and females' state and condition on the patterns of mate choice. The third step of mating decision is a deliberative process that depends on the DRs. We identify two main categories of DRs (absolute and comparative rules), and review the normative models of mate sampling tactics associated to them. We highlight the limits of the normative approach and present a class of computational models (sequential-sampling models) that are based on the assumption that DVs accumulate noisy evidence over time until a decision threshold is reached. These models force us to rethink the dichotomy between comparative and absolute decision rules, between discrimination and recognition, and even between rational and irrational choice. Since they have a robust biological basis, we think they may represent a useful theoretical tool for
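The sequential-sampling idea in the third step can be sketched as a decision variable accumulating noisy evidence about a prospective mate until it crosses an accept or reject threshold; the drift, noise level, and threshold values are illustrative assumptions:

```python
import random

def sequential_choice(drift: float, threshold: float = 2.0,
                      noise: float = 0.5, seed: int = 1):
    """Accumulate noisy evidence until the decision variable (DV) crosses
    +threshold (accept) or -threshold (reject); return choice and time."""
    rng = random.Random(seed)
    dv, steps = 0.0, 0
    while abs(dv) < threshold:
        dv += drift + rng.gauss(0.0, noise)   # one noisy evidence sample per step
        steps += 1
    return ("accept" if dv > 0 else "reject"), steps

decision, t = sequential_choice(drift=0.3)
print(f"decision = {decision} after {t} evidence samples")
```

In this framing the drift plays the role of the integrated evidence-plus-prior quantity described in the review, and decision time emerges from the accumulation process rather than being fixed in advance, which is what blurs the line between comparative and absolute decision rules.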

  4. An empirical, integrated forest biomass monitoring system

    Science.gov (United States)

    Kennedy, Robert E.; Ohmann, Janet; Gregory, Matt; Roberts, Heather; Yang, Zhiqiang; Bell, David M.; Kane, Van; Hughes, M. Joseph; Cohen, Warren B.; Powell, Scott; Neeti, Neeti; Larrue, Tara; Hooper, Sam; Kane, Jonathan; Miller, David L.; Perkins, James; Braaten, Justin; Seidl, Rupert

    2018-02-01

    The fate of live forest biomass is largely controlled by growth and disturbance processes, both natural and anthropogenic. Thus, biomass monitoring strategies must characterize both the biomass of the forests at a given point in time and the dynamic processes that change it. Here, we describe and test an empirical monitoring system designed to meet those needs. Our system uses a mix of field data, statistical modeling, remotely-sensed time-series imagery, and small-footprint lidar data to build and evaluate maps of forest biomass. It ascribes biomass change to specific change agents, and attempts to capture the impact of uncertainty in methodology. We find that: • A common image framework for biomass estimation and for change detection allows for consistent comparison of both state and change processes controlling biomass dynamics. • Regional estimates of total biomass agree well with those from plot data alone. • The system tracks biomass densities up to 450-500 Mg ha-1 with little bias, but begins underestimating true biomass as densities increase further. • Scale considerations are important. Estimates at the 30 m grain size are noisy, but agreement at broad scales is good. Further investigation to determine the appropriate scales is underway. • Uncertainty from methodological choices is evident, but much smaller than uncertainty based on choice of allometric equation used to estimate biomass from tree data. • In this forest-dominated study area, growth and loss processes largely balance in most years, with loss processes dominated by human removal through harvest. In years with substantial fire activity, however, overall biomass loss greatly outpaces growth. Taken together, our methods represent a unique combination of elements foundational to an operational landscape-scale forest biomass monitoring program.

  5. New Empirical Earthquake Source‐Scaling Laws

    KAUST Repository

    Thingbaijam, Kiran Kumar S.

    2017-12-13

    We develop new empirical scaling laws for rupture width W, rupture length L, rupture area A, and average slip D, based on a large database of rupture models. The database incorporates recent earthquake source models in a wide magnitude range (M 5.4–9.2) and events of various faulting styles. We apply general orthogonal regression, instead of ordinary least-squares regression, to account for measurement errors of all variables and to obtain mutually self-consistent relationships. We observe that L grows more rapidly with M compared to W. The fault-aspect ratio (L/W) tends to increase with fault dip, which generally increases from reverse-faulting, to normal-faulting, to strike-slip events. At the same time, subduction-interface earthquakes have significantly higher W (hence a larger rupture area A) compared to other faulting regimes. For strike-slip events, the growth of W with M is strongly inhibited, whereas the scaling of L agrees with the L-model behavior (D correlated with L). However, at a regional scale for which seismogenic depth is essentially fixed, the scaling behavior corresponds to the W model (D not correlated with L). Self-similar M − log A scaling behavior is observed to be consistent for all cases, except for normal-faulting events. Interestingly, the ratio D/W (a proxy for average stress drop) tends to increase with M, except for shallow crustal reverse-faulting events, suggesting the possibility of scale-dependent stress drop. The observed variations in source-scaling properties for different faulting regimes can be interpreted in terms of geological and seismological factors. We find substantial differences between our new scaling relationships and those of previous studies. Therefore, our study provides critical updates on source-scaling relations needed in seismic–tsunami-hazard analysis and engineering applications.
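    General orthogonal regression, as used above, treats measurement errors in both variables symmetrically rather than only in the response. A minimal total-least-squares sketch (the synthetic data and constants below are purely illustrative, not the study's database or fitted coefficients):

```python
import numpy as np

def orthogonal_regression(x, y):
    """General orthogonal (total least squares) fit of y = a + b*x,
    treating measurement errors in both variables symmetrically."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    # The first principal direction of the centered data gives the TLS line.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    b = vt[0, 1] / vt[0, 0]          # slope from the leading singular vector
    a = y.mean() - b * x.mean()      # intercept through the centroid
    return a, b

# Hypothetical synthetic data: self-similar scaling implies log10(A)
# grows roughly one-to-one with M (offset and noise level are assumed).
rng = np.random.default_rng(0)
M = rng.uniform(5.4, 9.2, 200)
logA = M - 4.0 + rng.normal(0.0, 0.15, 200)
a, b = orthogonal_regression(M, logA)
```

    With noisy measurements on both axes, ordinary least squares would bias the slope downward; the orthogonal fit avoids that asymmetry.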

  6. Empirical high-latitude electric field models

    International Nuclear Information System (INIS)

    Heppner, J.P.; Maynard, N.C.

    1987-01-01

    Electric field measurements from the Dynamics Explorer 2 satellite have been analyzed to extend the empirical models previously developed from dawn-dusk OGO 6 measurements (J.P. Heppner, 1977). The analysis embraces large quantities of data from polar crossings entering and exiting the high latitudes in all magnetic local time zones. Paralleling the previous analysis, the modeling is based on the distinctly different polar cap and dayside convective patterns that occur as a function of the sign of the Y component of the interplanetary magnetic field. The objective, which is to represent the typical distributions of convective electric fields with a minimum number of characteristic patterns, is met by deriving one pattern (model BC) for the northern hemisphere with a +Y interplanetary magnetic field (IMF) and southern hemisphere with a -Y IMF, and two patterns (models A and DE) for the northern hemisphere with a -Y IMF and southern hemisphere with a +Y IMF. The most significant large-scale revisions of the OGO 6 models are (1) on the dayside, where the latitudinal overlap of morning and evening convection cells reverses with the sign of the IMF Y component, (2) on the nightside, where a westward flow region poleward from the Harang discontinuity appears under model BC conditions, and (3) magnetic local time shifts in the positions of the convection cell foci. The modeling above was followed by a detailed examination of cases where the IMF Z component was clearly positive (northward). Neglecting the seasonally dependent cases where irregularities obscure pattern recognition, the observations range from reasonable agreement with the new BC and DE models, to cases where different characteristics appeared primarily at dayside high latitudes.

  7. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.

  8. ROMANIAN YOUNG ENTREPRENEURS FEATURES: AN EMPIRICAL SURVEY

    Directory of Open Access Journals (Sweden)

    Ceptureanu Sebastian Ion

    2015-07-01

    Full Text Available There are many studies linking entrepreneurship and economic development. For specialists and public decision makers, developing entrepreneurship seems to be an easy policy action, even though actions and results are rather debatable. Unfortunately, the relevant literature is not so generous concerning youth entrepreneurship. Youth is one of the most vulnerable groups in society, especially in the current economic and demographic situation in the European Union and worldwide. At the same time, youth is the period when most people engage in their first job, gain financial independence and assume new responsibilities and roles shaping their identity. With respect to this, starting their own business is a natural choice for many young people. When considering the entrepreneurial potential of young Romanians, there is almost no data available. This paper aims to disseminate the results of a survey focused on young entrepreneurs, designed to fill the gap in the literature about Romanian young entrepreneurs’ features. The empirical study was divided into five parts: A. Personality of young entrepreneurs, highlighting the main features of behaviour and personality of young entrepreneurs. B. Professional background, focusing on young entrepreneurs’ background and how it influences their interest and performance improvement. C. Risk and crisis acceptance, highlighting the ability of young entrepreneurs to deal with critical situations. D. Business and business environment, focusing on internal and environmental aspects of the business. E. Social-cultural attitude, highlighting the attitude of society (incentives and disincentives to entrepreneurial initiatives of young people. These are excerpts of results from the first part, regarding the personality of Romanian young entrepreneurs, concerning issues like level of independence, capacity for innovation, self-confidence, decision making process, level of persistence, flexibility of young

  9. Timetable Attractiveness Parameters

    DEFF Research Database (Denmark)

    Schittenhelm, Bernd

    2008-01-01

    Timetable attractiveness is influenced by a set of key parameters that are described in this article. Regarding the superior structure of the timetable, the trend in Europe goes towards periodic regular interval timetables. Regular departures and a focus on optimal transfer possibilities make these timetables attractive. The travel time in the timetable depends on the characteristics of the infrastructure and rolling stock, the heterogeneity of the planned train traffic and the necessary number of transfers on the passenger’s journey. Planned interdependencies between trains, such as transfers, and heterogeneous traffic add complexity to the timetable. The risk of spreading initial delays to other trains and parts of the network increases with the level of timetable complexity.

  10. Parameter measurement of target

    International Nuclear Information System (INIS)

    Gao Dangzhong

    2001-01-01

    The progress in 1999 of parameter measurement of targets (ICF-15) is presented, including the design and contract of the microsphere equator profiler, precise air-bearing manufacturing, high-resolution X-ray imaging of multi-layer shells with the X-ray photos processed by special image and data software, some plastic shells measured with a precision of 0.3 μm, the high-resolution observation and photograph system of the 'dew-point method', a special target fixture and the measurement of its temperature distribution, and measurement of the dew-point temperature and fuel gas pressure of shells with internal D2 pressures of (5–15)×10^5 Pa and wall thicknesses of 1.5–3 μm.

  11. Safety Parameters Graphical Interface

    International Nuclear Information System (INIS)

    Canamero, B.

    1998-01-01

    Nuclear power plant data are received at the Operations Center of the Consejo de Seguridad Nuclear in emergency situations. In order to achieve the required interface and to prepare those data for simulation and forecasting with already existing computer codes, a Safety Parameters Graphical Interface (IGPS) has been developed. The system runs in a UNIX environment and uses X Window capabilities. The received data are stored in such a way that they can be easily used for further analysis and training activities. The system consists of task-oriented modules (processes) which communicate with each other using well-known UNIX mechanisms (signals, sockets and shared memory segments). Conceptually, IGPS has two different parts: data collection and preparation, and data monitoring. (Author)

  12. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remainder produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of highway segments. Likewise, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to inaccurate conclusions. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
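    In a random-parameters model, each highway segment draws its own coefficient from an estimated distribution, so a split such as 88.21% increasing versus 11.79% decreasing is simply the normal probability mass on either side of zero. A small sketch (the standard deviation below is a hypothetical value chosen to reproduce a similar split, not one reported by the study):

```python
import math

def positive_share(mean, sd):
    """P(beta > 0) for beta ~ Normal(mean, sd): the fraction of segments
    whose randomly distributed parameter increases crash frequency."""
    return 0.5 * (1.0 + math.erf(mean / (sd * math.sqrt(2.0))))

# Hypothetical: mean effect +0.492 with an assumed sd of 0.415
share = positive_share(0.492, 0.415)   # roughly 0.88, i.e. ~88% of segments
```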

  13. varying elastic parameters distributions

    KAUST Repository

    Moussawi, Ali

    2014-12-01

    The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for full characterization of a heterogeneous object, leading to a time-consuming and costly process. The development of non-contact, full-field measurement techniques from which complex kinematic fields can be recorded has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making possible the uncoupling of the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions) as demonstrated on 2D numerical examples. First the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost of processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.

  14. Process Damping Parameters

    International Nuclear Information System (INIS)

    Turner, Sam

    2011-01-01

    The phenomenon of process damping as a stabilising effect in milling has been encountered by machinists since milling and turning began. It is of great importance when milling aerospace alloys where maximum surface speed is limited by excessive tool wear and high speed stability lobes cannot be attained. Much of the established research into regenerative chatter and chatter avoidance has focussed on stability lobe theory with different analytical and time domain models developed to expand on the theory first developed by Tlusty and Tobias. Process damping is a stabilising effect that occurs when the surface speed is low relative to the dominant natural frequency of the system and has been less successfully modelled and understood. Process damping is believed to be influenced by the interference of the relief face of the cutting tool with the waveform traced on the cut surface, with material properties and the relief geometry of the tool believed to be key factors governing performance. This study combines experimental trials with Finite Element (FE) simulation in an attempt to identify and understand the key factors influencing process damping performance in titanium milling. Rake angle, relief angle and chip thickness are the variables considered experimentally, with the FE study looking at average radial and tangential forces and surface compressive stress. For the experimental study a technique is developed to identify the critical process damping wavelength as a means of measuring process damping performance. For the range of parameters studied, chip thickness is found to be the dominant factor, with maximum stable parameters increased by a factor of 17 in the best case. Within the range studied, relief angle was found to have a lesser effect than expected, whilst rake angle had an influence.

  15. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  16. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Camara Vincent A. R.

    1998-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changes in the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, is used as a criterion to numerically compare our results. It is shown that empirical Bayes reliability functions are in general sensitive to the choice of the loss function, and that the squared error loss does not always yield the best empirical Bayes reliability estimate.
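    To see concretely why the estimate depends on the loss function, one can minimize the posterior expected loss numerically over a toy discrete posterior (all numbers below are hypothetical, unrelated to the paper's derivations): squared-error loss returns the posterior mean, while a logarithmic-type loss returns a slightly different estimate.

```python
import numpy as np

# Toy discrete posterior over the reliability R (hypothetical support/weights).
support = np.array([0.80, 0.90, 0.95])
post = np.array([0.2, 0.5, 0.3])

def bayes_estimate(loss):
    """Numerically minimize the posterior expected loss over a fine grid."""
    grid = np.linspace(0.7, 1.0, 3001)
    risk = [np.sum(post * loss(support, g)) for g in grid]
    return grid[int(np.argmin(risk))]

sq_est = bayes_estimate(lambda r, a: (r - a) ** 2)               # posterior mean
log_est = bayes_estimate(lambda r, a: (np.log(r) - np.log(a)) ** 2)
```

    Here `sq_est` lands on the posterior mean (0.895 for these weights) and `log_est` on exp(E[ln R]), slightly below it, illustrating the sensitivity the abstract describes.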

  17. Ocean Wave Parameters Retrieval from Sentinel-1 SAR Imagery

    Directory of Open Access Journals (Sweden)

    Weizeng Shao

    2016-08-01

    Full Text Available In this paper, a semi-empirical algorithm for significant wave height (Hs and mean wave period (Tmw retrieval from C-band VV-polarization Sentinel-1 synthetic aperture radar (SAR imagery is presented. We develop a semi-empirical function for Hs retrieval, which describes the relation between Hs and cutoff wavelength, radar incidence angle, and wave propagation direction relative to radar look direction. Additionally, Tmw can also be calculated from Hs and cutoff wavelength by using another empirical function. We collected 106 C-band stripmap mode Sentinel-1 SAR images in VV-polarization and wave measurements from in situ buoys. There are a total of 150 matchup points. We used 93 matchups to tune the coefficients of the semi-empirical algorithm and the remaining 57 matchups for validation. The comparison shows a 0.69 m root mean square error (RMSE of Hs with an 18.6% scatter index (SI and a 1.98 s RMSE of Tmw with a 24.8% SI. Results indicate that the algorithm is suitable for wave parameter retrieval from Sentinel-1 SAR data.
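    The two validation metrics quoted above, RMSE and scatter index (RMSE normalized by the mean observation), take only a few lines to compute. This is a generic sketch, not the authors' code:

```python
import numpy as np

def rmse_and_scatter_index(obs, pred):
    """Root mean square error and scatter index (RMSE / mean of observations),
    the standard validation pair for wave-parameter retrievals."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    si = rmse / obs.mean()
    return rmse, si

# Hypothetical buoy observations vs. retrievals (illustrative values only)
rmse, si = rmse_and_scatter_index([1.0, 2.0, 3.0, 4.0],
                                  [1.1, 1.9, 3.2, 4.2])
```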

  18. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we choose a suitable indoor-outdoor propagation model from the existing models by considering path loss and distance as parameters. Path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC that receives beacon messages from indoor nodes. As per our analysis, the International Telecommunication Union (ITU model, Stanford University Interim (SUI model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33 model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
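    Of the candidate models, the log-distance path loss model is the simplest to state: PL(d) = PL(d0) + 10·n·log10(d/d0). A sketch with assumed, scenario-dependent reference loss and path loss exponent (not values from the paper):

```python
import math

def log_distance_path_loss(d, d0=1.0, pl0=40.0, n=2.7):
    """Log-distance path loss in dB: PL(d) = PL(d0) + 10*n*log10(d/d0).
    pl0 (reference loss at d0) and n (path loss exponent) are assumed
    constants; in practice both are fitted to measured data."""
    return pl0 + 10.0 * n * math.log10(d / d0)

loss_at_10m = log_distance_path_loss(10.0)   # 40 dB reference + 27 dB = 67 dB
```

    Fitting `pl0` and `n` to the measured indoor-to-QC beacon data, then comparing residual standard deviations across models, is precisely the kind of selection procedure the abstract describes.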

  19. On the Sophistication of Naïve Empirical Reasoning: Factors Influencing Mathematicians' Persuasion Ratings of Empirical Arguments

    Science.gov (United States)

    Weber, Keith

    2013-01-01

    This paper presents the results of an experiment in which mathematicians were asked to rate how persuasive they found two empirical arguments. There were three key results from this study: (a) Participants judged an empirical argument as more persuasive if it verified that integers possessed an infrequent property than if it verified that integers…

  20. Empirical relationships between gas abundances and UV selective extinction

    International Nuclear Information System (INIS)

    Joseph, C.L.

    1990-01-01

    Several studies of gas-phase abundances in lines of sight through the outer edges of dense clouds are summarized. These lines of sight have 0.4 < E(B-V) < 1.1 and inferred spatial densities of a few hundred cm(-3). The primary thrust of these studies has been to compare gaseous abundances in interstellar clouds that have various types of peculiar selective extinction. To date, the most notable result has been an empirical relationship between the CN/Fe I abundance ratio and the depth of the 2200 A extinction bump. It is not clear whether these two parameters are linearly correlated or the data are organized into two discrete ensembles. Based on 19 sampled lines of sight, those that have a CN/Fe I abundance ratio greater than 0.3 (dex) appear to have a shallow bump of 2.57 ± 0.55, compared to 3.60 ± 0.36 for other dense clouds and to the 3.6 Seaton (1979) average. The difference in the strength of the extinction bump between these two ensembles is 1.03 ± 0.23. Although a high-resolution IUE survey of dense clouds is far from complete, the few lines of sight with shallow extinction bumps all show preferential depletion of certain elements, while those lines of sight with normal 2200 A bumps do not. Ca II, Cr II, and Mn II appear to exhibit the strongest preferential depletion. Fe II and Si II depletions also appear to be somewhat enhanced. It should be noted that Copernicus data suggest all elements, including the so-called nondepletors, deplete in diffuse clouds (Snow and Jenkins 1980, Joseph 1988). Those lines of sight through dense clouds that have normal 2200 A extinction bumps appear to be extensions of the depletions found in the diffuse interstellar medium.

  1. Flexible Modeling of Epidemics with an Empirical Bayes Framework

    Science.gov (United States)

    Brooks, Logan C.; Farrow, David C.; Hyun, Sangwon; Tibshirani, Ryan J.; Rosenfeld, Roni

    2015-01-01

    Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic’s behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the “Predict the Influenza Season Challenge”, with the task of predicting key epidemiological measures for the 2013–2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctors visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to some other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013–2014 U.S. influenza season, and compare the framework’s cross-validated prediction error on historical data to

  2. Empirical and theoretical analysis of complex systems

    Science.gov (United States)

    Zhao, Guannan

    structures evolve on a similar timescale to individual-level transmission, we investigated the process of transmission through a model population comprising social groups which follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power-law test algorithm, we have developed a fast testing procedure using parallel computation.

  3. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocities back to the corona. Using simple mass flux conservation, we show very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations, which arise from these empirical observational models.

  4. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified…
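    The GLS step referred to in this record weights observations by the inverse of their error covariance, β = (XᵀΩ⁻¹X)⁻¹XᵀΩ⁻¹y. A generic sketch of that estimator (not the paper's implementation; the design matrix and data are hypothetical):

```python
import numpy as np

def gls(X, y, omega):
    """Generalized least squares: beta = (X' W X)^-1 X' W y with W = inv(omega),
    so observations with larger error variance receive smaller weight."""
    W = np.linalg.inv(omega)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical regional design: intercept plus one catchment descriptor.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
beta = gls(X, y, np.eye(3))   # with omega = I this reduces to OLS
```

    In regional flood frequency work, Ω would combine sampling error (which varies with record length) and model error across sites.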

  5. An empirical relationship for homogenization in single-phase binary alloy systems

    Science.gov (United States)

    Unnam, J.; Tenney, D. R.; Stein, B. A.

    1979-01-01

    A semiempirical formula is developed for describing the extent of interaction between constituents in single-phase binary alloy systems with planar, cylindrical, or spherical interfaces. The formula contains two parameters that are functions of mean concentration and interface geometry of the couple. The empirical solution is simple, easy to use, and does not involve sequential calculations, thereby allowing quick estimation of the extent of interactions without lengthy calculations. Results obtained with this formula are in good agreement with those from a finite-difference analysis.

  6. Empirical formulae for excess noise factor with dead space for single carrier multiplication

    KAUST Repository

    Dehwah, Ahmad H.

    2011-09-01

    In this letter, two empirical equations are presented for the calculation of the excess noise factor of an avalanche photodiode for single carrier multiplication including the dead space effect. The first is an equation for calculating the excess noise factor when the multiplication approaches infinity as a function of parameters that describe the degree of the dead space effect. The second equation can be used to find the minimum value of the excess noise factor for any multiplication when the dead space effect is completely dominant, the so called "deterministic" limit. This agrees with the theoretically known equation for multiplications less than or equal to two. © 2011 World Scientific Publishing Company.
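    For orientation, the classical local-model (McIntyre) excess noise factor, the dead-space-free baseline that such empirical equations modify, reduces to F = 2 − 1/M for single-carrier multiplication (ionization ratio k = 0). A sketch of that baseline only (the letter's own dead-space equations are not reproduced here):

```python
def mcintyre_excess_noise(M, k=0.0):
    """Local-model (McIntyre) excess noise factor of an APD:
    F = k*M + (2 - 1/M)*(1 - k), with k the ionization coefficient ratio.
    k = 0 corresponds to single-carrier multiplication."""
    return k * M + (2.0 - 1.0 / M) * (1.0 - k)

f_single = mcintyre_excess_noise(2.0)        # k = 0, M = 2 -> F = 1.5
f_equal = mcintyre_excess_noise(10.0, k=1.0) # k = 1 -> F = M
```

    Dead space suppresses the randomness of impact ionization, so the dead-space-corrected F lies below this local-model value, approaching the "deterministic" limit the abstract mentions.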

  7. Penicillin as empirical therapy for patients hospitalised with community acquired pneumonia at a Danish hospital

    DEFF Research Database (Denmark)

    Kirk, O; Glenthøj, Jonathan Peter; Dragsted, Ulrik Bak

    2001-01-01

    …and outcome parameters were collected. Three groups were established according to the initial choice of antibiotic(s): penicillin only (n = 160); non-allergic patients starting broader spectrum therapy (n = 54); and patients with suspected penicillin allergy (n = 29). RESULTS: The overall mortality within… treated with penicillin monotherapy. No differences in clinical outcomes were documented between patients treated empirically with broad-spectrum therapy and penicillin monotherapy. Therefore, penicillin seems to be a reasonable first choice for initial therapy of HCAP in Denmark as in other regions…

  8. A reactive empirical bond order (REBO) potential for hydrocarbon-oxygen interactions

    International Nuclear Information System (INIS)

    Ni, Boris; Lee, Ki-Ho; Sinnott, Susan B

    2004-01-01

    The expansion of the second-generation reactive empirical bond order (REBO) potential for hydrocarbons, as parametrized by Brenner and co-workers, to include oxygen is presented. This involves the explicit inclusion of C-O, H-O, and O-O interactions to the existing C-C, C-H, and H-H interactions in the REBO potential. The details of the expansion, including all parameters, are given. The new, expanded potential is then applied to the study of the structure and chemical stability of several molecules and polymer chains, and to modelling chemical reactions among a series of molecules, within classical molecular dynamics simulations

  10. [Acoustical parameters of toys].

    Science.gov (United States)

    Harazin, Barbara

    2010-01-01

    Toys play an important role in developing children's visual and auditory concentration. They also support the development of manipulation skills, gently stimulate the child and engage its emotions. Many toys emit various sounds. The aim of the study was to assess sound levels produced by sound-emitting toys used by young children. Acoustical parameters of noise were evaluated for 16 sound-emitting plastic toys under laboratory conditions. The noise level was recorded at four distances from the toy: 10, 20, 25 and 30 cm. Measurements of A-weighted sound pressure levels and of noise levels in octave bands over the frequency range from 31.5 Hz to 16 kHz were performed at each distance. Based on the highest equivalent A-weighted sound levels produced, the tested toys can be divided into four groups: below 70 dB (6 toys), 70-74 dB (4 toys), 75-84 dB (3 toys) and 85-94 dB (3 toys). The majority of toys (81%) emitted dominant sound levels in the octave bands from 2 kHz to 4 kHz. Sound-emitting toys thus produce the highest acoustic energy in the frequency range to which the auditory system is most susceptible. Noise levels produced by some toys can be dangerous to children's hearing.

  11. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
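The separate-models two-stage idea can be illustrated with a toy normal-normal shrinkage model: compute an empirical Bayes (shrunken) predictor of each subject's random intercept for each outcome separately, then correlate the predictors in a second stage. All numbers below (subject counts, variances, the true correlation of 0.8) are simulated for illustration, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_rep = 200, 8
rho, sd_u, sd_e = 0.8, 1.0, 1.0   # assumed simulation truth

# Two outcomes sharing correlated subject-level random intercepts.
cov = sd_u**2 * np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)       # (n_subj, 2)
y = u[:, None, :] + sd_e * rng.standard_normal((n_subj, n_rep, 2))

# Empirical Bayes predictor under a normal-normal model:
# u_hat = (1 - B) * (subject mean - grand mean),
# B = (sd_e^2 / n_rep) / (sd_u^2 + sd_e^2 / n_rep).
subj_means = y.mean(axis=1)                                     # (n_subj, 2)
B = (sd_e**2 / n_rep) / (sd_u**2 + sd_e**2 / n_rep)
u_hat = (1.0 - B) * (subj_means - subj_means.mean(axis=0))

# Second-stage association analysis: correlate the EB predictors.
r = np.corrcoef(u_hat[:, 0], u_hat[:, 1])[0, 1]
```

The second-stage correlation r recovers the latent association only approximately; residual noise in the subject means attenuates it relative to the true rho, which is one face of the bias the article quantifies.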

  12. Parameter estimation for lithium ion batteries

    Science.gov (United States)

    Santhanagopalan, Shriram

    With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
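A minimal version of the parameter extraction step is a least-squares fit of an empirical model to measured voltage data. The quadratic open-circuit-voltage model below and its coefficients are purely illustrative stand-ins, not a published battery model, but the mechanics (build a design matrix, solve the normal equations) are the same as in simple linear-in-parameters estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical empirical OCV model, linear in its parameters:
# V(soc) = a0 + a1*soc + a2*soc^2 (coefficients are illustrative).
true = np.array([3.0, 1.2, -0.5])
soc = np.linspace(0.1, 1.0, 50)
v_meas = true[0] + true[1] * soc + true[2] * soc**2 \
         + 0.005 * rng.standard_normal(soc.size)     # simulated measurement noise

# Least-squares parameter estimation from the noisy "measurements".
X = np.column_stack([np.ones_like(soc), soc, soc**2])
est, *_ = np.linalg.lstsq(X, v_meas, rcond=None)
```

Genuinely nonlinear parameters (e.g. kinetic rate constants) require iterative nonlinear estimation instead, which is the procedure the dissertation applies.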

  13. An empirical investigation of spatial differentiation and price floor regulations in retail markets for gasoline

    Science.gov (United States)

    Houde, Jean-Francois

    In the first essay of this dissertation, I study an empirical model of spatial competition. The main feature of my approach is to formally specify commuting paths as the "locations" of consumers in a Hotelling-type model of spatial competition. The main consequence of this location assumption is that the substitution patterns between stations depend in an intuitive way on the structure of the road network and the direction of traffic flows. The demand-side of the model is estimated by combining a model of traffic allocation with econometric techniques used to estimate models of demand for differentiated products (Berry, Levinsohn and Pakes (1995)). The estimated parameters are then used to evaluate the importance of commuting patterns in explaining the distribution of gasoline sales, and compare the economic predictions of the model with the standard home-location model. In the second and third essays, I examine empirically the effect of a price floor regulation on the dynamic and static equilibrium outcomes of the gasoline retail industry. In particular, in the second essay I study empirically the dynamic entry and exit decisions of gasoline stations, and measure the impact of a price floor on the continuation values of staying in the industry. In the third essay, I develop and estimate a static model of quantity competition subject to a price floor regulation. Both models are estimated using a rich panel dataset on the Quebec gasoline retail market before and after the implementation of a price floor regulation.

  14. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated by neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure, along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative for training an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to retain past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model
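The core idea, fitting a one-step-ahead dynamic model by gradient descent on prediction error, can be shown on a drastically reduced example. The first-order plant and the linear model below are toy stand-ins for the UTSG submodels and the RMLP; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy plant to identify: y[k+1] = 0.9*y[k] + 0.1*u[k] (coefficients illustrative).
u = rng.standard_normal(500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

# One-step-ahead empirical model y_hat[k+1] = a*y[k] + b*u[k], trained by
# plain gradient descent on the mean squared prediction error.
a, b, lr = 0.0, 0.0, 0.2
for _ in range(3000):
    e = a * y[:-1] + b * u - y[1:]        # prediction errors
    a -= lr * 2.0 * np.mean(e * y[:-1])   # dMSE/da
    b -= lr * 2.0 * np.mean(e * u)        # dMSE/db
```

A real RMLP adds hidden nonlinear state and recurrent connections, which is what makes dynamic (through-time) gradient algorithms worthwhile; the static-versus-dynamic learning distinction in the abstract is about how those recurrent gradients are handled.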

  15. Empirical Percentile Growth Curves with Z-scores Considering Seasonal Compensatory Growths for Japanese Thoroughbred Horses

    Science.gov (United States)

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2013-01-01

    Percentile growth curves are often used as a clinical indicator to evaluate variations of children’s growth status. In this study, we propose empirical percentile growth curves using Z-scores adapted for Japanese Thoroughbred horses, with considerations of the seasonal compensatory growth that is a typical characteristic of seasonal breeding animals. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Individual horses and residual effects were included as random effects in the growth curve equation model and their variance components were estimated. Based on the Z-scores of the estimated variance components, empirical percentile growth curves were constructed. A total of 5,594 and 5,680 body weight and age measurements of male and female Thoroughbreds, respectively, and 3,770 withers height and age measurements were used in the analyses. The developed empirical percentile growth curves using Z-scores are computationally feasible and useful for monitoring individual growth parameters of body weight and withers height of young Thoroughbred horses, especially during compensatory growth periods. PMID:24834004
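The Z-score construction of percentile curves reduces to shifting the fitted mean curve by standard-normal quantiles times the between-individual standard deviation. The mean-curve form and all numbers below are invented for illustration; they are not the published Thoroughbred fits.

```python
import numpy as np

# Illustrative mean growth curve for body weight (Brody/monomolecular form);
# parameters are made up for demonstration.
def mean_weight(age_days):
    return 500.0 * (1.0 - 0.7 * np.exp(-0.004 * age_days))

Z = {5: -1.6449, 25: -0.6745, 50: 0.0, 75: 0.6745, 95: 1.6449}  # N(0,1) quantiles
SD = 25.0   # assumed between-horse standard deviation, kg

def percentile_curve(age_days, p):
    """Empirical percentile curve: mean curve shifted by z-score * SD."""
    return mean_weight(age_days) + Z[p] * SD

age = np.arange(0, 721, 30)
p50 = percentile_curve(age, 50)
p95 = percentile_curve(age, 95)
```

Seasonal compensatory growth enters through the mean-curve equation itself (the authors' adjusted growth model), not through the Z-score machinery, which stays the same.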

  16. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Vincent A. R. Camara

    1999-01-01

    The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results.
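The sensitivity being studied can be seen numerically: the Bayes estimate under squared-error loss is the posterior mean, while other losses select other functionals of the posterior. The sketch below contrasts squared-error with absolute-error loss (median) on a skewed simulated "posterior"; it uses a lognormal draw purely as an illustration, not the article's Higgins–Tsokos, Harris, or logarithmic losses.

```python
import numpy as np

rng = np.random.default_rng(3)

# Draws standing in for a posterior over a reliability-related quantity.
# Lognormal is chosen only because it is skewed, so loss functions disagree.
post = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

bayes_sq = post.mean()        # optimal point estimate under squared-error loss
bayes_abs = np.median(post)   # optimal point estimate under absolute-error loss
```

For a skewed posterior the two estimates differ substantially (here roughly e^0.5 versus 1), which is exactly why the choice of loss function matters for reliability estimates.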

  17. Activation method for measurement of neutron spectrum parameters

    International Nuclear Information System (INIS)

    Efimov, B.V.; Demidov, A.M.; Ionov, V.S.; Konjaev, S.I.; Marin, S.V.; Bryzgalov, V.I.

    2007-01-01

    Experimental studies of neutron spectrum parameters at the nuclear installations of RRC KI are presented. The installations differ in core design, reflector, operating parameters and fuel-element types. Measurements were carried out using the technique developed at RRC KI for irradiating UKD resonance detectors. The arrangement of detectors in the cores made it possible to measure neutron spectra with distinct parameter values. The spectrum parameters are introduced through a parametric representation of the neutron spectrum in the form corresponding to the Westcott formalism. From the experimental data, the following were determined: absolute values of the neutron flux density in the thermal and epithermal regions of the spectrum (F_t, f_epi); the empirical dependence of the neutron gas temperature (Tn) on the spectrum hardness parameter (z); and the neutron flux density in the transitional energy region of the spectrum. Dependences of the spectral indices of the nuclides included in UKD (UDy/UX) on the hardness z and/or the neutron gas temperature Tn were obtained. Mathematical processing tools were applied to the activation data to estimate the spectrum parameters (F_t, f_epi, z, Tn, UDy/UX). The paper presents selected results for the neutron spectrum parameters of these nuclear installations (Authors)

  18. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
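A minimal numpy sketch of the general-form Tikhonov setup: solve min ||Ax - b||^2 + lambda^2 ||Lx||^2 over a grid of lambda values and pick the one with smallest error against known training truth, a very loose stand-in for the paper's empirical Bayes risk minimization with training data. The matrix sizes, the first-difference L, and the noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated training pair (b, x_true) for a small general-form Tikhonov problem.
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 0.05 * rng.standard_normal(m)
L = np.eye(n) - np.eye(n, k=1)          # first-difference regularization matrix

def tikhonov(lam):
    """Solve the regularized normal equations (A^T A + lam^2 L^T L) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# "Learning" step: pick the lambda that minimizes the training error.
lams = np.logspace(-4, 2, 25)
errs = [float(np.linalg.norm(tikhonov(l) - x_true)) for l in lams]
lam_best = lams[int(np.argmin(errs))]
```

The paper's contribution is doing this efficiently and for multiple parameters (via simultaneous diagonalization or operator approximations) rather than by brute-force grid search on each problem.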

  19. Parameters of care for craniosynostosis

    DEFF Research Database (Denmark)

    McCarthy, Joseph G; Warren, Stephen M; Bernstein, Joseph

    2012-01-01

    A multidisciplinary meeting was held from March 4 to 6, 2010, in Atlanta, Georgia, entitled "Craniosynostosis: Developing Parameters for Diagnosis, Treatment, and Management." The goal of this meeting was to create parameters of care for individuals with craniosynostosis.

  20. Subsurface Geotechnical Parameters Report

    International Nuclear Information System (INIS)

    Rigby, D.; Mrugala, M.; Shideler, G.; Davidsavor, T.; Leem, J.; Buesch, D.; Sun, Y.; Potyondy, D.; Christianson, M.

    2003-01-01

    The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities toward License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into the sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document addresses the program need to answer specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues; in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce

  1. Subsurface Geotechnical Parameters Report

    Energy Technology Data Exchange (ETDEWEB)

    D. Rigby; M. Mrugala; G. Shideler; T. Davidsavor; J. Leem; D. Buesch; Y. Sun; D. Potyondy; M. Christianson

    2003-12-17

    The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities toward License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into the sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document addresses the program need to answer specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues; in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce

  2. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method is proposed for core safety parameters for which measured values are not available. Correlations among the prediction errors of core safety parameters are empirically recognized, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are estimated theoretically using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, the uncertainty of a safety parameter for which no measured value is available can be estimated. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
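The mechanism can be illustrated with a two-parameter toy model: if the prediction errors of a measured parameter x1 and an unmeasured parameter x2 are jointly normal with correlation rho, conditioning on x1 shrinks the standard deviation of x2 by a factor sqrt(1 - rho^2). The values rho = 0.8 and unit variances below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Jointly normal prediction errors of a measured (x1) and an unmeasured (x2)
# core parameter; rho and sigma are illustrative assumptions.
rho, sigma = 0.8, 1.0
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

# Residual of the best linear prediction of x2 from x1; its spread is the
# remaining (reduced) uncertainty, sigma * sqrt(1 - rho^2) = 0.6 here.
residual = x[:, 1] - rho * x[:, 0]
sd_conditional = residual.std()
```

This is the sense in which a measured parameter's error, through the correlation structure, reduces the uncertainty assigned to an unmeasured one.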

  3. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
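The rank transformation step itself is simple: each response is replaced by its position in the sorted order before the surrogate model is fitted. A minimal version is sketched below (ties are ignored for simplicity; SPO implementations typically use average ranks).

```python
import numpy as np

def rank_transform(responses):
    """Replace each response by its rank (1 = smallest)."""
    responses = np.asarray(responses, dtype=float)
    ranks = np.empty(responses.size, dtype=float)
    ranks[np.argsort(responses)] = np.arange(1, responses.size + 1)
    return ranks

# Heavily skewed runtimes become an evenly spaced scale, which is the
# symmetry/normality improvement the paper observes in the residuals.
runtimes = np.array([0.9, 1.1, 1.0, 54.2, 3.7])
ranked = rank_transform(runtimes)
```

The outlier at 54.2 no longer dominates the response scale after ranking, which is why rank-transformed responses yield better-behaved surrogate models.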

  4. An Empirical Assessment of the Relationship of Marketing ...

    African Journals Online (AJOL)

    An Empirical Assessment of the Relationship of Marketing Communication Mix and Performance of ... Marketing efficiency of a communication mix as well as analyzing the effect of using a ...

  5. Empiric antibiotic prescription among febrile under-five Children in ...

    African Journals Online (AJOL)

    limiting viral infection and therefore, would not require antibiotics. Over prescription of antibiotics increases antibiotics exposure and development of resistance among patients. There is need to evaluate empiric antibiotic prescription in order to limit ...

  6. An Empirical Formula From Ion Exchange Chromatography and Colorimetry.

    Science.gov (United States)

    Johnson, Steven D.

    1996-01-01

    Presents a detailed procedure for finding an empirical formula from ion exchange chromatography and colorimetry. Introduces students to more varied techniques including volumetric manipulation, titration, ion-exchange, preparation of a calibration curve, and the use of colorimetry. (JRH)
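The arithmetic behind finding an empirical formula from composition data can be sketched directly: convert mass percentages to moles, divide by the smallest, and reduce to the smallest whole-number ratio. The glucose-like 40.0/6.7/53.3 composition below is a standard textbook example, not data from the article.

```python
from math import gcd
from functools import reduce

# Standard atomic masses (g/mol) for the elements in the example.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def empirical_formula(mass_percent):
    """Mass % composition -> smallest whole-number mole ratio.
    Rounding to the nearest integer suffices for clean textbook data;
    real lab data may need multiplying up (e.g. ratios near x.5)."""
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in mass_percent.items()}
    smallest = min(moles.values())
    ratio = {el: round(m / smallest) for el, m in moles.items()}
    divisor = reduce(gcd, ratio.values())
    return {el: r // divisor for el, r in ratio.items()}

# Glucose-like composition: 40.0% C, 6.7% H, 53.3% O  ->  CH2O
formula = empirical_formula({"C": 40.0, "H": 6.7, "O": 53.3})
```

In the lab exercise, the ion-exchange titration and the colorimetric calibration curve supply exactly these mass (or mole) fractions, after which the calculation above finishes the job.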

  7. Psychological Vulnerability to Completed Suicide: A Review of Empirical Studies.

    Science.gov (United States)

    Conner, Kenneth R.; Duberstein, Paul R.; Conwell, Yeates; Seidlitz, Larry; Caine, Eric D.

    2001-01-01

    This article reviews empirical literature on psychological vulnerability to completed suicide. Five constructs have been consistently associated with completed suicide: impulsivity/aggression; depression; anxiety; hopelessness; and self-consciousness/social disengagement. Current knowledge of psychological vulnerability could inform social…

  8. New empirical generalizations on the determinants of price elasticity

    NARCIS (Netherlands)

    Bijmolt, THA; Van Heerde, HJ; Pieters, RGM

    The importance of pricing decisions for firms has fueled an extensive stream of research on price elasticities. In an influential meta-analytical study, Tellis (1988) summarized price elasticity research findings until 1986. However, empirical generalizations on price elasticity require

  9. Defying Intuition: Demonstrating the Importance of the Empirical Technique.

    Science.gov (United States)

    Kohn, Art

    1992-01-01

    Describes a classroom activity featuring a simple stay-switch probability game. Contends that the exercise helps students see the importance of empirically validating beliefs. Includes full instructions for conducting and discussing the exercise. (CFR)
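The stay-switch game (the Monty Hall problem) is also easy to validate empirically in code, mirroring the classroom demonstration: simulate many rounds and compare the win rates of staying versus switching. Door-numbering details below are one possible implementation, not taken from the article.

```python
import random

def play(switch, rng):
    """One round of the stay-switch game: three doors, one prize."""
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    if switch:
        # Host opens a non-picked, non-prize door; switching means
        # taking the remaining closed door.
        opened = next(d for d in range(3) if d != pick and d != prize)
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
stay_rate = sum(play(False, rng) for _ in range(n)) / n
switch_rate = sum(play(True, rng) for _ in range(n)) / n
```

The simulation lands near 1/3 for staying and 2/3 for switching, the counterintuitive result that makes the exercise an effective case for empirically validating beliefs.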

  10. MRO CRISM TARGETED EMPIRICAL RECORD V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This volume contains the CRISM Targeted Empirical Record (TER) archive, a collection of multiband image cubes derived from targeted (gimbaled) observations of Mars'...

  11. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.

  12. an empirical study of poverty in calabar and its environs.

    African Journals Online (AJOL)

    DJFLEX

    2009-06-17

    Jun 17, 2009 ... AN EMPIRICAL STUDY OF POVERTY IN CALABAR AND ITS. ENVIRONS. ... one of the poorest nations in the world (CBN, 2001). Specifically, these .... rural development in poor regions, inadequate access to education ...

  13. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measure in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.

  14. The Roman Empire - The Third Century Crisis and Crisis Management

    Science.gov (United States)

    2012-04-04

    December 2010. Hekster, Olivier, Gerda De Kleijn, and Danielle Slootjes. "Introduction." Impact of Empire 7 (2006, June 1): 3-10. Koselleck... Crisis of the Third Century. Edited by Olivier Hekster, Gerda De Kleijn, and Danielle Slootjes. Vol. 7, Impact of Empire. Boston: Brill Academic, 2012... 1. Protagoras and John Nicols, Mapping the Crisis of the Third Century, ed. Olivier Hekster, Gerda De Kleijn, and Danielle Slootjes

  15. Empirical P-L-C relations for delta Scuti stars

    International Nuclear Information System (INIS)

    Gupta, S.K.

    1978-01-01

    Separate P-L-C relations have been empirically derived by sampling the delta Scuti stars according to their pulsation modes. The results based on these relations have been compared with those estimated from the model based P-L-C relations and the other existing empirical P-L-C relations. It is found that a separate P-L-C relation for each pulsation mode provides a better correspondence with observations. (Auth.)

  16. Economic reasons behind the decline of the Ottoman empire

    OpenAIRE

    Duranoglu, Erkut; Okutucu, Guzide

    2009-01-01

    This study addresses the economic reasons for the decline and fall of the Ottoman Empire. In contrast to previous research, the paper examines the decline of the empire from an economic point of view, considering both global and domestic developments. Although international developments such as industrialization in European countries, pressure on the Ottomans to integrate with the world economy, and global economic factors like depressions and war...

  17. Lessons from empirical studies in product and service variety management.

    OpenAIRE

    Lyons, Andrew C.L.

    2013-01-01

    [EN] For many years, a trend for businesses has been to increase market segmentation and extend product- and service-variety offerings in order to provide more choice for customers and gain a competitive advantage. However, relatively few variety-related empirical studies have been undertaken. In this research, two empirical studies are presented that address the impact of product and service variety on business and business-function performance. In the first (service-vari...

  18. A REVIEW of WEBERIAN STUDIES ON THE OTTOMAN EMPIRE

    OpenAIRE

    MAZMAN, İbrahim

    2018-01-01

    This study examines the secondary literature on Max Weber's (1864-1920) writings on Islam and the Ottoman Empire. It demarcates approaches prevalent in the secondary literature. Three basic themes are apparent: Section a) concentrates on authors who applied Weber's concepts of patrimonialism and bureaucracy to non-Ottoman countries, such as Maslovski (on the Soviet bureaucracy) and Eisenberg (on China). Section b) focuses on authors who studied the Ottoman Empire utilizing non-Weberian, above all ...

  19. Empirical Analysis of Closed-Loop Duopoly Advertising Strategies

    OpenAIRE

    Gary M. Erickson

    1992-01-01

    Closed-loop (perfect) equilibria in a Lanchester duopoly differential game of advertising competition are used as the basis for empirical investigation. Two systems of simultaneous nonlinear equations are formed, one from a general Lanchester model and one from a constrained model. Two empirical applications are conducted. In one involving Coca-Cola and Pepsi-Cola, a formal statistical testing procedure is used to detect whether closed-loop equilibrium advertising strategies are used by the c...

  20. Explaining foreign ownership by comparative and competitive advantage. Empirical evidence.

    OpenAIRE

    Bellak, Christian

    1999-01-01

    This paper provides empirical evidence on the determinants of foreign ownership in manufacturing industries. Foreign ownership, according to the theory of international production, is the result of the combination of comparative and competitive advantage. An adequate examination of the ownership structure of an industry requires the ability to establish empirically the extent to which international competitiveness of firms rests on comparative and competitive advantage. Analysis is based on a...

  1. Differences in Dynamic Brand Competition Across Markets: An Empirical Analysis

    OpenAIRE

    Jean-Pierre Dubé; Puneet Manchanda

    2005-01-01

    We investigate differences in the dynamics of marketing decisions across geographic markets empirically. We begin with a linear-quadratic game involving forward-looking firms competing on prices and advertising. Based on the corresponding Markov perfect equilibrium, we propose estimable econometric equations for demand and marketing policy. Our model allows us to measure empirically the strategic response of competitors along with economic measures such as firm profitability. We use a rich da...

  2. Empirical fit to inelastic electron-deuteron and electron-neutron resonance region transverse cross sections

    International Nuclear Information System (INIS)

    Bosted, P. E.; Christy, M. E.

    2008-01-01

    An empirical fit is described to measurements of inclusive inelastic electron-deuteron cross sections in the resonance-region kinematic range of four-momentum transfer starting at Q^2 = 0 and final-state invariant mass W above 1.1 GeV, using a fit to the ratio R_p of longitudinal to transverse cross sections for the proton and the assumption R_p = R_n. The underlying fit parameters describe the average cross section for a free proton and a free neutron, with a plane-wave impulse approximation used to fit the deuteron data. Additional fit parameters are used to fill in the dip between the quasi-elastic peak and the Delta(1232) resonance. The mean deviation of the data from the fit is 3%, with fewer than 4% of the data points deviating from the fit by more than 10%

  3. A semi-empirical formula for total cross sections of electron scattering from diatomic molecules

    International Nuclear Information System (INIS)

    Liu Yufang; Sun Jinfeng; Henan Normal Univ., Xinxiang

    1996-01-01

    A fitting formula based on the Born approximation is used to fit the total cross sections for electron scattering by diatomic molecules (CO, N₂, NO, O₂ and HCl) in the intermediate- and high-energy range. By analyzing the fitted parameters and the total cross sections, we found that the internuclear distance of the constituent atoms plays an important role in the electron-diatomic-molecule collision process. Thus a new semi-empirical formula has been obtained. There is no free parameter in the formula, and the dependence of the total cross sections on the internuclear distance is reflected clearly. The total cross sections for electron scattering by CO, N₂, NO, O₂ and HCl have been calculated over an incident energy range of 10-4000 eV. The results agree well with other available experimental and calculated data. (orig.)

  4. Atomic mass prediction from the mass formula with empirical shell terms

    International Nuclear Information System (INIS)

    Uno, Masahiro; Yamada, Masami

    1982-08-01

    The mass-excess prediction of about 8000 nuclides was calculated from two types of atomic mass formulas with the empirical shell terms of Uno and Yamada. The theoretical errors that accompany the calculated mass excesses are also presented. These errors have been obtained by a new statistical method. The mass-excess prediction includes a term for the gross feature of the nuclear mass surface, the shell terms and a small correction term for odd-odd nuclei. Two functional forms for the shell terms were used: the first is the constant form, and the second is the linear form. In determining the values of the shell parameters, only the data of even-even and odd-A nuclei were used. A new statistical method was applied, in which the error inherent to the mass formula was taken into account. The obtained shell parameters and the values of mass excess are shown in tables. (Kato, T.)

  5. Bayes Empirical Bayes Inference of Amino Acid Sites Under Positive Selection

    DEFF Research Database (Denmark)

    Yang, Ziheng; Wong, Wendy Shuk Wan; Nielsen, Rasmus

    2005-01-01

    …with ω > 1 indicating positive selection. Statistical distributions are used to model the variation in ω among sites, allowing a subset of sites to have ω > 1 while the rest of the sequence may be under purifying selection with ω < 1. … probabilities that a site comes from the site class with ω > 1. Current implementations, however, use the naive empirical Bayes (NEB) approach and fail to account for sampling errors in maximum likelihood estimates of model parameters, such as the proportions and ω ratios for the site classes. In small data sets lacking information, this approach may lead to unreliable posterior probability calculations. In this paper, we develop a Bayes empirical Bayes (BEB) approach to the problem, which assigns a prior to the model parameters and integrates over their uncertainties. We compare the new and old methods on real and simulated…

  6. Critical Realism and Empirical Bioethics: A Methodological Exposition.

    Science.gov (United States)

    McKeown, Alex

    2017-09-01

    This paper shows how critical realism can be used to integrate empirical data and philosophical analysis within 'empirical bioethics'. The term empirical bioethics, whilst appearing oxymoronic, simply refers to an interdisciplinary approach to the resolution of practical ethical issues within the biological and life sciences, integrating social scientific, empirical data with philosophical analysis. It seeks to achieve a balanced form of ethical deliberation that is both logically rigorous and sensitive to context, in order to generate normative conclusions that are practically applicable to the problem, challenge, or dilemma at hand. Since it incorporates both philosophical and social scientific components, empirical bioethics is a field that is consistent with the use of critical realism as a research methodology. The integration of philosophical and social scientific approaches to ethics has been beset with difficulties, not least because of the irreducibly normative, rather than descriptive, nature of ethical analysis and the contested relation between fact and value. However, given that facts about states of affairs inform potential courses of action and their consequences, there is a need to overcome these difficulties and successfully integrate data with theory. Previous approaches have been formulated to overcome obstacles in combining philosophical and social scientific perspectives in bioethical analysis; however, each has shortcomings. As a mature interdisciplinary approach, critical realism is well suited to empirical bioethics, although it has hitherto not been widely used. Here I show how it can be applied to this kind of research and explain how it represents an improvement on previous approaches.

  7. On the validity of evolutionary models with site-specific parameters.

    Directory of Open Access Journals (Sweden)

    Konrad Scheffler

    Full Text Available Evolutionary models that make use of site-specific parameters have recently been criticized on the grounds that parameter estimates obtained under such models can be unreliable and lack theoretical guarantees of convergence. We present a simulation study providing empirical evidence that a simple version of the models in question does exhibit sensible convergence behavior and that additional taxa, despite not being independent of each other, lead to improved parameter estimates. Although it would be desirable to have theoretical guarantees of this, we argue that such guarantees would not be sufficient to justify the use of these models in practice. Instead, we emphasize the importance of taking the variance of parameter estimates into account rather than blindly trusting point estimates - this is standardly done by using the models to construct statistical hypothesis tests, which are then validated empirically via simulation studies.

  8. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at seed shattering and seed shattered maturity stage. Therefore, drying switchgrass in wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during seed development stage whereas, vapor pressure deficit was weakly correlated during seed shattering and seed shattered stage. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on the forecasted weather conditions so that the appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on maturity stage of switchgrass. • Designed environmental chamber can be used to evaluate the effect of other parameters that affect drying of crops
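    The mechanics of fitting an empirical thin-layer drying model can be sketched briefly. The example below is not the paper's model (its covariates and coefficients are not reproduced in the abstract); it fits the standard Page thin-layer equation MR = exp(-k·tⁿ) to synthetic moisture-ratio data by linearization. In a weather-driven model like the one described, k would itself be regressed on solar radiation, vapor pressure deficit and wind speed separately for each maturity stage.

```python
import numpy as np

# Page thin-layer drying equation: MR = exp(-k * t**n), with MR the moisture
# ratio and t the drying time in hours.  All parameter values are illustrative.
def page_model(t, k, n):
    return np.exp(-k * t**n)

rng = np.random.default_rng(0)
t_obs = np.linspace(0.25, 8.0, 40)                              # hours
mr_obs = page_model(t_obs, 0.35, 1.1) + rng.normal(0.0, 0.003, t_obs.size)

# Linearization: ln(-ln MR) = ln k + n * ln t, solved by ordinary least squares.
slope, intercept = np.polyfit(np.log(t_obs), np.log(-np.log(mr_obs)), 1)
n_fit, k_fit = slope, np.exp(intercept)

# Predicted time to dry down to a target moisture ratio (e.g. safe for baling).
t_dry = (-np.log(0.10) / k_fit) ** (1.0 / n_fit)
```

With a fitted model per maturity stage, forecasted weather conditions can be translated into an expected drying time, supporting the harvest-scheduling decisions the abstract describes.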

  9. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    Science.gov (United States)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value for the rock block volume (V_b) is of great interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not actually exist when more than three sets occur. In these cases, a consistent assessment of V_b can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. However, Palmström's empirical relationship based on the volumetric joint count J_v and on a block shape factor β is commonly used in practice, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. J_v and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (V_bm) of tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting the blocks, indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential to predict the mean block volume of multiple-set jointed rock masses with acceptable accuracy for most practical rock engineering applications.

  10. Rock models at Zielona Gora, Poland applied to the semi-empirical neutron tool calibration

    International Nuclear Information System (INIS)

    Czubek, J.A.; Ossowski, A.; Zorski, T.; Massalski, T.

    1995-01-01

    The semi-empirical calibration method applied to the neutron porosity tool is presented in this paper. It was used with the ODSN-102 tool of 70 mm diameter, equipped with an Am-Be neutron source, at the calibration facility of Zielona Gora, Poland, inside natural and artificial rocks: four sandstone, four limestone and one dolomite block with borehole diameters of 143 and 216 mm, and three artificial ceramic blocks with borehole diameters of 90 and 180 mm. All blocks were saturated with fresh water, and fresh water was also inside all boreholes. In five blocks, mineralized water (200,000 ppm NaCl) was introduced inside the boreholes. All neutron characteristics of the calibration blocks are given in this paper. The semi-empirical method of calibration correlates the tool readings observed experimentally with the general neutron parameter (GNP). This results in a general calibration curve, in which the tool readings (TR) plotted against the GNP fall on a single curve irrespective of their origin, i.e. of the formation lithology, borehole diameter, tool stand-off, brine salinity, etc. The n and m power coefficients are obtained experimentally during the calibration procedure. The apparent neutron parameters are defined as those sensed by a neutron tool situated inside the borehole in real environmental conditions. When they are known, the GNP can be computed analytically for the whole range of porosity for any borehole diameter, formation lithology (including variable rock matrix absorption cross-section and density), borehole and formation salinity, tool stand-off and drilling fluid physical parameters. By this approach, all porosity corrections with respect to the standard (e.g. limestone) calibration curve can be generated. (author)

  11. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    Full Text Available To highlight the shortcomings in conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) toward bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power and drying time. Kinetics was analyzed through empirical modeling of drying data against 10 conventional thin-layer drying equations that were further improvised through the incorporation of Arrhenius-, exponential- and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica reported similar yields although with a drastic reduction in extraction time (210 s as against the average time of 10 h in the Soxhlet apparatus). Extract yield for MAE of C. citratus was higher than that of the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.

  12. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower-level genetic programs are used to optimize coevolving populations in parallel, while the higher-level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower-level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. This accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.
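    The full PMLGP searches over functional forms as well as parameters; as a heavily simplified, purely illustrative sketch of the lower-level idea, the plain genetic algorithm below fits the two parameters of a fixed Morse-type curve to reference energies. The functional form, parameter ranges and GA settings are all assumptions for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reference energies from a known Morse-type curve; in a real application
# these would come from ab initio electronic-structure calculations.
def model(r, d, a):
    return d * (1.0 - np.exp(-a * (r - 1.0)))**2

r_grid = np.linspace(0.6, 3.0, 60)
e_ref = model(r_grid, 4.5, 1.8)

def fitness(pop):
    # Negative sum of squared errors against the reference energies.
    return -np.array([np.sum((model(r_grid, d, a) - e_ref)**2) for d, a in pop])

pop = rng.uniform(0.1, 6.0, size=(80, 2))
best, best_fit = pop[0], -np.inf
n_gen = 150
for gen in range(n_gen):
    fit = fitness(pop)
    i = int(np.argmax(fit))
    if fit[i] > best_fit:                 # track the best individual seen so far
        best, best_fit = pop[i].copy(), fit[i]
    # Tournament selection: each slot keeps the fitter of two random picks.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover between consecutive parents, then annealed Gaussian mutation.
    w = rng.uniform(0.0, 1.0, size=(len(pop), 1))
    children = w * parents + (1.0 - w) * np.roll(parents, 1, axis=0)
    children += rng.normal(0.0, 0.1 * (1.0 - gen / n_gen), children.shape)
    pop = np.clip(children, 0.05, 8.0)

best_d, best_a = best
```

The PMLGP's higher level would additionally tune the mutation and crossover probabilities used here, rather than leaving them fixed.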

  13. Plasma parameter estimations for the Large Helical Device based on the gyro-reduced Bohm scaling

    International Nuclear Information System (INIS)

    Okamoto, Masao; Nakajima, Noriyoshi; Sugama, Hideo.

    1991-10-01

    A model of gyro-reduced Bohm scaling law is incorporated into a one-dimensional transport code to predict plasma parameters for the Large Helical Device (LHD). The transport code calculations reproduce well the LHD empirical scaling law and basic parameters and profiles of the LHD plasma are calculated. The amounts of toroidal currents (bootstrap current and beam-driven current) are also estimated. (author)

  14. Carbon 13 nuclear magnetic resonance chemical shifts empiric calculations of polymers by multi linear regression and molecular modeling

    International Nuclear Information System (INIS)

    Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.

    1996-01-01

    This work deals with empirical calculation of carbon-13 nuclear magnetic resonance chemical shifts of polymers by multilinear regression and molecular modeling. Multilinear regression is one way to obtain an equation that describes the behaviour of the chemical shift for the molecules in the data base (rigid molecules with carbons). The methodology consists of defining structural descriptor parameters that can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation. The resulting equation can be extrapolated to molecules that resemble those of the data base. (O.L.). 20 refs., 4 figs., 1 tab
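    The descriptor-based regression described above can be sketched generically. The descriptor matrix, coefficients and shift values below are synthetic stand-ins (the paper's actual descriptors are not given in the abstract); the point is the least-squares fit and the extrapolation step.

```python
import numpy as np

# Hypothetical structural-descriptor matrix for eight carbons in a data base;
# columns could count alpha-substituents, beta-substituents and ring membership
# (illustrative only -- not the paper's actual descriptors).
X = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [2, 0, 0],
    [1, 1, 0],
    [2, 1, 0],
    [1, 0, 1],
    [2, 2, 0],
    [3, 1, 1],
], dtype=float)

# "Known" carbon-13 chemical shifts (ppm), synthetic here: a linear combination
# of the descriptors plus a base shift and small residuals.
coef_true = np.array([9.1, 9.4, 2.3])
residual = np.array([0.3, -0.2, 0.1, 0.0, -0.1, 0.2, -0.3, 0.1])
shifts = X @ coef_true + 5.0 + residual

# Multilinear regression: append an intercept column and solve least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
params, *_ = np.linalg.lstsq(A, shifts, rcond=None)

# Extrapolation to a structurally similar carbon outside the data base.
x_new = np.array([2.0, 1.0, 1.0])
shift_pred = float(params[:3] @ x_new + params[3])
```

As the abstract notes, such an equation is only trustworthy for molecules resembling those in the data base; extrapolation to dissimilar structures is unreliable.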

  15. UAV-based multi-angular measurements for improved crop parameter retrieval

    NARCIS (Netherlands)

    Roosjen, Peter P.J.

    2017-01-01

    Optical remote sensing enables the estimation of crop parameters based on reflected light through empirical-statistical methods or inversion of radiative transfer models. Natural surfaces, however, reflect light anisotropically, which means that the intensity of reflected light depends on the

  16. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    Science.gov (United States)

    Kamiyama, M.; O'Rourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
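    A scaling of this kind (strong-motion peaks regressed on magnitude M and hypocentral distance r, with dummy variables for site amplification) can be sketched as a least-squares fit. The functional form and coefficients below are generic assumptions for illustration, not the paper's derived expressions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic strong-motion "records": magnitude M, hypocentral distance r (km),
# and log peak velocity generated from an assumed scaling law plus scatter.
n = 200
M = rng.uniform(5.0, 7.5, n)
r = rng.uniform(10.0, 200.0, n)
ln_pv = 1.2 * M - 1.1 * np.log(r) - 3.0 + rng.normal(0.0, 0.3, n)

# Semi-empirical form ln(PV) = a + b*M - c*ln(r), fitted by least squares.
# In the paper's setup, 0/1 dummy columns per recording site would be appended
# to this design matrix to identify local amplification factors.
A = np.column_stack([np.ones(n), M, -np.log(r)])
(a_fit, b_fit, c_fit), *_ = np.linalg.lstsq(A, ln_pv, rcond=None)
```

The same design-matrix pattern applies to peak acceleration and displacement, which is why the three peaks can be analyzed in a common framework.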

  17. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
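    A minimal Gaussian process regression is a few lines of linear algebra. The sketch below uses synthetic 1-D inputs standing in for PCA-compressed spectra, an RBF kernel with fixed hyperparameters, and shows the posterior mean together with the per-point variance that distinguishes GPR from ANNs and kernel regression; none of the values are from the paper.

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

# Toy training data: a smooth "spectrum feature -> parameter" relation with
# noise; real inputs would be PCA components of SDSS/MILES/ELODIE spectra.
rng = np.random.default_rng(3)
x_train = np.linspace(-3.0, 3.0, 25)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=25)

noise_var = 0.05**2
K = rbf(x_train, x_train) + noise_var * np.eye(x_train.size)
K_inv = np.linalg.inv(K)
alpha = K_inv @ y_train

# GPR posterior mean and variance at new inputs.
x_test = np.array([-1.0, 0.0, 2.0])
K_s = rbf(x_test, x_train)
mean = K_s @ alpha
var = np.diag(rbf(x_test, x_test)) - np.einsum('ij,ij->i', K_s @ K_inv, K_s)
```

In practice the kernel hyperparameters are optimized by maximizing the marginal likelihood, and a Cholesky solve replaces the explicit inverse for numerical stability.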

  18. Inference of directional selection and mutation parameters assuming equilibrium.

    Science.gov (United States)

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    Science.gov (United States)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from … for the parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
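    The inversion idea (estimate parameters by matching an analytical pdf to empirical soil-saturation observations) can be sketched with a generic example. Here a Beta pdf stands in for the stochastic water-balance solution, and a flat-prior grid posterior yields MAP parameter estimates; all distributions and values are illustrative assumptions, not the paper's.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Stand-in "soil saturation" sample in (0, 1); the paper's analytical pdf comes
# from a stochastic soil-water-balance model, a Beta pdf is used here only to
# show the mechanics of the inversion.
s_obs = rng.beta(2.0, 5.0, size=1500)

def beta_loglik(a, b, s):
    # Log-likelihood of Beta(a, b) for a sample s, via log-gamma functions.
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return float(np.sum((a - 1.0) * np.log(s) + (b - 1.0) * np.log1p(-s))
                 - s.size * log_norm)

# Flat prior over a coarse grid: the posterior mode is the MAP estimate.
a_grid = np.arange(0.5, 8.001, 0.25)
b_grid = np.arange(0.5, 8.001, 0.25)
loglik = np.array([[beta_loglik(a, b, s_obs) for b in b_grid] for a in a_grid])
i, j = np.unravel_index(int(np.argmax(loglik)), loglik.shape)
a_map, b_map = float(a_grid[i]), float(b_grid[j])
```

Working with the saturation pdf rather than the time series is what lets sparse, irregularly sampled records (even ~100 daily observations) constrain the parameters.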

  20. An empirical investigation of compressibility in magnetohydrodynamic turbulence

    International Nuclear Information System (INIS)

    Spangler, Steven R.; Spitler, Laura G.

    2004-01-01

    The density fluctuations which occur in magnetohydrodynamic (MHD) turbulence are an important diagnostic of the turbulent dynamics, and serve as the basis of astrophysical remote sensing measurements. This paper is concerned with the relation between density fluctuations and fluctuations of the magnetic field and velocity. The approach is empirical, utilizing spacecraft observations of slow solar wind turbulence. Sixty-six data intervals of 1 h duration were chosen, in which the solar wind speed was less than 450 km/s, and in which the fluctuations in density and vector magnetic field appeared to be approximately stationary. The parameters of interest were the root-mean-square fluctuations of density and magnetic field, normalized by the respective mean values, ε_N ≡ ⟨δn²⟩^(1/2)/n₀ and ε_B ≡ ⟨δB²⟩^(1/2)/B₀, where n₀ and B₀ are the mean plasma number density and magnetic field strength. The conclusions of this study are as follows: (1) Consistent with previous investigations, the dependence of the normalized density fluctuation on the normalized magnetic field fluctuation is found to be between linear (ε_N = ε_B) and quadratic (ε_N = ε_B²). (2) The value of R ≡ ε_N/ε_B shows a wide range, extending up to ≈4; the median value is 0.46 and the mean is 0.72. (3) Typical normalized fluctuation amplitudes (ε_N and ε_B) for records of one hour length (maximum scale size of ≈1.6×10⁶ km) are 0.03-0.08 for the density, and 0.04-0.21 for the magnetic field. (4) For most intervals, the magnitude of the perpendicular (to the large-scale magnetic field) magnetic field fluctuations exceeds that of the parallel fluctuations by a factor of 3-4. This indicates that the turbulent magnetic field fluctuations are primarily transverse fluctuations. The implications of these results for theories of MHD turbulence, and for the remote sensing of turbulent plasmas such as the corona, the near-Sun solar wind, and the interstellar medium, are discussed.

  1. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  2. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques are refined for estimating the heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, the heat of vaporization can be used to estimate the boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  3. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize that there exist correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations of errors among core safety parameters are verified through direct Monte Carlo sampling. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which a measurement value is not obtained. (author)
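    The mechanics of the method can be sketched with the usual "sandwich rule" and a bivariate-normal update. The sensitivity and covariance numbers below are invented for illustration; the point is that once the error correlation ρ between a measured parameter A and an unmeasured parameter B is known, measuring A shrinks B's uncertainty by the factor sqrt(1 − ρ²).

```python
import numpy as np

# Sensitivity coefficients of two core parameters (rows) to three cross
# sections (columns) -- illustrative numbers, not from any real core.
S = np.array([[0.8, -0.3, 0.1],
              [0.5, -0.4, 0.3]])

# Relative covariance matrix of the cross sections (also illustrative).
M = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.05]])

# Sandwich rule: covariance of the core-parameter prediction errors.
C = S @ M @ S.T
sig_a, sig_b = np.sqrt(C[0, 0]), np.sqrt(C[1, 1])
rho = C[0, 1] / (sig_a * sig_b)

# If parameter A is measured (its prediction error becomes known), the
# conditional standard deviation of the unmeasured parameter B shrinks by
# the bivariate-normal factor sqrt(1 - rho^2).
sig_b_post = sig_b * np.sqrt(1.0 - rho**2)
```

The stronger the error correlation produced by shared cross-section uncertainties, the larger the reduction for the unmeasured parameter.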

  4. Hardrock Elastic Physical Properties: Birch's Seismic Parameter Revisited

    Science.gov (United States)

    Wu, M.; Milkereit, B.

    2014-12-01

    Identifying rock composition and properties is imperative in a variety of fields including geotechnical engineering, mining, and petroleum exploration, in order to make accurate petrophysical calculations. Density, in particular, is an important parameter that allows us to differentiate between lithologies and to estimate or calculate other petrophysical properties. It is well established that the compressional and shear wave velocities of common crystalline rocks increase with increasing density (i.e. the Birch and Nafe-Drake relationships). Conventional empirical relations do not take S-wave velocity into account. The physical properties of Fe-oxides and massive sulfides, however, differ significantly from the empirical velocity-density relationships. Currently, acquiring in-situ density data is challenging, so an approximation for density based on seismic wave velocities and elastic moduli would be beneficial. With the goal of finding other possible or better relationships between density and the elastic moduli, a database of density, P-wave velocity, S-wave velocity, bulk modulus, shear modulus, Young's modulus, and Poisson's ratio was compiled from a multitude of lab samples. The database is comprised of isotropic, non-porous metamorphic rock. Multi-parameter cross plots of the various elastic parameters have been analyzed in order to find a suitable parameter combination that reduces high-density outliers. As expected, the P-wave to S-wave velocity ratios show no correlation with density. However, Birch's seismic parameter, along with the bulk modulus, shows promise in providing a link between observed compressional and shear wave velocities and rock densities, including those of massive sulfides and Fe-oxides.
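    Birch's seismic parameter follows directly from the isotropic elastic relations: Φ = Vp² − (4/3)Vs² = K/ρ. The sketch below verifies the identity for order-of-magnitude moduli typical of a non-porous metamorphic rock (the values are illustrative, not from the study's database).

```python
import numpy as np

# Elastic moduli and density for an illustrative non-porous metamorphic rock.
K = 60e9      # bulk modulus, Pa
mu = 35e9     # shear modulus, Pa
rho = 2800.0  # density, kg/m^3

# Body-wave velocities from the isotropic elastic relations.
vp = np.sqrt((K + 4.0 * mu / 3.0) / rho)
vs = np.sqrt(mu / rho)

# Birch's seismic parameter: phi = Vp^2 - (4/3) Vs^2 = K / rho.
phi = vp**2 - (4.0 / 3.0) * vs**2
```

Because Φ = K/ρ, a calibrated K(ρ) trend for a lithology class would let observed Vp and Vs be inverted for density, which is the link the abstract points to.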

  5. Systematic of delayed neutron parameters

    International Nuclear Information System (INIS)

    Isaev, S.G.; Piksaikin, V.M.

    2000-01-01

    Experimental studies of the energy dependence of delayed neutron (DN) parameters for various fission systems have shown that certain combinations of delayed neutron parameters behave in a similar way. On the basis of these findings, the systematics of delayed neutron experimental data for thorium, uranium, plutonium and americium isotopes have been investigated, with the aim of finding correlations of DN parameters with characteristics of the fissioning system, as well as correlations among the delayed neutron parameters themselves. Preliminary results obtained while studying the physical interpretation of these findings are presented.

  6. Empirical Study of How Traffic Intensity Detector Parameters Influence Dynamic Street Lighting Energy Consumption: A Case Study in Krakow, Poland

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2018-04-01

    Full Text Available The deployment of dynamic street lighting, which adjusts lighting levels to fulfill particular needs, leads to energy savings. These savings offset part of the overall lighting infrastructure maintenance cost. A further cost component is the traffic intensity data itself, which is read directly from sensor systems or intelligent transportation systems (ITSs). The more frequent the readings, the more costly they become, owing to hardware capabilities, data transfer and software license costs, among others. The paper investigates the relationship between the frequency of readings, in particular the averaging window size and step, and the achieved energy savings. It is based on a simulation covering a representative part of a city and traffic intensity data spanning a period of one year. While the energy consumption reduction is simulated, all data, including each luminaire power setting, induction loop locations and street characteristics, come from a representative sample of the city of Krakow, Poland. Controlling the power settings complies with the lighting standard CEN/TR 13201. Analysis of the outcomes indicates that the shorter the window size or step, the greater the available energy savings. In particular, for the previous standard CEN/TR 13201:2004, a window size and step of 15 min yields 26.75% energy savings, while reducing these values to 6 min provides 27%. Savings are more pronounced for the current standard CEN/TR 13201:2014: a 15 min size and step yields 47.43%, while a 6 min size and step provides 47.69%. The results can serve as a guideline for assessing the economic viability of dynamic lighting control systems. Additionally, the current lighting standard provides far greater potential for dynamic control than the previous one.
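The averaging-window mechanism the study varies (window size and step) can be sketched as a simple sliding-window mean over fixed-interval intensity readings. The vehicle counts below are hypothetical, not data from the Krakow case study:

```python
def windowed_means(samples, window, step):
    """Average traffic-intensity samples over a sliding window.
    `window` and `step` are in samples; with 1-minute readings,
    window=15, step=15 mimics non-overlapping 15-min averaging."""
    out = []
    for start in range(0, len(samples) - window + 1, step):
        block = samples[start:start + window]
        out.append(sum(block) / window)
    return out

# Hypothetical 1-minute vehicle counts over half an hour:
counts = [4, 6, 5, 7, 8, 6, 5, 4, 3, 5, 6, 7, 8, 9, 7,
          6, 5, 4, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 4, 5]
print(windowed_means(counts, 15, 15))  # two 15-minute averages
print(windowed_means(counts, 6, 6))   # five 6-minute averages
```

A shorter window/step tracks intensity dips more closely, which is why the dimming controller can shed more energy at 6 min than at 15 min resolution.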

  7. Empirical relationship between electrical resistivity and geotechnical parameters: A case study of Federal University of Technology campus, Akure SW, Nigeria

    Science.gov (United States)

    Akintorinwa, O. J.; Oluwole, S. T.

    2018-06-01

    For several decades, geophysical prospecting methods coupled with geotechnical analysis have become increasingly useful in evaluating the subsurface for both pre- and post-engineering investigations. Shallow geophysical tools are often used alongside geotechnical methods to evaluate subsurface soil for engineering studies, yielding information that may include the subsurface lithology and layer thicknesses, the competence of the bedrock and the depth to its upper interface, and the competence of the materials that make up the overburden, especially the shallow section which hosts the foundations of engineering structures (Aina et al., 1996; Adewumi and Olorunfemi, 2005; Idornigie et al., 2006). This information helps engineers to correctly locate and design the foundations of engineering structures. It also serves as a guide to the choice of design and of suitable materials needed for road construction (Akinlabi and Adeyemi, 2014). Lack of knowledge of subsurface properties may lead to the failure of engineering structures. It is therefore of great importance to carry out a pre-construction investigation of a proposed site in order to ascertain the fitness of the host earth material.

  8. Virus detection and quantification using electrical parameters

    Science.gov (United States)

    Ahmad, Mahmoud Al; Mustafa, Farah; Ali, Lizna M.; Rizvi, Tahir A.

    2014-10-01

    Here we identify and quantify two similar viruses, human and feline immunodeficiency viruses (HIV and FIV), suspended in a liquid medium without labeling, using a semiconductor technique. The virus count was estimated by calculating the impurities inside a defined volume from the observed change in electrical parameters. Empirically, the virus count closely matched the absolute value of the ratio of the change in dopant concentration (virus suspension relative to mock) to the change in Debye volume (virus suspension relative to mock). The virus type was identified by constructing a concentration-mobility relationship, which is unique to each kind of virus, allowing fast (within minutes), label-free virus quantification and identification. For validation, the HIV and FIV preparations were further quantified by a biochemical technique, and the results obtained by the two approaches corroborated well. We further demonstrate that the electrical technique can accurately measure and characterize silica nanoparticles that resemble virus particles in size. Based on these results, we anticipate our approach to be a starting point toward establishing label-free, electrical identification and quantification of an unlimited number of viruses and other nano-sized particles.
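The counting rule stated above — virus count ≈ |Δ dopant concentration / Δ Debye volume|, both changes taken relative to the mock (virus-free) sample — can be sketched as follows. The function name and all numerical values are illustrative assumptions, not measurements from the paper:

```python
def estimated_virus_count(n_virus: float, n_mock: float,
                          v_virus: float, v_mock: float) -> float:
    """Virus count estimate from semiconductor-style electrical parameters:
    |change in dopant concentration| / |change in Debye volume|,
    each change measured relative to the mock sample.
    Units must be mutually consistent (count = concentration x volume)."""
    delta_n = n_virus - n_mock   # dopant-concentration change
    delta_v = v_virus - v_mock   # Debye-volume change
    return abs(delta_n / delta_v)

# Hypothetical readings (arbitrary consistent units):
print(estimated_virus_count(1.5e14, 1.0e14, 3.0e6, 2.5e6))  # -> 1e+08
```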

  9. Empirical evidence for site coefficients in building code provisions

    Science.gov (United States)

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  10. Guidelines for using empirical studies in software engineering education

    Directory of Open Access Journals (Sweden)

    Fabian Fagerholm

    2017-09-01

    Full Text Available Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance, as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers in including empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.

  11. The conceptual and empirical relationship between gambling, investing, and speculation.

    Science.gov (United States)

    Arthur, Jennifer N; Williams, Robert J; Delfabbro, Paul H

    2016-12-01

    Background and aims To review the conceptual and empirical relationship between gambling, investing, and speculation. Methods An analysis of the attributes differentiating these constructs, as well as identification of all articles speaking to their empirical relationship. Results Gambling differs from investment on many different attributes and should be seen as conceptually distinct. On the other hand, speculation is conceptually intermediate between gambling and investment, with a few of its attributes being investment-like, some being gambling-like, and several being neither clearly gambling- nor investment-like. Empirically, gamblers, investors, and speculators have similar cognitive, motivational, and personality attributes, with this relationship being particularly strong for gambling and speculation. Population levels of gambling activity also tend to be correlated with population levels of financial speculation. At an individual level, speculation has a particularly strong empirical relationship to gambling, as speculators appear to be heavily involved in traditional forms of gambling, and problematic speculation is strongly correlated with problematic gambling. Discussion and conclusions Investment is distinct from gambling, but speculation and gambling have conceptual overlap and a strong empirical relationship. It is recommended that financial speculation be routinely included when assessing gambling involvement, and there needs to be greater recognition and study of financial speculation both as a contributor to problem gambling and as an additional form of behavioral addiction in its own right.

  12. Empirical research on international environmental migration: a systematic review.

    Science.gov (United States)

    Obokata, Reiko; Veronis, Luisa; McLeman, Robert

    2014-01-01

    This paper presents the findings of a systematic review of scholarly publications that report empirical findings from studies of environmentally-related international migration. There exists a small, but growing accumulation of empirical studies that consider environmentally-linked migration that spans international borders. These studies provide useful evidence for scholars and policymakers in understanding how environmental factors interact with political, economic and social factors to influence migration behavior and outcomes that are specific to international movements of people, in highlighting promising future research directions, and in raising important considerations for international policymaking. Our review identifies countries of migrant origin and destination that have so far been the subject of empirical research, the environmental factors believed to have influenced these migrations, the interactions of environmental and non-environmental factors as well as the role of context in influencing migration behavior, and the types of methods used by researchers. In reporting our findings, we identify the strengths and challenges associated with the main empirical approaches, highlight significant gaps and future opportunities for empirical work, and contribute to advancing understanding of environmental influences on international migration more generally. Specifically, we propose an exploratory framework to take into account the role of context in shaping environmental migration across borders, including the dynamic and complex interactions between environmental and non-environmental factors at a range of scales.

  13. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  14. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (from 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C-1 to 0.106 °C-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
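Slopes such as the GPSPWV-Td relationships quoted above come from ordinary least-squares fits of one moist parameter against another. A self-contained sketch with hypothetical, exactly linear dew-point/PWV pairs (the values are not from the study):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (slope, intercept)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical dew-point temperatures (deg C) vs PWV (cm):
td  = [-20.0, -15.0, -10.0, -5.0, 0.0]
pwv = [0.10, 0.55, 1.00, 1.45, 1.90]
slope, intercept = linear_fit(td, pwv)
print(round(slope, 3))  # -> 0.09
```

Repeating such a fit on 2-hourly, daily and monthly averages is what produces the range of slopes (and correlation coefficients) reported in the abstract.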

  15. Empirical Ground Motion Characterization of Induced Seismicity in Alberta and Oklahoma

    Science.gov (United States)

    Novakovic, M.; Atkinson, G. M.; Assatourians, K.

    2017-12-01

    We develop empirical ground-motion prediction equations (GMPEs) for ground motions from induced earthquakes in Alberta and Oklahoma, following the stochastic-model-based method of Atkinson et al. (2015 BSSA). The Oklahoma ground-motion database is compiled from over 13,000 small to moderate seismic events (M 1 to 5.8) recorded at 1600 seismic stations, at distances from 1 to 750 km. The Alberta database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at 50 regional stations, at distances from 30 to 500 km. A generalized inversion is used to solve for regional source, attenuation and site parameters; the obtained parameters describe the regional attenuation, stress parameter and site amplification. Resolving these parameters allows the derivation of regionally-calibrated GMPEs that can be used to compare ground-motion observations between wastewater-injection-induced events (Oklahoma) and hydraulic-fracturing-induced events (Alberta), and further to compare induced observations with ground motions from natural sources (California, NGAWest2). The derived GMPEs have applications in the evaluation of hazards from induced seismicity and can be used to track amplitudes across the regions in real time, which is useful for ground-motion-based alerting systems and traffic-light protocols.
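A stochastic-model-based GMPE of this kind typically takes a functional form like ln Y = c0 + c1·M + b·ln R + γ·R, combining magnitude scaling, geometric spreading, and anelastic attenuation. The sketch below uses that generic shape with placeholder coefficients; these are illustrative assumptions, not the regionally calibrated values from the paper:

```python
import math

def gmpe_ln_amplitude(m: float, r_km: float,
                      c0: float = -2.0, c1: float = 1.5,
                      b: float = -1.3, gamma: float = -0.004) -> float:
    """Generic point-source GMPE shape (placeholder coefficients):
        ln Y = c0 + c1*M + b*ln(R) + gamma*R
    c1*M   : magnitude scaling
    b*ln R : geometric spreading
    gamma*R: anelastic attenuation."""
    return c0 + c1 * m + b * math.log(r_km) + gamma * r_km

# Amplitude falloff with distance for a hypothetical M 4 event:
for r in (10.0, 50.0, 200.0):
    print(r, round(gmpe_ln_amplitude(4.0, r), 2))
```

The generalized inversion described in the abstract would fit coefficients like these (plus per-station site terms) to the observed amplitude database, region by region.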

  16. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of empirical data and agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, such as Blogs and Diggs. We consider a dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text to determine the positive or negative valence (attractiveness or aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. An agent-based model is then introduced to simulate the dynamics and to capture the emergence of emotional behaviors and communities. The agents are linked to posts on a bipartite network whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agents' actions. The simulations are performed for the case of a constant flux of agents, and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, comparable with those in the empirical system of popular posts. In view of pure emotion-driven agent actions, this type of comparison provides a quantitative measure of the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate post popularity with the emotion dynamics and the prevalence of negative

  17. Parameters in pure type systems

    NARCIS (Netherlands)

    Bloo, C.J.; Kamareddine, F.; Laan, T.D.L.; Nederpelt, R.P.; Rajsbaum, S.

    2002-01-01

    In this paper we study the addition of parameters to typed λ-calculus with definitions. We show that the resulting systems have nice properties and illustrate that parameters allow for a better fine-tuning of the strength of type systems as well as staying closer to type systems used in practice in

  18. ACTIVATION PARAMETERS AND EXCESS THERMODYNAMIC ...

    African Journals Online (AJOL)

    Applying these data, viscosity B-coefficients, activation parameters (Δμ1^0≠ and Δμ2^0≠) and excess thermodynamic functions, viz., excess molar volume (V^E), excess viscosity (η^E) and excess molar free energy of activation of flow (G^E), were calculated. The value of the interaction parameter, d, of Grunberg and Nissan ...

  19. HF Parameters of Induction Motor

    Directory of Open Access Journals (Sweden)

    M. N. Benallal

    2017-09-01

    Full Text Available This article describes the results of experimental studies of HF input and primary parameters. A simulation model of multiphase windings as a ladder circuit of coils is developed in Matlab Simulink™. A method for determining the primary parameters of ladder equivalent circuits is presented.

  20. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    Full Text Available The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied to the IMFs. The trace of an analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of the IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators in identifying differences in standing posture between groups.