WorldWideScience

Sample records for empirical methods

  1. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  2. Methods for Calculating Empires in Quasicrystals

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2017-10-01

    This paper reviews the empire problem for quasiperiodic tilings and the existing methods for generating the empires of the vertex configurations in quasicrystals, while introducing a new and more efficient method based on the cut-and-project technique. Using Penrose tiling as an example, this method finds the forced tiles with the restrictions in the higher-dimensional lattice (the mother lattice) that can be cut-and-projected into the lower-dimensional quasicrystal. We compare our method to the two existing methods: one that uses the algorithm of the Fibonacci chain to force the Ammann bars in order to find the forced tiles of an empire, and one that follows the work of N.G. de Bruijn on constructing a Penrose tiling as the dual to a pentagrid. This new method is not only conceptually simple and clear, but it also allows us to calculate the empires of the vertex configurations in a defected quasicrystal by reverting the configuration of the quasicrystal to its higher-dimensional lattice, where we then apply the restrictions. These advantages may provide a key guiding principle for phason dynamics and an important tool for self error-correction in quasicrystal growth.

  3. An Empirical Method for Particle Damping Design

    Directory of Open Access Journals (Sweden)

    Zhi Wei Xu

    2004-01-01

    Particle damping is an effective vibration suppression method. The purpose of this paper is to develop an empirical method for particle damping design based on extensive experiments on three structural objects: a steel beam, a bond arm, and a bond head stand. The relationships among several key parameters of the structure/particles are obtained. Procedures for the use of particle damping are then proposed to provide guidelines for practical applications. It is believed that the results presented in this paper will be helpful in effectively implementing particle damping in various structural systems for the purpose of vibration suppression.

  4. An empirical method for dynamic camouflage assessment

    Science.gov (United States)

    Blitch, John G.

    2011-06-01

    As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof of concept experiment conducted in November 2006. The conclusion drawn from these results, related literature and the author's personal experience suggest that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory and embrace the challenges posed by high levels of cognitive processing.

  5. Empirical methods for estimating future climatic conditions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from model simulations of the forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreement. The premise underlying this approach for predicting anthropogenic climate change is the association of the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the increased mean temperature of a given epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships governing the regional variations in air temperature and other meteorological elements can be deduced and interpreted from empirical data describing the climatic conditions of past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from factors contributing to past climate changes that differ from those driving future changes and, in other cases, from the possible influences of changes in orography and geography on regional climatic conditions over time.

  6. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  7. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  8. Empirical pillar design methods review report: Final report

    International Nuclear Information System (INIS)

    1988-02-01

    This report summarizes and evaluates empirical pillar design methods that may be of use during the conceptual design of a high-level nuclear waste repository in salt. The methods are discussed according to category (i.e., main, submain, and panel pillars; barrier pillars; and shaft pillars). Of the 21 methods identified for main, submain, and panel pillars, one, the Confined Core Method, is evaluated as being the most appropriate for conceptual design. Five methods are considered potentially applicable. Of the six methods identified for barrier pillars, one based on the Load Transfer Distance concept is considered most appropriate for design. Based on the evaluation of 25 methods identified for shaft pillars, an approximate sizing criterion is proposed for use in conceptual design. Aspects of pillar performance relating to creep, ground deformation, interaction with roof and floor rock, and response to high-temperature environments are not adequately addressed by existing empirical design methods. 152 refs., 22 figs., 14 tabs.

  9. Empirical Evidence or Intuition? An Activity Involving the Scientific Method

    Science.gov (United States)

    Overway, Ken

    2007-01-01

    Students need a basic understanding of the scientific method in their introductory science classes, and for this purpose an activity was devised involving a game based on the famous Monty Hall problem. This particular activity allowed students to banish or confirm their intuition based on empirical evidence.
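    The classroom activity above rests on a simulation-style argument: empirical win rates make the counterintuitive 2/3 advantage of switching visible. Below is a minimal sketch of such a simulation (not the paper's materials; function names are illustrative):

```python
import random

def monty_hall_trial(switch, rng):
    """Play one round of the Monty Hall game; return True if the car is won."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that hides a goat and was not picked.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the single remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000, seed=1):
    """Empirical win probability over many independent trials."""
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(trials)) / trials
```

    Running `win_rate(True)` gives roughly 0.67 and `win_rate(False)` roughly 0.33, which is exactly the kind of empirical evidence the activity asks students to weigh against their intuition.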

  10. Towards Multi-Method Research Approach in Empirical Software Engineering

    Science.gov (United States)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents results of a literature analysis on Empirical Research Approaches in Software Engineering (SE). The analysis explores reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that a researcher should consider when selecting a research approach. Given the reasons for the weak utilization of traditional methods, we propose stronger use of a Multi-Method approach with Pragmatism as the philosophical standpoint.

  11. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of the overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first-approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada.
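    The first-approximation step described above (smooth the topography, then regress observed water elevations on the smoothed values) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the radially decreasing adjustment near control points is omitted, and `smooth` and `first_approximation` are hypothetical names.

```python
def smooth(grid, passes=1):
    """Simple nearest-neighbour smoothing of a 2-D elevation grid."""
    for _ in range(passes):
        rows, cols = len(grid), len(grid[0])
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                nbrs = [grid[x][y]
                        for x in range(max(0, i - 1), min(rows, i + 2))
                        for y in range(max(0, j - 1), min(cols, j + 2))]
                out[i][j] = sum(nbrs) / len(nbrs)
        grid = out
    return grid

def first_approximation(topo, controls, passes=2):
    """First-pass water table: smoothed topography rescaled by a linear
    regression of observed water elevations (controls) on smoothed values.
    controls: list of (row, col, observed_water_elevation)."""
    smoothed = smooth(topo, passes)
    xs = [smoothed[i][j] for i, j, _ in controls]
    ys = [w for _, _, w in controls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [[a + b * v for v in row] for row in smoothed]
```

    The subsequent step, blending this surface toward the observed elevations with a radially decreasing weight around each control point, would be layered on top of this result.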

  12. An empirical method to estimate bulk particulate refractive index for ocean satellite applications

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Mascarenhas, A.A.M.Q.; Matondkar, S.G.P.; Naik, P.; Nayak, S.R.

    An empirical method is presented here to estimate the bulk particulate refractive index using the measured inherent and apparent optical properties from the various water types of the Arabian Sea. The empirical model, where the bulk refractive index...

  13. Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method

    International Nuclear Information System (INIS)

    Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon

    2009-01-01

    In recognition of the need to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, Cumulative Absolute Velocity (CAV) was introduced by EPRI. The CAV accumulates the area under the absolute acceleration curve over the one-second intervals in which the acceleration exceeds 0.025 g: CAV = ∫_0^tmax |a(t)| dt (1), where tmax is the duration of the record and a(t) is the acceleration (>0.025 g). Currently, the OBE exceedance criterion in Korea is Peak Ground Acceleration (PGA > 0.1 g). When the Odesan earthquake (M_L = 4.8, January 20th, 2007) and the Gyeongju earthquake (M_L = 3.4, June 2nd, 1999) occurred, PGA values greater than 0.1 g were recorded that did not cause any damage even to poorly designed structures nearby. These moderate earthquakes motivated Korea to begin using the CAV as the OBE exceedance criterion for NPPs, because the present OBE level has proved to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shutdown of the plant; a more serious possibility is that this scenario will become a reality at a much higher shaking level. The Empirical Green's Function method, a simulation technique that can estimate the CAV value, is introduced here.
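    The CAV definition above is straightforward to compute from an acceleration time series. The sketch below implements the standardized form implied by the abstract, in which only one-second windows whose peak acceleration exceeds 0.025 g contribute; the units (m/s^2) and the function name are assumptions, not taken from the paper:

```python
def standardized_cav(accel, dt, threshold_g=0.025, g=9.81):
    """Standardized CAV: integrate |a(t)| over each 1-second window whose
    peak absolute acceleration exceeds the threshold (default 0.025 g).
    accel: acceleration samples in m/s^2; dt: sample spacing in seconds."""
    samples_per_sec = round(1.0 / dt)
    cav = 0.0
    for start in range(0, len(accel), samples_per_sec):
        window = accel[start:start + samples_per_sec]
        if max(abs(a) for a in window) > threshold_g * g:
            # Rectangular-rule integration of |a(t)| over this window.
            cav += sum(abs(a) for a in window) * dt
    return cav
```

    For example, a record whose first second sits at 0.5 m/s^2 (above the 0.245 m/s^2 threshold) and whose second second sits at 0.1 m/s^2 (below it) contributes only the first window to the total.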

  14. The Socratic Method: Empirical Assessment of a Psychology Capstone Course

    Science.gov (United States)

    Burns, Lawrence R.; Stephenson, Paul L.; Bellamy, Katy

    2016-01-01

    Although students make some epistemological progress during college, most graduate without developing meaning-making strategies that reflect an understanding that knowledge is socially constructed. Using a pre-test-post-test design and a within-subjects 2 × 2 mixed-design ANOVA, this study reports on empirical findings which support the Socratic…

  15. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    Science.gov (United States)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings, based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the proposed method, whose steps include empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method by filtering eye-movement artifacts from a human EEG signal.

  16. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
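    Setting aside the spectral effects that motivate the maximum-likelihood and empirical parameterizations above, the core of basis-component decomposition can be illustrated as a linear solve per detector pixel: three energy-bin log-attenuations yield three basis line integrals. This is a deliberately simplified, linearized sketch (effective attenuation coefficients per bin are assumed constant, which real polychromatic measurements violate), not the authors' method:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def decompose(log_atten, mu):
    """log_atten: -ln(I/I0) per energy bin (3 values); mu[i][j]: effective
    attenuation of basis material j in bin i. Returns basis line integrals."""
    return solve3(mu, log_atten)
```

    In the papers' setting the relation between bin counts and line integrals is nonlinear, which is why a maximum-likelihood fit or an empirical polynomial parameterization replaces this direct inversion.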

  17. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    Science.gov (United States)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

    In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We consider noise and physiological artifacts on the EEG as specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering eye-movement artifacts from experimental human EEG signals and show its high efficiency.
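    The selection and reconstruction steps of the algorithm above (drop artifact-bearing modes, sum the rest) can be sketched as follows. Computing the intrinsic mode functions themselves (the sifting procedure of empirical mode decomposition) is not shown, and the correlation criterion with its threshold is an illustrative assumption, not the authors' published selection rule:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def remove_artifact_modes(imfs, reference, r_thresh=0.5):
    """Keep only the intrinsic mode functions whose correlation with the
    artifact reference (e.g. a simultaneously recorded EOG channel) stays
    below r_thresh, then reconstruct the EEG as the sum of kept modes."""
    kept = [m for m in imfs if abs(pearson(m, reference)) < r_thresh]
    return [sum(vals) for vals in zip(*kept)]
```

    With the modes in hand (e.g. from a Hilbert-Huang toolbox), the cleaned signal is simply the sum of the modes that do not track the auxiliary artifact channel.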

  18. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to better understand the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and location of primary structure constraints and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.

  19. An empirical comparison of several recent epistatic interaction detection methods.

    Science.gov (United States)

    Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon

    2011-11-01

    Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester; SC does not control the type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64-bit Ubuntu system with an Intel(R) Xeon(R) 2.66 GHz CPU and 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and its cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.

  20. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test dataset. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs. scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  1. Community of Inquiry Method and Language Skills Acquisition: Empirical Evidence

    Science.gov (United States)

    Preece, Abdul Shakhour Duncan

    2015-01-01

    The study investigates the effectiveness of community of inquiry method in preparing students to develop listening and speaking skills in a sample of junior secondary school students in Borno state, Nigeria. A sample of 100 students in standard classes was drawn in one secondary school in Maiduguri metropolis through stratified random sampling…

  2. What Happened to Remote Usability Testing? An Empirical Study of Three Methods

    DEFF Research Database (Denmark)

    Stage, Jan; Andreasen, M. S.; Nielsen, H. V.

    2007-01-01

    The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote...

  3. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, a liquid system for dissolving stainless steel chips has been developed. Pure element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, is avoided. The result is a simple chemical operation which simplifies the method of analysis. Variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements, provided the same parameters are used in the calibration curves. The accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  4. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive one. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material which may be used to identify new promising semiconductors, and it is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  5. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
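    For reference, the sample Sharpe ratio that the inference above targets is itself simple to compute: the mean excess return divided by its sample standard deviation. A minimal, distribution-free sketch (the adjusted empirical likelihood machinery, the paper's actual contribution, is not reproduced here):

```python
def sharpe_ratio(returns, risk_free=0.0):
    """Sample Sharpe ratio: mean excess return over its sample standard
    deviation (computed with the n-1 denominator)."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)
    return mean / var ** 0.5
```

    For example, the return series 0.1, 0.2, 0.3 with a zero risk-free rate has mean 0.2 and standard deviation 0.1, giving a Sharpe ratio of 2.0.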

  6. An Empirical Review of Research Methodologies and Methods in Creativity Studies (2003-2012)

    Science.gov (United States)

    Long, Haiying

    2014-01-01

    Based on the data collected from 5 prestigious creativity journals, research methodologies and methods of 612 empirical studies on creativity, published between 2003 and 2012, were reviewed and compared to those in gifted education. Major findings included: (a) Creativity research was predominantly quantitative and psychometrics and experiment…

  7. Critical factors in the empirical performance of temporal difference and evolutionary methods for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.; Taylor, M.E.; Stone, P.

    2010-01-01

    Temporal difference and evolutionary methods are two of the most common approaches to solving reinforcement learning problems. However, there is little consensus on their relative merits and there have been few empirical studies that directly compare their performance. This article aims to address

  8. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function (multiquadric) and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  9. Evaluation of registration methods on thoracic CT : the EMPIRE10 challenge

    NARCIS (Netherlands)

    Murphy, K.; Ginneken, van B.; Reinhardt, J.M.; Kabus, S.; Ding, K.; Deng, Xiang; Cao, K.; Du, K.; Christensen, G.E.; Garcia, V.; Vercauteren, T.; Ayache, N.; Commowick, O.; Malandain, G.; Glocker, B.; Paragios, N.; Navab, N.; Gorbunova, V.; Sporring, J.; Bruijne, de M.; Han, Xiao; Heinrich, M.P.; Schnabel, J.A.; Jenkinson, M.; Lorenz, C.; Modat, M.; McClelland, J.R.; Ourselin, S.; Muenzing, S.E.A.; Viergever, M.A.; Nigris, De D.; Collins, D.L.; Arbel, T.; Peroni, M.; Li, R.; Sharp, G.; Schmidt-Richberg, A.; Ehrhardt, J.; Werner, R.; Smeets, D.; Loeckx, D.; Song, G.; Tustison, N.; Avants, B.; Gee, J.C.; Staring, M.; Klein, S.; Stoel, B.C.; Urschler, M.; Werlberger, M.; Vandemeulebroucke, J.; Rit, S.; Sarrut, D.; Pluim, J.P.W.

    2011-01-01

    EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms applied to a database of intrapatient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task.

  10. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  11. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios of the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
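The four indices used above are standard point-forecast error measures. As a quick reference, a minimal sketch of how they are computed (the function name `error_metrics` is ours, not from the paper):

```python
import numpy as np

def error_metrics(y_obs, y_pred):
    """Return (RMSE, MAE, MARE in %, R^2) for paired observed/predicted values."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_obs
    rmse = np.sqrt(np.mean(err ** 2))                      # root mean square error
    mae = np.mean(np.abs(err))                             # mean absolute error
    mare = 100.0 * np.mean(np.abs(err / y_obs))            # mean absolute relative error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                             # determination coefficient
    return rmse, mae, mare, r2
```

Note that MARE is undefined when an observed value is zero; daily solar radiation totals are positive, so this is rarely an issue in this setting.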

  12. Calibrating a combined energy systems analysis and controller design method with empirical data

    International Nuclear Information System (INIS)

    Murphy, Gavin Bruce; Counsell, John; Allison, John; Brindley, Joseph

    2013-01-01

    The drive towards low carbon constructions has seen buildings increasingly utilise many different energy systems simultaneously to control the human comfort of the indoor environment; such as ventilation with heat recovery, various heating solutions and applications of renewable energy. This paper describes a dynamic modelling and simulation method (IDEAS – Inverse Dynamics based Energy Assessment and Simulation) for analysing the energy utilisation of a building and its complex servicing systems. The IDEAS case study presented in this paper is based upon small perturbation theory and can be used for the analysis of the performance of complex energy systems and also for the design of smart control systems. This paper presents a process of how any dynamic model can be calibrated against a more empirical based data model, in this case the UK Government's SAP (Standard Assessment Procedure). The research targets of this work are building simulation experts for analysing the energy use of a building and also control engineers to assist in the design of smart control systems for dwellings. The calibration process presented is transferable and has applications for simulation experts to assist in calibrating any dynamic building simulation method with an empirical based method. - Highlights: • Presentation of an energy systems analysis method for assessing the energy utilisation of buildings and their complex servicing systems. • An inverse dynamics based controller design method is detailed. • Method of how a dynamic model can be calibrated with an empirical based model

  13. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), where the conventional GMDH model and the EMD-GMDH model are used as benchmark models. Empirical results show that the proposed model produces better forecasts than the benchmark models.

  14. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
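Acceptance sampling by variables typically reduces to a k-method decision rule: accept the lot when the margin between the sample mean and the specification limit, measured in units of the sample standard deviation, meets an acceptability constant k. A hedged sketch of the one-sided rule for an upper specification limit (the helper name and the numbers are illustrative, not NASA's calculators):

```python
import statistics

def accept_lot_variables(measurements, usl, k):
    """One-sided variables acceptance sampling against an upper spec limit (USL):
    accept the lot when (USL - sample mean) / sample stdev >= k."""
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)  # sample (n-1) standard deviation
    return (usl - mean) / s >= k
```

In a real plan, the sample size n and the constant k are chosen together from the producer's and consumer's risk points; the rule above only shows the final decision step.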

  15. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  16. A semi-empirical method for measuring thickness of pipe-wall using gamma scattering technique

    International Nuclear Information System (INIS)

    Vo Hoang Nguyen; Hua Tuyet Le; Le Dinh Minh Quan; Hoang Duc Tam; Le Bao Tran; Tran Thien Thanh; Tran Nguyen Thuy Ngan; Chau Van Tao; VNUHCM-University of Science, Ho Chi Minh City; Huynh Dinh Chuong

    2016-01-01

    In this work, we propose a semi-empirical method for determining the thickness of a pipe wall, in which the determination is performed by combining experimental and Monte Carlo simulation data. Test measurements show that this is an efficient method for measuring pipe-wall thickness. In addition, this work also shows that a NaI(Tl) scintillation detector and a low-activity source can be used to measure the pipe-wall thickness, making it a simple, quick and highly accurate method. (author)

  17. Efficiency indicators versus frontier methods: an empirical investigation of Italian public hospitals

    Directory of Open Access Journals (Sweden)

    Lorenzo Clementi

    2013-05-01

    Efficiency plays a key role in measuring the impact of National Health Service (NHS) reforms. We investigate the issue of inefficiency in the health sector and provide empirical evidence derived from Italian public hospitals. Despite the importance of efficiency measurement in health care services, only recently have advanced econometric methods been applied to hospital data. We provide a synoptic survey of a few empirical analyses of efficiency measurement in health care services. An estimate of the cost efficiency level in Italian public hospitals during 2001-2003 is obtained through a sample. We propose an efficiency indicator and provide cost frontiers for such hospitals, using stochastic frontier analysis (SFA) for longitudinal data.

  18. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to run on computing clusters. Application of this method significantly reduces the demands on computing resources compared to traditional approaches based on ab-initio techniques, while providing adequately comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  19. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    Science.gov (United States)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we carry out an empirical study and test the efficiency of 44 important market indexes on multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and keep the margin of error small when measuring the local Hurst exponent. Our empirical result illustrates that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with a local Hurst exponent > 0.5) on small time scales, whereas they display significant anti-persistent characteristics on large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with the economic cycles, it can be concluded that the economic cycles can cause anti-persistence on the large time scale but there are also other factors at work. The empirical result supports the view that financial markets are multi-fractal, and it indicates that deviations from efficiency and the type of model needed to describe the trend of market prices depend on the forecasting horizon.
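Detrended fluctuation analysis itself is straightforward to sketch. The toy implementation below (plain DFA with linear detrending, not the lagged variant of the paper) integrates the series, computes the RMS fluctuation of window-wise detrended profiles at several scales, and reads the Hurst exponent off the log-log slope; exponents below 0.5 indicate anti-persistence, above 0.5 persistence:

```python
import numpy as np

def dfa_hurst(x, scales=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent of series x by detrended fluctuation analysis."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # profile (integrated, demeaned series)
    flucts = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # local linear trend in this window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # F(s) ~ s^H, so the slope of log F(s) vs log s estimates H
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

For white noise the estimate clusters around 0.5 (no memory), while for a random walk it rises toward 1.5.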

  20. Semi-empirical Determination of Detection Efficiency for Voluminous Source by Effective Solid Angle Method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In the field of γ-ray measurements, the determination of the full energy (FE) absorption peak efficiency for a voluminous sample is difficult, because preparing a certified radiation source with the same chemical composition and geometry as the original voluminous sample is not easy. To overcome this inconvenience, simulation or semi-empirical methods are preferred in many cases. The Effective Solid Angle (ESA) code, which includes a semi-empirical approach, has been developed by the Applied Nuclear Physics Group at Seoul National University. In this study, we validated the ESA code using Marinelli-type voluminous KRISS (Korea Research Institute of Standards and Science) CRM (Certified Reference Materials) sources and IAEA standard γ-ray point sources, and the semi-empirically determined efficiency curve for a voluminous source obtained with the ESA code is compared with the experimental values. We calculated the efficiency curve of the voluminous source from the measured efficiency of a standard point source using the ESA code. Further validation of the ESA code will be carried out by measuring various CRM volume sources with detectors of different efficiency.

  1. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-square cost function and non-negative fluence constraints; its solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against a quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver achieved similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
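The least-square FMO problem with non-negative fluence is structurally a non-negative least-squares problem, which makes a compact ADMM illustration possible. The sketch below is our own toy, not the paper's solver; `rho` stands in for the ADMM parameter the authors tune empirically. The variable is split so one block handles the quadratic term (a cached Cholesky solve) and the other the non-negativity projection:

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, n_iter=200):
    """Minimize 0.5*||A x - b||^2 subject to x >= 0 via ADMM (x = z splitting)."""
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    # Factor (A^T A + rho I) once; every x-update reuses the factor.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                          # scaled dual variable
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic block
        z = np.maximum(0.0, x + u)           # projection onto the constraint x >= 0
        u = u + x - z                        # dual ascent on the consensus x = z
    return z
```

A poorly chosen `rho` slows convergence considerably, which is exactly why a principled or empirical rule for setting it matters in practice.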

  2. Empirical methods for the estimation of Southern Ocean CO2: support vector and random forest regression

    CSIR Research Space (South Africa)

    Gregor, Luke

    2017-12-01

    ... understanding with spatially integrated air–sea flux estimates (Fay and McKinley, 2014). Conversely, ocean biogeochemical process models are good tools for mechanistic understanding, but fail to represent the seasonality of CO2 fluxes in the Southern Ocean... of including coordinate variables as proxies of ΔpCO2 in the empirical methods. In the intercomparison study by Rödenbeck et al. (2015) proxies typically include, but are not limited to, sea surface temperature (SST), chlorophyll a (Chl a), mixed layer...

  3. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    Science.gov (United States)

    Sikorska, Celina; Puzyn, Tomasz

    2015-11-01

    The capability of reproducing the open circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltage (Voc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and they significantly overestimate the energy of the highest occupied molecular orbital (EHOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications.

  4. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    International Nuclear Information System (INIS)

    Sikorska, Celina; Puzyn, Tomasz

    2015-01-01

    The capability of reproducing the open circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltage (Voc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and they significantly overestimate the energy of the highest occupied molecular orbital (EHOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications. (paper)

  5. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    Science.gov (United States)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.

  6. Application of empirical mode decomposition method for characterization of random vibration signals

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    2016-07-01

    Characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for the characterization of measured random vibration signals, where the approach rests on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component, the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component, and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.
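The sifting idea behind EMD can be sketched compactly. The toy below extracts a single IMF by repeatedly subtracting the mean of the upper and lower extrema envelopes; for brevity it uses linear interpolation via np.interp, whereas standard EMD uses cubic spline envelopes, so this is illustrative only (the function names are ours):

```python
import numpy as np

def _envelope(t, x, mode):
    """Linear envelope through the local maxima ("max") or minima ("min") of x."""
    if mode == "max":
        idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    else:
        idx = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    idx = np.concatenate(([0], idx, [len(x) - 1]))  # anchor the endpoints
    return np.interp(t, t[idx], x[idx])

def extract_imf(x, n_sift=10):
    """One EMD sifting pass: returns (imf, residue) with x = imf + residue."""
    t = np.arange(len(x), dtype=float)
    h = np.asarray(x, dtype=float)
    for _ in range(n_sift):
        mean_env = 0.5 * (_envelope(t, h, "max") + _envelope(t, h, "min"))
        h = h - mean_env                    # sift: remove the local mean trend
    return h, np.asarray(x, dtype=float) - h
```

Applied repeatedly to the residue, this yields the sequence of IMFs (WFR, LFR, ...) and a final trend, exactly the decomposition structure the abstract describes.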

  7. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  8. Aircraft directional stability and vertical tail design: A review of semi-empirical methods

    Science.gov (United States)

    Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino

    2017-11-01

    Aircraft directional stability and control are related to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on a correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods, which are based on the results of experimental investigations performed in the past decades, and occasionally are merged with data provided by theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied in the estimation of airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches that were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability on current transport aircraft configurations. Recent investigations made by the authors have shown the limit of these methods, proving the existence of aerodynamic interference effects in sideslip conditions which are not adequately considered in classical formulations. The article continues with a concise review of the numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well-suited to attain reliable results in attached flow conditions, with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability. The investigation on the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and

  9. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases the shear sonic data are not acquired during well logging, which may be for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
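For context, empirical Vp-Vs correlations of the kind compared in such studies are typically simple linear fits of shear velocity against compressional velocity. A classic example is Castagna et al.'s (1985) mudrock line for water-saturated clastics; it is cited here only as an illustration of the form these correlations take, not as a correlation from this paper:

```python
def vs_castagna_mudrock(vp_km_s):
    """Castagna et al. (1985) mudrock line: Vs = 0.8621 * Vp - 1.1724 (km/s).
    A widely cited empirical Vp-Vs correlation for water-saturated clastic rocks."""
    return 0.8621 * vp_km_s - 1.1724
```

Such one-variable fits are calibrated on specific lithologies, which is precisely why locally trained models (or, better, measured shear logs) tend to outperform them elsewhere.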

  10. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases the shear sonic data are not acquired during well logging, which may be for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  12. Developing a Clustering-Based Empirical Bayes Analysis Method for Hotspot Identification

    Directory of Open Access Journals (Sweden)

    Yajie Zou

    2017-01-01

    Hotspot identification (HSID) is a critical part of network-wide safety evaluations. Typical methods for ranking sites are often rooted in using the Empirical Bayes (EB) method to estimate safety from both observed crash records and predicted crash frequency based on similar sites. The performance of the EB method is highly related to the selection of a reference group of sites (i.e., roadway segments or intersections) similar to the target site, from which the safety performance functions (SPF) used to predict crash frequency will be developed. As crash data often contain underlying heterogeneity that, in essence, can make them appear to be generated from distinct subpopulations, methods are needed to select similar sites in a principled manner. To overcome this possible heterogeneity problem, EB-based HSID methods that use common clustering methodologies (e.g., mixture models, K-means, and hierarchical clustering) to select "similar" sites for building SPFs are developed. The performance of the clustering-based EB methods is then compared using real crash data. Here, HSID results, when computed on Texas undivided rural highway crash data, suggest that all three clustering-based EB analysis methods are preferred over the conventional statistical methods. Thus, properly classifying the road segments for heterogeneous crash data can further improve HSID accuracy.
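The EB safety estimate at the core of such methods is a weighted average of the SPF prediction and the site's observed crash count, with the weight driven by the overdispersion of the negative binomial crash model: the more overdispersed the reference group, the more weight shifts to the site's own record. A minimal sketch in the standard form attributed to Hauer (parameter names are ours):

```python
def eb_estimate(observed, predicted, overdispersion):
    """Empirical Bayes expected crash frequency for one site.

    observed       -- crash count recorded at the site
    predicted      -- SPF-predicted crash frequency (mu) from the reference group
    overdispersion -- negative binomial dispersion parameter (phi) of the SPF

    w = 1 / (1 + mu/phi);  EB = w * mu + (1 - w) * observed
    """
    w = 1.0 / (1.0 + predicted / overdispersion)
    return w * predicted + (1.0 - w) * observed
```

The estimate always lies between the SPF prediction and the observed count, which is what shrinks the random high counts that would otherwise dominate a naive ranking.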

  13. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  14. Prediction of Physicochemical Properties of Organic Molecules Using Semi-Empirical Methods

    International Nuclear Information System (INIS)

    Kim, Chan Kyung; Kim, Chang Kon; Kim, Miri; Lee, Hai Whang; Cho, Soo Gyeong

    2013-01-01

    Prediction of the physicochemical properties of organic molecules is an important process in chemistry and chemical engineering. The MSEP approach developed in our lab calculates the molecular surface electrostatic potential (ESP) on the van der Waals (vdW) surfaces of molecules. This approach includes geometry optimization and frequency calculation using hybrid density functional theory, B3LYP, with the 6-31G(d) basis set to find minima on the potential energy surface, and is known to give satisfactory QSPR results for various properties of organic molecules. However, the MSEP method is not applicable to screening large databases because geometry optimization and frequency calculation require considerable computing time. To develop a fast yet reliable approach, we have re-examined our previous work on organic molecules using two semi-empirical methods, AM1 and PM3. This new approach can be an efficient protocol for designing new molecules with improved properties.

  15. Interpretation and method: Empirical research methods and the interpretive turn, 2nd ed.

    NARCIS (Netherlands)

    Yanow, D.; Schwartz-Shea, P.

    2014-01-01

    This book demonstrates the relevance, rigor, and creativity of interpretive research methodologies for the social and human sciences. The book situates methods questions within the context of broader methodological questions--specifically, the character of social realities and their "know-ability."

  16. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under the fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and in the frequency content of the seismograms, a rigorous justification is often lacking. In practice, a small event may itself have a finite duration, so the retrieved RSTF is interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern with the MCD method is that the number of unknown parameters is larger, which tends to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank deficiency with a semi-empirical approach.
Based on the results so far, we find that the
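The deconvolution step of the EGF method can be sketched in the frequency domain with water-level regularization; this is a generic textbook formulation (the study itself uses a time-domain matrix deconvolution), and the waveforms below are synthetic:

```python
import numpy as np

def waterlevel_deconv(large, small, level=0.01):
    """Frequency-domain deconvolution of a large-event seismogram by a
    small-event (EGF) seismogram, with water-level regularization to
    stabilize division near spectral holes. A textbook sketch, not the
    time-domain matrix scheme used in the study."""
    n = len(large) + len(small) - 1
    L = np.fft.rfft(large, n)
    S = np.fft.rfft(small, n)
    power = np.abs(S) ** 2
    floor = level * power.max()          # the "water level"
    rstf = np.fft.irfft(L * np.conj(S) / np.maximum(power, floor), n)
    return rstf

# Toy check: if the "large" record is the EGF convolved with a boxcar
# source pulse, the deconvolution should recover a boxcar-like RSTF.
egf = np.random.default_rng(0).standard_normal(256)
stf = np.ones(8) / 8.0                   # finite-duration source pulse
large = np.convolve(egf, stf)
rstf = waterlevel_deconv(large, egf)
```

The bias the record analyzes arises exactly when the `small` record is itself the convolution of a Green's function with a non-delta pulse, so the recovered `rstf` is the ratio of two finite-duration STFs rather than the large event's STF.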

  17. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of the carbon atoms were analyzed as a function of the three potentials mentioned above. The ReaxFF potential is capable of modeling the change in the structure of amorphous carbon, and the MD results are in good agreement with experimental results and density functional theory (DFT) at low densities of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended for modeling amorphous carbon at high densities of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.

  18. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.

  19. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
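A minimal sketch of the NIMF-based mode selection, assuming the IMFs have already been computed by some EMD variant; a correlation distance stands in here for the paper's modified Hausdorff distance, and the two "IMFs" below are toy stand-ins:

```python
import numpy as np

def select_modes(signal, imfs):
    """Form NIMF_k = signal - IMF_k for each precomputed IMF and keep the
    modes whose NIMF is most similar to the first NIMF. A correlation
    distance is used as a simple stand-in for the paper's modified
    Hausdorff distance."""
    nimfs = [signal - imf for imf in imfs]
    ref = nimfs[0]
    dist = [1.0 - abs(np.corrcoef(ref, nimf)[0, 1]) for nimf in nimfs]
    # Keep modes whose distance to the reference is below the mean distance.
    keep = [k for k, d in enumerate(dist) if d <= np.mean(dist)]
    return keep, dist

# Toy decomposition: pretend the IMFs are exactly the noise and the clean
# component, so mode 0's NIMF equals the clean signal.
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)
noise = 0.3 * np.random.default_rng(1).standard_normal(500)
keep, dist = select_modes(clean + noise, [noise, clean])
```

The filtered signal would then be reconstructed from the retained modes only.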

  20. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. 
Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
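The relative root-mean-square error index used for picking the added-noise level can be sketched as follows; the exact definition in the paper may differ, and the candidate reconstructions below are hypothetical stand-ins for actual EEMD outputs:

```python
import numpy as np

def relative_rmse(signal, reconstruction):
    """One plausible form of the relative root-mean-square error index:
    RMSE of the reconstruction error normalized by the RMS of the signal.
    The noise level whose decomposition minimizes it is taken as optimal."""
    err = np.sqrt(np.mean((signal - reconstruction) ** 2))
    return err / np.sqrt(np.mean(signal ** 2))

# Hypothetical: candidate added-noise levels mapped to the reconstruction
# each EEMD run would yield (constant offsets used as simple stand-ins).
x = np.sin(np.linspace(0.0, 20.0, 400))
candidates = {0.1: x + 0.05, 0.2: x + 0.01, 0.4: x + 0.2}
best = min(candidates, key=lambda lv: relative_rmse(x, candidates[lv]))
```

In the actual method, each candidate reconstruction would come from summing the IMFs of an EEMD run at that noise level.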

  1. Effect of tidal triggering on seismicity in Taiwan revealed by the empirical mode decomposition method

    Directory of Open Access Journals (Sweden)

    H.-J. Chen

    2012-07-01

    Full Text Available The effect of tidal triggering on earthquake occurrence has been controversial for many years. This study considered earthquakes that occurred near Taiwan between 1973 and 2008. Because earthquake data are nonlinear and non-stationary, we applied the empirical mode decomposition (EMD) method to analyze the temporal variations in the number of daily earthquakes in order to investigate the effect of tidal triggering. We compared the results obtained from the non-declustered catalog with those from two kinds of declustered catalogs and discussed the aftershock effect on the EMD-based analysis. We also investigated stacking the data based on in-phase phenomena of the theoretical Earth tides, with statistical significance tests. Our results show that the effects of tidal triggering, particularly the lunar tidal effect, can be extracted from the raw seismicity data using the approach proposed here. Our results suggest that the lunar tidal force is likely a factor in the triggering of earthquakes.

  2. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences in the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
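The deviation-from-Wiener idea can be illustrated with a simple increment-scaling estimate; this is a generic stand-in for the paper's parameter, not its actual definition:

```python
import numpy as np

def wiener_deviation(x, lags=(1, 2, 4, 8, 16)):
    """Quantify deviation from a Wiener process via increment scaling:
    for Brownian motion, std(x[t+L] - x[t]) grows like L**0.5, so the
    fitted scaling exponent H minus 0.5 measures the deviation.
    A generic stand-in for the paper's algorithm."""
    lags = np.asarray(lags)
    stds = np.array([np.std(x[L:] - x[:-L]) for L in lags])
    H = np.polyfit(np.log(lags), np.log(stds), 1)[0]
    return H - 0.5

# A seeded random walk should score near zero; white noise, whose
# increments do not grow with lag, scores near -0.5.
rw = np.cumsum(np.random.default_rng(2).standard_normal(20000))
dev = wiener_deviation(rw)
```

A developed market evolving "very much like an Ito process" would, in this toy metric, behave like the random walk above.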

  3. Empirical source strength correlations for rans-based acoustic analogy methods

    Science.gov (United States)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions for a separate dataset of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate

  4. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence.

    Science.gov (United States)

    Jaspers, Monique W M

    2009-05-01

    Usability evaluation is now widely recognized as critical to the success of interactive health care applications. However, the broad range of usability inspection and testing methods available may make it difficult to decide on a usability assessment plan. To guide novices in the human-computer interaction field, we provide an overview of the methodological and empirical research available on the three usability inspection and testing methods most often used. We describe two 'expert-based' and one 'user-based' usability method: (1) the heuristic evaluation, (2) the cognitive walkthrough, and (3) the think aloud. All three usability evaluation methods are applied in laboratory settings. Heuristic evaluation is a relatively efficient usability evaluation method with a high benefit-cost ratio, but it requires evaluators with strong skills and usability experience to produce reliable results. The cognitive walkthrough is a more structured approach than the heuristic evaluation, with a stronger focus on the learnability of a computer application. Major drawbacks of the cognitive walkthrough are the level of detail of the task and user background descriptions required for an adequate application of the latest version of the technique. The think aloud is a very direct method for gaining deep insight into the problems end users encounter in interaction with a system, but data analysis is extensive and requires a high level of expertise both in cognitive ergonomics and in the computer system application domain. Each of the three usability evaluation methods has shown its usefulness and has its own advantages and disadvantages; no single method has revealed any significant results indicating that it is singularly effective in all circumstances. A combination of different techniques that complement one another should preferably be used, as their collective application will be more powerful than any applied in isolation. Innovative mobile and automated solutions to support end-user testing have

  5. An alternative empirical likelihood method in missing response problems and causal inference.

    Science.gov (United States)

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular debiasing method is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
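The two weighting estimators the record builds on can be sketched on simulated data; all model choices below are illustrative, and the paper's empirical-likelihood estimator itself is more involved than this:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + rng.standard_normal(n)       # outcome, E[y] = 2
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))             # true response propensity
r = rng.random(n) < p                             # True = response observed

# Horvitz-Thompson inverse probability weighting: reweight observed
# responses by 1/p (the true propensity is assumed known here).
ipw = np.mean(r * y / p)

# Augmented IPW (Robins et al.): add an outcome-model term m(x) so the
# estimator is doubly robust -- consistent if either p or m is correct.
m = 2.0 + 1.5 * x                                 # correctly specified model
aipw = np.mean(r * y / p - (r - p) / p * m)
```

Both estimates should land near the true mean response of 2; the augmented version typically has the smaller variance when the outcome model is good.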

  6. Value and depreciation of mineral resources over the very long run: An empirical contrast of different methods

    OpenAIRE

    Rubio Varas, M. del Mar

    2005-01-01

    The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, is used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to diff...

  7. Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.

    Science.gov (United States)

    Heinonen, Kristiina

    2015-05-01

    To describe reduction as a method using van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. A study of Finnish multiple-birth families in which open interviews (n=38) were conducted with public health nurses, family care workers and parents of twins. A systematic literature and knowledge review showed there were no articles on multiple-birth families that used van Manen's method. Discussion: The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and to understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method. It opens up reduction for researchers to exploit.

  8. Intermolecular interactions in the condensed phase: Evaluation of semi-empirical quantum mechanical methods.

    Science.gov (United States)

    Christensen, Anders S; Kromann, Jimmy C; Jensen, Jan H; Cui, Qiang

    2017-10-28

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.

  9. Intermolecular interactions in the condensed phase: Evaluation of semi-empirical quantum mechanical methods

    Science.gov (United States)

    Christensen, Anders S.; Kromann, Jimmy C.; Jensen, Jan H.; Cui, Qiang

    2017-10-01

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.

  10. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users of the expected amplitude of the prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of error one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
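Both advocated statistics follow directly from the empirical distribution of absolute errors; the benchmark errors below are made-up numbers for illustration:

```python
import numpy as np

def ecdf_stats(errors, threshold, confidence=0.95):
    """The two statistics advocated in the record, computed from the
    empirical cumulative distribution of absolute errors:
    (1) probability that a new calculation has |error| below `threshold`;
    (2) the |error| amplitude not exceeded at the chosen confidence level."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Hypothetical benchmark errors (kcal/mol) for some method.
errs = [0.2, -0.5, 1.1, -0.1, 0.8, -2.4, 0.3, 0.6, -0.9, 1.7]
p1, q95 = ecdf_stats(errs, threshold=1.0)
```

Here `p1` is the fraction of benchmark errors under 1 kcal/mol and `q95` the error amplitude one should expect not to exceed with 95% confidence; on a real reference dataset both would carry a standard error that shrinks with dataset size, as the record notes.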

  11. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    Science.gov (United States)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-12-01

    Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.

  12. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    International Nuclear Information System (INIS)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-01-01

    Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents. (topical review)

  13. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    Science.gov (United States)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and the minimization of parameter trade-offs in attenuation studies. We searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested on a subset of the screened data and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivities, on any possible dependence of pulse durations on the wave type, on scaling relations between pulse durations and event sizes, and on the estimated source static stress drops.

  14. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiq…

  15. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Sensitivity (Se) and specificity (Sp) are analyzed separately. A nonparametric hypothesis test, based on Bootstrap resampling, is also used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional spectral method alone. Regarding sensitivity, the EMD method also performs better in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
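The spectral method that the EMD block feeds into can be sketched on a beat-by-beat amplitude series; the noise band and score normalization below are simplified assumptions, and the EMD denoising stage is not shown:

```python
import numpy as np

def twa_kscore(beat_amplitudes):
    """Textbook spectral method for TWA: power at 0.5 cycles/beat of a
    beat-by-beat T-wave amplitude series, compared against a spectral
    noise band. Simplified sketch of the score normalization."""
    x = np.asarray(beat_amplitudes, dtype=float)
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)
    alternans = spec[-1]                    # Nyquist bin = 0.5 cycles/beat
    noise = spec[len(spec) // 2 : -1]       # reference noise band
    return (alternans - noise.mean()) / (noise.std() + 1e-12)

# An every-other-beat fluctuation plus mild noise yields a large score;
# a non-alternating series would score near zero.
beats = 100.0 + np.array([1.0, -1.0] * 64)
beats = beats + 0.01 * np.random.default_rng(4).standard_normal(128)
score = twa_kscore(beats)
```

In the benchmarked pipeline, the EMD-based block denoises the ECG before the beat amplitudes are extracted, which is what drives the specificity gains reported above.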

  16. Empirical methods for controlling false positives and estimating confidence in ChIP-Seq peaks

    Directory of Open Access Journals (Sweden)

    Courdy Samir J

    2008-12-01

Full Text Available Abstract Background High throughput signature sequencing holds many promises, one of which is the ready identification of in vivo transcription factor binding sites, histone modifications, changes in chromatin structure and patterns of DNA methylation across entire genomes. In these experiments, chromatin immunoprecipitation is used to enrich for particular DNA sequences of interest and signature sequencing is used to map the regions to the genome (ChIP-Seq). Elucidation of these sites of DNA-protein binding/modification is proving instrumental in reconstructing networks of gene regulation and chromatin remodelling that direct development, response to cellular perturbation, and neoplastic transformation. Results Here we present a package of algorithms and software that makes use of control input data to reduce false positives and estimate confidence in ChIP-Seq peaks. Several different methods were compared using two simulated spike-in datasets. Use of control input data and a normalized difference score were found to more than double the recovery of ChIP-Seq peaks at a 5% false discovery rate (FDR). Moreover, both a binomial p-value/q-value and an empirical FDR were found to predict the true FDR within 2-3-fold and are more reliable estimators of confidence than a global Poisson p-value. These methods were then used to reanalyze Johnson et al.'s neuron-restrictive silencer factor (NRSF) ChIP-Seq data without relying on extensive qPCR validated NRSF sites and the presence of NRSF binding motifs for setting thresholds. Conclusion The methods developed and tested here show considerable promise for reducing false positives and estimating confidence in ChIP-Seq data without any prior knowledge of the ChIP target. They are part of a larger open source package freely available from http://useq.sourceforge.net/.
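The empirical FDR idea in this record has a simple core: peaks called in the control input (assumed to contain no true binding) estimate the false-positive count at any score threshold. A minimal sketch with invented toy scores (the real package operates on mapped reads and windowed peak scores):

```python
import numpy as np

def empirical_fdr(chip_scores, control_scores, threshold):
    """Empirical FDR at a score threshold: number of calls in the control
    input (assumed all false) relative to the number of calls in ChIP."""
    chip_calls = np.sum(np.asarray(chip_scores) >= threshold)
    control_calls = np.sum(np.asarray(control_scores) >= threshold)
    if chip_calls == 0:
        return 0.0
    return min(1.0, control_calls / chip_calls)

# Toy example: ChIP contains background plus a population of true peaks
# with shifted scores; the control is background only.
rng = np.random.default_rng(1)
control = rng.exponential(1.0, 10_000)
chip = np.concatenate([rng.exponential(1.0, 9_000),       # background
                       5 + rng.exponential(1.0, 1_000)])  # true peaks
fdr = empirical_fdr(chip, control, threshold=5.0)
print(0 < fdr < 0.2)   # True: most calls above 5.0 are true peaks
```

In practice the threshold is swept and the smallest threshold meeting the target FDR (e.g. 5%) is reported.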

  17. Mathematical method to build an empirical model for inhaled anesthetic agent wash-in

    Directory of Open Access Journals (Sweden)

    Grouls René EJ

    2011-06-01

Full Text Available Abstract Background The wide range of fresh gas flow - vaporizer setting (FGF - FD) combinations used by different anesthesiologists during the wash-in period of inhaled anesthetics indicates that the selection of FGF and FD is based on habit and personal experience. An empirical model could rationalize FGF - FD selection during wash-in. Methods During model derivation, 50 ASA PS I-II patients received desflurane in O2 with an ADU® anesthesia machine with a random combination of a fixed FGF - FD setting. The resulting course of the end-expired desflurane concentration (FA) was modeled with Excel Solver, with patient age, height, and weight as covariates; NONMEM was used to check for parsimony. The resulting equation was solved for FD, and prospectively tested by having the formula calculate the FD to be used by the anesthesiologist after randomly selecting a FGF, a target FA (FAt), and a specified time interval (1 - 5 min) after turning on the vaporizer after which FAt had to be reached. The following targets were tested: desflurane FAt 3.5% after 3.5 min (n = 40), 5% after 5 min (n = 37), and 6% after 4.5 min (n = 37). Results Solving the equation derived during model development for FD yields FD = -(e^(-FGF*-0.23) + FGF*0.24) * (e^(FGF*-0.23)*FAt*Ht*0.1 - e^(FGF*-0.23)*FGF*2.55 + 40.46 - e^(FGF*-0.23)*40.46 + e^(FGF*-0.23 + Time/-4.08)*40.46 - e^(Time/-4.08)*40.46) / ((-1 + e^(FGF*0.24)) * (-1 + e^(Time/-4.08)) * 39.29). Only height (Ht) was retained as a significant covariate. Median performance error and median absolute performance error were -2.9 and 7.0% in the 3.5% after 3.5 min group, -3.4 and 11.4% in the 5% after 5 min group, and -16.2 and 16.2% in the 6% after 4.5 min group, respectively. Conclusions An empirical model can be used to predict the FGF - FD combinations that attain a target end-expired anesthetic agent concentration with clinically acceptable accuracy within the first 5 min of the start of administration. The sequences are easily calculated in an Excel file and simple to
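The general fitting-and-inversion workflow of the record can be illustrated with a deliberately simplified wash-in model. The single-exponential form below is an assumption for illustration only; the paper's fitted model also depends on FGF and patient height, and all numbers here are invented.

```python
import numpy as np

# Illustrative wash-in curve: FA(t) = FA_target * (1 - exp(-t / tau)).
tau_true, fa_target = 2.0, 6.0                 # minutes, vol%
t = np.linspace(0.25, 5.0, 20)
fa = fa_target * (1 - np.exp(-t / tau_true))   # "measured" concentrations

# Recover tau by linearizing: ln(1 - FA/FA_target) = -t / tau
slope = np.polyfit(t, np.log(1 - fa / fa_target), 1)[0]
tau_fit = -1.0 / slope
print(round(tau_fit, 3))   # 2.0

# Inverting the fitted model answers the clinical question: how long
# until a target fraction of FA_target is reached?
frac = 0.9
t_needed = -tau_fit * np.log(1 - frac)
```

The paper does the analogous inversion on its richer model, solving for the vaporizer setting FD that reaches FAt in a chosen time.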

  18. Empirical method to calculate Clinch River Breeder Reactor (CRBR) inlet plenum transient temperatures

    International Nuclear Information System (INIS)

    Howarth, W.L.

    1976-01-01

Sodium flow enters the CRBR inlet plenum via three loops or inlets. An empirical equation was developed to calculate transient temperatures in the CRBR inlet plenum from known loop flows and temperatures. The constants in the empirical equation were derived from 1/4-scale Inlet Plenum Model tests using water as the test fluid. The sodium temperature distribution was simulated by an electrolyte. Step electrolyte transients at 100 percent model flow were used to calculate the equation constants. Step electrolyte runs at 50 percent and 10 percent flow confirmed that the constants were independent of flow. Also, a transient was tested which simultaneously varied flow rate and electrolyte concentration. Agreement of the test results with the empirical equation results was good, which verifies the empirical equation.

  19. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
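The simplest case the book treats, a confidence region for a univariate mean, can be sketched directly: weights w_i = 1/(n(1 + lambda (x_i - mu))) maximize the empirical likelihood subject to the mean constraint, with lambda found numerically. The data below are invented.

```python
import numpy as np

def el_logratio(x, mu, iters=50):
    """Empirical log-likelihood ratio log R(mu) for a univariate mean:
    weights w_i = 1/(n*(1 + lam*(x_i - mu))), lam solved by Newton's
    method on the estimating equation sum z_i/(1 + lam*z_i) = 0."""
    z = np.asarray(x, dtype=float) - mu
    lam = 0.0
    for _ in range(iters):
        d = 1.0 + lam * z
        g = np.sum(z / d)              # estimating equation
        h = -np.sum(z**2 / d**2)       # its derivative (always negative)
        lam -= g / h
    return -np.sum(np.log1p(lam * z))  # log R(mu) = sum log(n*w_i)

x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])
print(abs(el_logratio(x, x.mean())) < 1e-9)   # True: R = 1 at the sample mean
# -2*log R(mu) is asymptotically chi-square(1), which calibrates the
# confidence region for mu:
print(-2 * el_logratio(x, 1.3) > 0)           # True
```

A 95% confidence region is the set of mu values with -2 log R(mu) below the chi-square(1) quantile 3.84.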

  20. SiFTO: An Empirical Method for Fitting SN Ia Light Curves

    Science.gov (United States)

    Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.

    2008-07-01

    We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

  1. Empirical validation of a real options theory based method for optimizing evacuation decisions within chemical plants.

    Science.gov (United States)

    Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K

    2011-02-15

This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed, and discussed with respect to the implications and management of evacuation decisions in chemical companies. It has been shown in this article that in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may be very different from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions are tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Sensitivity of ab Initio vs Empirical Methods in Computing Structural Effects on NMR Chemical Shifts for the Example of Peptides.

    Science.gov (United States)

    Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian

    2014-01-14

The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with an increasing number of snapshots were performed for two conformers of an eight-amino-acid sequence. In conclusion, upon structural changes the empirical approaches provide neither the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation, such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.

  3. Bioactive conformational generation of small molecules: A comparative analysis between force-field and multiple empirical criteria based methods

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-11-01

Full Text Available Abstract Background Conformational sampling for small molecules plays an essential role in the drug discovery research pipeline. Based on a multi-objective evolution algorithm (MOEA), we developed a conformational generation method called Cyndi in a previous study. In this work, in addition to the Tripos force field of the previous version, Cyndi was updated by incorporating the MMFF94 force field to assess the conformational energy more rationally. With the two force fields and a larger dataset of 742 bioactive conformations of small ligands extracted from the PDB, a comparative analysis was performed between the pure force-field-based method (FFBM) and the multiple-empirical-criteria-based method (MECBM) hybridized with different force fields. Results Our analysis reveals that incorporating multiple empirical rules can significantly improve the accuracy of conformational generation. MECBM, which takes both empirical and force field criteria as the objective functions, can reproduce about 54% (within 1 Å RMSD) of the bioactive conformations in the 742-molecule test set, much higher than the pure force field method (FFBM, about 37%). On the other hand, MECBM achieved a more complete and efficient sampling of the conformational space because the average size of the unique conformation ensemble per molecule is about 6 times larger than that of FFBM, while the time scale for conformational generation is nearly the same as for FFBM. Furthermore, as a complementary comparison between the methods with and without empirical biases, we also tested the performance of the three conformational generation methods in MacroModel in combination with different force fields. Compared with the methods in MacroModel, MECBM is more competitive in retrieving the bioactive conformations in terms of accuracy but has much lower computational cost.
Conclusions By incorporating different energy terms with several empirical criteria, the MECBM method can produce more reasonable conformational ensembles.
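The 54% vs 37% reproduction rates above rest on a "within 1 Å RMSD of the bioactive conformation" criterion, which can be sketched as follows. This is a simplified illustration with invented coordinates; real comparisons first superpose the structures and account for molecular symmetry.

```python
import numpy as np

def rmsd(a, b):
    """Plain coordinate RMSD between two conformations of the same atoms
    (no superposition step in this sketch)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def reproduction_rate(bioactive, ensembles, cutoff=1.0):
    """Fraction of bioactive conformations reproduced by at least one
    generated conformer within the RMSD cutoff (Angstrom)."""
    hits = sum(
        any(rmsd(ref, conf) <= cutoff for conf in ensemble)
        for ref, ensemble in zip(bioactive, ensembles)
    )
    return hits / len(bioactive)

ref = np.zeros((5, 3))        # hypothetical 5-atom reference conformation
near = ref + 0.3              # within 1 A of the reference
far = ref + 2.0               # outside the cutoff
print(reproduction_rate([ref], [[near, far]]))   # 1.0
```

Run over 742 reference ligands with each method's ensembles, this yields exactly the kind of percentage the abstract reports.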

  4. Some features and applications of an empirical approach to the treatment of measurement data (DoD-method)

    International Nuclear Information System (INIS)

    Beyrich, W.; Golly, W.; Spannagel, G.

    1981-01-01

An empirical method of data evaluation is described which allows the derivation of meaningful estimates of the variances of data groups even if they comprise extreme values (outliers). It can be applied to problems usually treated by variance analysis and seems suitable for investigating and describing the state of the art of the various analytical methods applied in international safeguards. Some examples, based on data from the SALE program, are given to illustrate the procedure.

  5. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    Science.gov (United States)

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
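The borrowing-strength idea behind this kind of empirical Bayes update can be sketched with a normal-normal shrinkage model: the between-study variance is estimated from the prior scans by method of moments, and the current study's statistic is pulled toward the pooled mean. This is a hedged stand-in for the authors' procedure, and all statistics below are invented.

```python
import numpy as np

def eb_update(z_current, se_current, z_prior_studies, se_prior):
    """Shrink the current study's linkage statistic toward the mean of
    prior genome scans under a normal-normal empirical Bayes model
    (method-of-moments estimate of the between-study variance)."""
    z = np.asarray(z_prior_studies, float)
    se = np.asarray(se_prior, float)
    mu = np.mean(z)
    tau2 = max(np.var(z, ddof=1) - np.mean(se**2), 0.0)  # between-study var
    w = tau2 / (tau2 + se_current**2)                    # shrinkage weight
    post_mean = w * z_current + (1 - w) * mu
    post_var = w * se_current**2
    return post_mean, post_var

# A striking signal (z = 3.5) in the current scan, milder signals at the
# same locus in three earlier scans:
post_mean, post_var = eb_update(3.5, 1.0, [1.0, 3.2, 2.1], [0.3, 0.3, 0.3])
print(2.1 < post_mean < 3.5)   # True: shrunk toward the prior mean of 2.1
```

Large between-study heterogeneity (tau2) keeps the weight on the current study; homogeneous prior scans pull harder, which is the behavior the abstract describes.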

  6. SENSITIVITY ANALYSIS IN FLEXIBLE PAVEMENT PERFORMANCE USING MECHANISTIC EMPIRICAL METHOD (CASE STUDY: CIREBON–LOSARI ROAD SEGMENT, WEST JAVA

    Directory of Open Access Journals (Sweden)

    E. Samad

    2012-02-01

Full Text Available The Cirebon – Losari flexible pavement, located on the north coast of Java, Indonesia, is severely damaged by overloaded vehicles passing along the road. Improved pavement design and analysis methods are therefore needed. The effects of increased loads and of material quality can be evaluated through the Mechanistic-Empirical (M-E) method. M-E software such as KENLAYER has been developed to facilitate the transition from empirical to mechanistic design methods. From the KENLAYER analysis, it can be concluded that the effect of overloading on pavement structure performance is difficult to minimize even though the first two layers have relatively high moduli of elasticity. Overloading of 150%, 200%, and 250% has a very significant effect, reducing the pavement design life by 84%, 95%, and 98%, respectively. For the purpose of increasing the pavement service life, it is more effective to manage the allowable load.
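The magnitude of the reported life reductions can be cross-checked against the classical "fourth-power" load-equivalency rule of thumb. This rule is an assumption introduced here for illustration, not part of the KENLAYER analysis: damage per pass is taken to scale with (axle load ratio)^4, so remaining life scales with its inverse.

```python
# Fourth-power rule of thumb: life_fraction = (load ratio)^-4.
for overload in (1.5, 2.0, 2.5):          # 150%, 200%, 250% of design load
    life_fraction = overload ** -4.0
    print(f"{overload:.1f}x load -> {1 - life_fraction:.0%} life reduction")
```

This yields roughly 80%, 94%, and 97% reductions, close to the 84%, 95%, and 98% that the mechanistic-empirical analysis reports, which suggests the KENLAYER results are in a physically plausible range.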

  7. X-ray spectrum analysis of multi-component samples by a method of fundamental parameters using empirical ratios

    International Nuclear Information System (INIS)

    Karmanov, V.I.

    1986-01-01

A variant of the fundamental parameter method is suggested, based on empirical relations between the corrections for absorption and additional excitation and the absorbing characteristics of samples. The method is used for X-ray fluorescence analysis of multi-component samples of charges for welding electrodes. It is shown that application of the method is justified only for determining the titanium, calcium, and silicon content in charges, taking into account only the corrections for absorption. Iron and manganese content can be calculated by the simple external standard method.

  8. Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification.

    Science.gov (United States)

    Wixted, John T; Mickes, Laura

    2018-01-01

    Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
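The two measures contrasted in this record can be related directly under the equal-variance Gaussian signal-detection model: d' is computed from a hit/false-alarm pair, and the area under the implied ROC curve is Phi(d'/sqrt(2)). These are standard textbook formulas, not the authors' analysis, and the rates below are invented.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Theoretical discriminability under the equal-variance Gaussian
    model: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def auc_equal_variance(dp):
    """Area under the ROC curve implied by d' in the equal-variance
    model: AUC = Phi(d' / sqrt(2))."""
    return NormalDist().cdf(dp / 2 ** 0.5)

dp = d_prime(0.80, 0.30)
print(round(dp, 3), round(auc_equal_variance(dp), 3))   # 1.366 0.833
```

The point of the record is precisely that this one-to-one mapping holds only within a model: under unequal-variance or other models, the same empirical ROC yields different underlying d'-type measures, while the empirical AUC stays fixed.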

  9. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS, following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin, a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  10. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

Full Text Available An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS, following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin, a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  11. AN EMPIRICAL METHOD FOR MATERIALITY: WOULD CONFLICT OF INTEREST DISCLOSURES CHANGE PATIENT DECISIONS?

    Science.gov (United States)

Spece, Roy; Yokum, David; Okoro, Andrea-Gale; Robertson, Christopher

    2014-01-01

    The law has long been concerned with the agency problems that arise when advisors, such as attorneys or physicians, put themselves in financial relationships that create conflicts of interest. If the financial relationship is "material" to the transactions proposed by the advisor, then non-disclosure of the relationship may be pertinent to claims of malpractice, informed consent, and even fraud, as well as to professional discipline. In these sorts of cases, materiality is closely related to the question of causation, roughly turning on whether the withheld information might have changed the decision of a reasonable advisee (i.e., a patient). The injured plaintiff will predictably testify that the information would have impacted his or her choice, but that self-serving testimony may be unreliable. The fact finder is left to speculate about the counterfactual world in which the information was disclosed. This Article shows how randomized vignette-based experimentation may create a valuable form of evidence to address these questions, for both litigation and policymaking. To demonstrate this method and investigate conflicts of interest in healthcare in particular, we recruited 691 human subjects and asked them to imagine themselves as patients facing a choice about whether to undergo a cardiac stenting procedure recommended by a cardiologist. We manipulated the vignettes in a 2 x 3 between-subjects design, where we systematically varied the appropriateness of the proposed treatment, which was described in terms of patient risk without the procedure (low or high), and manipulated the type of disclosure provided by the physician (none, standard, or enhanced). We used physician ownership of the specialty hospital where the surgery would be performed as the conflict of interest, disclosed or not, and the "enhanced" disclosure included notice that such relationships have been associated with biases in prescribing behavior. We found that the mock patients were

  12. Creating a memory of causal relationships an integration of empirical and explanation-based learning methods

    CERN Document Server

    Pazzani, Michael J

    2014-01-01

This book presents a theory of learning new causal relationships by making use of perceived regularities in the environment, general knowledge of causality, and existing causal knowledge. Integrating ideas from the psychology of causation and machine learning, the author introduces a new learning procedure called theory-driven learning that uses abstract knowledge of causality to guide the induction process. Known as OCCAM, the system uses theory-driven learning when new experiences conform to common patterns of causal relationships, empirical learning to learn from novel experiences, and explanation-based learning when existing causal knowledge applies.

  13. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

As proton accelerators get larger and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases.
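The idea of fitting a single empirical one-turn map to tracking data can be sketched in one transverse dimension with the phase-space pair (x, x'): collect before/after pairs from the tracking code and fit polynomial map coefficients by least squares. The quadratic map below is synthetic and stands in for a real tracking program; nothing here is the paper's actual lattice.

```python
import numpy as np

rng = np.random.default_rng(3)
x0 = rng.uniform(-1e-3, 1e-3, (500, 2))        # initial (x, x') samples

def tracking_code(v):
    """Stand-in for element-by-element tracking: a hidden one-turn map
    with linear transport plus quadratic (sextupole-like) kicks."""
    x, xp = v[:, 0], v[:, 1]
    return np.column_stack([0.8 * x + 1.2 * xp + 50.0 * x**2,
                            -0.3 * x + 0.8 * xp - 20.0 * x * xp])

x1 = tracking_code(x0)

# Design matrix of monomials up to second order in (x, x')
x, xp = x0[:, 0], x0[:, 1]
A = np.column_stack([x, xp, x**2, x * xp, xp**2])
coef, *_ = np.linalg.lstsq(A, x1, rcond=None)  # one column per output
print(np.allclose(coef[:, 0], [0.8, 1.2, 50.0, 0.0, 0.0], atol=1e-6))
```

Once fitted, evaluating the polynomial map once per turn replaces thousands of element-by-element kicks, which is the speedup the record describes.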

  14. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. This study discusses the applicability of existing validation techniques and presents a new method for quantifying degrees of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval within which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution for the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit the empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and of resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
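The core computation can be sketched as follows: from probabilistic sensitivity analysis (PSA) samples of a model outcome, estimate the probability that the outcome falls inside a pre-established accuracy interval around the empirical observation. This is a simplified sketch of the idea, not the paper's full Bayesian machinery, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
psa = rng.normal(100, 15, 5_000)             # PSA samples of the outcome
observed, deviation = 95.0, 0.25             # +/-25% accuracy interval
lo, hi = observed * (1 - deviation), observed * (1 + deviation)

k = int(np.sum((psa >= lo) & (psa <= hi)))   # samples inside the interval
n = len(psa)
p_valid = (k + 1) / (n + 2)                  # posterior mean under a
                                             # uniform Beta(1, 1) prior
print(0 < p_valid < 1)                       # True
```

Note how the measure penalizes overly wide credible intervals: a model whose PSA distribution is very spread out puts much of its mass outside the accuracy interval even when its mean is on target, driving p_valid down.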

  15. Methodological and Methodical Principles of the Empirical Study of Spiritual Development of a Personality

    Directory of Open Access Journals (Sweden)

    Olga Klymyshyn

    2017-06-01

Full Text Available The article reveals the essence of the methodological principles of the spiritual development of a personality. The results of a theoretical analysis of the psychological content of spirituality are taken into consideration, from the positions of a system-structural approach to the study of personality, age patterns of mental development, the sacramental nature of the human person, and the mechanisms of human spiritual development. An interpretation of spirituality and the spiritual development of a personality is given. The initial principles for organizing empirical research on the spiritual development of a personality (ontogenetic, sociocultural, self-determination, systemic) are presented. Parameters for estimating a personality's spiritual development are described: a general index of the development of spiritual potential; indexes of the development of the ethical, aesthetic, cognitive, and existential components of spirituality; and an index of the religiousness of a personality. Methodological support for psychological diagnostic research is defined.

  16. An empirical method for peak-to-total ratio computation of a gamma-ray detector

    International Nuclear Information System (INIS)

    Cesana, A.; Terrani, M.

    1989-01-01

A simple expression for peak-to-total ratio evaluation of gamma-ray detectors in the energy range 0.3-10 MeV is proposed. The quantities one needs to know for the computation are: detector dimensions and chemical composition, photon cross sections, and an empirical energy-dependent function which is valid for all the detector materials considered. This procedure seems able to produce peak-to-total values with an accuracy comparable to that of the most sophisticated Monte Carlo calculations. It has been tested using experimental peak-to-total values of Ge, NaI, CsI and BGO detectors, but it is reasonable to suppose that it is valid for any detector material. (orig.)

  17. Analyses of reliability characteristics of emergency diesel generator population using empirical Bayes methods

    International Nuclear Information System (INIS)

    Vesely, W.E.; Uryas'ev, S.P.; Samanta, P.K.

    1993-01-01

Emergency Diesel Generators (EDGs) provide backup power to nuclear power plants in case of failure of AC buses. The reliability of EDGs is important for assuring the response to loss-of-offsite-power accident scenarios, a dominant contributor to plant risk. The reliable performance of EDGs has been of concern to both regulators and plant operators. In this paper the authors present an approach and results from the analysis of failure data from a large population of EDGs. They used an empirical Bayes approach to obtain both the population distribution and the individual failure probabilities from EDG failure-to-start and load-run data over 4 years for 194 EDGs at 63 plant units.
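The two outputs the record mentions, a population distribution and per-unit failure probabilities, can be sketched with a beta-binomial empirical Bayes model: fit a beta prior to the population of raw per-EDG estimates, then compute each EDG's posterior. This is a simplified stand-in for the paper's method, and the demand/failure counts below are hypothetical.

```python
import numpy as np

def beta_mm_prior(p_hat):
    """Method-of-moments beta prior fitted to the population of raw
    per-EDG failure-probability estimates."""
    m, v = np.mean(p_hat), np.var(p_hat, ddof=1)
    common = m * (1 - m) / v - 1       # requires v < m*(1 - m)
    return m * common, (1 - m) * common  # (alpha, beta)

# Hypothetical per-EDG demands and failures-to-start:
demands = np.array([50, 60, 45, 80, 55, 70])
fails = np.array([1, 3, 0, 2, 1, 4])
alpha, beta = beta_mm_prior(fails / demands)

# Posterior mean failure probability for each EDG shrinks the raw
# estimate toward the population mean:
post = (alpha + fails) / (alpha + beta + demands)
print(np.all((post > 0) & (post < 1)))   # True
```

Note the practical benefit: the EDG with zero observed failures still gets a nonzero posterior failure probability, and the worst raw estimate is moderated by the population evidence.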

  18. Timing and Targeting of PSS Methods and Tools: An Empirical Study amongst Academic Contributors

    DEFF Research Database (Denmark)

    Nøhr Hinz, Hector; Bey, Niki; McAloone, Tim C.

    2013-01-01

The emergence of product/service-systems has meant that development methods for such systems have emerged from academia. This paper investigates existing methods that are aimed at developing product/service-systems. Two aspects are determined for each examined method. The first aspect surveyed is when a given method is meant to be used in the development of a product/service-system; this aspect has been determined through a qualitative assessment of each method. The second aspect surveyed is which persons in an organisation are seen as the main drivers in the use of the methods. To gain this insight, a questionnaire for each method was conducted with the authors of the methods as participants. The main finding indicates that current PSS methods cannot thoroughly support the development of product/service-systems, as their specificity is too low, and that the methods need

  19. Evaluation and Recalibration of Empirical Constant for Estimation of Reference Crop Evapotranspiration against the Modified Penman Method

    Science.gov (United States)

    Sasireka, K.; Jagan Mohan Reddy, C.; Charan Reddy, C.; Ramakrishnan, K.

    2017-07-01

    The major water demand in our country is irrigation demand. Given the low irrigation potential and limited water resources, water must be used economically and efficiently. This may be achieved by using the latest methods for determining crop water requirements and applying proper water-management practices. Evapotranspiration (ET) is the basis for calculating crop water requirements. The popular empirical equations for reference crop evapotranspiration (ETr) fall into three categories: temperature-based, radiation-based, and combined methods. These methods are site specific; hence it is necessary to recalibrate their coefficients before applying them in India. In the present paper, the standard combined method, the FAO modified Penman method, was used to recalibrate the constants in temperature-based (TB) methods; it can also be used to determine ETr for the selected station. Four TB evapotranspiration models, the Blaney-Criddle, Romanenko, Kharrufa, and Thornthwaite methods, were recalibrated and the constants in each method redefined using data from the Lekkur station, Cuddalore district, India. The results show that large errors occurred when ETr was calculated with the original constants; regression equations were therefore developed to minimise these deviations. Of the four methods, the Blaney-Criddle method was found to suit the selected region best.
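The recalibration step can be sketched for the Blaney-Criddle form ETr = p(aT + b), refitting the two coefficients by least squares against a reference ETr series. The FAO-24 defaults (a = 0.46, b = 8.13) and all data below are illustrative stand-ins, not the Lekkur station records or the paper's exact regression:

```python
import numpy as np

# Blaney-Criddle form: ETr = p * (a*T + b); standard a = 0.46, b = 8.13.
# Recalibrate (a, b) by least squares against a reference ETr series
# (synthetic stand-ins for FAO modified Penman estimates).
T = np.array([22.0, 25.5, 28.0, 30.5, 27.0, 24.0])   # mean temperature, deg C
p = np.array([0.27, 0.28, 0.29, 0.30, 0.29, 0.28])   # daylight-hours fraction
et_ref = np.array([3.9, 4.6, 5.2, 5.8, 5.0, 4.3])    # reference ETr, mm/day

# The model is linear in (a, b): ETr = a*(p*T) + b*p.
X = np.column_stack([p * T, p])
a, b = np.linalg.lstsq(X, et_ref, rcond=None)[0]
et_fit = X @ np.array([a, b])
```

By construction, the refitted coefficients can never fit the reference series worse than the standard constants do.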

  20. Investigation of optical effects in silicon quantum dots by using an empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Ghoshal, S. K.; Sahar, M. R.; Rohani, M. S. [Universiti Teknologi Malaysia, Johor (Malaysia)

    2011-02-15

    A computer simulation using a pseudopotential approach has been carried out to investigate the band gap as a function of the size and the shape of small silicon (Si) dots having 3 to 44 atoms per dot, with and without surface passivation. We used an empirical pseudopotential Hamiltonian, a plane-wave basis expansion, and a basic tetrahedral structure with undistorted local bonding configurations. In our simulation, the structures of the quantum dots were relaxed and optimized before and after passivation. We found that the gap increased more for an oxygenated surface than for a hydrogenated one. Thus, both quantum confinement and surface passivation determine the optical and electronic properties of Si quantum dots. Visible luminescence is probably due to radiative recombination of electrons and holes in the quantum-confined nanostructures. The effect of passivating the surface dangling bonds with hydrogen and oxygen atoms and the role of surface states in the gap energy were also examined. We investigated the entire energy spectrum, from the very low-lying ground state to the very high-lying excited states. The results for the size of the gap, the density of states, the oscillator strength, and the absorption coefficient as functions of dot size are presented. The importance of confinement and the role of surface passivation in the optical effects are also discussed.

  1. Decision-oriented environmental assessment: An empirical study of its theory and methods

    International Nuclear Information System (INIS)

    Pischke, Frederik; Cashmore, Matthew

    2006-01-01

    The potential advantages of a decision-oriented theory of environmental assessment have long been recognised, but it is only in recent years that this topic has received concerted attention. This research advanced contemporary debate on environmental assessment through an empirically informed evaluation of strategic theoretical and methodological issues associated with the practical application of decision-oriented theory. This was undertaken by critically analysing the decision-oriented Environmental Impact Assessment system of the German Development Cooperation (a bilateral development assistance agency) using a modified version of a recent conceptual and methodological development, Analytical Strategic Environmental Assessment. The results indicate that some aspects of decision-oriented theory offer considerable potential for environmental assessment process management and should be employed routinely. Yet uncertainty remains about whether certain core concepts, notably the detailed a priori description of decision processes, can be achieved in practice. The analysis also indicates that there is considerably more common ground in many contemporary debates about environmental assessment than the literature, which has tended towards polarisation, suggests. The significance of this research is that it recognises and highlights the contribution of decision-oriented theory to refocusing attention on the substantive intent of this globally significant policy tool.

  2. The performance of selected semi-empirical and DFT methods in studying C₆₀ fullerene derivatives.

    Science.gov (United States)

    Sikorska, Celina; Puzyn, Tomasz

    2015-11-13

    The capability of reproducing the open circuit voltages (V(oc)) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (V(oc)), whereas the B3LYP/6-31G(d) approach has been proven to give highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). To express the structural features of the studied compounds, we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), although they significantly overestimate the energy of the highest occupied molecular orbital (E(HOMO)). The use of the B3LYP functional, however, is recommended for studying methanofullerenes that closely resemble the structure of PCBM, and for their modifications.

  3. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.

  4. Different methods for ethical analysis in health technology assessment: an empirical study.

    Science.gov (United States)

    Saarni, Samuli I; Braunack-Mayer, Annette; Hofmann, Bjørn; van der Wilt, Gert Jan

    2011-10-01

    Ethical analysis can highlight important ethical issues related to implementing a technology, values inherent in the technology itself, and value decisions underlying the health technology assessment (HTA) process. Ethical analysis is a well-acknowledged part of HTA, yet it is seldom included in practice. One reason for this is a lack of knowledge about the properties of, and differences between, the available methods. This study compares different methods for ethical analysis within HTA. Ethical issues related to bariatric (obesity) surgery were independently evaluated using the axiological, casuist, principlist, and EUnetHTA models for ethical analysis within HTA. The methods and results are presented and compared. Despite varying theoretical underpinnings and practical approaches, the four methods identified similar themes: personal responsibility, self-infliction, discrimination, justice, public funding, and stakeholder involvement. The axiological and EUnetHTA models identified a wider range of arguments, whereas casuistry and principlism concentrated on analyzing a narrower set of arguments deemed more important. Different methods can be successfully used for conducting ethical analysis within HTA. Although our study does not show that different methods in ethics always produce similar results, it supports the view that different methods of ethics can yield relevantly similar results. This suggests that the key conclusions of ethical analyses within HTA can be transferable between methods and countries. The systematic and transparent use of some method of ethics appears more important than the choice of the exact method.

  5. A new empirical method to predict carbon dioxide evasion from boreal lakes

    Science.gov (United States)

    Hastie, Adam; Lauerwald, Ronny; Weyhenmeyer, Gesa; Sobek, Sebastian; Regnier, Pierre

    2016-04-01

    Carbon dioxide evasion from lakes (FCO2) is an important component of the global carbon budget. In this study, empirical models have been developed to predict CO2 partial pressure (pCO2) in boreal lakes at the 0.5° grid scale, with the aim of producing the first map of FCO2 from these high latitude aquatic systems. Approximately 57,000 samples of lake pCO2 from Sweden and Finland were used to train the models. Significant seasonality in pCO2 was identified and thus data were split into two categories based on water temperature; 0-4.5 °C and >4.5 °C. The lake pCO2 data and various globally available, environmental parameters such as elevation, terrestrial net primary production (NPP) and climate (temperature T, rainfall R) were spatially aggregated to a 0.5° resolution. Preliminary results from multiple regression analyses suggest that a significant proportion of the variability in boreal lake pCO2 can be explained using these globally available parameters. For water temperatures above 4.5 °C, the explained proportion of the variability in lake pCO2 is particularly high (r² = 0.7). Following further refinement and validation, a map of estimated lake pCO2 for the entire boreal region will be established. This map will then be combined with lake surface area data from the GLObal WAter BOdies database (GLOWABO, Verpoorter et al., 2014), and a calculation of gas exchange velocity k to produce the first map of boreal lake FCO2. Finally, IPCC projections of the selected environmental predictors (T, NPP, and R) will be used to estimate future FCO2 from boreal lakes and their sensitivity to climate change.
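The kind of multiple regression described above can be sketched as follows. The predictors match those named in the abstract (temperature, NPP, rainfall), but the data are synthetic, so the fitted coefficients and fit quality are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(-2, 18, n)        # mean air temperature, deg C
npp  = rng.uniform(100, 700, n)      # terrestrial NPP, gC/m2/yr
rain = rng.uniform(300, 1200, n)     # annual rainfall, mm

# Synthetic "observed" lake pCO2 (uatm): linear in the predictors + noise.
pco2 = 600 + 25 * temp + 0.8 * npp + 0.3 * rain + rng.normal(0, 80, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), temp, npp, rain])
beta, *_ = np.linalg.lstsq(X, pco2, rcond=None)
pred = X @ beta
r2 = 1 - ((pco2 - pred) ** 2).sum() / ((pco2 - pco2.mean()) ** 2).sum()
```

In practice the regression would be fit on the 0.5°-aggregated observations, and r² assessed separately for the two temperature categories.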

  6. Revision of the South African flexible pavement design method: mechanistic-empirical components

    CSIR Research Space (South Africa)

    Theyse, HL

    2007-09-01

    ... and damage models or transfer functions. This method was implemented in a number of software packages from the late 1990s onwards, which exposed the method to a wide user group. The method has therefore come under increasing scrutiny and criticism in the recent past: it yields counter-intuitive results in some cases, provides unrealistic structural capacity estimates for certain pavement types, and does not assess all materials equally, based on their true performance potential. In addition to these problems, the method also focuses largely...

  7. Empirical comparison of four baseline covariate adjustment methods in analysis of continuous outcomes in randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhang S

    2014-07-01

    Shiyuan Zhang, James Paul, Manyat Nantha-Aree, Norman Buckley, Uswa Shahzad, Ji Cheng, Justin DeBeer, Mitchell Winemaker, David Wismer, Dinshaw Punthakee, Victoria Avram, Lehana Thabane (Department of Clinical Epidemiology and Biostatistics and Department of Anesthesia, McMaster University; Biostatistics Unit/Centre for Evaluation of Medicines, St Joseph's Healthcare Hamilton; Population Health Research Institute, Hamilton Health Sciences/McMaster University; Department of Surgery, Division of Orthopaedics, McMaster University, Hamilton, ON, Canada)

    Background: Although seemingly straightforward, the statistical comparison of a continuous variable in a randomized controlled trial that has both a pre- and posttreatment score presents an interesting challenge for trialists. We present here an empirical application of four statistical methods (posttreatment scores with analysis of variance, analysis of covariance, change in scores, and percent change in scores), using data from a randomized controlled trial of postoperative pain in patients following total joint arthroplasty (the Morphine COnsumption in Joint Replacement Patients, With and Without GaBapentin Treatment, a RandomIzed ControlLEd Study [MOBILE] trial).

    Methods: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference of the 1-year postoperative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA with three comparative methods: the posttreatment scores, change in scores, and percentage change from baseline.

    Results: All four methods showed a similar direction of effect; however, the ANCOVA (-3.9; 95% confidence interval [CI]: -9.5, 1.6; P=0.15) and posttreatment score (-4.3; 95% CI: -9.8, 1.2; P=0.12) methods provided the highest precision of estimate compared with the change score (-3.0; 95% CI: -9.9, 3.8; P=0...
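A sketch of the ANCOVA-versus-change-score comparison on synthetic two-arm data (not the MOBILE trial data); ANCOVA is implemented here as an ordinary least-squares regression of the posttreatment score on a group indicator and the baseline covariate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
group = np.repeat([0, 1], n // 2)            # 0 = control, 1 = treatment
baseline = rng.normal(100, 10, n)            # pretreatment score
true_effect = -4.0
post = 20 + 0.8 * baseline + true_effect * group + rng.normal(0, 8, n)

# ANCOVA: regress the posttreatment score on group and baseline.
X = np.column_stack([np.ones(n), group, baseline])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
ancova_effect = beta[1]                      # adjusted group difference

# Change-score analysis: group difference in mean (post - baseline).
change = post - baseline
change_effect = change[group == 1].mean() - change[group == 0].mean()
```

Both estimators target the same treatment effect in a randomized trial; ANCOVA typically has the smaller standard error because it uses the baseline as a covariate rather than subtracting it outright.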

  8. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    Science.gov (United States)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific and the social community, because it is crucial for understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimates of sediment yield or denudation rates in southern-central Italy are generally obtained from simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlets of several drainage basins, or through models based on sediment delivery ratios or soil loss equations. In this work, we study catchment dynamics and estimate sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimates were obtained both from an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield; Ciccacci et al., 1980) and from the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a marked difference between the RUSLE and USPED methods and the estimates based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport, and depositional processes in relation to the maps obtained from these different empirical methods. The studied catchments drain an artificial reservoir (the Camastra dam), for which a detailed evaluation of the amount of historical sediment storage has been collected. The sediment yield estimates obtained by the empirical methods were compared and checked against historical data on sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimates of sediment yield at the scale of large catchments...

  9. Empirical evaluation of decision support systems: Needs, definitions, potential methods, and an example pertaining to waterfowl management

    Science.gov (United States)

    Sojda, R.S.

    2007-01-01

    Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long-term data useful in empirical evaluation of decision support.

  10. A Physically Motivated and Empirically Calibrated Method to Measure the Effective Temperature, Metallicity, and Ti Abundance of M Dwarfs

    Science.gov (United States)

    Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek

    2017-12-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinder similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ~ 25,000), Y-band (~1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in Teff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < Teff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.
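The equivalent-width measurements mentioned above can be illustrated with a trapezoid-rule integral of the line depth over a normalized spectrum. The Gaussian line and wavelength grid below are synthetic; for a Gaussian line of fractional depth d and width sigma, the analytic EW is d·sigma·sqrt(2π):

```python
import numpy as np

def equivalent_width(wave, flux, continuum=1.0):
    """EW = integral of (1 - F/Fc) over the line, via the trapezoid rule."""
    depth_profile = 1.0 - flux / continuum
    return float(np.sum(0.5 * (depth_profile[1:] + depth_profile[:-1])
                        * np.diff(wave)))

# Synthetic normalized spectrum with one Gaussian absorption line.
wave = np.linspace(10000.0, 10010.0, 2001)   # angstroms (Y band)
depth, center, sigma = 0.4, 10005.0, 0.5
flux = 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

ew = equivalent_width(wave, flux)            # approx depth*sigma*sqrt(2*pi)
```

Real measurements additionally require fitting the local continuum and choosing integration limits around each Fe I or Ti I line.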

  11. Interim revision of the South African Mechanistic-Empirical pavement design method for flexible pavements

    CSIR Research Space (South Africa)

    Theyse, HL

    2011-09-01

    Pavement design methods, in combination with network-level management systems, must enable road authorities to develop reliable long-term financial plans based on the estimated structural capacity of the road network. Inaccurate design models...

  12. THE EMPIRICAL METHOD OF INVESTIGATING THE CHILDHOOD SUBCULTURE: GROUP OF CHILDREN BEHAVIOR OBSERVATION IN THE GUESTHOUSE POOL

    Directory of Open Access Journals (Sweden)

    Ms. Yelena N. Suvorkina

    2016-12-01

    The article deals with one of the empirical methods for investigating the childhood subculture: observation. The author outlines its general theoretical basis and gives recommendations for its implementation. Based on observations of children's behavior in a guesthouse pool, it is found that honesty is an important category in the organization of order, given that the subculture of childhood is an open, self-organizing system. In the pool, the children invent a wide variety of games, and the adjacent areas are also involved. The author identifies two borders that exist for the child: a clear (fixed) border, the side of the pool, and an unclear border as a transitional designation of states and qualities (dangerous - safe).

  13. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEPs) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI, with improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method also outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
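The CCA step at the core of the method can be sketched as follows (the MEMD pre-decomposition is omitted). Canonical correlations are computed as the singular values of QxᵀQy after QR-decomposing the centered data and reference matrices; the 4-channel EEG here is synthetic, with a 10 Hz SSVEP buried in noise:

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samp, harmonics=2):
    """Sine/cosine reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n_samp) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

# Synthetic 4-channel EEG with a 10 Hz SSVEP component plus noise.
fs, n_samp = 250, 500
rng = np.random.default_rng(2)
t = np.arange(n_samp) / fs
s10 = np.sin(2 * np.pi * 10 * t)
eeg = np.column_stack([0.5 * s10 + rng.normal(0, 1, n_samp) for _ in range(4)])

candidates = [8.0, 10.0, 12.0, 15.0]
scores = [max_canon_corr(eeg, ssvep_reference(f, fs, n_samp))
          for f in candidates]
detected = candidates[int(np.argmax(scores))]
```

The classified frequency is simply the candidate whose reference set yields the largest canonical correlation; MEMD-CCA applies the same decision rule to selected intrinsic mode functions instead of the raw EEG.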

  14. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.


  16. A comparison of usability methods for testing interactive health technologies: Methodological aspects and empirical evidence

    NARCIS (Netherlands)

    Jaspers, Monique W. M.

    2009-01-01

    OBJECTIVE: Usability evaluation is now widely recognized as critical to the success of interactive health care applications. However, the broad range of usability inspection and testing methods available may make it difficult to decide on a usability assessment plan. To guide novices in the...

  17. Towards an empirical method of usability testing of system parts: a methodological study

    NARCIS (Netherlands)

    Brinkman, W.P.; Haakma, R.; Bouwhuis, D.G.

    2007-01-01

    Current usability evaluation methods are essentially holistic in nature. However, engineers that apply a component-based software engineering approach might also be interested in understanding the usability of individual parts of an interactive system. This paper examines the efficiency dimension of...

  18. Empirical research in service engineering based on AHP and fuzzy methods

    Science.gov (United States)

    Zhang, Yanrui; Cao, Wenfu; Zhang, Lina

    2015-12-01

    In recent years, the management consulting industry has developed rapidly worldwide. Taking a large management consulting company as the research object, this paper establishes an index system for consulting service quality based on a customer satisfaction survey, and evaluates the company's service quality using AHP and fuzzy comprehensive evaluation methods.
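The AHP step can be sketched as follows: priority weights are taken from the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. The criteria and judgments below are hypothetical, not the paper's index system:

```python
import numpy as np

def ahp_weights(A):
    """Priority weights = principal eigenvector of the pairwise comparison
    matrix; also returns Saaty's consistency ratio (CR < 0.1 is the
    conventional acceptability threshold)."""
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)            # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random indices
    return w, ci / ri

# Hypothetical pairwise comparisons for three service-quality criteria
# (responsiveness, expertise, communication) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
```

In a full AHP/fuzzy evaluation, the weights would then be combined with fuzzy membership scores from the satisfaction survey to grade overall service quality.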

  19. An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation

    NARCIS (Netherlands)

    Sijs, J.; Hanebeck, U.; Noack, B.

    2013-01-01

    State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.

  20. An Empirical Comparison of Five Linear Equating Methods for the NEAT Design

    Science.gov (United States)

    Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    In this study, a database containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
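The study's five linear equating methods are not spelled out in this abstract; as one common linear method for the NEAT design, chained linear equating links form X to the anchor V in one group and the anchor to form Y in the other. The moments below are illustrative, not the study's data:

```python
import numpy as np

def linear_link(x, mu_from, sd_from, mu_to, sd_to):
    """Map scores so the linked distribution matches the target mean and SD."""
    return mu_to + (sd_to / sd_from) * (x - mu_from)

def chained_linear_equate(x, stats_p, stats_q):
    """Chained linear equating for NEAT: X -> anchor V (group P),
    then V -> Y (group Q). stats_* are dicts of score moments."""
    v = linear_link(x, stats_p["mu_x"], stats_p["sd_x"],
                    stats_p["mu_v"], stats_p["sd_v"])
    return linear_link(v, stats_q["mu_v"], stats_q["sd_v"],
                       stats_q["mu_y"], stats_q["sd_y"])

# Illustrative moments for a 50-item test with an internal anchor.
stats_p = {"mu_x": 30.0, "sd_x": 6.0, "mu_v": 12.0, "sd_v": 3.0}
stats_q = {"mu_v": 13.0, "sd_v": 3.2, "mu_y": 32.0, "sd_y": 7.0}

y = chained_linear_equate(np.array([24.0, 30.0, 36.0]), stats_p, stats_q)
```

Other linear NEAT methods (e.g. Tucker or Levine) differ in how they use the anchor to estimate synthetic-population means and variances, but all reduce to a linear transformation of this kind.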

  1. Optical absorption spectra and g factor of MgO:Mn2+ explored by ab initio and semi-empirical methods

    Science.gov (United States)

    Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.

    2018-02-01

    In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters, and g factor of Mn2+ (3d5) ions doped in a MgO host crystal. The proposed technique combines two methods: the ab initio multireference (MR) method and the semi-empirical ligand field (LF) method in the framework of the exchange charge model (ECM). Both methods are applied to the [MnO6]10- cluster embedded in an extended point-charge field of the host matrix ligands, constructed with the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by the structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). The ab initio MR wave function approaches, namely complete active space self-consistent field (CASSCF), N-electron valence second order perturbation theory (NEVPT2), and spectroscopy oriented configuration interaction (SORCI), are used for the calculations. Scalar relativistic effects have also been taken into account through the second order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra, and perturbation theory (PT) was employed for the g-factor calculation in the semi-empirical LFT. The results of each of the aforementioned types of calculations are discussed, and the comparison between the calculated and experimental results shows reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded cluster models of doped metal oxides.

  2. Empirical evaluation of a practical indoor mobile robot navigation method using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2010-01-01

    This video presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as occupancy grids by a laser range finder to represent local information...... about partial areas. The global topological map is used to indicate the connectivity of the ‘places-of-interest’ in the environment and the interconnectivity of the local maps. Visual tags on the ceiling detected by the robot provide valuable information and contribute to reliable localization...... that the method is implemented successfully on a physical robot in a hospital environment, which provides a practical solution for indoor navigation....

  3. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support.

    Science.gov (United States)

    van den Berg, Yvonne H M; Gommans, Rob

    2017-09-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.

  4. SOFTWARE EFFORT PREDICTION: AN EMPIRICAL EVALUATION OF METHODS TO TREAT MISSING VALUES WITH RAPIDMINER ®

    OpenAIRE

    OLGA FEDOTOVA; GLADYS CASTILLO; LEONOR TEIXEIRA; HELENA ALVELOS

    2011-01-01

    Missing values are a common problem in data analysis in all areas, and software engineering is no exception. In particular, missing data is a widespread phenomenon observed during the elaboration of effort prediction models (EPMs) required for budget, time and functionality planning. The current work presents the results of a study carried out in a Portuguese medium-sized software development organization in order to obtain a formal method for EPM elicitation in development processes. Thi...

  5. Hydrodynamic Modeling for Autonomous Underwater Vehicles Using Computational and Semi-Empirical Methods

    OpenAIRE

    Geisbert, Jesse Stuart

    2007-01-01

    Buoyancy driven underwater gliders, which locomote by modulating their buoyancy and their attitude with moving mass actuators and inflatable bladders, are proving their worth as efficient long-distance, long-duration ocean sampling platforms. Gliders have the capability to travel thousands of kilometers without a need to stop or recharge. There is a need for the development of methods for hydrodynamic modeling. This thesis aims to determine the hydrodynamic parameters for the governing equat...

  6. Empirical quantification of lacustrine groundwater discharge - different methods and their limitations

    Science.gov (United States)

    Meinikmann, K.; Nützmann, G.; Lewandowski, J.

    2015-03-01

    Groundwater discharge into lakes (lacustrine groundwater discharge, LGD) can be an important driver of lake eutrophication. Its quantification is difficult for several reasons and is thus often neglected in the water and nutrient budgets of lakes. In the present case, several methods were applied to determine the extent of the subsurface catchment, to reveal the main areas of LGD and to identify the variability of LGD intensity. The size and shape of the subsurface catchment served as prerequisites for calculating long-term groundwater recharge and thus the overall amount of LGD. The isotopic composition of near-shore groundwater was investigated to validate the quality of the catchment delineation in near-shore areas. Heat was used as a natural tracer for groundwater-surface water interactions to find spatial variations of LGD intensity: via an analytical solution of the heat transport equation, LGD rates were calculated from temperature profiles of the lake bed. The method has some uncertainties, as can be seen from the results of two measurement campaigns in different years. The present study reveals that a combination of several different methods is required for a reliable identification and quantification of LGD and groundwater-borne nutrient loads.
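
    The heat-tracer step can be sketched with a steady-state one-dimensional heat-transport solution. The analytical form below is in the spirit of the classic Bredehoeft-Papadopulos solution; the solution actually used in the study may differ, and every number here is an illustrative assumption.

```python
import math

# Hedged sketch: relative temperature within a lake-bed profile of length L
# under combined conduction and vertical advection, and the flux implied by
# a fitted dimensionless parameter beta. All values are assumed.

def relative_temperature(z, L, beta):
    """(T(z) - T0) / (TL - T0) for depth z within a profile of length L."""
    return (math.exp(beta * z / L) - 1.0) / (math.exp(beta) - 1.0)

beta = 1.5                       # dimensionless advection/conduction ratio
frac = relative_temperature(0.5, 1.0, beta)   # mid-profile relative temperature

# Once beta is fitted to a measured profile, the Darcy flux follows from
# q = beta * k_thermal / (c_f * rho_f * L), with assumed fluid properties:
k_thermal = 1.4                  # bulk thermal conductivity, W/(m K)
c_rho = 4.19e6                   # volumetric heat capacity of water, J/(m^3 K)
q = beta * k_thermal / (c_rho * 1.0)          # m/s over a 1 m profile
```

    A fitted beta greater than zero skews the profile toward the upper boundary, which is how upward discharge is read off measured temperature profiles.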

  7. Consideration of relativistic effects in band structure calculations based on the empirical tight-binding method

    International Nuclear Information System (INIS)

    Hanke, M.; Hennig, D.; Kaschte, A.; Koeppen, M.

    1988-01-01

    The energy band structures of cadmium telluride and mercury telluride are investigated by means of the tight-binding (TB) method, considering relativistic effects and the spin-orbit interaction. Taking relativistic effects into account in the method is rather simple, though the size of the Hamiltonian matrix doubles. Such considerations are necessary for the interesting small-gap semiconductors, and the experimental results are reflected correctly in the band structures. The transformation behaviour of the eigenvectors within the Brillouin zone becomes more complicated but is, nevertheless, theoretically controllable. If, however, the matrix elements of the Green operator are to be calculated, one has to use formula manipulation programs, in particular for non-diagonal elements. For defect calculations by the Koster-Slater theory of scattering it is necessary to know these matrix elements. Knowledge of the transformation behaviour of the eigenfunctions avoids frequent diagonalization of the Hamiltonian matrix and thus permits a numerical solution of the problem. Corresponding results for the sp3 basis are available

  8. An empirical method for calculating thermodynamic parameters for U(6) phases, applications to performance assessment calculations

    International Nuclear Information System (INIS)

    Ewing, R.C.; Chen, F.; Clark, S.B.

    2002-01-01

    Uranyl minerals form by oxidation and alteration of uraninite, UO2+x, and of the UO2 in used nuclear fuels. The thermodynamic database for these phases is extremely limited. However, the Gibbs free energies and enthalpies for uranyl phases may be estimated based on a method that sums polyhedral contributions. The molar contributions of the structural components to ΔfG°m and ΔfH°m are derived by multiple regression using the thermodynamic data of phases for which the crystal structures are known. In comparison with experimentally determined values, the average residuals associated with the predicted ΔfG°m and ΔfH°m for the uranyl phases used in the model are 0.08 and 0.10%, respectively. There is also good agreement between the predicted mineral stability relations and field occurrences, providing confidence in this method for the estimation of ΔfG°m and ΔfH°m of the U(VI) phases. This approach provides a means of generating estimated thermodynamic data for performance assessment calculations and a basis for making bounding calculations of phase stabilities and solubilities. (author)
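
    The regression step of the polyhedral-summation idea can be illustrated as an ordinary least-squares problem: each phase's formation energy is a linear sum of structural-component contributions, and the contributions are recovered from phases with known values. The matrix of polyhedron counts and the contribution values below are invented for illustration, not real thermodynamic data.

```python
import numpy as np

# Sketch: formation energies G = N @ g, where N counts the polyhedra in each
# phase and g holds the per-polyhedron molar contributions. Multiple
# regression (least squares) recovers g from phases with "known" G.

# Rows: phases with known formation energies; columns: polyhedron counts.
N = np.array([
    [1.0, 2.0, 0.0],
    [2.0, 1.0, 1.0],
    [1.0, 0.0, 3.0],
    [3.0, 2.0, 1.0],
])

g_true = np.array([-1200.0, -300.0, -240.0])  # hypothetical kJ/mol contributions
G_obs = N @ g_true                            # formation energies of the phases

# Least-squares regression recovers the per-polyhedron contributions.
g_fit, *_ = np.linalg.lstsq(N, G_obs, rcond=None)
```

    With real data the system is overdetermined and noisy, and the residuals of the fit give the percent-level uncertainties quoted above.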

  9. Empirical studies on informal patient payments for health care services: a systematic and critical review of research methods and instruments

    Directory of Open Access Journals (Sweden)

    Pavlova Milena

    2010-09-01

    Full Text Available Abstract Background Empirical evidence demonstrates that informal patient payments are an important feature of many health care systems. However, the study of these payments is a challenging task because of their potentially illegal and sensitive nature. The aim of this paper is to provide a systematic review and analysis of key methodological difficulties in measuring informal patient payments. Methods The systematic review was based on the following eligibility criteria: English language publications that reported on empirical studies measuring informal patient payments. There were no limitations with regard to the year of publication. The content of the publications was analysed qualitatively and the results were organised in the form of tables. Data sources were Econlit, Econpapers, Medline, PubMed, ScienceDirect, SocINDEX. Results Informal payments for health care services are most often investigated in studies involving patients or the general public, but providers and officials are also sample units in some studies. The majority of the studies apply a single mode of data collection that involves either face-to-face interviews or group discussions. One of the main methodological difficulties reported in the publications concerns the inability of some respondents to distinguish between official and unofficial payments. Another complication is associated with the refusal of some respondents to answer questions on informal patient payments. We do not exclude the possibility that we have missed studies reported in non-English-language journals as well as very recent studies that are not yet published. Conclusions Given the recent evidence from research on survey methods, a self-administered questionnaire during a face-to-face interview could be a suitable mode of collecting sensitive data, such as data on informal patient payments.

  10. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    Science.gov (United States)

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na(v)X, v = 1, 2) on the absorbance (A(ob)) at 310 nm of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing the cationic micellar binding constants of counterions X⁻ and Br⁻). The method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br) obtained by this method are comparable with the corresponding values obtained by the semi-empirical kinetic (SEK) method for different moderately hydrophobic X⁻. The values of K(X)(Br) for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of the SESp and SEK methods, are similar to those obtained by other, conventional methods.
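
    A minimal sketch of the fitting step: observed absorbance vs. added salt concentration is fit to an empirical two-constant saturation form. The functional form, parameter names, and all numbers below are assumptions for illustration; they are not the exact empirical equation of the SESp paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit A_ob as a function of [NaX] to an assumed saturation
# curve with two empirical constants (plus a baseline); the fitted apparent
# constant plays the role of the quantity from which K(X)/K(Br) is derived.

def a_ob(cx, a0, a_inf, k_app):
    """Assumed saturation curve for absorbance as a function of [NaX]."""
    return (a0 + a_inf * k_app * cx) / (1.0 + k_app * cx)

cx = np.linspace(0.0, 0.2, 25)        # salt concentration, mol/L (synthetic)
true = (0.80, 0.15, 40.0)             # baseline, plateau, apparent constant
y = a_ob(cx, *true)                   # noiseless synthetic absorbances

popt, _ = curve_fit(a_ob, cx, y, p0=(1.0, 0.1, 10.0))
```

    With real spectra the data carry noise, so the reliability of the extracted constants rests on the quality of the fit over the full concentration range.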

  11. VLE measurements using a static cell vapor phase manual sampling method accompanied with an empirical data consistency test

    International Nuclear Information System (INIS)

    Freitag, Joerg; Kosuge, Hitoshi; Schmelzer, Juergen P.; Kato, Satoru

    2015-01-01

    Highlights: • We use a new, simple static cell vapor phase manual sampling (SCVMS) method for VLE (x, y, T) measurement. • The method is applied to non-azeotropic, asymmetric and two-liquid-phase-forming azeotropic binaries. • The method is validated by a data consistency test, i.e., a plot of the polarity exclusion factor vs. pressure. • The consistency test reveals that the new SCVMS method can measure accurate VLE near ambient temperature. • Moreover, the consistency test confirms that the effect of air in the SCVMS system is negligible. - Abstract: A new static cell vapor phase manual sampling (SCVMS) method is used for the simple measurement of constant-temperature x, y (vapor + liquid) equilibria (VLE). The method was applied to VLE measurements of the (methanol + water) binary at T/K = (283.2, 298.2, 308.2 and 322.9), the asymmetric (acetone + 1-butanol) binary at T/K = (283.2, 295.2, 308.2 and 324.2) and the two-liquid-phase-forming azeotropic (water + 1-butanol) binary at T/K = (283.2 and 298.2). The accuracy of the experimental data was confirmed by a data consistency test, that is, an empirical plot of the polarity exclusion factor, β, vs. the system pressure, P. The SCVMS data are accurate, because the VLE data converge to the same ln β vs. ln P straight line determined from the conventional distillation-still method and a headspace gas chromatography method
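
    The ln β vs. ln P straight-line check amounts to a log-log regression: data are deemed consistent when the points fall on one line. The pressures and the power law generating β below are synthetic illustrations, not data from the paper.

```python
import numpy as np

# Sketch of the empirical consistency test: fit a straight line to
# (ln P, ln beta) and inspect the residuals; near-zero residuals mean the
# data follow a single power law and pass the test.

P = np.array([10.0, 20.0, 40.0, 80.0])     # system pressure (arbitrary units)
beta = 2.5 * P ** -0.30                    # synthetic power-law "measurements"

slope, intercept = np.polyfit(np.log(P), np.log(beta), 1)
residual = np.log(beta) - (slope * np.log(P) + intercept)
max_dev = np.max(np.abs(residual))         # ~0 for perfectly consistent data
```

    Real measurements from different methods (static cell, distillation still, headspace GC) would each contribute points; convergence to one line is the consistency criterion described above.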

  12. An empirical method for estimating surface area of aggregates in hot mix asphalt

    Directory of Open Access Journals (Sweden)

    R.P. Panda

    2016-04-01

    Full Text Available The bitumen requirement in hot mix asphalt (HMA) depends directly on the surface area of the aggregates in the mix, which in turn affects the asphalt film thickness and the flow characteristics. The surface area of the aggregate blend in HMA is calculated either from specific surface area factors assigned to the percentages passing certain standard sieve sizes, or by imaging techniques. The first process is less capital intensive, but it is purely manual, labour intensive and prone to human error. Imaging techniques, though eliminating human error, still have limited use because they are capital intensive and require well-established laboratories with qualified technicians. Many developing countries, India among them, face a shortage of well-equipped laboratories and qualified technicians. To overcome these difficulties, the present mathematical model has been developed to estimate the surface area of the aggregate blend of HMA from physical properties of aggregates evaluated with simple laboratory equipment. The model has been validated against the existing established methods of calculation and can be used as a tool in developing and underdeveloped countries for the proper design of HMA.
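
    The conventional surface-area-factor calculation that the model is compared against can be sketched as a weighted sum over the gradation: a constant term plus, for each standard sieve, the fraction passing times a tabulated factor. The factors and the gradation below are assumed, illustrative values, not data from the paper.

```python
# Sketch of the sieve-factor method for aggregate surface area (m^2/kg).
# Both the factor table and the gradation are illustrative assumptions.

surface_area_factors = {    # sieve size (mm) -> factor (m^2/kg), assumed
    4.75: 0.41, 2.36: 0.82, 1.18: 1.64,
    0.600: 2.87, 0.300: 6.14, 0.150: 12.29, 0.075: 32.77,
}
base_factor = 0.41          # constant term applied regardless of gradation

percent_passing = {4.75: 95.0, 2.36: 75.0, 1.18: 55.0,
                   0.600: 40.0, 0.300: 25.0, 0.150: 12.0, 0.075: 5.0}

surface_area = base_factor + sum(
    factor * percent_passing[sieve] / 100.0
    for sieve, factor in surface_area_factors.items()
)   # total surface area, m^2 per kg of blended aggregate
```

    Because finer sieves carry much larger factors, small changes in the fine fraction dominate the total, which is why gradation errors translate directly into bitumen-content errors.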

  13. An empirical survey to investigate quality of men's clothing market using QFD method

    Directory of Open Access Journals (Sweden)

    Samira Golshan

    2012-08-01

    Full Text Available One of the most important techniques for improving customer satisfaction in the clothing and textile industry is to increase the quality of goods and services. There are different methods for detecting the important items influencing clothing products, and the proposed model of this paper uses quality function deployment (QFD). The proposed model designs and distributes a questionnaire among experts to detect the necessary factors, and using the house of quality we determine the most important factors impacting the customer's clothing selection. The study focuses on men aged 15 to 45 living in Yazd, Iran. The brand under investigation sells its products in three shopping centers located in this city. We distributed 100 questionnaires and collected 65 properly filled ones. Based on the results of our survey, suitable design, printing and packaging specifications, necessary requirements, optimization of production planning and appropriate sewing machine settings are considered the most important characteristics influencing the purchase of clothing products.

  14. AN EMPIRICAL METHOD FOR IMPROVING THE QUALITY OF RXTE HEXTE SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Javier A.; Steiner, James F.; McClintock, Jeffrey E. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Grinberg, Victoria [MIT Kavli Institute for Astrophysics and Space Research, MIT, 70 Vassar Street, Cambridge, MA 02139 (United States); Pottschmidt, Katja [Department of Physics and Center for Space Science and Technology, UMBC, Baltimore, MD 21250 (United States); Rothschild, Richard E., E-mail: javier@head.cfa.harvard.edu, E-mail: jem@cfa.harvard.edu, E-mail: jsteiner@mit.edu, E-mail: grinberg@space.mit.edu, E-mail: katja@milkyway.gsfc.nasa.gov, E-mail: rrothschild@ucsd.edu [Center for Astrophysics and Space Sciences, University of California at San Diego, La Jolla, CA (United States)

    2016-03-01

    We have developed a correction tool to improve the quality of Rossi X-ray Timing Explorer (RXTE) High Energy X-ray Timing Experiment (HEXTE) spectra by employing the same method we used earlier to improve the quality of RXTE Proportional Counter Array (PCA) spectra. We fit all of the hundreds of HEXTE spectra of the Crab individually to a simple power-law model, some 37 million counts in total for Cluster A and 39 million counts for Cluster B, and we create for each cluster a combined spectrum of residuals. We find that the residual spectrum of Cluster A is free of instrumental artifacts while that of Cluster B contains significant features with amplitudes ∼1%; the most prominent is in the energy range 30–50 keV, which coincides with the iodine K edge. Starting with the residual spectrum for Cluster B, via an iterative procedure we created the calibration tool hexBcorr for correcting any Cluster B spectrum of interest. We demonstrate the efficacy of the tool by applying it to Cluster B spectra of two bright black holes, which contain several million counts apiece. For these spectra, application of the tool significantly improves the goodness of fit, while affecting only slightly the broadband fit parameters. The tool may be important for the study of spectral features, such as cyclotron lines, a topic that is beyond the scope of this paper.

  15. Selection of an empirical detection method for determination of water-soluble carbohydrates in feedstuffs for application in ruminant nutrition

    Science.gov (United States)

    Water-soluble carbohydrates (WSC) are commonly measured in ruminant feedstuffs for use in diet formulation. However, we lack information as to which empirical detection assay most correctly measures WSC. The objective of this study was to determine which commonly used empirical assay was most approp...

  16. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions of the random variable are derived on the basis of the backward transformation of the standard normal ...
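
    One member of the Johnson system, the S_U translation, can be sketched directly: z = γ + δ·asinh((x − ξ)/λ) maps a skewed variable x toward a standard normal z, and the backward transform recovers x from z. The parameter values below are illustrative assumptions, not estimates from any data.

```python
import math

# Sketch of a Johnson S_U translation and its backward (inverse) transform.
# Parameters gamma, delta, xi, lambda are assumed for illustration; in the
# method above they would be estimated from percentiles of the data.

def johnson_su(x, gamma, delta, xi, lam):
    """Forward transform toward standard normality."""
    return gamma + delta * math.asinh((x - xi) / lam)

def johnson_su_inverse(z, gamma, delta, xi, lam):
    """Backward transform from the normal scale to the original scale."""
    return xi + lam * math.sinh((z - gamma) / delta)

params = (-0.5, 1.2, 10.0, 3.0)   # gamma, delta, xi, lambda (assumed)
x = 14.7
z = johnson_su(x, *params)        # value on the (approximately) normal scale
x_back = johnson_su_inverse(z, *params)
```

    The exact invertibility of the pair is what makes the backward transformation mentioned above possible: quantiles computed on the normal scale map directly back to the original variable.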

  17. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this were done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  18. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)

    2017-03-20

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
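
    The fitting idea can be sketched with a smoothly broken power law: flux rises roughly as t^α early on and bends toward a second index beyond a break time. The parameterization below is a generic assumption, not the exact function of the paper, and the "light curve" is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a generic smoothly broken power law to a synthetic
# optical light curve. Parameter names and values are illustrative.

def broken_power_law(t, amp, alpha1, alpha2, t_break, s):
    """Rises ~t^alpha1 early, bending toward t^alpha2 beyond t_break;
    s controls the smoothness of the break."""
    return amp * t**alpha1 / (1.0 + (t / t_break)**(s * (alpha1 - alpha2)))**(1.0 / s)

t = np.linspace(1.0, 40.0, 80)               # days since first light
true = (1.0, 2.0, -1.5, 18.0, 2.0)
flux = broken_power_law(t, *true)            # noiseless synthetic fluxes

popt, _ = curve_fit(broken_power_law, t, flux,
                    p0=(0.5, 1.8, -1.0, 15.0, 1.5), maxfev=20000)
```

    An early-time index near 2 corresponds to the expanding-fireball picture mentioned above, in which the photospheric area grows quadratically with time.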

  19. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Directory of Open Access Journals (Sweden)

    Simone Vincenzi

    2014-09-01

    Full Text Available The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.

  20. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Science.gov (United States)

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J

    2014-09-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.
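
    The growth model referred to above can be sketched as the von Bertalanffy growth function, L(t) = L∞·(1 − exp(−k·(t − t0))). In the random-effects setting of the paper, k and L∞ carry cohort- and individual-level deviations; here only the mean curve is evaluated, with assumed parameter values.

```python
import math

# Sketch of the von Bertalanffy growth function (VBGF). Parameters are
# illustrative assumptions, not fitted marble trout values.

def von_bertalanffy(t, l_inf, k, t0=0.0):
    """Length at age t: approaches l_inf at rate k."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

l_inf, k = 300.0, 0.35    # asymptotic size (mm) and growth rate (assumed)
sizes = [von_bertalanffy(age, l_inf, k) for age in (1, 2, 3, 5)]
```

    A random-effects fit would add per-individual and per-cohort offsets to log(k) and log(L∞), which is how the Empirical Bayes machinery separates shared from individual variation.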

  1. Empirical Method to Estimate Hydrogen Embrittlement of Metals as a Function of Hydrogen Gas Pressure at Constant Temperature

    Science.gov (United States)

    Lee, Jonathan A.

    2010-01-01

    High pressure hydrogen (H) gas has been known to have a deleterious effect on the mechanical properties of certain metals, particularly the notched tensile strength, fracture toughness and ductility. The ratio of these properties in hydrogen as compared to helium or air is called the Hydrogen Environment Embrittlement (HEE) index, a useful measure for classifying the severity of H embrittlement and for aiding material screening and selection for safe usage in H gas environments. A comprehensive worldwide database compilation over the past 50 years has shown that the HEE index has mostly been collected at two conveniently high H pressure points, 5 ksi and 10 ksi, near room temperature. Since H embrittlement is directly related to pressure, the lack of HEE indices at other pressure points has posed a technical problem for designers selecting appropriate materials at a specific H pressure for various applications in the aerospace and alternative/renewable energy sectors of an emerging hydrogen economy. Based on a power-law mathematical relationship, an empirical method to accurately predict the HEE index as a function of H pressure at constant temperature is presented, with a brief review of Sieverts' law for gas-metal absorption.
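
    The power-law interpolation idea can be sketched directly: given HEE indices at two pressures (e.g., 5 and 10 ksi), assume HEE = a·P^b, solve for a and b, and evaluate at intermediate pressures. The two index values below are hypothetical, not data from the compilation discussed above.

```python
import math

# Sketch: two-point power-law fit HEE = a * P**b and interpolation to an
# intermediate pressure. Index values 0.90 and 0.78 are made up.

def power_law_from_two_points(p1, h1, p2, h2):
    """Solve HEE = a * P**b exactly through two (pressure, index) points."""
    b = math.log(h2 / h1) / math.log(p2 / p1)
    a = h1 / p1**b
    return a, b

a, b = power_law_from_two_points(5.0, 0.90, 10.0, 0.78)
hee_7ksi = a * 7.0**b     # predicted HEE index at 7 ksi
```

    Since two points determine the power law exactly, the method's accuracy hinges on the assumption that the pressure dependence really is power-law over the range of interest.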

  2. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, to overcome the disadvantages of large size, contact measurement and the low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
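
    The IMF-selection and feature-extraction steps can be sketched as follows: keep the IMFs whose correlation with the first IMF exceeds the average correlation, then take energy and standard deviation of the kept IMFs as features. The "IMFs" below are synthetic stand-ins; a real pipeline would obtain them from an (improved) EEMD decomposition of the cutting sound.

```python
import numpy as np

# Sketch: correlation-based IMF selection followed by energy/std features.
# The candidate IMFs are synthetic signals invented for illustration.

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
imfs = np.array([
    np.sin(2 * np.pi * 50 * t),                          # IMF 1 (reference)
    np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(n),
    np.sin(2 * np.pi * 3 * t),                           # unrelated oscillation
    0.05 * rng.standard_normal(n),                       # noise-like component
])

# Correlate each later IMF with the first; keep those above the average.
ref = imfs[0]
corrs = np.array([abs(np.corrcoef(ref, imf)[0, 1]) for imf in imfs[1:]])
kept = imfs[1:][corrs > corrs.mean()]                    # essential IMFs only

# Energy and standard deviation of the kept IMFs form the feature vector.
features = np.array([[np.sum(imf**2), np.std(imf)] for imf in kept])
```

    In the method above, the resulting feature vectors are fed to the PNN classifier; the thresholding by average correlation is what discards noise-like components before classification.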

  3. Inglorious Empire

    DEFF Research Database (Denmark)

    Khair, Tabish

    2017-01-01

    Review of 'Inglorious Empire: What the British did to India' by Shashi Tharoor, London, Hurst Publishers, 2017, 296 pp., £20.00

  4. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1986-08-01

    As proton accelerators get larger and include more magnets, the conventional tracking programs which simulate them run slower. At the same time, in order to more carefully optimize the higher cost of the accelerators, they must return more accurate results, even in the presence of a longer list of realistic effects, such as magnet errors and misalignments. For these reasons conventional tracking programs continue to be computationally bound, despite the continually increasing computing power available. This limitation is especially severe for a class of problems in which some lattice parameter is slowly varying, when a faithful description is only obtained by tracking for an exceedingly large number of turns. Examples are synchrotron oscillations, in which the energy varies slowly with a period of, say, hundreds of turns, or magnet ripple or noise on a comparably slow time scale. In these cases one may wish to track for hundreds of periods of the slowly varying parameter. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. Similar programs have already been written in which successive elements are ''concatenated'' with truncation to linear, sextupole, or octupole order, et cetera, using Lie algebraic techniques to preserve symplecticity. The method described here is rather more empirical than this but, in principle, contains information to all orders and is able to handle resonances in a more straightforward fashion
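
    A toy illustration of the idea: instead of pushing a particle through every magnet on every turn, precompute the product of the (here, purely linear) element maps once and apply that single one-turn matrix per turn. A real one-turn map would be nonlinear; this sketch only shows the speedup structure, with an invented 100-element lattice of identical rotations.

```python
import numpy as np

# Sketch: element-by-element tracking vs. a precomputed one-turn map for a
# purely linear toy lattice. All lattice parameters are invented.

def rotation(mu):
    """2x2 phase-space rotation by phase advance mu."""
    return np.array([[np.cos(mu), np.sin(mu)],
                     [-np.sin(mu), np.cos(mu)]])

elements = [rotation(0.1) for _ in range(100)]   # 100 identical "magnets"

one_turn = np.eye(2)                             # build the one-turn map once
for m in elements:
    one_turn = m @ one_turn

x0 = np.array([1.0, 0.0])
x_fast = one_turn @ x0                           # one multiply per turn

x_slow = x0.copy()                               # conventional tracking
for m in elements:
    x_slow = m @ x_slow
```

    For many-turn studies the per-turn cost drops from one multiply per element to a single multiply, which is exactly the gain the map-based method aims for; preserving symplecticity in the nonlinear case is the hard part the paper discusses.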

  5. Ground-Motion Simulations of the 2008 Ms8.0 Wenchuan, China, Earthquake Using Empirical Green's Function Method

    Science.gov (United States)

    Zhang, W.; Zhang, Y.; Yao, X.

    2010-12-01

    On May 12, 2008, a huge earthquake of magnitude Ms8.0 occurred in Wenchuan, Sichuan Province, China. This event was the most devastating earthquake in mainland China since the 1976 M7.8 Tangshan earthquake. It resulted in tremendous losses of life and property; about 90,000 people were killed. Occurring in a mountainous area, the great earthquake and the thousands of aftershocks that followed also caused many other geological disasters, such as landslides, mud-rock flows and "quake lakes" formed behind landslide dams. The earthquake occurred along the Longmenshan fault, as the result of motion on a northeast-striking reverse or thrust fault on the northwestern margin of the Sichuan Basin. The earthquake's epicenter and focal mechanism are consistent with it having occurred as the result of movement on the Longmenshan fault or a tectonically related fault. The earthquake reflects tectonic stresses resulting from the convergence of crustal material slowly moving from the high Tibetan Plateau, to the west, against strong crust underlying the Sichuan Basin and southeastern China. In this study, we simulate the near-field strong ground motions of this great event based on the empirical Green's function (EGF) method. Referring to published inversion source models, we first assume that there are three asperities on the rupture area and choose three different small events as the EGFs. Then, we identify the parameters of the source model using a genetic algorithm (GA). We calculate the synthetic waveforms based on the obtained source model and compare them with the observed records. Our results show that most of the synthetic waveforms agree very well with the observed ones, which proves the validity and stability of the method. Finally, we forward-model the near-field strong ground motions near the source region and try to explain the damage distribution caused by the great earthquake.

  6. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular ‘feeling with’ …

  7. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    NARCIS (Netherlands)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-01-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of

  8. River channel and bar patterns explained and predicted by an empirical and a physics-based method

    NARCIS (Netherlands)

    Kleinhans, M.G.; Berg, J.H. van den

    2011-01-01

    Our objective is to understand general causes of different river channel patterns. In this paper we compare an empirical stream power-based classification and a physics-based bar pattern predictor. We present a careful selection of data from the literature that contains rivers with discharge and

  9. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  10. Stope Stability Assessment and Effect of Horizontal to Vertical Stress Ratio on the Yielding and Relaxation Zones Around Underground Open Stopes Using Empirical and Finite Element Methods

    Science.gov (United States)

    Sepehri, Mohammadali; Apel, Derek; Liu, Wei

    2017-09-01

Predicting the stability of open stopes can be a challenging task for underground mine engineers. For decades, the stability graph method has been used as the first step of open stope design around the world. However, there are some shortcomings with this method. For instance, the stability graph method does not account for the relaxation zones around the stopes. Another limitation is that this method cannot be used to evaluate the stability of stopes with high walls made of backfill materials. However, there are several analytical and numerical methods that can be used to overcome these limitations. In this study, both empirical and numerical methods have been used to assess the stability of an open stope located between mine levels N9225 and N9250 at the Diavik diamond underground mine. It was shown that the numerical methods can be used as complementary methods along with other analytical and empirical methods to assess the stability of open stopes. A three-dimensional elastoplastic finite element model was constructed using Abaqus software. In this paper a sensitivity analysis was performed to investigate the impact of the stress ratio "k" on the extent of the yielding and relaxation zones around the hangingwall and footwall of the stope under study.

  11. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    Science.gov (United States)

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

We compared methods to control bias and confounding in observational studies including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost indicating IPW, and sIPW weighting was ineffective. Entropy balance demonstrated the bias-variance tradeoff achieving higher estimate accuracy, yet lower estimate precision, compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
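The entropy-balance weighting compared in the abstract can be sketched as a convex dual optimization. This is a minimal version assuming uniform base weights and first-moment constraints only (the study also balances higher moments); the data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_balance(X, target_means):
    """Weights minimizing entropy divergence from uniform, subject to the
    weighted column means of X matching target_means exactly.
    Solves the convex dual: min over lam of logsumexp((X - m) @ lam)."""
    Xc = X - target_means          # centre covariates at the target moments

    def dual(lam):
        return logsumexp(Xc @ lam)

    def grad(lam):
        w = np.exp(Xc @ lam - logsumexp(Xc @ lam))
        return w @ Xc              # = weighted mean deviation from the target

    res = minimize(dual, np.zeros(X.shape[1]), jac=grad, method="BFGS")
    z = Xc @ res.x
    return np.exp(z - logsumexp(z))   # normalized, strictly positive weights

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, size=(500, 2))        # "observed" cohort covariates
w = entropy_balance(X, np.array([0.0, 0.5]))  # hypothetical target moments
print(np.round(w @ X, 4))                     # balanced means ≈ [0.0, 0.5]
```

Because the constraints are imposed a priori, no iteration against a balance diagnostic or post-calibration is needed, which is exactly the contrast with IPW drawn in the abstract.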

  12. Choosing the correct empirical antibiotic for urinary tract infection in pediatric: Surveillance of antimicrobial susceptibility pattern of Escherichia coli by E-Test method.

    OpenAIRE

    Iraj Sedighi; Abbas Solgi; Ali Amanati; Mohammad Yousef Alikhani

    2014-01-01

Background and Objectives: Urinary tract infections (UTIs) are among the most common bacterial diseases worldwide. We investigated the antibiotic susceptibility patterns of Escherichia coli (E. coli) strains isolated from pediatric patients with community-acquired UTI to provide clinical guidance for choosing the right empirical antibiotic in these patients. Materials and Methods: In this cross-sectional study, 100 urine specimens which were positive for E. coli had been inve...

  13. Interpreting and responding to the Johannine feeding narrative: An empirical study in the SIFT hermeneutical method amongst Anglican ministry training candidates

    Directory of Open Access Journals (Sweden)

    Leslie J. Francis

    2012-08-01

Drawing on Jungian psychological type theory, the SIFT method of biblical hermeneutics and liturgical preaching maintains that different psychological type preferences are associated with distinctive readings of scripture. In the present study this theory was tested amongst two groups of ministry training candidates (a total of 26 participants) who were located within working groups according to their psychological type preferences, and invited to reflect on the Johannine feeding narrative (Jn 6:4−22) and to document their discussion. Analysis of these data provided empirical support for the theory underpinning the SIFT method.

  14. Uncovering Voter Preference Structures Using a Best-Worst Scaling Procedure: Method and Empirical Example in the British General Election of 2010

    DEFF Research Database (Denmark)

    Ormrod, Robert P.; Savigny, Heather

Best-Worst scaling (BWS) is a method that can provide insights into the preference structures of voters. By asking voters to select the ‘best’ and ‘worst’ option (‘most important’ and ‘least important’ media in our investigation) from a short list of alternatives it is possible to uncover the rel… the least information. We furthermore investigate group differences using an ANOVA procedure to demonstrate how contextual variables can enrich our empirical investigations using the BWS method.
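The best-minus-worst counting that underlies simple BWS scores can be sketched minimally; the media items and responses below are invented for illustration, not the study's data:

```python
from collections import Counter

# hypothetical BWS responses: each respondent picks the 'most important'
# (best) and 'least important' (worst) medium from the choice set
responses = [
    {"best": "TV", "worst": "radio"},
    {"best": "newspapers", "worst": "radio"},
    {"best": "TV", "worst": "internet"},
    {"best": "TV", "worst": "radio"},
]
best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
items = ["TV", "newspapers", "internet", "radio"]

# aggregate best-worst score: times chosen best minus times chosen worst
bw_scores = {i: best[i] - worst[i] for i in items}
print(bw_scores)  # {'TV': 3, 'newspapers': 1, 'internet': -1, 'radio': -3}
```

The resulting ranking orders the alternatives from most to least important, and the per-group scores could then be compared with ANOVA as the abstract describes.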

  15. Quasi-experimental Methods in Empirical Regional Science and Policy Analysis – Is there a Scope for Application?

    DEFF Research Database (Denmark)

    Mitze, Timo; Paloyo, Alfredo R.; Alecke, Björn

Applied econometrics has recently emphasized the identification of causal parameters for policy analysis. This revolution has yet to fully propagate to the field of regional science. We examine the scope for application of the matching approach – part of the modern applied econometrics toolkit – in regional science and highlight special features of regional data that make such an application difficult. In particular, our analysis of the effect of regional subsidies on labor-productivity growth in Germany indicates that such policies are effective, but only up to a certain maximum treatment intensity … to be interpreted with some caution. The matching approach nevertheless can be of great value for regional policy analysis and should be the subject of future research efforts in the field of empirical regional science.

  16. An empirical investigation on different methods of economic growth rate forecast and its behavior from fifteen countries across five continents

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

Our empirical results show that GDP growth rate can be predicted more accurately in continents with fewer large economies than in smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedure. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
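A toy comparison of simple model averaging with an inverse-MSE weighting scheme (one plausible reading of "forecast weight averaging"; the paper's exact weighting rule is not given here) might look like the following, with invented forecasts and in-sample weights used purely for illustration:

```python
import numpy as np

# hypothetical growth outcomes and one-step-ahead forecasts from three models
actual = np.array([2.1, 2.4, 1.9, 2.8, 3.0])
forecasts = np.array([
    [2.0, 2.5, 2.0, 2.6, 2.9],   # model A (accurate)
    [2.6, 2.0, 1.4, 3.3, 3.6],   # model B (noisy)
    [2.2, 2.3, 2.1, 2.9, 2.8],   # model C (accurate)
])

# simple model averaging (SMA): equal weights across models
sma = forecasts.mean(axis=0)

# weighted averaging: weights proportional to inverse mean squared error
mse = ((forecasts - actual) ** 2).mean(axis=1)
w = (1 / mse) / (1 / mse).sum()
fwa = w @ forecasts

print(((fwa - actual) ** 2).mean() < ((sma - actual) ** 2).mean())
```

Here the inverse-MSE weights down-weight the noisy model, so the weighted combination beats the equal-weight average on this toy sample; out-of-sample evaluation, as in the paper, would compute the weights on a training window only.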

  17. Empirical philosophy of science

    DEFF Research Database (Denmark)

    Wagenknecht, Susann; Nersessian, Nancy J.; Andersen, Hanne

    2015-01-01

A growing number of philosophers of science make use of qualitative empirical data, a development that may reconfigure the relations between philosophy and sociology of science and that is reminiscent of efforts to integrate history and philosophy of science. Therefore, the first part of this introduction to the volume Empirical Philosophy of Science outlines the history of relations between philosophy and sociology of science on the one hand, and philosophy and history of science on the other. The second part of this introduction offers an overview of the papers in the volume, each of which is giving its own answer to questions such as: Why does the use of qualitative empirical methods benefit philosophical accounts of science? And how should these methods be used by the philosopher?

  18. What 'empirical turn in bioethics'?

    Science.gov (United States)

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  19. Empirical research through design

    NARCIS (Netherlands)

    Keyson, D.V.; Bruns, M.

    2009-01-01

This paper describes the empirical research through design method (ERDM), which differs from current approaches to research through design by enforcing the need for the designer, after a series of pilot prototype-based studies, to a priori develop a number of testable interaction design hypotheses.

  20. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

The toolbox for empirically exploring the ways that artistic endeavors convey and activate meaning on the part of performers and audiences continues to expand. Current work employing methods at the intersection of performance studies, philosophy, motion capture and neuroscience to better understand musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member.

  1. ARE METHODS USED TO INTEGRATE STANDARDIZED MANAGEMENT SYSTEMS A CONDITIONING FACTOR OF THE LEVEL OF INTEGRATION? AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Merce Bernardo

    2011-09-01

Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze whether the methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to at least ISO 9001 and ISO 14001. Seven groups of these organizations, using different combinations of methods, have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm a relationship between the method used and the integration level achieved.

  2. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    Science.gov (United States)

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods: 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
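The paired sign test named in the methods can be sketched directly. The utility values below are hypothetical, not the study's data; SciPy's exact binomial test supplies the sign-test p-value:

```python
import numpy as np
from scipy.stats import binomtest

# hypothetical paired utilities for the same health states:
# direct (e.g. standard gamble) vs indirect (e.g. EQ-5D based)
direct   = np.array([0.85, 0.80, 0.78, 0.90, 0.70, 0.82, 0.76, 0.88])
indirect = np.array([0.60, 0.65, 0.58, 0.72, 0.55, 0.61, 0.70, 0.66])

diff = direct - indirect
n_pos = int((diff > 0).sum())     # pairs where the direct utility is higher
n = int((diff != 0).sum())        # ties are dropped in the sign test

# sign test: under H0 of no systematic difference, positives ~ Binomial(n, 0.5)
res = binomtest(n_pos, n, p=0.5)
print(n_pos, n, round(res.pvalue, 4))   # 8 8 0.0078
```

With all eight hypothetical pairs positive, the exact two-sided p-value is 2 × 0.5⁸ ≈ 0.0078, mirroring the paper's finding that direct methods systematically yield higher utilities.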

  3. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    Science.gov (United States)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  4. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I

    DEFF Research Database (Denmark)

    Wilczura-Wachnik, H.; Jonsdottir, Svava Osk

    2003-01-01

Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum mechanical method AM1, and a method for sampling relevant internal orientations for a pair of molecules developed previously, were used. Interaction energies were determined for three molecular pairs and used to calculate UNIQUAC interaction parameters.

  5. Social Phenomenological Analysis as a Research Method in Art Education: Developing an Empirical Model for Understanding Gallery Talks

    Science.gov (United States)

    Hofmann, Fabian

    2016-01-01

    Social phenomenological analysis is presented as a research method to study gallery talks or guided tours in art museums. The research method is based on the philosophical considerations of Edmund Husserl and sociological/social science concepts put forward by Max Weber and Alfred Schuetz. Its starting point is the everyday lifeworld; the…

  6. An adaptive and tacholess order analysis method based on enhanced empirical wavelet transform for fault detection of bearings with varying speeds

    Science.gov (United States)

    Hu, Yue; Tu, Xiaotong; Li, Fucai; Li, Hongguang; Meng, Guang

    2017-11-01

The order tracking method based on time-frequency representation is regarded as an effective tool for fault detection of bearings with varying rotating speeds. In the traditional order tracking methods, a tachometer is required to obtain the instantaneous speed, a requirement that is hardly satisfied in practice due to technical and economic limitations. Some tacholess order tracking methods have been developed in recent years. In these methods, instantaneous frequency ridge extraction is one of the most important parts. However, the current ridge extraction methods are sensitive to noise and may easily get trapped in a local optimum. Due to the presence of noise and other unrelated components of the signal, bearing fault features are difficult to detect from the envelope spectrum or envelope order spectrum. To overcome the abovementioned drawbacks, an adaptive and tacholess order analysis method is proposed in this paper. In this method, a novel ridge extraction algorithm based on dynamic path optimization is adopted to estimate the instantaneous frequency. This algorithm can overcome the shortcomings of the current ridge extraction algorithms. Meanwhile, the enhanced empirical wavelet transform (EEWT) algorithm is applied to extract the bearing fault features. Both simulated and experimental results demonstrate that the proposed method is robust to noise and effective for bearing fault detection under variable speed conditions.
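The envelope-spectrum step that such pipelines rely on can be sketched without the EEWT and ridge-extraction machinery. The toy signal below gates a structural resonance at a hypothetical fault rate (constant speed, for simplicity); all frequencies are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 1.0, 1/fs)
f_fault, f_res = 37.0, 2_000.0    # hypothetical fault rate and resonance

# toy bearing signal: impacts at the fault rate exciting a resonance
x = (1 + np.sign(np.sin(2*np.pi*f_fault*t))) * np.sin(2*np.pi*f_res*t)

# envelope analysis: Hilbert-transform envelope, then its spectrum
env = np.abs(hilbert(x))
spec = np.abs(np.fft.rfft(env - env.mean()))   # remove DC before the FFT
freqs = np.fft.rfftfreq(len(env), 1/fs)
peak = freqs[np.argmax(spec)]
print(peak)   # dominant envelope line at the fault rate, ≈ 37 Hz
```

Under varying speed, the same envelope would be resampled to the shaft-angle domain using the instantaneous frequency estimate (the ridge), turning the fault line into a fixed *order* rather than a smeared frequency.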

  7. TYCHO Brahe's Empiric Methods, His Instruments, His Sudden Escape from Denmark and a New Theory About His Death

    Science.gov (United States)

    Thykier, C.

    1992-07-01

Tycho Brahe (1546-1601) was born a nobleman, a son of Otto Brahe, a member of the Royal Danish Council. Very early he developed a great interest in science and especially astronomy. In 1575 Tycho visited the learned Landgrave Wilhelm IV in Kassel. Here he was inspired by the famous instrument maker Bürgi to build new precise astronomical instruments, and on Wilhelm's recommendation King Frederick II of Denmark gave Tycho the island of Hven (which at that time belonged to Denmark) as an entailed estate. At 26 years old, Tycho became famous for his work DE NOVA STELLA on the supernova that brightened up in 1572; since this phenomenon kept its position fixed among the stars, it immediately invalidated the Aristotelian dogma of the invariability of the fixed-star world. In 1577 Tycho observed the great comet and followed its celestial motion by means of a quadrant and a sextant. He concluded that the comet moved far out among the planets, in contradiction to the Aristotelian dogma of the crystal spheres of the planets. However, Tycho's great contribution to science was his construction of the observatory buildings Uraniborg and Stjerneborg ("Star Castle") with their equipment of ancient sighting instruments, and his use of these instruments, without telescopes, for observations of the planets over a period of almost 20 years. Tycho's work is collected in the 15-volume OPERA OMNIA edited by J. L. E. Dreyer. Tycho also mapped Hven correctly, and he triangulated both sides of Oresund relative to Hven. When Tycho moved to Prague in 1599 he lived there for a couple of years and met Kepler, who became his assistant and collaborator. Kepler was the one who analyzed Tycho's material and derived the Keplerian laws for the motions of the planets; on this basis Newton derived the law of gravitation. Tycho Brahe has been considered the father of modern empirical science. In 1596 he was accused of negligence of his administrative duties and several other things …

  8. Numerical simulation of shear and the Poynting effects by the finite element method: An application of the generalised empirical inequalities in non-linear elasticity

    KAUST Repository

    Angela Mihai, L.

    2013-03-01

    Finite element simulations of different shear deformations in non-linear elasticity are presented. We pay particular attention to the Poynting effects in hyperelastic materials, complementing recent theoretical findings by showing these effects manifested by specific models. As the finite element method computes uniform deformations exactly, for simple shear deformation and pure shear stress, the Poynting effect is represented exactly, while for the generalised shear and simple torsion, where the deformation is non-uniform, the solution is approximated efficiently and guaranteed computational bounds on the magnitude of the Poynting effect are obtained. The numerical results further indicate that, for a given elastic material, the same sign effect occurs under different shearing mechanisms, showing the genericity of the Poynting effect under a variety of shearing loads. In order to derive numerical models that exhibit either the positive or the negative Poynting effect, the so-called generalised empirical inequalities, which are less restrictive than the usual empirical inequalities involving material parameters, are assumed. © 2012 Elsevier Ltd.

  9. Theoretical Proof and Empirical Confirmation of a Continuous Labeling Method Using Naturally 13C-Depleted Carbon Dioxide

    Institute of Scientific and Technical Information of China (English)

    Weixin Cheng; Feike A. Dijkstra

    2007-01-01

Continuous isotope labeling and tracing is often needed to study the transformation, movement, and allocation of carbon in plant-soil systems. However, existing labeling methods have numerous limitations. The present study introduces a new continuous labeling method using naturally 13C-depleted CO2. We theoretically proved that a stable level of 13C-CO2 abundance in a labeling chamber can be maintained by controlling the rate of CO2-free air injection and the rate of ambient airflow, coupled with automatic control of CO2 concentration using a CO2 analyzer. The theoretical results were tested and confirmed in a 54-day experiment in a plant growth chamber. This new continuous labeling method avoids the use of radioactive 14C or expensive 13C-enriched CO2 required by existing methods and therefore eliminates issues of radiation safety or unaffordable isotope cost, as well as creating new opportunities for short- or long-term labeling experiments under a controlled environment.
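The mixing logic behind keeping a stable chamber label can be illustrated with a linear two-end-member isotope mass balance; the concentrations and delta values below are hypothetical, and the linear approximation ignores small nonlinearities in delta-notation mixing:

```python
# steady-state isotope mass balance for a labeling chamber (illustrative):
# chamber CO2 is a mixture of ambient CO2 and 13C-depleted tank CO2
def delta_mix(c_ambient, d_ambient, c_tank, d_tank):
    """delta13C of the mixture (per mil), linear mixing approximation:
    concentration-weighted average of the two end members."""
    total = c_ambient + c_tank
    return (c_ambient * d_ambient + c_tank * d_tank) / total

# e.g. 200 ppm residual ambient CO2 (-8 per mil) topped up with
# 200 ppm of naturally 13C-depleted tank CO2 (-35 per mil)
print(round(delta_mix(200, -8.0, 200, -35.0), 1))   # -21.5
```

Holding both the total concentration and the mixing fractions constant, as the controller described in the abstract does, therefore holds the chamber delta13C constant as well.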

  10. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I.

    Science.gov (United States)

    Wilczura-Wachnik, Hanna; Jónsdóttir, Svava Osk

    2003-04-01

A method for calculating interaction parameters, traditionally used in phase-equilibrium computations for low-molecular systems, has been extended to the prediction of solvent activities of aromatic polymer solutions (polystyrene + methylcyclohexane). Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum chemical method AM1, and a method for sampling relevant internal orientations for a pair of molecules developed previously, were used. Interaction energies are determined for three molecular pairs (the solvent and the model molecule, two solvent molecules, and two model molecules) and used to calculate UNIQUAC interaction parameters, a(ij) and a(ji). Using these parameters, the solvent activities of the polystyrene 90,000 amu + methylcyclohexane system, and the total vapor pressures of the methylcyclohexane + ethylbenzene system, were calculated. The latter system was compared to experimental data, giving qualitative agreement. Figure caption: Solvent activities for the methylcyclohexane(1) + polystyrene(2) system at 316 K. Parameters a(ij) (blue line) obtained with the AM1 method; parameters a(ij) (pink line) from VLE data for the ethylbenzene + methylcyclohexane system. The abscissa is the polymer weight fraction defined as y2(x1) = (1 − x1)M2/[x1M1 + (1 − x1)M2], where x1 is the solvent mole fraction and Mi are the molecular weights of the components.

  11. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

Background: The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information, in the form of a comparison chart, on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design: Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion: The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of …

  12. Selecting Measures to Evaluate Complex Sociotechnical Systems: An Empirical Comparison of a Task-based and Constraint-based Method

    Science.gov (United States)

    2013-07-01

… personnel selection, work methods, labour standards, and an individual's motivation to perform work. His work became less relevant as tasks became more … people were employed to do, and was able to show that non-physical factors such as job satisfaction and the psychological states of workers contributed … all threats, flight conditions, consequences of their actions (for example, damaging the aircraft during a "hard" landing), and expressed satisfaction …

  13. Symptom Clusters in Advanced Cancer Patients: An Empirical Comparison of Statistical Methods and the Impact on Quality of Life.

    Science.gov (United States)

    Dong, Skye T; Costa, Daniel S J; Butow, Phyllis N; Lovell, Melanie R; Agar, Meera; Velikova, Galina; Teckle, Paulos; Tong, Allison; Tebbutt, Niall C; Clarke, Stephen J; van der Hoek, Kim; King, Madeleine T; Fayers, Peter M

    2016-01-01

    Symptom clusters in advanced cancer can influence patient outcomes. There is large heterogeneity in the methods used to identify symptom clusters. To investigate the consistency of symptom cluster composition in advanced cancer patients using different statistical methodologies for all patients across five primary cancer sites, and to examine which clusters predict functional status, a global assessment of health and global quality of life. Principal component analysis and exploratory factor analysis (with different rotation and factor selection methods) and hierarchical cluster analysis (with different linkage and similarity measures) were used on a data set of 1562 advanced cancer patients who completed the European Organization for the Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. Four clusters consistently formed for many of the methods and cancer sites: tense-worry-irritable-depressed (emotional cluster), fatigue-pain, nausea-vomiting, and concentration-memory (cognitive cluster). The emotional cluster was a stronger predictor of overall quality of life than the other clusters. Fatigue-pain was a stronger predictor of overall health than the other clusters. The cognitive cluster and fatigue-pain predicted physical functioning, role functioning, and social functioning. The four identified symptom clusters were consistent across statistical methods and cancer types, although there were some noteworthy differences. Statistical derivation of symptom clusters is in need of greater methodological guidance. A psychosocial pathway in the management of symptom clusters may improve quality of life. Biological mechanisms underpinning symptom clusters need to be delineated by future research. A framework for evidence-based screening, assessment, treatment, and follow-up of symptom clusters in advanced cancer is essential. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
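One of the statistical routes mentioned above, hierarchical cluster analysis with a correlation-based similarity measure, can be sketched on synthetic data; the symptom names and two-factor structure are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
n = 300
# synthetic symptom scores driven by two latent factors
emotional = rng.normal(size=n)
somatic = rng.normal(size=n)
data = np.column_stack([
    emotional + 0.3 * rng.normal(size=n),   # "tense"
    emotional + 0.3 * rng.normal(size=n),   # "worry"
    somatic   + 0.3 * rng.normal(size=n),   # "fatigue"
    somatic   + 0.3 * rng.normal(size=n),   # "pain"
])

# cluster the symptoms (columns), using 1 - |correlation| as the distance
corr = np.corrcoef(data, rowvar=False)
dist = 1 - np.abs(corr)
iu = np.triu_indices(4, k=1)                # condensed distance vector
labels = fcluster(linkage(dist[iu], method="average"),
                  t=2, criterion="maxclust")
print(labels)   # tense/worry grouped together, fatigue/pain together
```

Varying the linkage (`average`, `complete`, `ward`) and the similarity measure is exactly the kind of methodological choice whose impact on cluster composition the study evaluates.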

  14. Empirical Phenomenology: A Qualitative Research Approach (The ...

    African Journals Online (AJOL)

    Empirical Phenomenology: A Qualitative Research Approach (The Cologne Seminars) ... and practical application of empirical phenomenology in social research. ... and considers its implications for qualitative methods such as interviewing ...

  15. Site classification for National Strong Motion Observation Network System (NSMONS) stations in China using an empirical H/V spectral ratio method

    Science.gov (United States)

    Ji, Kun; Ren, Yefei; Wen, Ruizhi

    2017-10-01

    Reliable site classification of the stations of the China National Strong Motion Observation Network System (NSMONS) has not yet been assigned because of a lack of borehole data. This study used an empirical horizontal-to-vertical (H/V) spectral ratio (hereafter, HVSR) site classification method to overcome this problem. First, according to their borehole data, stations selected from KiK-net in Japan were individually assigned a site class (CL-I, CL-II, or CL-III), as defined in the Chinese seismic code. Then, the mean HVSR curve for each site class was computed using strong motion recordings captured during the period 1996-2012. These curves were compared with those proposed by Zhao et al. (2006a) for four types of site classes (SC-I, SC-II, SC-III, and SC-IV) defined in the Japanese seismic code (JRA, 1980). It was found that an approximate range of the predominant period Tg could be identified by the predominant peak of the HVSR curve for the CL-I and SC-I sites, CL-II and SC-II sites, and CL-III and SC-III + SC-IV sites. Second, an empirical site classification method was proposed based on comprehensive consideration of the peak period, amplitude, and shape of the HVSR curve. The selected stations from KiK-net were classified using the proposed method. The results showed that the success rates of the proposed method in identifying CL-I, CL-II, and CL-III sites were 63%, 64%, and 58%, respectively. Finally, the HVSRs of 178 NSMONS stations were computed based on recordings from 2007 to 2015, and the sites were classified using the proposed method. The mean HVSR curves were re-calculated for the three site classes and compared with those from KiK-net data. It was found that both the peak period and the amplitude were similar for the mean HVSR curves derived from the NSMONS classification results and the KiK-net borehole data, implying the effectiveness of the proposed method in identifying different site classes. The classification results have good agreement with site classes
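    The HVSR step itself is simple to sketch: form the ratio of the (geometric-mean) horizontal to vertical spectral amplitudes and read the predominant period Tg off the peak. The spectra below are synthetic placeholders, not NSMONS or KiK-net recordings, and the smoothing and record-selection details of the study are omitted.

```python
# Minimal H/V spectral ratio sketch: HVSR(T) = sqrt(NS * EW) / V at each
# period, with Tg taken at the HVSR peak. Spectra are synthetic.
import math

def hvsr_peak(periods, h_ns, h_ew, v):
    """Return (Tg, peak amplitude); horizontal spectrum = geometric mean."""
    ratios = [math.sqrt(ns * ew) / vv for ns, ew, vv in zip(h_ns, h_ew, v)]
    i = max(range(len(ratios)), key=lambda k: ratios[k])
    return periods[i], ratios[i]

periods = [0.1, 0.2, 0.3, 0.4, 0.5]           # s
h_ns = [1.0, 2.0, 4.5, 2.5, 1.2]              # synthetic amplitudes
h_ew = [1.1, 2.2, 4.0, 2.4, 1.0]
v    = [1.0, 1.1, 1.0, 1.1, 1.0]
tg, amp = hvsr_peak(periods, h_ns, h_ew, v)   # peak falls at 0.3 s here
```

In the classification scheme described above, this peak period and amplitude (together with the curve shape) would then be matched against the mean curves of the candidate site classes.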

  16. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    Science.gov (United States)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed that includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total runoff volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
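    As background, the two SCS-CN quantities that CN4GA uses to constrain Green-Ampt, the initial abstraction Ia and the event runoff volume Q, follow from the curve number alone. A minimal sketch using the standard SCS-CN relations, with illustrative inputs:

```python
# Standard SCS-CN event-runoff relations (SI units, mm):
#   S  = 25400/CN - 254        potential maximum retention
#   Ia = 0.2 * S               initial abstraction
#   Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0

def scs_cn_runoff(p_mm, cn):
    """Return (S, Ia, Q) for event rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    q = 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)
    return s, ia, q

s, ia, q = scs_cn_runoff(p_mm=60.0, cn=80)
# S = 63.5 mm, Ia = 12.7 mm, Q = (60 - 12.7)^2 / (60 - 12.7 + 63.5) ≈ 20.2 mm
```

In CN4GA, Ia and the total volume P - Q from these relations are the event-scale targets against which the Green-Ampt hydraulic conductivity is calibrated.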

  17. Pharmacological Classification and Activity Evaluation of Furan and Thiophene Amide Derivatives Applying Semi-Empirical ab initio Molecular Modeling Methods

    Directory of Open Access Journals (Sweden)

    Leszek Bober

    2012-05-01

    Pharmacological and physicochemical classification of the furan and thiophene amide derivatives by multiple regression analysis and partial least squares (PLS), based on semi-empirical ab initio molecular modeling studies and high-performance liquid chromatography (HPLC) retention data, is proposed. Structural parameters obtained from the PCM (Polarizable Continuum Model) method and literature values of biological activity (antiproliferative activity against A431 cells, expressed as LD50) of the examined furan and thiophene derivatives were used to search for relationships. It was tested how variable molecular modeling conditions considered together, with or without HPLC retention data, allow evaluation of the structural recognition of furan and thiophene derivatives with respect to their pharmacological properties.

  18. The "invention" of the working class as a discursive practice and the genesis of the empiric method of social sciences in France (1830-48

    Directory of Open Access Journals (Sweden)

    Federico Tomasello

    2016-12-01

    The essay explores some of the processes through which the ‘working class’ emerged both as a collective subjectivity and as a field of social science inquiry and public policies in 19th-century France. Starting from the 1831 Canuts revolt, widely recognized as the stepping stone of the European workers’ movement, the first part retraces the process of the ‘making’ of a social and political subjectivity by stressing the relevance of its linguistic and discursive dimension. The second part examines the emergence of the empiric method of the modern social sciences through new strategies of inquiry into urban misery, which progressively focused on the ‘working class’ and on labour conditions as a field of knowledge, rights, and governmental practices.

  19. Semi-empirical equivalent field method for dose determination in midline block fields for cobalt - 60 beam

    International Nuclear Information System (INIS)

    Tagoe, S.N.A.; Nani, E.K.; Yarney, J.; Edusa, C.; Quayson-Sackey, K.; Nyamadi, K.M.; Sasu, E.

    2012-01-01

    For teletherapy treatment time calculations, midline block fields are resolved into two fields, and, neglecting scattering from the other field, the effective equivalent square field size of the midline block field is assumed to be that of the resultant field. Such an approach underestimates the dose and may be detrimental to achieving the recommended uncertainty of ± 5% for patient radiation dose delivery. By comparison, the deviations between effective equivalent square field sizes obtained by calculation and by experiment were within 13.2% for the cobalt-60 beams of a GWGP80 cobalt-60 teletherapy unit. Therefore, a modified method incorporating the scatter contributions was adopted to estimate the effective equivalent square field size for midline block fields. The measured outputs of radiation beams with the block were compared with outputs of square fields without the blocks (only the block tray) at depths of 5 and 10 cm for the teletherapy machine employing the isocentric technique, and the accuracy was within ± 3% for the cobalt-60 beams. (au)

  20. Discourse Analysis of the Documentary Method as "Key" to Self-Referential Communication Systems? Theoretic-Methodological Basics and Empirical Vignettes

    Directory of Open Access Journals (Sweden)

    Gian-Claudio Gentile

    2010-09-01

    Niklas LUHMANN is well known for his deliberate departure from the classical focus on studying individual actions, directing attention instead to the actors' relatedness through so-called (autopoietic) communication systems. While this shift yields a new perspective of observation, the focus on autopoietic systems is simultaneously the biggest methodological obstacle to its use in the social and management sciences. The present contribution considers the above shift on a theoretical level and with a specific qualitative method. It argues for a deeper understanding of systemic sense making and its enactment in a systematic and comprehensible way. Central to this approach is its focus on groups. Using group discussions as the method of data collection, and the "documentary method" of Ralf BOHNSACK (2003) as the method of data analysis, the article describes a methodologically grounded way to record the self-referential systems proposed by LUHMANN's system theory. The theoretical considerations of the paper are illustrated by empirical vignettes derived from a research project conducted in Switzerland concerning the social responsibility of business. URN: urn:nbn:de:0114-fqs1003156

  1. Non-linear multivariate and multiscale monitoring and signal denoising strategy using Kernel Principal Component Analysis combined with Ensemble Empirical Mode Decomposition method

    Science.gov (United States)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2011-10-01

    The article presents a novel non-linear multivariate and multiscale statistical process monitoring and signal denoising method which combines the strengths of the Kernel Principal Component Analysis (KPCA) non-linear multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD) to handle multiscale system dynamics. The proposed method, which enables us to cope with complex, even severely non-linear, systems with a wide dynamic range, was named EEMD-based multiscale KPCA (EEMD-MSKPCA). The method is quite general in nature and could be used in different areas for various tasks, even without any really deep understanding of the nature of the system under consideration. Its efficiency was first demonstrated by an illustrative example, after which the applicability to the task of bearing fault detection, diagnosis and signal denoising was tested on simulated as well as actual vibration and acoustic emission (AE) signals measured on a purpose-built large-size low-speed bearing test stand. The positive results obtained indicate that the proposed EEMD-MSKPCA method provides a promising tool for tackling non-linear multiscale data which present a convolved picture of many events occupying different regions in the time-frequency plane.

  2. Reading and proclaiming the Advent call of John the Baptist: An empirical enquiry employing the SIFT method

    Directory of Open Access Journals (Sweden)

    Leslie J. Francis

    2014-10-01

    Drawing on Jungian psychological type theory, the SIFT method of biblical hermeneutics and liturgical preaching suggests that the reading and proclaiming of scripture reflect the psychological type preferences of the reader and preacher. This thesis is examined among a sample of clergy (training incumbents and curates) serving in one Diocese of the Church of England (N = 22). After completing the Myers-Briggs Type Indicator, the clergy worked in groups (designed to cluster individuals who shared similar psychological type characteristics) to reflect on and to discuss the Advent call of John the Baptist. The Marcan account was chosen for the exercise exploring the perceiving functions (sensing and intuition) in light of its rich narrative. The Lucan account was chosen for the exercise exploring the judging functions (thinking and feeling) in light of the challenges offered by the passage. In accordance with the theory, the data confirmed characteristic differences between the approaches of sensing types and intuitive types, and between the approaches of thinking types and feeling types.

  3. Association of Empirically Derived Dietary Patterns with Cardiovascular Risk Factors: A Comparison of PCA and RRR Methods.

    Directory of Open Access Journals (Sweden)

    Nicolas Sauvageot

    Principal component analysis (PCA) is used to determine the dietary behaviors of a population, whereas reduced rank regression (RRR) is used to construct disease-related dietary patterns. This study aimed to compare both types of dietary pattern (DP) and their associations with cardiovascular risk factors (CVRF). Data were derived from the cross-sectional NESCAV (Nutrition, Environment and Cardiovascular Health) study, aiming to describe the cardiovascular health of the Greater Region's population (Grand Duchy of Luxembourg, Wallonia (Belgium), and Lorraine (France)). 2298 individuals were included in this study, and dietary intake was assessed using a 134-item food frequency questionnaire. We found that CVRF-related patterns also reflect eating behaviours of the population. Comparing concordant food groups between both dietary pattern methods, a diet high in fruits, oleaginous and dried fruits, vegetables, olive oil, fats rich in omega-6, and tea, and low in fried foods, lean and fatty meat, processed meat, ready meals, soft drinks, and beer was associated with a lower prevalence of CVRF. Conversely, a pattern characterized by high intakes of fried foods, meat, offal, beer, wine, aperitifs and spirits, and low intakes of cereals, sugar and sweets, and soft drinks was associated with a higher prevalence of CVRF. In sum, we found that the "Prudent" and "Animal protein and alcohol" patterns were both associated with CVRF and behaviourally meaningful. Moreover, the relationships of those dietary patterns with lifestyle characteristics support the theory that food choices are part of a larger pattern of healthy lifestyle.

  4. Sequence analysis of annually normalized citation counts: an empirical analysis based on the characteristic scores and scales (CSS) method.

    Science.gov (United States)

    Bornmann, Lutz; Ye, Adam Y; Ye, Fred Y

    2017-01-01

    In bibliometrics, only a few publications have focused on the citation histories of publications, where the citations for each citing year are assessed. In this study, therefore, annual categories of field- and time-normalized citation scores (based on the characteristic scores and scales method: 0 = poorly cited, 1 = fairly cited, 2 = remarkably cited, and 3 = outstandingly cited) are used to study the citation histories of papers. As our dataset, we used all articles published in 2000 and their annual citation scores until 2015. We generated annual sequences of citation scores (e.g., [Formula: see text]) and compared the sequences of annual citation scores of six broader fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, social sciences, and humanities). In agreement with previous studies, our results demonstrate that sequences with poorly cited (0) and fairly cited (1) elements dominate the publication set; sequences with remarkably cited (2) and outstandingly cited (3) periods are rare. The highest percentages of constantly poorly cited papers can be found in the social sciences; the lowest percentages are in the agricultural sciences and humanities. The largest group of papers with remarkably cited (2) and/or outstandingly cited (3) periods shows an increasing impact over the citing years with the following orders of sequences: [Formula: see text] (6.01%), which is followed by [Formula: see text] (1.62%). Only 0.11% of the papers (n = 909) are constantly on the outstandingly cited level.
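    The characteristic scores and scales (CSS) classification the abstract relies on can be sketched directly: each threshold is the mean of the citation counts at or above the previous threshold, so three cuts yield the four classes 0-3. The counts below are invented, not the 2000-2015 dataset.

```python
# Characteristic scores and scales (CSS): iterated conditional means as
# class thresholds. Citation counts below are invented toy data.

def css_classes(counts):
    """Return (thresholds, class per paper) with classes 0..3."""
    thresholds = []
    pool = list(counts)
    for _ in range(3):                      # three cuts -> four classes
        m = sum(pool) / len(pool)
        thresholds.append(m)
        pool = [c for c in pool if c >= m]
    # class = number of thresholds the count reaches
    classes = [sum(c >= t for t in thresholds) for c in counts]
    return thresholds, classes

counts = [0, 1, 1, 2, 3, 5, 8, 20]
thresholds, classes = css_classes(counts)
# thresholds: 5.0 (mean of all), 11.0 (mean of counts >= 5), 20.0
```

Note that with such a skewed toy distribution most papers land in class 0, echoing the abstract's observation that poorly and fairly cited elements dominate.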

  5. Comparison and analysis of empirical equations for soil heat flux for different cropping systems and irrigation methods

    Science.gov (United States)

    Irmak, A.; Singh, Ramesh K.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    We evaluated the performance of four models for estimating soil heat flux density (G) in maize (Zea mays L.) and soybean (Glycine max L.) fields under different irrigation methods (center-pivot irrigated fields at Mead, Nebraska, and subsurface drip irrigated field at Clay Center, Nebraska) and rainfed conditions at Mead. The model estimates were compared against measurements made during growing seasons of 2003, 2004, and 2005 at Mead and during 2005, 2006, and 2007 at Clay Center. We observed a strong relationship between the G and net radiation (Rn) ratio (G/Rn) and the normalized difference vegetation index (NDVI). When a significant portion of the ground was bare soil, G/Rn ranged from 0.15 to 0.30 and decreased with increasing NDVI. In contrast to the NDVI progression, the G/Rn ratio decreased with crop growth and development. The G/Rn ratio for subsurface drip irrigated crops was smaller than for the center-pivot irrigated crops. The seasonal average G was 13.1%, 15.2%, 10.9%, and 12.8% of Rn for irrigated maize, rainfed maize, irrigated soybean, and rainfed soybean, respectively. Statistical analyses of the performance of the four models showed a wide range of variation in G estimation. The root mean square error (RMSE) of predictions ranged from 15 to 81.3 W m-2. Based on the wide range of RMSE, it is recommended that local calibration of the models should be carried out for remote estimation of soil heat flux.

  6. Extraction Method of Driver’s Mental Component Based on Empirical Mode Decomposition and Approximate Entropy Statistic Characteristic in Vehicle Running State

    Directory of Open Access Journals (Sweden)

    Shuan-Feng Zhao

    2017-01-01

    In driver fatigue monitoring technology, the essence is to capture and analyze driver behavior information, such as eye, face, heart, and EEG activity during driving. However, ECG and EEG monitoring are limited by the electrodes that must be installed and are not commercially practical. The most common fatigue detection method is the analysis of driver behavior, that is, determining whether the driver is tired by recording and analyzing the behavior characteristics of the steering wheel and brake. The driver usually adjusts his or her actions based on the observed road conditions. The road path information is thus directly contained in the vehicle driving state; if the driver's driving behavior is to be judged from vehicle driving state information, the first task is to remove the road information from the vehicle driving state data. Therefore, this paper proposes an effective intrinsic mode function selection method based on the approximate entropy of empirical mode decomposition, considering the characteristics of the frequency distribution of road and vehicle information and the unsteady, nonlinear characteristics of the driver closed-loop driving system in vehicle driving state data. The objective is to extract the effective component of the driving behavior information and to weaken the road information component. Finally, the effectiveness of the proposed method is verified by simulated driving experiments.

  7. The Empirical Verification of an Assignment of Items to Subtests : The Oblique Multiple Group Method Versus the Confirmatory Common Factor Method

    NARCIS (Netherlands)

    Stuive, Ilse; Kiers, Henk A.L.; Timmerman, Marieke E.; ten Berge, Jos M.F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of

  8. A non-destructive surface burn detection method for ferrous metals based on acoustic emission and ensemble empirical mode decomposition: from laser simulation to grinding process

    International Nuclear Information System (INIS)

    Yang, Zhensheng; Wu, Haixi; Yu, Zhonghua; Huang, Youfang

    2014-01-01

    Grinding is usually done in the final finishing of a component. As a result, the surface quality of finished products, e.g., surface roughness, hardness and residual stress, are affected by the grinding procedure. However, the lack of methods for monitoring of grinding makes it difficult to control the quality of the process. This paper focuses on the monitoring approaches for the surface burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) was proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band ranging from 150 to 400 kHz was believed to be related to surface burn formation in the laser irradiation process. The burn-sensitive frequency band was further used to instruct feature extraction during the grinding process based on EEMD. Linear classification results evidenced a distinct margin between samples with and without surface burn. This work provides a practical means for grinding burn detection. (paper)

  9. Empirical Methods for Detecting Regional Trends and Other Spatial Expressions in Antrim Shale Gas Productivity, with Implications for Improving Resource Projections Using Local Nonparametric Estimation Techniques

    Science.gov (United States)

    Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.

    2012-01-01

    The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. © 2011 International Association for Mathematical Geology (outside the USA).

  10. Accounting for center in the Early External Cephalic Version trials: an empirical comparison of statistical methods to adjust for center in a multicenter trial with binary outcomes.

    Science.gov (United States)

    Reitsma, Angela; Chu, Rong; Thorpe, Julia; McDonald, Sarah; Thabane, Lehana; Hutton, Eileen

    2014-09-26

    Clustering of outcomes at centers involved in multicenter trials is a type of center effect. The Consolidated Standards of Reporting Trials Statement recommends that multicenter randomized controlled trials (RCTs) should account for center effects in their analysis; however, most do not. The Early External Cephalic Version (EECV) trials published in 2003 and 2011 stratified by center at randomization, but did not account for center in the analyses, and due to the nature of the intervention and the number of centers, may have been prone to center effects. Using data from the EECV trials, we undertook an empirical study to compare various statistical approaches to accounting for center effect while estimating the impact of external cephalic version timing (early or delayed) on the outcomes of cesarean section, preterm birth, and non-cephalic presentation at the time of birth. The data from the EECV pilot trial and the EECV2 trial were merged into one dataset. Fisher's exact method was used to test the overall effect of external cephalic version timing unadjusted for center effects. Seven statistical models that accounted for center effects were applied to the data: i) the Mantel-Haenszel test; ii) logistic regression with fixed center effect and fixed treatment effect; iii) center-size-weighted and iv) unweighted logistic regression with fixed center effect and fixed treatment-by-center interaction; v) logistic regression with random center effect and fixed treatment effect; vi) logistic regression with random center effect and random treatment-by-center interaction; and vii) generalized estimating equations. For each of the three outcomes of interest, the approaches to accounting for center effect did not alter the overall findings of the trial. The results were similar for the majority of the methods used to adjust for center, illustrating the robustness of the findings. Despite literature that suggests center effect can change the estimate of effect in
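    As a sketch of the first approach in the abstract's list, the Mantel-Haenszel common odds ratio pools 2x2 tables (events/non-events by arm) over centers. The per-center tables below are invented, not the EECV data.

```python
# Mantel-Haenszel common odds ratio across strata (centers):
#   OR_MH = sum_i(a_i * d_i / n_i) / sum_i(b_i * c_i / n_i)
# where each center contributes a 2x2 table (a, b, c, d) and n = a+b+c+d.

def mantel_haenszel_or(tables):
    """tables: list of (a, b, c, d) = (treatment events, treatment
    non-events, control events, control non-events) per center."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two invented centers: treatment arm has fewer events in both.
tables = [(10, 40, 20, 30), (5, 45, 8, 42)]
or_mh = mantel_haenszel_or(tables)   # pooled OR < 1 favors treatment here
```

Pooling within centers this way keeps each center's treatment/control contrast intact, which is exactly the protection against center effects the trial analysis is after.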

  11. An empirical method for determination of elemental components of radiated powers and impurity concentrations from VUV and XUV spectral features in tokamak plasmas

    International Nuclear Information System (INIS)

    Lawson, K.; Peacock, N.; Gianella, R.

    1998-12-01

    The derivation of elemental components of radiated powers and impurity concentrations in bulk tokamak plasmas is complex, often requiring a full description of the impurity transport. A novel, empirical method, the Line Intensity Normalization Technique (LINT), has been developed on the JET (Joint European Torus) tokamak to provide routine information about the impurity content of the plasma and elemental components of radiated power (P_rad). The technique employs a few VUV and XUV resonance line intensities to represent the intrinsic impurity elements in the plasma. From a data base comprising these spectral features, the total bolometric measurement of the radiated power and the Z_eff measured by visible spectroscopy, separate elemental components of P_rad and Z_eff are derived. The method, which converts local spectroscopic signals into global plasma parameters, has the advantage of simplicity, allowing large numbers of pulses to be processed, and, in many operational modes of JET, is found to be both reliable and accurate. It relies on normalizing the line intensities to the absolute calibration of the bolometers and visible spectrometers, using coefficients independent of density and temperature. Accuracies of the order of ± 15% can be achieved for the elemental P_rad components of the most significant impurities and the impurity concentrations can be determined to within ±30%. Trace elements can be monitored, although with reduced accuracy. The present paper deals with limiter discharges, which have been the main application to date. As a check on the technique and to demonstrate the value of the LINT results, they have been applied to the transport modelling of intrinsic impurities carried out with the SANCO transport code, which uses atomic data from ADAS. The simulations provide independent confirmation of the concentrations empirically derived using the LINT technique. For this analysis, the simple case of the L-mode regime is considered, the chosen

  12. Empirical Research In Engineering Design

    DEFF Research Database (Denmark)

    Ahmed, Saeema

    2007-01-01

    Increasingly engineering design research involves the use of empirical studies that are conducted within an industrial environment [Ahmed, 2001; Court 1995; Hales 1987]. Research into the use of information by designers or understanding how engineers build up experience are examples of research… of research issues. This paper describes case studies of empirical research carried out within industry in engineering design, focusing upon information, knowledge and experience in engineering design. The paper describes the research methods employed, their suitability for the particular research aims…

  13. Choosing the correct empirical antibiotic for urinary tract infection in pediatric: Surveillance of antimicrobial susceptibility pattern of Escherichia coli by E-Test method.

    Science.gov (United States)

    Sedighi, Iraj; Solgi, Abbas; Amanati, Ali; Alikhani, Mohammad Yousef

    2014-12-01

    Urinary tract infections (UTIs) are among the most common bacterial diseases worldwide. We investigated the antibiotic susceptibility patterns of Escherichia coli (E. coli) strains isolated from pediatric patients with community-acquired urinary tract infection (UTI) to provide clinical guidance for choosing the right empirical antibiotic in these patients. In this cross-sectional study, 100 urine specimens positive for E. coli were investigated for antibiotic susceptibility patterns. Susceptibility to co-trimoxazole (25 μg), amikacin (30 μg), ceftriaxone (30 μg), nalidixic acid (30 μg), cefixime (5 μg), and nitrofurantoin (300 μg) was tested with the disk diffusion agar method, and MIC was determined with the E-test. The mean age of patients was 38 months. Girls made up a greater proportion than boys (74% versus 26%). In the disk diffusion method, 26% of the isolates were susceptible to co-trimoxazole. Susceptibility to amikacin, ceftriaxone, nitrofurantoin, nalidixic acid, and cefixime was 94%, 66%, 97%, 62%, and 52%, respectively. By the E-test method and according to CLSI criteria, susceptibility to co-trimoxazole, amikacin, ceftriaxone, and nalidixic acid was 37%, 97%, 67%, and 50%, respectively. The highest percentage of agreement between the disk diffusion and E-test methods was found for amikacin (96%) and the lowest for co-trimoxazole (89%). Treatment failure, prolonged or repeated hospitalization, increased costs of care, and increased mortality are consequences of bacterial resistance in UTIs. Misuse of antibiotics in a given geographic location directly affects the local antibiotic resistance pattern. In the treatment of UTI, selection of antimicrobial agents should be informed by bacterial susceptibility testing surveillance. According to our results, amikacin as an injectable drug and nitrofurantoin as an oral agent could be used as drugs of choice in our region for children with UTIs.

  14. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    Science.gov (United States)

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One active research area is finding the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
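    The combination arithmetic underlying the signal selection can be sketched as follows: an integer combination (i, j, k) of the L1/L2/L5 carriers has frequency i*f1 + j*f2 + k*f5 and wavelength c/f, so longer-wavelength combinations are easier to fix. The code uses the standard GPS carrier frequencies; the specific combinations shown are common textbook examples, not necessarily the ones this paper ends up selecting.

```python
# Wavelength of an integer carrier-phase combination (i, j, k) on
# GPS L1/L2/L5. Standard GPS carrier frequencies; combinations shown
# are the usual wide-lane and extra-wide-lane examples.

C = 299792458.0                                  # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6     # GPS carriers, Hz

def combined_wavelength(i, j, k):
    """Wavelength (m) of the combination i*L1 + j*L2 + k*L5."""
    f = i * F1 + j * F2 + k * F5
    return C / f

wl_ewl = combined_wavelength(0, 1, -1)   # extra-wide-lane, ~5.86 m
wl_wl  = combined_wavelength(1, -1, 0)   # wide-lane, ~0.86 m
```

The roughly 5.86 m extra-wide-lane wavelength is why TCAR fixes that combination first; noise and ionospheric amplification of each candidate combination are what the theoretical and empirical screening then trades off.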

  15. Measurement Method and Empirical Research on the Sustainable Development Capability of a Regional Industrial System Based on Ecological Niche Theory in China

    Directory of Open Access Journals (Sweden)

    Hang Yin

    2014-11-01

    Full Text Available From the analytical view of a recycling economy, a regional system achieves the goal of sustainable development by improving resource utilization efficiency, reducing energy consumption and improving water and air quality. The regional economic system’s potential for sustainable development is significantly influenced by regional industrial operational efficiency, which measures the ecological, environmental, energy and resource costs accompanying economic growth. It is vital for national and regional governments to accelerate harmonious development between the output of industrial departments, the consumption of energy and the pollutants discharged. Under the guidance of ecological niche theory and recycling economy theory, a theoretical analysis of the efficiency relations between regional industrial growth, energy consumption, resource utilization and environmental carrying capacity has been carried out from horizontal and vertical perspectives. Industrial operational efficiency, and the sensitivity coefficient in response to a change in each input and output index, can be calculated, and the critical factors that restrict sustainable development capability can be identified, so that quantitative references can be provided for administrative decisions. As for the measurement method, a super-efficiency mixed data envelopment analysis model, which removes the self-limiting condition and covers indexes both with and without the meeting-cone characteristic, has been established and applied. Statistics from 1993 to 2012 in China are collected to carry out the empirical research. On the basis of further analysis, an adjustment strategy can be constituted to improve the capability for sustainable development.
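
    The super-efficiency mixed DEA model above requires linear programming; as a drastically simplified illustration of the underlying idea, the single-input, single-output CCR efficiency reduces to each unit's output/input ratio scaled by the best observed ratio. The regions and numbers below are hypothetical, and this sketch is not the paper's full model.

```python
# Simplified CCR DEA efficiency for one input and one output: each decision
# unit's output/input ratio divided by the best ratio in the sample.

def ccr_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

energy_use   = [120.0, 95.0, 150.0]   # hypothetical regional energy input
industrial_y = [300.0, 285.0, 330.0]  # hypothetical industrial output

for region, eff in zip("ABC", ccr_efficiency(energy_use, industrial_y)):
    print(region, round(eff, 3))      # B is the efficient frontier unit here
```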

  16. A novel method to produce nonlinear empirical physical formulas for experimental nonlinear electro-optical responses of doped nematic liquid crystals: Feedforward neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Nihat, E-mail: nyildiz@cumhuriyet.edu.t [Cumhuriyet University, Faculty of Science and Literature, Department of Physics, 58140 Sivas (Turkey); San, Sait Eren; Okutan, Mustafa [Department of Physics, Gebze Institute of Technology, P.O. Box 141, Gebze 41400, Kocaeli (Turkey); Kaya, Hueseyin [Cumhuriyet University, Faculty of Science and Literature, Department of Physics, 58140 Sivas (Turkey)

    2010-04-15

    Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely, the layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained excellent LFNN approximation functions as our desired EPFs for the above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured responses and predicted new (yet-to-be-measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage, all taken from our previous experimental work. We conclude that, in general, LFNNs can be applied to construct various types of EPFs for the corresponding nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.
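
    A minimal layered feedforward network fitting a nonlinear response curve illustrates the spirit of the LFNN-based EPF construction. The "response" here is synthetic (a saturating nonlinearity standing in for response versus bias voltage), not the authors' measured diffraction or dielectric data.

```python
# One-hidden-layer feedforward network trained by full-batch gradient descent
# to fit a synthetic nonlinear response curve (the "EPF" in this sketch).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64).reshape(-1, 1)   # normalized input (synthetic)
y = np.tanh(4.0 * x - 2.0)                     # synthetic nonlinear response

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)   # 8 tanh hidden units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # linear output
lr = 0.1

for _ in range(8000):
    h = np.tanh(x @ W1 + b1)                   # forward pass
    pred = h @ W2 + b2
    err = pred - y                             # mean-squared-error gradient
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)           # backpropagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"fit MSE: {mse:.4f}")                   # the network tracks the curve closely
```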

  17. A novel method to produce nonlinear empirical physical formulas for experimental nonlinear electro-optical responses of doped nematic liquid crystals: Feedforward neural network approach

    International Nuclear Information System (INIS)

    Yildiz, Nihat; San, Sait Eren; Okutan, Mustafa; Kaya, Hueseyin

    2010-01-01

    Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely, the layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained excellent LFNN approximation functions as our desired EPFs for the above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured responses and predicted new (yet-to-be-measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage, all taken from our previous experimental work. We conclude that, in general, LFNNs can be applied to construct various types of EPFs for the corresponding nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.

  18. An Empirical Fitting Method to Type Ia Supernova Light Curves. III. A Three-parameter Relationship: Peak Magnitude, Rise Time, and Photospheric Velocity

    Science.gov (United States)

    Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.

    2018-05-01

    We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (z > 0.005). SNe Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain a significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s⁻¹ of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
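
    The fireball scaling behind the three-parameter relation is compact: under the two assumptions above, blackbody luminosity gives L ∝ R²T⁴ with T fixed at peak, and R ≈ v·t_rise, so M_peak = const − 5 log₁₀(v·t_rise). The constant and the reference values in the sketch below are illustrative, not the paper's fitted values.

```python
# Expanding-fireball scaling: with a common photospheric temperature at peak,
# L ∝ R^2 ∝ (v * t_rise)^2, i.e. M_peak = const - 5 log10(v * t_rise).
import math

def relative_peak_mag(v_kms, t_rise_days, v_ref=11000.0, t_ref=17.0):
    """Peak-magnitude offset relative to a reference SN with (v_ref, t_ref)."""
    return -5.0 * math.log10((v_kms * t_rise_days) / (v_ref * t_ref))

# A faster photosphere or a longer rise implies a brighter (more negative) peak.
print(round(relative_peak_mag(12000.0, 18.0), 3))  # -0.313
```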

  19. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering.

    Science.gov (United States)

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-05-31

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed, based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length leads to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length leads to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency IMF component, long-window TFPF is employed for the high-frequency IMF component, and the noise component of the IMFs is discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments are carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
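
    The sample entropy (SE) score used to classify IMFs in the SEEMD step can be sketched in plain Python: low SE marks regular, signal-dominated IMFs, while high SE marks noise-like ones. The parameters m and r below follow common defaults for sample entropy, and the input series are synthetic, not gyroscope data.

```python
# Sample entropy: -ln(A/B), where B counts template matches of length m and
# A counts matches of length m+1, under the Chebyshev distance and tolerance r.
import math
import random

def sample_entropy(series, m=2, r=None):
    if r is None:                              # common default: r = 0.2 * std
        mean = sum(series) / len(series)
        std = (sum((v - mean) ** 2 for v in series) / len(series)) ** 0.5
        r = 0.2 * std
    def count_matches(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):   # exclude self-matches
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

regular = [math.sin(0.4 * k) for k in range(200)]    # regular, signal-like
random.seed(1)
noisy = [random.gauss(0, 1) for _ in range(200)]     # noise-like
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```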

  20. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    Directory of Open Access Journals (Sweden)

    Chong Shen

    2016-05-01

    Full Text Available The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed, based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length leads to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length leads to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency IMF component, long-window TFPF is employed for the high-frequency IMF component, and the noise component of the IMFs is discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments are carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.

  1. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers

    Directory of Open Access Journals (Sweden)

    Stochl Jan

    2012-06-01

    Full Text Available Abstract Background Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than, for example, the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12-item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions After an initial analysis example in which we select items by phrasing (six positively versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12) – when binary scored – were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech’s “well-being” and “distress” clinical scales). An illustration of ordinal item analysis

  2. The External Performance Appraisal of China Energy Regulation: An Empirical Study Using a TOPSIS Method Based on Entropy Weight and Mahalanobis Distance.

    Science.gov (United States)

    Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao

    2018-01-30

    In China's industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing the negative externality, which is an important means for realizing the sustainable development of an economic society. The study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (briefly referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999~2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and therefore E-M-TOPSIS is well suited to the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy safety performances) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a large amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the fluctuation of external economic performance is more sensitive to energy regulation.
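
    The entropy-weight step and the TOPSIS closeness degree can be sketched compactly. Note that the paper's E-M-TOPSIS replaces the Euclidean distance with the Mahalanobis distance to handle correlated indexes; the Euclidean variant below keeps the sketch short, and the data matrix (rows = years, columns = benefit-type performance indexes) is hypothetical.

```python
# Entropy weighting plus classic TOPSIS closeness (Euclidean-distance variant).
import math

X = [[0.6, 0.8, 0.7],
     [0.9, 0.5, 0.6],
     [0.7, 0.9, 0.8]]          # hypothetical decision matrix, benefit indexes

def entropy_weights(X):
    n = len(X)
    weights = []
    for col in zip(*X):
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1 - e)                  # divergence degree of the index
    total = sum(weights)
    return [w / total for w in weights]

def topsis_closeness(X, w):
    V = [[w[j] * row[j] for j in range(len(w))] for row in X]
    ideal = [max(col) for col in zip(*V)]      # positive ideal (benefit indexes)
    anti  = [min(col) for col in zip(*V)]      # negative ideal
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in V]

w = entropy_weights(X)
print([round(s, 3) for s in topsis_closeness(X, w)])
```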

  3. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers.

    Science.gov (United States)

    Stochl, Jan; Jones, Peter B; Croudace, Tim J

    2012-06-11

    Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12)--when binary scored--were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental
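
    The core statistic in a Mokken scalability analysis is Loevinger's coefficient H, with H ≥ 0.3 the conventional lower bound for a scalable item set. A sketch for binary items follows; the tiny response matrix is made up for illustration, not taken from the GHQ-12 data.

```python
# Loevinger's H for binary items: 1 minus the ratio of observed to expected
# Guttman errors, summed over all item pairs (easy item ordered before hard).

def loevinger_h(data):
    """data: list of respondent rows, each a list of 0/1 item scores."""
    n, k = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(k)]   # item popularity
    observed = expected = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            # order the pair so 'easy' is the more popular item
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # a Guttman error: positive on the hard item, negative on the easy one
            observed += sum(1 for row in data if row[easy] == 0 and row[hard] == 1)
            expected += n * (1 - p[easy]) * p[hard]
    return 1.0 - observed / expected

responses = [
    [1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 1, 0],
    [0, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0],
]
print(round(loevinger_h(responses), 3))  # 0.6
```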

  4. Empirical Specification of Utility Functions.

    Science.gov (United States)

    Mellenbergh, Gideon J.

    Decision theory can be applied to four types of decision situations in education and psychology: (1) selection; (2) placement; (3) classification; and (4) mastery. For the application of the theory, a utility function must be specified. Usually the utility function is chosen on a priori grounds. In this paper methods for the empirical assessment…

  5. The Logic of the Method of Agent-Based Simulation in the Social Sciences: Empirical and Intentional Adequacy of Computer Programs

    OpenAIRE

    Nuno David; Jaime Simão Sichman; Helder Coelho

    2005-01-01

    WOS:000235217900009 (Web of Science accession number) The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to conciliate the formal and empirical logic of program verification in computer science with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional pers...

  6. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  7. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  8. Empirical training for conditional random fields

    NARCIS (Netherlands)

    Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas

    2013-01-01

    In this paper (Zhu et al., 2013), we present a practically scalable training method for CRFs called Empirical Training (EP). We show that the standard training with unregularized log likelihood can have many maximum likelihood estimations (MLEs). Empirical training has a unique closed form MLE.

  9. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  10. Life Writing After Empire

    DEFF Research Database (Denmark)

    A watershed moment of the twentieth century, the end of empire saw upheavals to global power structures and national identities. However, decolonisation profoundly affected individual subjectivities too. Life Writing After Empire examines how people around the globe have made sense of the post...... in order to understand how individual life writing reflects broader societal changes. From far-flung corners of the former British Empire, people have turned to life writing to manage painful or nostalgic memories, as well as to think about the past and future of the nation anew through the personal...

  11. Theological reflections on empire

    Directory of Open Access Journals (Sweden)

    Allan A. Boesak

    2009-11-01

    Full Text Available Since the meeting of the World Alliance of Reformed Churches in Accra, Ghana (2004), and the adoption of the Accra Declaration, a debate has been raging in the churches about globalisation, socio-economic justice, ecological responsibility, political and cultural domination and globalised war. Central to this debate is the concept of empire and the way the United States is increasingly becoming its embodiment. Is the United States a global empire? This article argues that the United States has indeed become the expression of a modern empire and that this reality has considerable consequences, not just for global economics and politics but for theological reflection as well.

  12. Empirical techniques in finance

    CERN Document Server

    Bhar, Ramaprasad

    2005-01-01

    This book offers the opportunity to study and experience advanced empirical techniques in finance and in general financial economics. It is not only suitable for students with an interest in the field, it is also highly recommended for academic researchers as well as researchers in industry. The book focuses on the contemporary empirical techniques used in the analysis of financial markets and how these are implemented using actual market data. With an emphasis on implementation, this book helps focus on strategies for rigorously combining finance theory and modeling technology to extend extant considerations in the literature. The main aim of this book is to equip readers with an array of tools and techniques that will allow them to explore financial market problems with a fresh perspective. In this sense it is not another volume in econometrics. Of course, the traditional econometric methods are still valid and important; the contents of this book will bring in other related modeling topics tha...

  13. Empirical Evidence from Kenya

    African Journals Online (AJOL)

    FIRST LADY

    2011-01-18

    Jan 18, 2011 ... Empirical results reveal that consumption of sugar in. Kenya varies ... experiences in trade in different regions of the world. Some studies ... To assess the relationship between domestic sugar retail prices and sugar sales in ...

  14. Empirical Benchmarks of Hidden Bias in Educational Research: Implication for Assessing How well Propensity Score Methods Approximate Experiments and Conducting Sensitivity Analysis

    Science.gov (United States)

    Dong, Nianbo; Lipsey, Mark

    2014-01-01

    When randomized control trials (RCTs) are not feasible, researchers seek other methods to make causal inferences, e.g., propensity score methods. One of the underlying assumptions for propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…

  15. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  16. A comparison of the performance of a fundamental parameter method for analysis of total reflection X-ray fluorescence spectra and determination of trace elements, versus an empirical quantification procedure

    Science.gov (United States)

    Węgrzynek, Dariusz; Hołyńska, Barbara; Ostachowicz, Beata

    1998-01-01

    The performance of two different quantification methods — namely, the commonly used empirical quantification procedure and a fundamental parameter approach — has been compared for the determination of the mass fractions of elements in particulate-like sample residues on a quartz reflector measured in the total reflection geometry. In the empirical quantification procedure, the spectrometer system needs to be calibrated with samples containing known concentrations of the elements. On the basis of the intensities of the X-ray peaks and the known concentration or mass fraction of an internal standard element, the concentrations or mass fractions of the elements are calculated using the relative sensitivities of the spectrometer system. The fundamental parameter approach does not require any calibration of the spectrometer system. However, in order to account for the unknown mass per unit area of a sample and for sample non-uniformity, an internal standard element is added. The concentrations/mass fractions of the elements to be determined are calculated while fitting a modelled X-ray spectrum to the measured one. The two quantification methods were applied to determine the mass fractions of elements in the cross-sections of a peat core and in biological standard reference materials, and to determine the concentrations of elements in samples prepared from an aqueous multi-element standard solution.
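
    The internal-standard step of the empirical quantification procedure reduces to a simple proportion: with relative sensitivities S obtained from calibration, an analyte's mass fraction follows from its peak intensity scaled to the internal standard. The element, counts and sensitivities below are hypothetical.

```python
# Internal-standard quantification: c_i = c_std * (I_i / S_i) / (I_std / S_std),
# where I are peak intensities and S the relative sensitivities from calibration.

def mass_fraction(intensity, sensitivity, i_std, s_std, c_std):
    return c_std * (intensity / sensitivity) / (i_std / s_std)

# Hypothetical internal standard: Ga added at 10 mg/kg.
i_ga, s_ga, c_ga = 5000.0, 1.00, 10.0
print(mass_fraction(intensity=2400.0, sensitivity=0.80,
                    i_std=i_ga, s_std=s_ga, c_std=c_ga))  # 6.0 mg/kg
```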

  17. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    Science.gov (United States)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.
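
    The benchmarking logic above (comparing computed excitation energies or HOMO-LUMO gaps against experimental absorption maxima) boils down to simple error statistics. The sketch below uses a mean absolute error over hypothetical gap values; the functional names are from the study, but the numbers are illustrative placeholders, not the paper's data.

```python
# Toy benchmark step: rank methods by mean absolute error of predicted
# optical gaps against experimental absorption maxima (all values in eV).

def mean_absolute_error(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

experimental = [2.10, 2.35, 2.60]          # hypothetical reference gaps
computed = {
    "MPW1K": [2.05, 2.40, 2.55],           # hypothetical computed gaps
    "B3LYP": [1.80, 2.00, 2.30],
}
for functional, gaps in computed.items():
    print(functional, round(mean_absolute_error(gaps, experimental), 3))
```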

  18. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    International Nuclear Information System (INIS)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-01-01

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424–7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20–30%) extent of Hartree–Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO–LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. (paper)

  19. Empire vs. Federation

    DEFF Research Database (Denmark)

    Gravier, Magali

    2011-01-01

The article discusses the concepts of federation and empire in the context of the European Union (EU). Even if these two concepts are not usually contrasted with one another, the article shows that they refer to related types of polities. Furthermore, they can be used at the same time because they shed light...... on different and complementary aspects of the European integration process. The article concludes that the EU is at the crossroads between federation and empire and may remain an ‘imperial federation’ for several decades. This could mean that the EU is on the verge of transforming itself into another type...

  20. Empirical comparison of theories

    International Nuclear Information System (INIS)

    Opp, K.D.; Wippler, R.

    1990-01-01

The book represents the first comprehensive attempt to take an empirical approach to the comparative assessment of theories in sociology. The aims, problems, and advantages of the empirical approach are discussed in detail, and the three theories selected for the purpose of this work are explained. Their comparative assessment is performed within the framework of several research projects, which among other subjects also investigate the social aspects of the protest against nuclear power plants. The theories analysed in this context are the theory of mental incongruities and the theory of benefits, and their efficiency in explaining protest behaviour is compared. (orig./HSCH) [de

  1. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    DEFF Research Database (Denmark)

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke

    2012-01-01

    BACKGROUND:Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent...... steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. RESULTS...... to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. CONCLUSION:Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control...
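For context, qPCR telomere length estimates of the kind discussed in this record are typically expressed as a T/S ratio via the 2^-ΔΔCt calculation; a minimal sketch with illustrative Ct values (not data from the study):

```python
def ts_ratio(ct_telomere, ct_single_copy, ref_ct_telomere, ref_ct_single_copy):
    """Relative telomere length (T/S ratio) via the 2^-ddCt method."""
    d_ct_sample = ct_telomere - ct_single_copy        # sample delta-Ct
    d_ct_reference = ref_ct_telomere - ref_ct_single_copy  # reference delta-Ct
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# A sample whose telomere target amplifies one cycle earlier (relative to the
# single-copy gene) than the reference has twice the relative telomere length:
print(ts_ratio(10.0, 12.0, 11.0, 12.0))  # → 2.0
```

The quality-control issues raised in the abstract (assay efficiency, quantification method) enter through the Ct values themselves, which this arithmetic takes at face value.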

  2. Essays in empirical microeconomics

    NARCIS (Netherlands)

    Péter, A.N.

    2016-01-01

    The empirical studies in this thesis investigate various factors that could affect individuals' labor market, family formation and educational outcomes. Chapter 2 focuses on scheduling as a potential determinant of individuals' productivity. Chapter 3 looks at the role of a family factor on

  3. Worship, Reflection, Empirical Research

    OpenAIRE

    Ding Dong,

    2012-01-01

    In my youth, I was a worshipper of Mao Zedong. From the latter stage of the Mao Era to the early years of Reform and Opening, I began to reflect on Mao and the Communist Revolution he launched. In recent years I’ve devoted myself to empirical historical research on Mao, seeking the truth about Mao and China’s modern history.

  4. Trade and Empire

    DEFF Research Database (Denmark)

    Bang, Peter Fibiger

    2007-01-01

    This articles seeks to establish a new set of organizing concepts for the analysis of the Roman imperial economy from Republic to late antiquity: tributary empire, port-folio capitalism and protection costs. Together these concepts explain better economic developments in the Roman world than the...

  5. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  6. USE OF STATISTICAL METHODS IN DETECTING ACCOUNTING ENGINEERING ACTIVITIES (AS EXEMPLIFIED BY THE ACCOUNTING SYSTEM IN POLAND – SECOND PART: EMPIRICAL ASPECTS OF ANALYSIS

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2013-10-01

Full Text Available This article is one in a series of two publications concerning the detection of accounting engineering operations in use. Its conclusions and methods may be applied to external auditing procedures. The aim of this two-part article is to define a method of statistical analysis that could identify procedures falling within the scope of a framework herein defined as accounting engineering. This model for analysis is meant to be employed in those aspects of an initial financial and accounting audit of a business enterprise that have to do with isolating the influence of variant accounting solutions, which are a consequence of the settlement method chosen by the enterprise. Materials for statistical analysis were divided into groups according to the field in which a given company operated. In this article, we accept and elaborate on the premise that significant differences in financial results may be solely a result of either expansive policy on new markets or the acquisition of cheaper sources for operating activities. In the remaining cases, the choice of valuation and settlement methods becomes crucial; the greater the deviations, the more essential this choice becomes. Even though the research materials we analyze are regionally conditioned, the model may find application in other accounting systems, provided that it has been appropriately implemented. Furthermore, the article defines an innovative concept of variant accounting.

  7. An Inquiry: Effectiveness of the Complex Empirical Mode Decomposition Method, the Hilbert-Huang Transform, and the Fast-Fourier Transform for Analysis of Dynamic Objects

    Science.gov (United States)

    2012-03-01

graphical user interface (GUI) called ALPINE© [18]. Then, it will be converted into a MAT-file that can be read into MATLAB®. At this point...breathing [3]. For comparison purposes, Balocchi et al. recorded the respiratory signal simultaneously with the tachogram (or EKG) signal. As previously...primary authors, worked to create his own code for implementing the method proposed by Rilling et al. Through reading the BEMD paper and proceeding to

  8. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN method is a simple and valuable approach for estimating the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), which aims to distribute in time the information provided by the SCS-CN method and thus provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
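The two-step coupling described in this abstract can be sketched as follows; the storm parameters and the Green-Ampt suction-moisture term `psi_dtheta` are illustrative assumptions, not values from the paper:

```python
from math import log

def scs_cn_runoff(P, CN):
    """Total direct runoff Q (mm) from event rainfall P (mm) via SCS-CN."""
    S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
    Ia = 0.2 * S                  # initial abstraction
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def green_ampt_cumulative(K, t, psi_dtheta):
    """Cumulative Green-Ampt infiltration F (mm) after t hours, solving
    F = K*t + psi_dtheta*ln(1 + F/psi_dtheta) by fixed-point iteration."""
    F = K * t
    for _ in range(200):
        F = K * t + psi_dtheta * log(1.0 + F / psi_dtheta)
    return F

def calibrate_conductivity(target_F, t, psi_dtheta, lo=1e-6, hi=100.0):
    """Bisect on K (mm/h) so total infiltration matches the SCS-CN balance."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if green_ampt_cumulative(mid, t, psi_dtheta) < target_F:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

P, CN, duration = 60.0, 80.0, 6.0     # illustrative storm (mm, -, h)
Q = scs_cn_runoff(P, CN)              # total rainfall excess (mm)
# Calibrate K so the event's total infiltration equals P - Q:
K = calibrate_conductivity(P - Q, duration, psi_dtheta=50.0)
```

Once `K` is calibrated, stepping the Green-Ampt model through the sub-daily hyetograph distributes the SCS-CN excess in time, which is the idea behind CN4GA.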

  9. Teaching Empirical Software Engineering Using Expert Teams

    DEFF Research Database (Denmark)

    Kuhrmann, Marco

    2017-01-01

    Empirical software engineering aims at making software engineering claims measurable, i.e., to analyze and understand phenomena in software engineering and to evaluate software engineering approaches and solutions. Due to the involvement of humans and the multitude of fields for which software...... is crucial, software engineering is considered hard to teach. Yet, empirical software engineering increases this difficulty by adding the scientific method as extra dimension. In this paper, we present a Master-level course on empirical software engineering in which different empirical instruments...... an extra specific expertise that they offer as service to other teams, thus, fostering cross-team collaboration. The paper outlines the general course setup, topics addressed, and it provides initial lessons learned....

  10. Surface Passivation in Empirical Tight Binding

    OpenAIRE

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2015-01-01

Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; and 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameter...

  11. Autobiography After Empire

    DEFF Research Database (Denmark)

    Rasch, Astrid

Decolonisation was a major event of the twentieth century, redrawing maps and impacting on identity narratives around the globe. As new nations defined their place in the world, the national and imperial past was retold in new cultural memories. These developments have been studied at the level of the collective, but insufficient attention has been paid to how individuals respond to such narrative changes. This dissertation examines the relationship between individual and collective memory at the end of empire through analysis of 13 end of empire autobiographies by public intellectuals from Australia, the Anglophone Caribbean and Zimbabwe. I conceive of memory as reconstructive and social, with individual memory striving to make sense of the past in the present in dialogue with surrounding narratives. By examining recurring tropes in the autobiographies, like colonial education, journeys to the imperial...

  12. Gazprom: the new empire

    International Nuclear Information System (INIS)

    Guillemoles, A.; Lazareva, A.

    2008-01-01

Gazprom is conquering the world. The Russian industrial giant owns the largest gas reserves and enjoys considerable power. Gazprom publishes journals, owns hospitals and airplanes, and has even built cities where most of the inhabitants work for it. With 400000 workers, Gazprom represents 8% of Russia's GDP. This inquiry describes the history and operation of this empire and shows how it has become a centerpiece of the government's strategy to rebuild Russian influence on the world scale. Is it going to be a winning game? Are the corruption affairs and the expected depletion of resources going to weaken the empire? The authors shed light on the political and diplomatic strategies that are played out around the crucial dossier of energy supply. (J.S.)

  13. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

Full Text Available Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also significant (p < 0.05). Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.
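The smoothing step described here is commonly implemented with a method-of-moments empirical Bayes estimator; a minimal sketch in the style of Marshall's global smoother (an illustration, not the authors' exact implementation):

```python
def eb_smooth(cases, pop):
    """Method-of-moments empirical Bayes smoothing of area incidence rates."""
    rates = [c / p for c, p in zip(cases, pop)]
    m = sum(cases) / sum(pop)                    # global (pooled) mean rate
    n_bar = sum(pop) / len(pop)                  # mean population at risk
    # population-weighted variance of the raw rates around the pooled mean
    s2 = sum(p * (r - m) ** 2 for p, r in zip(pop, rates)) / sum(pop)
    a = max(s2 - m / n_bar, 0.0)                 # between-area variance estimate
    # counties with small populations get small weights and shrink toward m
    return [m + (a / (a + m / p)) * (r - m) for p, r in zip(pop, rates)]
```

Each raw rate is pulled toward the pooled mean with a weight that grows with the county's population at risk, so unstable rates from sparsely populated counties are smoothed most.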

  14. Transition States from Empirical Force Fields

    DEFF Research Database (Denmark)

    Jensen, Frank; Norrby, Per-Ola

    2003-01-01

    This is an overview of the use of empirical force fields in the study of reaction mechanisms. EVB-type methods (including RFF and MCMM) produce full reaction surfaces by mixing, in the simplest case, known force fields describing reactants and products. The SEAM method instead locates approximate...

  15. Empirical estimation of the grades of hearing impairment among industrial workers based on new artificial neural networks and classical regression methods.

    Science.gov (United States)

    Farhadian, Maryam; Aliabadi, Mohsen; Darvishi, Ebrahim

    2015-01-01

Prediction models are used in a variety of medical domains, and they are frequently built from experience, which constitutes data acquired from actual cases. This study aimed to analyze the potential of artificial neural networks and logistic regression techniques for estimation of hearing impairment among industrial workers. A total of 210 workers employed in a steel factory in the west of Iran were selected, and their occupational exposure histories were analyzed. The hearing loss thresholds of the studied workers were determined using a calibrated audiometer. The personal noise exposures were also measured using a noise dosimeter in the workstations. Data obtained from five variables, which can influence hearing loss, were used as input features, and the hearing loss thresholds were considered as the target feature of the prediction methods. Multilayer feedforward neural networks and logistic regression were developed using MATLAB R2011a software. Based on the World Health Organization classification for the grades of hearing loss, 74.2% of the studied workers have normal hearing thresholds, 23.4% have slight hearing loss, and 2.4% have moderate hearing loss. The accuracy and kappa coefficient of the best developed neural networks for prediction of the grades of hearing loss were 88.6 and 66.30, respectively. The accuracy and kappa coefficient of the logistic regression were 84.28 and 51.30, respectively. Neural networks could provide more accurate predictions of hearing loss than logistic regression. The prediction method can provide reliable and comprehensible information for occupational health and medicine experts.
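A sketch of the comparison described here, using scikit-learn in place of MATLAB and synthetic data in place of the study's audiometric measurements (the data-generating process, predictor names, and model settings are all assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in for the study's data: 210 workers, five standardized
# predictors (e.g. noise dose, exposure years, age), binary impairment label.
rng = np.random.default_rng(42)
X = rng.standard_normal((210, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.standard_normal(210) > 0.8).astype(int)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(10,),
                                    max_iter=2000, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X, y).predict(X)
    # accuracy and Cohen's kappa, the two performance measures reported above
    print(name, accuracy_score(y, pred), cohen_kappa_score(y, pred))
```

On real data one would of course report held-out rather than training performance; the point of the sketch is only the pairing of accuracy with the kappa coefficient used in the abstract.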

  16. Reading and proclaiming the Birth Narratives from Luke and Matthew: A study in empirical theology amongst curates and their training incumbents employing the SIFT method

    Directory of Open Access Journals (Sweden)

    Leslie J. Francis

    2013-07-01

Full Text Available Drawing on Jungian psychological type theory, the SIFT method of biblical hermeneutics and liturgical preaching suggests that the reading and proclaiming of scripture reflects the psychological type preferences of the reader and preacher. This thesis is examined amongst two samples of curates and training incumbents (N = 23 and 27) serving in one Diocese of the Church of England, who completed the Myers-Briggs Type Indicator. Firstly, the narrative of the shepherds from Luke was discussed by groups organised according to scores on the perceiving process. In accordance with the theory, sensing types focused on details in the passage but could reach no consensus on the larger picture, while intuitive types quickly identified an imaginative, integrative theme but showed little interest in the details. Secondly, the narrative of the massacre of the infants from Matthew was discussed by groups organised according to scores on the judging process. In accordance with the theory, the thinking types identified and analysed the big themes raised by the passage (political power, theodicy, obedience), whilst the feeling types placed much more emphasis on the impact that the passage may have on members of the congregation mourning the death of their child or grandchild.

  17. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR) [fr

  18. Epistemology and Empirical Investigation

    DEFF Research Database (Denmark)

    Ahlström, Kristoffer

    2008-01-01

Recently, Hilary Kornblith has argued that epistemological investigation is substantially empirical. In the present paper, I will first show that his claim is not contingent upon the further and, admittedly, controversial assumption that all objects of epistemological investigation are natural kinds....... Then, I will argue that, contrary to what Kornblith seems to assume, this methodological contention does not imply that there is no need for attending to our epistemic concepts in epistemology. Understanding the make-up of our concepts and, in particular, the purposes they fill, is necessary...

  19. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide the computation of the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
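The interpolation machinery referenced in this abstract is closely related to the discrete empirical interpolation method (DEIM); a minimal numpy sketch of greedy interpolation-point selection and reconstruction (an illustration, not the paper's multiscale GMsFEM implementation):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM selection of interpolation points from a basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # interpolate column l at the points chosen so far, then place the
        # next point where the interpolation residual is largest
        c = np.linalg.solve(U[np.ix_(idx, list(range(l)))], U[idx, l])
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx

def deim_approx(U, idx, f):
    """Approximate f from only its entries at the interpolation points."""
    return U @ np.linalg.solve(U[idx, :], f[idx])
```

The payoff mirrors the abstract's cost argument: the nonlinear function only needs to be sampled at the few selected indices, not on the full fine grid.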

  20. Empirical pseudo-potential studies on electronic structure

    Indian Academy of Sciences (India)

Theoretical investigation of the electronic structure of quantum dots is of current interest in nanophase materials. Empirical theories such as the effective mass approximation, tight binding methods and the empirical pseudo-potential method are capable of explaining the experimentally observed optical properties. We employ the ...

  1. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  2. Empirical Methods in Natural Language Generation

    NARCIS (Netherlands)

    Krahmer, Emiel; Theune, Mariet

    Natural language generation (NLG) is a subfield of natural language processing (NLP) that is often characterized as the study of automatically converting non-linguistic representations (e.g., from databases or other knowledge sources) into coherent natural language text. In recent years the field

  3. Empirical Bayes methods in road safety research.

    NARCIS (Netherlands)

    Vogelesang, R.A.W.

    1997-01-01

Road safety research is a wonderful combination of counting fatal accidents and using a toolkit containing prior, posterior, overdispersed Poisson, negative binomial and Gamma distributions, together with positive and negative regression effects, shrinkage estimators and fierce debates concerning

  4. Semi-empirical determination of the diffusion coefficient of the Fricke Xylenol Gel dosimeter through finite difference methods; Determinacao semi-empirica do coeficiente de difusao do dosimetro Fricke Xilenol Gel atraves do metodo de diferencas finitas

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, E.O.; Oliveira, L.N., E-mail: lucas@ifg.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Goias (IFG), Goiania, GO (Brazil)

    2014-11-01

Partial Differential Equations (PDE) can model natural phenomena, such as those related to physics, chemistry and engineering. For these classes of equations, analytical solutions are difficult to obtain, so a computational approach is indicated. In this context, the Finite Difference Method (FDM) can provide useful tools for the field of Medical Physics. This study describes the implementation of a computational mesh to be used in determining the Diffusion Coefficient (DC) of the Fricke Xylenol Gel dosimeter (FXG). The initial and boundary conditions, both derived from experimental factors, are modelled in the FDM, thus making the determination of the DC a semi-empirical study. Together, the method of Reflection and Superposition (SRM) and the analysis of experimental data served as a first validation for the simulation. These methodologies generated concordant results, within an error of 3%, in concentration lines for small times when compared to the analytical solution. The result for the DC was 0.43 mm²/h. This value is within the range of parameters found in polymer gel dosimeters: 0.3-2.0 mm²/h. Therefore, the computer simulation methodology supported by the FDM may be used in determining the diffusion coefficient of the FXG dosimeter. (author)
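A one-dimensional sketch of the kind of explicit finite-difference scheme involved (the mesh and initial profile are illustrative; only the 0.43 mm²/h coefficient comes from the abstract):

```python
def diffuse_1d(c, D, dx, dt, steps):
    """Explicit (FTCS) finite differences for dc/dt = D * d2c/dx2 with
    fixed-concentration ends. Stable for r = D*dt/dx^2 <= 0.5."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx combination"
    for _ in range(steps):
        c = [c[0]] + [c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
                      for i in range(1, len(c) - 1)] + [c[-1]]
    return c

# A unit spike of dye in the middle of a 21-node profile (dx = 1 mm),
# D = 0.43 mm^2/h as reported in the abstract, hourly steps for one day:
c0 = [0.0] * 21
c0[10] = 1.0
c24 = diffuse_1d(c0, D=0.43, dx=1.0, dt=1.0, steps=24)
```

The semi-empirical step is then to compare simulated concentration profiles like `c24` with measured ones and adjust `D` until they agree.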

  5. How rational should bioethics be? The value of empirical approaches.

    Science.gov (United States)

    Alvarez, A A

    2001-10-01

Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  6. EGG: Empirical Galaxy Generator

    Science.gov (United States)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  7. Alternative Approaches to Evaluation in Empirical Microeconomics

    Science.gov (United States)

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  8. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  9. Birds of the Mongol Empire

    OpenAIRE

    Eugene N. Anderson

    2016-01-01

    The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Ma...

  10. Final Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification on the IEA task of evaluation building energy simulation computer programs for the Double Skin Facades (DSF) constructions. There are two approaches involved into this procedure, one is the comparative approach and another is the empirical one....

  11. Remembrances of Empires Past

    Directory of Open Access Journals (Sweden)

    Robert Aldrich

    2010-03-01

Full Text Available This paper argues that the colonial legacy is ever present in contemporary Europe. For a generation, most Europeans largely tried, publicly, to forget the colonial past, or remembered it only through the rose-coloured lenses of nostalgia; now the pendulum has swung to memory of that past – even perhaps, in the views of some, to a surfeit of memory, where each group agitates for its own version of history, its own recognition in laws and ceremonies, its own commemoration in museums and monuments, the valorization or repatriation of its own art and artefacts. Words such as ‘invasion,’ ‘racism’ and ‘genocide’ are emotional terms that provoke emotional reactions. Whether leaders should apologize for wrongs of the past – and which wrongs – remains a highly sensitive issue. The ‘return of the colonial’ thus has to do with ethics and politics as well as with history, and can link to statements of apology or recognition, legislation about certain views of history, monetary compensation, repatriation of objects, and, perhaps most importantly, redefinition of national identity and policy. The colonial flags may have been lowered, but many barricades seem to have been raised. Private memories – of loss of land, of unacknowledged service, of political, economic, social and cultural disenfranchisement, but also on the other side of defeat, national castigation and self-flagellation – have been increasingly public. Monuments and museums act not only as sites of history but as venues for political agitation and forums for academic debate – differences of opinion that have spread to the streets. Empire has a long after-life.

  12. Empirical Support for Perceptual Conceptualism

    Directory of Open Access Journals (Sweden)

    Nicolás Alejandro Serrano

    2018-03-01

    Full Text Available The main objective of this paper is to show that perceptual conceptualism can be understood as an empirically meaningful position and, furthermore, that there is some degree of empirical support for its main theses. In order to do this, I will start by offering an empirical reading of the conceptualist position, and making three predictions from it. Then, I will consider recent experimental results from cognitive sciences that seem to point towards those predictions. I will conclude that, while the evidence offered by those experiments is far from decisive, it is enough not only to show that conceptualism is an empirically meaningful position but also that there is empirical support for it.

  13. Empire as a Geopolitical Figure

    DEFF Research Database (Denmark)

    Parker, Noel

    2010-01-01

    This article analyses the ingredients of empire as a pattern of order with geopolitical effects. Noting the imperial form's proclivity for expansion from a critical reading of historical sociology, the article argues that the principal manifestation of earlier geopolitics lay not in the nation but in empire. That in turn has been driven by a view of the world as disorderly and open to the ordering will of empires (emanating, at the time of geopolitics' inception, from Europe). One implication is that empires are likely to figure in the geopolitical ordering of the globe at all times, in particular after all that has happened in the late twentieth century to undermine nationalism and the national state. Empire is indeed a probable, even for some an attractive form of regime for extending order over the disorder produced by globalisation. Geopolitics articulated in imperial expansion is likely…

  14. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed

  15. Birds of the Mongol Empire

    Directory of Open Access Journals (Sweden)

    Eugene N. Anderson

    2016-09-01

    Full Text Available The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Marco Polo also made notes on Mongol bird use. There are a few other records. This allows us to draw conclusions about Mongol ornithology, which apparently was sophisticated and detailed.

  16. The Empire Strikes Back

    DEFF Research Database (Denmark)

    Babb, Jeffry S.; Nørbjerg, Jacob; Yates, David J.

    Agile methods have co-evolved with the onset of rapid change and turbidity in software and systems development and the methodologies and process models designed to guide them. Conceived from the lessons of practice, Agile methods brought a balanced perspective between the intentions of the stakeholder, the management function, and developers. As an evolutionary progression, trends towards rapid continuous delivery have witnessed the advent of DevOps, where advances in tooling, technologies, and the environment of both development and consumption exert a new dynamic into the Agile oeuvre. We investigate the progression from Agile to DevOps from a Critical Social Theoretic perspective to examine a paradox in agility: what does an always-on conceptualization of production forestall and impinge upon in the processes of reflection and renewal that are also endemic to Agile methods? This paper is offered as a catalyst…

  17. Empirical Legality and Effective Reality

    Directory of Open Access Journals (Sweden)

    Hernán Pringe

    2015-08-01

    Full Text Available The conditions that Kant’s doctrine establishes for the predication of the effective reality of certain empirical objects are examined. It is maintained that (a) for such a predication, it is necessary to have not only perception but also a certain homogeneity of sensible data, and (b) the knowledge of the existence of certain empirical objects depends on the application of regulative principles of experience.

  18. Empirical logic and quantum mechanics

    International Nuclear Information System (INIS)

    Foulis, D.J.; Randall, C.H.

    1976-01-01

    This article discusses some of the basic notions of quantum physics within the more general framework of operational statistics and empirical logic (as developed in Foulis and Randall, 1972, and Randall and Foulis, 1973). Empirical logic is a formal mathematical system in which the notion of an operation is primitive and undefined; all other concepts are rigorously defined in terms of such operations (which are presumed to correspond to actual physical procedures). (Auth.)

  19. Surface Passivation in Empirical Tight Binding

    Science.gov (United States)

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2016-03-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; and 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. This method is applied to a Si quantum well and a Si ultra-thin body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H atom passivation overestimates the transistor performance.

  20. Estimation of the seismic hazards of the possible rupture of the Pastores and Venta de Bravo faults in the Acambay grabens, state of Mexico, Mexico, using the Empirical Green's Function Method

    Science.gov (United States)

    Ishizawa, O. A.; Lermo, J.; Aguirre, J.

    2003-04-01

    possible rupture of the faults being studied. For that purpose a realistic model on the basis of the source parameters of the above mentioned earthquake will be proposed. The Empirical Green's Function Method allows us to simulate strong seismic movements starting from the records of small earthquakes which have occurred near the site where the simulation is intended. This method takes advantage of the information, of trajectory and site, contained in the record of an earthquake of small magnitude. Through the utilization of the method of superposition proposed by Irikura (1986) and using the spectral scaling law stated by Aki (1967) the larger magnitude earthquake is modeled according to the proposed geometrical model. The reason for choosing the station of University Campus is the richness of seismic information of subduction and normal earthquakes during the past century. Besides, from the University Campus station, the results obtained can be extrapolated to the rest of Mexico City.
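    The Irikura-style superposition mentioned above can be sketched in miniature: delayed copies of a small-event record are summed to mimic rupture propagation over a larger fault. This is an illustrative toy only; the record, time step and delay values are invented, and a real application would also apply the omega-squared spectral scaling of Aki (1967), which is omitted here.

```python
def egf_superposition(small, n, dt, rupture_delay):
    """Toy Irikura-style superposition: synthesize a larger event by
    summing n delayed copies of a small-event record.  Amplitude
    scaling (Aki's omega-squared spectral scaling) is omitted."""
    delays = [round(i * rupture_delay / dt) for i in range(n)]
    out = [0.0] * (len(small) + max(delays))
    for d in delays:
        for j, s in enumerate(small):
            out[d + j] += s
    return out

small_event = [0.0, 1.0, -0.5, 0.2, 0.0]   # invented small-event record
synthetic = egf_superposition(small_event, n=3, dt=0.01, rupture_delay=0.02)
```

In a realistic simulation the number of copies and their delays follow from the geometrical source model of the target earthquake.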

  1. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability
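    The approach builds on the standard gamma-Poisson conjugate update. A minimal single-stage sketch, with invented pool rates and a crude moment-matching fit of the hyperparameters (not the paper's contaminated-gamma machinery), looks like this:

```python
def moment_match_gamma(rates):
    # Crude empirical-Bayes fit of Gamma(alpha, beta) hyperparameters
    # by matching the mean and variance of observed rates in the pool.
    m = sum(rates) / len(rates)
    v = sum((r - m) ** 2 for r in rates) / (len(rates) - 1)
    beta = m / v
    return m * beta, beta                  # (alpha, beta)

def posterior_mean(events, exposure, alpha, beta):
    # Conjugate update: a Gamma(alpha, beta) prior on the Poisson
    # intensity gives posterior mean (alpha + events) / (beta + exposure).
    return (alpha + events) / (beta + exposure)

pool_rates = [0.5, 1.0, 1.5, 2.0]          # invented event rates
alpha, beta = moment_match_gamma(pool_rates)
estimate = posterior_mean(events=2, exposure=4.0, alpha=alpha, beta=beta)
```

The posterior mean shrinks the raw rate (2/4.0 = 0.5) toward the pool mean, which is the essential empirical Bayes effect.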

  2. Empirical continuation of the differential cross section

    International Nuclear Information System (INIS)

    Borbely, I.

    1978-12-01

    The theoretical basis as well as the practical methods of empirical continuation of the differential cross section into the nonphysical region of the cos theta variable are discussed. The equivalence of the different methods is proved. A physical applicability condition is given and the published applications are reviewed. In many cases the correctly applied procedure turns out to provide nonsignificant or even incorrect structure information which points to the necessity for careful and statistically complete analysis of the experimental data with a physical understanding of the analysed process. (author)

  3. English as a Second Language and Children’s literature : An empirical study on Swedish elementary school teachers’ methods and attitudes towards the use of children’s literature in the English classroom

    OpenAIRE

    Englund, Micaela

    2016-01-01

    Previous research has shown multiple benefits and challenges with the incorporation of children’s literature in the English as a Second language (ESL) classroom. In addition, the use of children’s literature in the lower elementary English classroom is recommended by the Swedish National Agency for Education. Consequently, the current study explores how teachers in Swedish elementary school teach ESL through children’s literature. This empirical study involves English teachers from seven scho...

  4. Umayyad Relations with Byzantium Empire

    Directory of Open Access Journals (Sweden)

    Mansoor Haidari

    2017-06-01

    Full Text Available This research investigates the political and military relations between the Umayyad caliphate and the Byzantine Empire. The aim of this research is to clarify the Umayyad caliphate’s relations with the Byzantine Empire. We know that these relations were mostly about war and fighting. Because there were always intense conflicts between Muslims and the Byzantine Empire, they had to maintain an active, continuous diplomacy to call truces and settle disputes. Thus, based on the general policy of the Umayyad caliphs, Christians were severely ignored and segregated within Islamic territories. This segregation of the Christians was highly affected by political relationships. It is worth mentioning that the Umayyad caliphs brought the governing style of the Sassanid kings and Roman Caesars into the Islamic Caliphate system, but they did not establish civil institutions and administrative organizations.

  5. Gazprom the new russian empire

    International Nuclear Information System (INIS)

    Cosnard, D.

    2004-01-01

    The author analyzes the economic and political impact of the great Gazprom group, leader of the Russian energy sector, in Russia. Already number one in the world gas industry, the Group is becoming the right hand of the Kremlin. The author therefore examines the transparency and limits of this empire. (A.L.B.)

  6. Phenomenology and the Empirical Turn

    NARCIS (Netherlands)

    Zwier, Jochem; Blok, Vincent; Lemmens, Pieter

    2016-01-01

    This paper provides a phenomenological analysis of postphenomenological philosophy of technology. While acknowledging that the results of its analyses are to be recognized as original, insightful, and valuable, we will argue that in its execution of the empirical turn, postphenomenology forfeits

  7. Empirical ethics as dialogical practice

    NARCIS (Netherlands)

    Widdershoven, G.A.M.; Abma, T.A.; Molewijk, A.C.

    2009-01-01

    In this article, we present a dialogical approach to empirical ethics, based upon hermeneutic ethics and responsive evaluation. Hermeneutic ethics regards experience as the concrete source of moral wisdom. In order to gain a good understanding of moral issues, concrete detailed experiences and

  8. Teaching "Empire of the Sun."

    Science.gov (United States)

    Riet, Fred H. van

    1990-01-01

    A Dutch teacher presents reading, film viewing, and writing activities for "Empire of the Sun," J. G. Ballard's autobiographical account of life as a boy in Shanghai and in a Japanese internment camp during World War II (the subject of Steven Spielberg's film of the same name). Includes objectives, procedures, and several literature,…

  9. Empirical Productivity Indices and Indicators

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2016-01-01

    textabstractThe empirical measurement of productivity change (or difference) by means of indices and indicators starts with the ex post profit/loss accounts of a production unit. Key concepts are profit, leading to indicators, and profitability, leading to indices. The main task for the productivity

  10. EMPIRICAL RESEARCH AND CONGREGATIONAL ANALYSIS ...

    African Journals Online (AJOL)

    empirical research has made to the process of congregational analysis. 1 Part of this ... contextual congregational analysis – meeting social and divine desires”) at the IAPT .... methodology of a congregational analysis should be regarded as a process. ... essential to create space for a qualitative and quantitative approach.

  11. Empirical processes: theory and applications

    OpenAIRE

    Venturini Sergio

    2005-01-01

    Proceedings of the 2003 Summer School in Statistics and Probability in Torgnon (Aosta, Italy) held by Prof. Jon A. Wellner and Prof. M. Banerjee. The topic presented was the theory of empirical processes with applications to statistics (m-estimation, bootstrap, semiparametric theory).

  12. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  13. Empirical analysis of consumer behavior

    NARCIS (Netherlands)

    Huang, Yufeng

    2015-01-01

    This thesis consists of three essays in quantitative marketing, focusing on structural empirical analysis of consumer behavior. In the first essay, he investigates the role of a consumer's skill of product usage, and its imperfect transferability across brands, in her product choice. It shows that

  14. Calculation of Critical Temperatures by Empirical Formulae

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-06-01

    Full Text Available The paper presents formulas used to calculate critical temperatures of structural steels. Equations that allow calculating temperatures Ac1, Ac3, Ms and Bs were elaborated based on the chemical composition of steel. To elaborate the equations the multiple regression method was used. Particular attention was paid to the collection of experimental data which was required to calculate regression coefficients, including preparation of data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and has been prepared based on information available in literature on the subject.
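    A one-predictor least-squares fit illustrates the regression machinery behind such formulas. The carbon/Ac1 values below are invented for illustration only; the paper's actual equations use multiple regression over the full chemical composition.

```python
def fit_line(x, y):
    # Ordinary least squares for y = b0 + b1 * x (closed form).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
         / sum((a - mx) ** 2 for a in x)
    return my - b1 * mx, b1                # (intercept, slope)

carbon = [0.1, 0.2, 0.3, 0.4]              # invented wt.% C values
ac1 = [730.0, 726.0, 722.0, 718.0]         # invented Ac1 temperatures, deg C
b0, b1 = fit_line(carbon, ac1)
```

With several alloying elements, the same idea extends to solving the multiple-regression normal equations over the whole composition data set.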

  15. Empirical studies on changes in oil governance

    Science.gov (United States)

    Kemal, Mohammad

    Regulation of the oil and gas sector is consequential to the economies of oil-producing countries. In the literature, there are two types of regulation: indirect regulation through taxes and tariffs, or direct regulation through the creation of a National Oil Company (NOC). In the 1970s, many oil-producing countries nationalized their oil and gas sectors by creating NOCs and giving them ownership rights of oil and gas resources. In light of the success of Norway in regulating its oil and gas resources, over the past two decades several countries have changed their oil governance by changing the rights given to the NOC from ownership rights to mere access rights like other oil companies. However, the empirical literature on these changes in oil governance is quite thin. Thus, this dissertation explores three research questions to investigate these changes in oil governance empirically. First, I investigate empirically the impact of the changes in oil governance on aggregate domestic income. By employing a difference-in-difference method, I will show that a country which changed its oil governance increases its GDP per capita by 10%. However, the impact differs across types of political institution. Second, by observing the changes in oil governance in Indonesia, I explore the impact of the changes on the learning-by-doing and learning spillover effects in offshore exploration drilling. By employing an econometric model which includes interaction terms between various experience variables and a change-in-oil-governance dummy, I will show that the change in oil governance in Indonesia enhances learning-by-doing by the rigs and learning spillover in a basin. Lastly, the impact of the changes in oil governance on expropriation risk and the extraction path is explored. By employing a difference-in-difference method, this essay will show that the changes in oil governance reduce expropriation risk, and that the impact differs with the size of the resource stock.
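    In its simplest form, the difference-in-difference estimator used in the first essay is the change in the treated group minus the change in the control group. A minimal sketch with invented numbers:

```python
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # Difference-in-differences: the change in the treated group
    # minus the change in the control group.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) \
         - (mean(ctrl_post) - mean(ctrl_pre))

# Invented log GDP per capita, before/after a governance change
effect = did([9.0, 9.2], [9.3, 9.5], [8.8, 9.0], [8.9, 9.1])
```

Subtracting the control group's change nets out common time trends, which is what identifies the governance effect under the parallel-trends assumption.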

  16. Empirical antimicrobial therapy of acute dentoalveolar abscess

    Directory of Open Access Journals (Sweden)

    Matijević Stevo

    2009-01-01

    Full Text Available Background/Aim. The most common causes of acute dental infections are oral streptococci and anaerobic bacteria. Acute dentoalveolar infections are usually treated surgically in combination with antibiotics. Empirical therapy in such infections usually requires the use of penicillin-based antibiotics. The aim of this study was to investigate the clinical efficiency of amoxicillin and cefalexin in the empirical treatment of acute odontogenic abscess and to assess the antimicrobial susceptibility of the isolated bacteria in the early phases of its development. Methods. This study included 90 patients with acute odontogenic abscess who received surgical treatment (extraction of teeth and/or abscess incision) and were divided into three groups: two surgical-antibiotic groups (amoxicillin, cefalexin) and the surgical-only group. In order to evaluate the effects of the applied therapy, the following clinical symptoms were monitored: inflammatory swelling, trismus, regional lymphadenitis and febrility. In all the patients, before the beginning of antibiotic treatment, suppuration was drained from the abscess and the antibiotic susceptibility of the isolated bacteria was tested using the disk diffusion method. Results. The infection signs and symptoms lasted on average 4.47 days, 4.67 days, and 6.17 days in the amoxicillin, cefalexin, and surgically-only treated groups, respectively. A total of 111 bacterial strains were isolated from 90 patients. Mostly, the bacteria were Gram-positive facultative anaerobes (81.1%). The most common bacteria isolated were Viridans streptococci (68/111). Antibiotic susceptibility of the isolated bacteria was 76.6% to amoxicillin and 89.2% to cefalexin. Conclusion. Empirical, peroral use of amoxicillin or cefalexin after surgical treatment in the early phase of development of dentoalveolar abscess significantly reduced the duration of clinical symptoms in acute odontogenic infections in comparison to surgical treatment only. Bacterial strains

  17. Empirically Testing Thematic Analysis (ETTA)

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Tingleff, Elllen B.

    2015-01-01

    Text analysis is not a question of a right or wrong way to go about it, but a question of different traditions. These tend not only to give answers on how to conduct an analysis, but also to provide the answer as to why it is conducted in the way that it is. The problem, however, may be that the link between tradition and tool is unclear. The main objective of this article is therefore to present Empirical Testing Thematic Analysis (ETTA), a step-by-step approach to thematic text analysis, discussing its strengths and weaknesses so that others might assess its potential as an approach that they might utilize/develop for themselves. The advantage of utilizing the presented analytic approach is argued to be the integral empirical testing, which should assure systematic development, interpretation and analysis of the source textual material.

  18. Essays in empirical industrial organization

    OpenAIRE

    Aguiar de Luque, Luis

    2013-01-01

    My PhD thesis consists of three chapters in Empirical Industrial Organization. The first two chapters focus on the relationship between firrm performance and specific public policies. In particular, we analyze the cases of cooperative research and development (R&D) in the European Union and the regulation of public transports in France. The third chapter focuses on copyright protection in the digital era and analyzes the relationship between legal and illegal consumption of di...

  19. Empirical research on Waldorf education

    OpenAIRE

    Randoll, Dirk; Peters, Jürgen

    2015-01-01

    Waldorf education began in 1919 with the first Waldorf School in Stuttgart and nowadays is widespread in many countries all over the world. Empirical research, however, has been rare until the early nineties and Waldorf education has not been discussed within educational science so far. This has changed during the last decades. This article reviews the results of surveys during the last 20 years and is mainly focused on German Waldorf Schools, because most investigations have been done in thi...

  20. Empirical distribution function under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 45, č. 5 (2011), s. 497-508 ISSN 0233-1888 Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords : Robustness * Convergence * Empirical distribution * Heteroscedasticity Subject RIV: BB - Applied Statistics , Operational Research Impact factor: 0.724, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-0365534.pdf
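    The empirical distribution function at the core of this record is simple to compute: F_n(t) is the fraction of observations not exceeding t. A minimal sketch, leaving aside the paper's heteroscedastic-regression setting:

```python
import bisect

def ecdf(sample):
    # Empirical distribution function: F_n(t) = (# observations <= t) / n.
    data = sorted(sample)
    n = len(data)
    return lambda t: bisect.bisect_right(data, t) / n

F = ecdf([3.0, 1.0, 2.0, 2.0])
```

Sorting once and using binary search makes each evaluation O(log n), which matters when F is evaluated at many points.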

  1. Expert opinion vs. empirical evidence

    OpenAIRE

    Herman, Rod A; Raybould, Alan

    2014-01-01

    Expert opinion is often sought by government regulatory agencies when there is insufficient empirical evidence to judge the safety implications of a course of action. However, it can be reckless to continue following expert opinion when a preponderance of evidence is amassed that conflicts with this opinion. Factual evidence should always trump opinion in prioritizing the information that is used to guide regulatory policy. Evidence-based medicine has seen a dramatic upturn in recent years sp...

  2. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)n-NH2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Out of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems to be most promising for identifying major secondary structure types: α-helix, β-strand, left-handed helix (αD), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein, and thus they should provide useful constraints for structure determinations of proteins.

  3. Two concepts of empirical ethics.

    Science.gov (United States)

    Parker, Malcolm

    2009-05-01

    The turn to empirical ethics answers two calls. The first is for a richer account of morality than that afforded by bioethical principlism, which is cast as excessively abstract and thin on the facts. The second is for the facts in question to be those of human experience and not some other, unworldly realm. Empirical ethics therefore promises a richer naturalistic ethics, but in fulfilling the second call it often fails to heed the metaethical requirements related to the first. Empirical ethics risks losing the normative edge which necessarily characterizes the ethical, by failing to account for the nature and the logic of moral norms. I sketch a naturalistic theory, teleological expressivism (TE), which negotiates the naturalistic fallacy by providing a more satisfactory means of taking into account facts and research data with ethical implications. The examples of informed consent and the euthanasia debate are used to illustrate the superiority of this approach, and the problems consequent on including the facts in the wrong kind of way.

  4. Using Loss Functions for DIF Detection: An Empirical Bayes Approach.

    Science.gov (United States)

    Zwick, Rebecca; Thayer, Dorothy; Lewis, Charles

    2000-01-01

    Studied a method for flagging differential item functioning (DIF) based on loss functions. Builds on earlier research that led to the development of an empirical Bayes enhancement to the Mantel-Haenszel DIF analysis. Tested the method through simulation and found its performance better than some commonly used DIF classification systems. (SLD)

  5. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    Full Text Available The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable to be applied to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied on the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.
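    The rotation-frequency indicator can be illustrated directly on a complex signal: for an analytic IMF, the average rotation frequency is the mean unwrapped phase increment divided by 2π·dt. A minimal sketch on a synthetic pure rotation (not a full EMD implementation; the 1 Hz test signal is invented):

```python
import cmath, math

def mean_rotation_frequency(z, dt):
    # Average rotation frequency (Hz) of a complex signal from its
    # unwrapped phase increments.
    total = 0.0
    for a, b in zip(z, z[1:]):
        d = cmath.phase(b) - cmath.phase(a)
        d = (d + math.pi) % (2 * math.pi) - math.pi   # unwrap to (-pi, pi]
        total += d
    return total / (len(z) - 1) / (2 * math.pi * dt)

# Synthetic pure rotation at 1 Hz, sampled at 100 Hz
dt = 0.01
z = [cmath.exp(2j * math.pi * k * dt) for k in range(100)]
freq = mean_rotation_frequency(z, dt)
```

For a real COP trace, the complex signal would be the analytic form of an IMF (or the bivariate x + i·y signal), and the enclosed circle area provides the complementary stability indicator.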

  6. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
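    The homogenisation idea can be sketched as a weighted average in which elicited scaling factors align the pooled event counts with the target event before pooling. The counts, exposures and factors below are invented, and the shrinkage weight is a simple exposure ratio rather than the paper's estimator:

```python
def homogenised_pool_rate(target, pool, factors):
    # target:  (events, exposure) for the event of interest.
    # pool:    (events, exposure) pairs for similar events.
    # factors: elicited homogenisation constants; pooled counts are
    #          divided by them to align frequencies with the target.
    ev, ex = target
    pool_ev = sum(e / f for (e, _), f in zip(pool, factors))
    pool_ex = sum(x for _, x in pool)
    w = ex / (ex + pool_ex)        # simple exposure-based shrinkage weight
    return w * (ev / ex) + (1 - w) * (pool_ev / pool_ex)

rate = homogenised_pool_rate(target=(1, 10.0),
                             pool=[(4, 10.0), (2, 20.0)],
                             factors=[2.0, 1.0])
```

Dividing a pooled count by its factor removes the systematic frequency offset between that event and the target, so the pool behaves as if it were homogeneous.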

  7. Science and the British Empire.

    Science.gov (United States)

    Harrison, Mark

    2005-03-01

    The last few decades have witnessed a flowering of interest in the history of science in the British Empire. This essay aims to provide an overview of some of the most important work in this area, identifying interpretative shifts and emerging themes. In so doing, it raises some questions about the analytical framework in which colonial science has traditionally been viewed, highlighting interactions with indigenous scientific traditions and the use of network-based models to understand scientific relations within and beyond colonial contexts.

  8. Empirical logic and tensor products

    International Nuclear Information System (INIS)

    Foulis, D.J.; Randall, C.H.

    1981-01-01

    In our work we are developing a formalism called empirical logic to support a generalization of conventional statistics; the resulting generalization is called operational statistics. We are not attempting to develop or advocate any particular physical theory; rather we are formulating a precision 'language' in which such theories can be expressed, compared, evaluated, and related to laboratory experiments. We believe that only in such a language can the connections between real physical procedures (operations) and physical theories be made explicit and perspicuous. (orig./HSI)

  9. Discussion on “Empirical methods for determining shaft bearing capacity of semi-deep foundations socketed in rocks” [J Rock Mech Geotech Eng 6 (2017) 1140–1151]

    Directory of Open Access Journals (Sweden)

    Ergin Arioglu

    2018-06-01

    Full Text Available A new comprehensive set of data (n = 178) is compiled by adding a data set (n = 72) collected by Arioglu et al. (2007) to the data set (n = 106) presented in Rezazadeh and Eslami (2017). Then, the compiled data set is evaluated regardless of the variation in lithology/strength. The proposed empirical equation in this study comprises a wider range of uniaxial compressive strength (UCS) (0.15 MPa < σrc < 156 MPa) and various rock types. Rock mass cuttability index (RMCI) is correlated with shaft resistance (rs) to predict the shaft resistance of rock-socketed piles. The prediction capacity of the RMCI versus rs equation is also found to be in fairly good agreement with the data presented in Rezazadeh and Eslami (2017). Since the RMCI is a promising parameter in the prediction of shaft resistance, researchers in the rock-socketed pile design area should consider this parameter in further investigations. Keywords: Uniaxial compressive strength (UCS), Rock mass cuttability index (RMCI), Shaft resistance, Rock-socketed piles, Database

  10. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa prediction methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...

  11. Empirical reality, empirical causality, and the measurement problem

    International Nuclear Information System (INIS)

    d'Espagnat, B.

    1987-01-01

    Does physics describe anything that can meaningfully be called independent reality, or is it merely operational? Most physicists implicitly favor an intermediate standpoint, which takes quantum physics into account, but which nevertheless strongly holds fast to quite strictly realistic ideas about apparently obvious facts concerning the macro-objects. Part 1 of this article, which is a survey of recent measurement theories, shows that, when made explicit, the standpoint in question cannot be upheld. Part 2 brings forward a proposal for making minimal changes to this standpoint in such a way as to remove such objections. The empirical reality thus constructed is a notion that, to some extent, does ultimately refer to the human means of apprehension and of data processing. It nevertheless cannot be said that it reduces to a mere name just labelling a set of recipes that never fail. It is shown that the usual notion of macroscopic causality must be endowed with similar features.

  12. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions, and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.

  13. Analysis of the Possibilities for Discussing Questions of Global Justice in Geography Classes on the Use of Methods of Empirical Social Research When Analyzing the Teaching of Geography in Class

    Science.gov (United States)

    Applis, Stefan

    2015-01-01

    This study examines students' orientations with regard to questions on the implementation of justice in production structures of the global textile industry. The students worked with the Mystery Method from the Thinking Through Geography approach by David Leat and with Lawrence Kohlberg's Method of Dilemma Discussion. During this process, the…

  14. An empirical assessment of the SERVQUAL scale

    Directory of Open Access Journals (Sweden)

    Mahla Zargar

    2015-11-01

    Full Text Available During the past few years, many people have used point of sale for purchasing goods and services. Point of sale provides a reliable method for making purchases in stores, and its implementation may reduce the depreciation cost of automated teller machines and help banks increase their productivity. Therefore, for bank managers, it is important to provide high-quality services. This paper presents an empirical investigation to measure service quality using the SERVQUAL scale. The study first extracts six factors, namely Trust, Responsiveness, Reliability, Empathy, Tangibles, and Getting insight for future development. Next, through structural equation modeling, it finds that all components have positive impacts on customer satisfaction.

  15. Improvement of electrocardiogram by empirical wavelet transform

    Science.gov (United States)

    Chanchang, Vikanda; Kumchaiseemak, Nakorn; Sutthiopad, Malee; Luengviriya, Chaiya

    2017-09-01

    Electrocardiogram (ECG) is a crucial tool in the detection of cardiac arrhythmia. It is also often used in routine physical exams, especially for elderly people. This graphical representation of the electrical activity of the heart is obtained by a measurement of voltage at the skin; therefore, the signal is always contaminated by noise from various sources. For a proper interpretation, the quality of the ECG should be improved by noise reduction. In this article, we present a study of noise filtration in the ECG using an empirical wavelet transform (EWT). Unlike the traditional wavelet method, EWT is adaptive, since the frequency spectrum of the ECG is taken into account in the construction of the wavelet basis. We show that the signal-to-noise ratio increases after noise filtration for different noise artefacts.
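    A full empirical wavelet transform is beyond the scope of a sketch, but the noise-filtration idea above can be illustrated with a minimal frequency-domain stand-in: suppress spectral content outside the band occupied by the signal and verify that the signal-to-noise ratio improves. Everything below (signal shape, sampling rate, cutoff) is a synthetic assumption, not data from the study.

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio (dB) of `estimate` with respect to `clean`."""
    noise = estimate - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

def lowpass_fft(signal, fs, cutoff_hz):
    """Crude denoiser: zero all Fourier components above cutoff_hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Synthetic low-frequency "ECG-like" signal buried in wideband noise
fs = 500.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(len(t))

denoised = lowpass_fft(noisy, fs, cutoff_hz=40.0)
print(snr_db(clean, noisy), snr_db(clean, denoised))  # SNR rises after filtering
```

    Unlike this fixed cutoff, EWT derives its filter bank adaptively from the spectrum of the measured signal itself, which is the property the record emphasizes.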

  16. Expert opinion vs. empirical evidence

    Science.gov (United States)

    Herman, Rod A; Raybould, Alan

    2014-01-01

    Expert opinion is often sought by government regulatory agencies when there is insufficient empirical evidence to judge the safety implications of a course of action. However, it can be reckless to continue following expert opinion when a preponderance of evidence is amassed that conflicts with this opinion. Factual evidence should always trump opinion in prioritizing the information that is used to guide regulatory policy. Evidence-based medicine has seen a dramatic upturn in recent years, spurred by examples where evidence indicated that certain treatments recommended by expert opinion increased death rates. We suggest that scientific evidence should also take priority over expert opinion in the regulation of genetically modified (GM) crops. Examples of regulatory data requirements that are not justified based on the mass of evidence are described, and it is suggested that expertise in risk assessment should guide evidence-based regulation of GM crops. PMID:24637724

  17. An Empirical Illustration of Positive Stigma towards Child Labor

    OpenAIRE

    Harry A Patrinos; Najeeb Shafiq

    2010-01-01

    This empirical note complements the qualitative and theoretical research on positive household stigma towards child labor. We use data from Guatemala and two instruments for measuring stigma: a child's indigenous background and household head's childhood work experience. We then adopt binomial probit regression methods to illustrate that positive stigma has a large effect on child labor practices, and a modest effect on school enrollment.

  18. The empirical potential of live streaming beyond cognitive psychology

    Directory of Open Access Journals (Sweden)

    Alexander Nicolai Wendt

    2017-03-01

    Full Text Available Empirical methods of self-description, think-aloud protocols, and introspection have been extensively criticized or neglected in behaviorist and cognitivist psychology. Their methodological value has been fundamentally questioned, since there is apparently no sufficient proof of their validity. However, the major arguments against self-description can be critically reviewed by theoretical psychology; this way, these methods' empirical value can be redeemed. Furthermore, self-descriptive methods can be updated by the use of contemporary media technology. In order to support promising perspectives for future empirical research in the field of cognitive psychology, Live Streaming is proposed as a viable data source. Introducing this new paradigm, this paper presents some of the formal constituents and accessible contents of Live Streaming and relates them to established forms of empirical research. By its structure and established usage, Live Streaming bears remarkable resemblance to the traditional methods of self-description, yet it also adds fruitful new features of use. On the basis of these qualities, the possible benefits over the traditional methods of self-description, such as Live Streaming's ecological validity, are elaborated. Ultimately, controversial theoretical concepts, such as those in phenomenology and cultural-historical psychology, are adopted to sketch further potential benefits of Live Streaming in current psychology debates.

  19. Social Justice Advocacy among Graduate Students: An Empirical Investigation

    Science.gov (United States)

    Linnemeyer, Rachel McQuown

    2009-01-01

    Although social justice advocacy has increasingly been acknowledged as important in the field of psychology (e.g., Goodman et al., 2004; Toporek et al., 2006a, Vera & Speight, 2003), there is a dearth of empirical research examining social justice advocacy across graduate psychology students. This mixed-methods study examined demographic and…

  20. A Systematic Review of the Empirical Literature on Intercessory Prayer

    Science.gov (United States)

    Hodge, David R.

    2007-01-01

    Perhaps surprisingly, many social workers appear to use intercessory prayer in direct practice settings. To help inform practitioners' use of this intervention, this article evaluates the empirical literature on the topic using the following three methods: (a) an individual assessment of each study, (b) an evaluation of intercessory prayer as an…

  1. Visual Design Principles: An Empirical Study of Design Lore

    Science.gov (United States)

    Kimball, Miles A.

    2013-01-01

    Many books, designers, and design educators talk about visual design principles such as balance, contrast, and alignment, but with little consistency. This study uses empirical methods to explore the lore surrounding design principles. The study took the form of two stages: a quantitative literature review to determine what design principles are…

  2. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  3. Teaching Psychology and Law: An Empirical Evaluation of Experiential Learning

    Science.gov (United States)

    Zelechoski, Amanda D.; Riggs Romaine, Christina L.; Wolbransky, Melinda

    2017-01-01

    Given the recent proliferation of undergraduate psychology and law courses, there is an increased need to empirically evaluate effective methods of teaching psycholegal material. The current study used a between- and within-subject design across four higher education institutions (N = 291 students) to evaluate the effectiveness of incorporating…

  4. An empirical investigation of Australian Stock Exchange data

    Science.gov (United States)

    Bertram, William K.

    2004-10-01

    We present an empirical study of high frequency Australian equity data examining the behaviour of distribution tails and the existence of long memory. A method is presented allowing us to deal with Australian Stock Exchange data by splitting it into two separate data series representing an intraday and overnight component. Power-law exponents for the empirical density functions are estimated and compared with results from other studies. Using the autocorrelation and variance plots we find there to be a strong indication of long-memory type behaviour in the absolute return, volume and transaction frequency.
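    The tail-exponent estimation mentioned above is commonly done with the Hill estimator on the largest order statistics; the sketch below illustrates that standard technique on synthetic Pareto data and is not necessarily the estimator used in the study.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the power-law tail exponent alpha, using the k
    largest observations: alpha_hat = k / sum(log(X_(i) / X_(k+1)))."""
    x = np.sort(np.asarray(data))[::-1]   # descending order statistics
    return k / np.sum(np.log(x[:k] / x[k]))

# Standard Pareto samples with true tail exponent alpha = 2.5
rng = np.random.default_rng(42)
samples = rng.pareto(2.5, size=200_000) + 1.0   # support [1, inf)
est = hill_estimator(samples, k=2_000)
print(est)   # close to 2.5
```

    In practice the choice of k trades bias against variance, which is one reason empirical tail-exponent estimates vary between studies.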

  5. The use of moral deliberation in empirical research in bioethics

    Directory of Open Access Journals (Sweden)

    Elma Zoboli

    2016-10-01

    Full Text Available The article presents an integrated empirical ethics research project that used moral deliberation, according to the theoretical-philosophical conception and methodological proposal of Diego Gracia, as its theoretical and methodological framework. The application showed the method's potential to capture the dynamics of the studied object in real life: starting from the situations presented in the vignettes, participants could bring in whatever they considered relevant for dealing with the conflict of values. It also made possible the integration of philosophical and empirical approaches in bioethics research. The analytical category of prudence allowed the results to be assessed in a critical and comprehensive way.

  6. The conceptual and empirical relationship between gambling, investing, and speculation.

    Science.gov (United States)

    Arthur, Jennifer N; Williams, Robert J; Delfabbro, Paul H

    2016-12-01

    Background and aims To review the conceptual and empirical relationship between gambling, investing, and speculation. Methods An analysis of the attributes differentiating these constructs as well as identification of all articles speaking to their empirical relationship. Results Gambling differs from investment on many different attributes and should be seen as conceptually distinct. On the other hand, speculation is conceptually intermediate between gambling and investment, with a few of its attributes being investment-like, some of its attributes being gambling-like, and several of its attributes being neither clearly gambling-like nor investment-like. Empirically, gamblers, investors, and speculators have similar cognitive, motivational, and personality attributes, with this relationship being particularly strong for gambling and speculation. Population levels of gambling activity also tend to be correlated with population levels of financial speculation. At an individual level, speculation has a particularly strong empirical relationship to gambling, as speculators appear to be heavily involved in traditional forms of gambling and problematic speculation is strongly correlated with problematic gambling. Discussion and conclusions Investment is distinct from gambling, but speculation and gambling have conceptual overlap and a strong empirical relationship. It is recommended that financial speculation be routinely included when assessing gambling involvement, and there needs to be greater recognition and study of financial speculation as both a contributor to problem gambling as well as an additional form of behavioral addiction in its own right.

  7. Empirical research on international environmental migration: a systematic review.

    Science.gov (United States)

    Obokata, Reiko; Veronis, Luisa; McLeman, Robert

    2014-01-01

    This paper presents the findings of a systematic review of scholarly publications that report empirical findings from studies of environmentally-related international migration. There exists a small, but growing accumulation of empirical studies that consider environmentally-linked migration that spans international borders. These studies provide useful evidence for scholars and policymakers in understanding how environmental factors interact with political, economic and social factors to influence migration behavior and outcomes that are specific to international movements of people, in highlighting promising future research directions, and in raising important considerations for international policymaking. Our review identifies countries of migrant origin and destination that have so far been the subject of empirical research, the environmental factors believed to have influenced these migrations, the interactions of environmental and non-environmental factors as well as the role of context in influencing migration behavior, and the types of methods used by researchers. In reporting our findings, we identify the strengths and challenges associated with the main empirical approaches, highlight significant gaps and future opportunities for empirical work, and contribute to advancing understanding of environmental influences on international migration more generally. Specifically, we propose an exploratory framework to take into account the role of context in shaping environmental migration across borders, including the dynamic and complex interactions between environmental and non-environmental factors at a range of scales.

  8. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  9. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing unexpected accidents and reducing economic loss. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. Besides, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram, and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.
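    The boundary-selection step that the paper improves on can be sketched in a few lines. The classical EWT heuristic places Fourier segment boundaries midway between the largest local maxima of the magnitude spectrum; the paper's contribution is to replace this fragile heuristic with sparsity guidance. A toy illustration of the baseline heuristic (function name and boundary rule are illustrative assumptions):

```python
import numpy as np

def maxima_boundaries(mag, n_peaks):
    """Classical EWT-style segmentation: keep the n_peaks largest local
    maxima of a one-sided magnitude spectrum and place segment boundaries
    at the midpoints between consecutive retained peaks."""
    mag = np.asarray(mag, dtype=float)
    # indices of strict local maxima
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
    # keep the n_peaks largest, restored to frequency order
    kept = sorted(sorted(peaks, key=lambda i: mag[i], reverse=True)[:n_peaks])
    return [(a + b) // 2 for a, b in zip(kept, kept[1:])]

# Toy one-sided spectrum with two well-separated components (bins 10 and 40)
mag = np.zeros(64)
mag[10], mag[40] = 5.0, 3.0
print(maxima_boundaries(mag, n_peaks=2))   # → [25]
```

    With heavy noise the spectrum grows many spurious local maxima, which is precisely why this fixed heuristic fails and a sparsity criterion becomes attractive.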

  10. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Full Text Available Rapid prototyping of microchannels has gained much attention from researchers along with the rapid development of microfluidic technology. Conventional methods carry several disadvantages, such as high cost, long fabrication times, high required operating pressures and temperatures, and the need for expertise in operating the equipment. In this work, a new method adapting xurography is introduced to replace the conventional fabrication of microchannels. The novelty of this study is replacing the adhesion film with a clear plastic film, used to cut the design of the microchannel, as this material is more suitable for fabricating more complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded with a clean glass slide to produce a closed microchannel. The microchannel produced had clean edges, indicating that a good master mold was produced using the cutting plotter, and the bonding between the PDMS and glass was good, with no leakage observed. The materials used in this method are cheap and the total fabrication time is less than 5 hours, making the method suitable for rapid prototyping of microchannels.

  11. Empirical study of supervised gene screening

    Directory of Open Access Journals (Sweden)

    Ma Shuangge

    2006-12-01

    Full Text Available Abstract Background Microarray studies provide a way of linking variations of phenotypes with their genetic causations. Constructing predictive models using high dimensional microarray measurements usually consists of three steps: (1) unsupervised gene screening; (2) supervised gene screening; and (3) statistical model building. Supervised gene screening based on marginal gene ranking is commonly used to reduce the number of genes in the model building. Various simple statistics, such as the t-statistic or the signal-to-noise ratio, have been used to rank genes in the supervised screening. Despite its extensive usage, statistical study of supervised gene screening remains scarce. Our study is partly motivated by the differences in gene discovery results caused by using different supervised gene screening methods. Results We investigate the concordance and reproducibility of supervised gene screening based on eight commonly used marginal statistics. Concordance is assessed by the relative fractions of overlaps between top-ranked genes screened using different marginal statistics. We propose a Bootstrap Reproducibility Index, which measures the reproducibility of individual genes under the supervised screening. Empirical studies are based on four public microarray data sets. We consider the cases where the top 20%, 40% and 60% of genes are screened. Conclusion From a gene discovery point of view, the effect of supervised gene screening based on different marginal statistics cannot be ignored. Empirical studies show that (1) genes passing different supervised screenings may be considerably different; (2) concordance may vary, depending on the underlying data structure and the percentage of selected genes; (3) evaluated with the Bootstrap Reproducibility Index, genes passing supervised screenings are only moderately reproducible; and (4) concordance cannot be improved by supervised screening based on reproducibility.
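    The Bootstrap Reproducibility Index described above can be approximated with a short sketch: rank genes by a marginal statistic, resample subjects with replacement, and measure how often the originally top-ranked genes remain top-ranked. The implementation below is an assumption-laden illustration, not the authors' exact definition.

```python
import numpy as np

def t_stat(x, y):
    """Marginal two-sample t-like statistic per gene (rows are genes)."""
    return (x.mean(axis=1) - y.mean(axis=1)) / np.sqrt(
        x.var(axis=1) / x.shape[1] + y.var(axis=1) / y.shape[1])

def reproducibility_index(x, y, top, n_boot=50, seed=0):
    """Average fraction of bootstrap replicates in which the originally
    top-ranked genes stay in the top list (illustrative definition)."""
    rng = np.random.default_rng(seed)
    base_top = set(np.argsort(-np.abs(t_stat(x, y)))[:top])
    hits = 0
    for _ in range(n_boot):
        ix = rng.integers(0, x.shape[1], x.shape[1])  # resample class-1 subjects
        iy = rng.integers(0, y.shape[1], y.shape[1])  # resample class-2 subjects
        boot = set(np.argsort(-np.abs(t_stat(x[:, ix], y[:, iy])))[:top])
        hits += len(base_top & boot)
    return hits / (n_boot * top)

# Synthetic expression data: 100 genes x 20 subjects per class,
# with a strong mean shift in the first 10 genes
rng = np.random.default_rng(3)
x = rng.standard_normal((100, 20))
y = rng.standard_normal((100, 20))
x[:10] += 3.0
print(reproducibility_index(x, y, top=10))   # near 1 for such a strong signal
```

    With weak or correlated signals the index drops well below 1, which mirrors the paper's finding of only moderate reproducibility on real microarray data.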

  12. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  13. An Empirical Mass Function Distribution

    Science.gov (United States)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L* galaxies. Motivated by the proven accuracy of Press-Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10^10–10^13 h^-1 M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation and can be directly related to parameters of the galaxy distribution, such as L*. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain an optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
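    As a rough illustration of what a parametric mass function looks like, here is the classical Schechter form; the paper's four-parameter empirical form is related to, but not identical with, this expression, and the parameter values below are arbitrary stand-ins.

```python
import numpy as np

def schechter(m, phi_star, m_star, alpha):
    """Classical Schechter form: dn/dlnM = phi* (M/M*)^alpha exp(-M/M*)."""
    x = m / m_star
    return phi_star * x**alpha * np.exp(-x)

# Evaluate over 10^10 .. 10^15 h^-1 Msun with arbitrary parameters
masses = np.logspace(10, 15, 6)
vals = schechter(masses, phi_star=1e-3, m_star=1e13, alpha=-1.9)
print(vals)   # abundance falls off steeply above the characteristic mass M*
```

    The authors' actual implementation is the linked mrpy/tggd code, which should be preferred for any real analysis.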

  14. The Rise and Fall of Andean Empires: El Nino History Lessons.

    Science.gov (United States)

    Wright, Kenneth R.

    2000-01-01

    Provides information on El Nino and the methods for investigating ancient climate record. Traces the rise and fall of the Andean empires focusing on the climatic forces that each empire (Tiwanaku, Wari, Moche, and Inca) endured. States that modern societies should learn from the experiences of these ancient civilizations. (CMK)

  15. On Integrating Student Empirical Software Engineering Studies with Research and Teaching Goals

    NARCIS (Netherlands)

    Galster, Matthias; Tofan, Dan; Avgeriou, Paris

    2012-01-01

    Background: Many empirical software engineering studies use students as subjects and are conducted as part of university courses. Aim: We aim at reporting our experiences with using guidelines for integrating empirical studies with our research and teaching goals. Method: We document our experience

  16. method

    Directory of Open Access Journals (Sweden)

    L. M. Kimball

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED) problem. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions, resulting in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.

  17. Intermodal connectivity in Europe, an empirical exploration

    NARCIS (Netherlands)

    de Langen, P.W.; Lases Figueroa, D.M.; van Donselaar, K.H.; Bozuwa, J.

    2017-01-01

    In this paper we analyse the intermodal connectivity in Europe. The empirical analysis is to our knowledge the first empirical analysis of intermodal connections, and is based on a comprehensive database of intermodal connections in Europe. The paper focuses on rail and barge services, as they are

  18. Empirical Moral Philosophy and Teacher Education

    Science.gov (United States)

    Schjetne, Espen; Afdal, Hilde Wågsås; Anker, Trine; Johannesen, Nina; Afdal, Geir

    2016-01-01

    In this paper, we explore the possible contributions of empirical moral philosophy to professional ethics in teacher education. We argue that it is both possible and desirable to connect knowledge of how teachers empirically do and understand professional ethics with normative theories of teachers' professional ethics. Our argument is made in…

  19. Empirical ethics, context-sensitivity, and contextualism.

    Science.gov (United States)

    Musschenga, Albert W

    2005-10-01

    In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.

  20. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2011-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-a`-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and

  1. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2010-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-à-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and how

  3. Empirical analysis of online human dynamics

    Science.gov (United States)

    Zhao, Zhi-Dan; Zhou, Tao

    2012-06-01

    Patterns of human activities have attracted increasing academic interest, since the quantitative understanding of human behavior is helpful to uncover the origins of many socioeconomic phenomena. This paper focuses on the behaviors of Internet users. Six large-scale systems are studied in our experiments, including movie-watching in Netflix and MovieLens, transactions in Ebay, bookmark-collecting in Delicious, and posting in FriendFeed and Twitter. Empirical analysis reveals some common statistical features of online human behavior: (1) The total number of a user's actions, the user's activity, and the interevent time all follow heavy-tailed distributions. (2) There exists a strongly positive correlation between a user's activity and the total number of the user's actions, and a significantly negative correlation between the user's activity and the width of the interevent time distribution. We further study the rescaling method and show that this method can, to some extent, eliminate the differences in statistics among users caused by their different activity levels, although its effectiveness depends on the data set.
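    The rescaling method mentioned in the conclusion can be sketched as: divide each user's interevent times by that user's mean interevent time, so that users with very different activity levels become directly comparable. The synthetic example below uses exponential interevent times purely for simplicity, although the study reports heavy-tailed distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "users" with very different activity levels
fast = rng.exponential(scale=1.0, size=10_000)    # high-activity user
slow = rng.exponential(scale=20.0, size=10_000)   # low-activity user
print(fast.mean(), slow.mean())                   # raw scales differ ~20x

# Rescaling: divide each user's interevent times by that user's own mean,
# putting users with different activity levels on a common scale
fast_r = fast / fast.mean()
slow_r = slow / slow.mean()
print(fast_r.std(), slow_r.std())                 # now directly comparable
```

    After rescaling, distributional differences that remain reflect genuine behavioral heterogeneity rather than mere differences in activity level, which is what the paper probes.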

  4. Empirical validation of directed functional connectivity.

    Science.gov (United States)

    Mill, Ravi D; Bagic, Anto; Bostan, Andreea; Schneider, Walter; Cole, Michael W

    2017-02-01

    Mapping directions of influence in the human brain connectome represents the next phase in understanding its functional architecture. However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via "ground truth" connectivity patterns embedded in simulated functional MRI (fMRI) and magneto-/electro-encephalography (MEG/EEG) datasets. Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with confidence. Specifically, we exploited the established "sensory reactivation" effect in episodic memory, in which retrieval of sensory information reactivates regions involved in perceiving that sensory modality. Subjects performed a paired associate task in separate fMRI and MEG sessions, in which a ground truth reversal in directed connectivity between auditory and visual sensory regions was instantiated across task conditions. This directed connectivity reversal was successfully recovered across different algorithms, including Granger causality and Bayes network (IMAGES) approaches, and across fMRI ("raw" and deconvolved) and source-modeled MEG. These results extend simulation studies of directed connectivity, and offer practical guidelines for the use of such methods in clarifying causal mechanisms of neural processing. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Mental disorder ethics: theory and empirical investigation

    Science.gov (United States)

    Eastman, N; Starling, B

    2006-01-01

    Mental disorders and their care present unusual problems within biomedical ethics. The disorders themselves invite an ethical critique, as does society's attitude to them; researching the diagnosis and treatment of mental disorders also presents special ethical issues. The current high profile of mental disorder ethics, emphasised by recent political and legal developments, makes this a field of research that is not only important but also highly topical. For these reasons, the Wellcome Trust's biomedical ethics programme convened a meeting, “Investigating Ethics and Mental Disorders”, in order to review some current research, and to stimulate topics and methods of future research in the field. The meeting was attended by policy makers, regulators, research funders, and researchers, including social scientists, psychiatrists, psychologists, lawyers, philosophers, criminologists, and others. As well as aiming to inspire a stronger research endeavour, the meeting also sought to stimulate an improved understanding of the methods and interactions that can contribute to “empirical ethics” generally. This paper reports on the meeting by describing contributions from individual speakers and discussion sections of the meeting. At the end we describe and discuss the conclusions of the meeting. As a result, the text is referenced less than would normally be expected in a review. Also, in summarising contributions from named presenters at the meeting it is possible that we have created inaccuracies; however, the definitive version of each paper, as provided directly by the presenter, is available at http://www.wellcome.ac.uk/doc.WTX025116.html. PMID:16446414

  6. The Role of Empirical Research in Bioethics

    Science.gov (United States)

    Kon, Alexander A.

    2010-01-01

    There has long been tension between bioethicists whose work focuses on classical philosophical inquiry and those who perform empirical studies on bioethical issues. While many have argued that empirical research merely illuminates current practices and cannot inform normative ethics, others assert that research-based work has significant implications for refining our ethical norms. In this essay, I present a novel construct for classifying empirical research in bioethics into four hierarchical categories: Lay of the Land, Ideal Versus Reality, Improving Care, and Changing Ethical Norms. Through explaining these four categories and providing examples of publications in each stratum, I define how empirical research informs normative ethics. I conclude by demonstrating how philosophical inquiry and empirical research can work cooperatively to further normative ethics. PMID:19998120

  7. Empirical essays on energy economics

    Energy Technology Data Exchange (ETDEWEB)

    Thoenes, Stefan

    2013-06-13

    The main part of this thesis consists of three distinct essays that empirically analyze economic issues related to energy markets in the United States and Europe. The first chapter provides an introduction and discusses the motivation for the different analyses pursued in this thesis. The second chapter examines attention effects in the market for hybrid vehicles. We show that local media coverage, gasoline price changes and unprecedented record gasoline prices have a significant impact on the consumers' attention. As attention is not directly observable, we analyze online search behavior as a proxy for the revealed consumer attention. Our study is based on a unique weekly panel dataset for 19 metropolitan areas in the US. Additionally, we use monthly state-level panel data to show that the adoption rate of hybrid vehicles is robustly related to our measure of attention. Our results show that the consumers' attention fluctuates strongly and systematically. The third chapter shows how the effect of fuel prices varies with the level of electricity demand. It analyzes the relationship between daily prices of electricity, natural gas and carbon emission allowances with a semiparametric varying smooth coefficient cointegration model. This model is used to analyze the market impact of the nuclear moratorium by the German Government in March 2011. Futures prices of electricity, natural gas and emission allowances are used to show that the market efficiently accounts for the suspended capacity and correctly expects that several nuclear plants will not be switched on after the moratorium. In the fourth chapter, we develop a structural vector autoregressive model (VAR) for the German natural gas market. In particular, we illustrate the usefulness of our approach by disentangling the effects of different fundamental influences during four specific events: The financial crisis starting in 2008, the Russian-Ukrainian gas dispute in January 2009, the Libyan civil war

  8. Empirical essays on energy economics

    International Nuclear Information System (INIS)

    Thoenes, Stefan

    2013-01-01

    The main part of this thesis consists of three distinct essays that empirically analyze economic issues related to energy markets in the United States and Europe. The first chapter provides an introduction and discusses the motivation for the different analyses pursued in this thesis. The second chapter examines attention effects in the market for hybrid vehicles. We show that local media coverage, gasoline price changes and unprecedented record gasoline prices have a significant impact on the consumers' attention. As attention is not directly observable, we analyze online search behavior as a proxy for the revealed consumer attention. Our study is based on a unique weekly panel dataset for 19 metropolitan areas in the US. Additionally, we use monthly state-level panel data to show that the adoption rate of hybrid vehicles is robustly related to our measure of attention. Our results show that the consumers' attention fluctuates strongly and systematically. The third chapter shows how the effect of fuel prices varies with the level of electricity demand. It analyzes the relationship between daily prices of electricity, natural gas and carbon emission allowances with a semiparametric varying smooth coefficient cointegration model. This model is used to analyze the market impact of the nuclear moratorium by the German Government in March 2011. Futures prices of electricity, natural gas and emission allowances are used to show that the market efficiently accounts for the suspended capacity and correctly expects that several nuclear plants will not be switched on after the moratorium. In the fourth chapter, we develop a structural vector autoregressive model (VAR) for the German natural gas market. 
In particular, we illustrate the usefulness of our approach by disentangling the effects of different fundamental influences during four specific events: The financial crisis starting in 2008, the Russian-Ukrainian gas dispute in January 2009, the Libyan civil war in 2011 as

  9. Appropriate methodologies for empirical bioethics: it's all relative.

    Science.gov (United States)

    Ives, Jonathan; Draper, Heather

    2009-05-01

    In this article we distinguish between philosophical bioethics (PB), descriptive policy-oriented bioethics (DPOB) and normative policy-oriented bioethics (NPOB). We argue that finding an appropriate methodology for combining empirical data and moral theory depends on what the aims of the research endeavour are, and that, for the most part, this combination is only required for NPOB. After briefly discussing the debate around the is/ought problem, and suggesting that both sides of this debate are misunderstanding one another (i.e. one side treats it as a conceptual problem, whilst the other treats it as an empirical claim), we outline and defend a methodological approach to NPOB based on work we have carried out on a project exploring the normative foundations of paternal rights and responsibilities. We suggest that, given the prominent role already played by moral intuition in moral theory, one appropriate way to integrate empirical data and philosophical bioethics is to utilize empirically gathered lay intuition as the foundation for ethical reasoning in NPOB. The method we propose involves a modification of a long-established tradition of non-intervention in qualitative data gathering, combined with a form of reflective equilibrium in which the demands of theory and data are given equal weight and a pragmatic compromise is reached.

  10. Motives and chances of firm diversification: theory and empirical evidence

    International Nuclear Information System (INIS)

    Briglauer, W.

    2001-11-01

    It is beyond controversy that the majority of the largest companies in the industrialized countries pursue, to some extent, product diversification strategies. Building on this finding, the present work first deals with alternative theoretical and empirical definitions of corporate diversification. The theoretical part then elaborates an industrial-economics framework for categorizing motives of firm diversification. Despite some inevitable degree of arbitrariness, a relatively comprehensive and sufficient categorization can be presented, and with regard to the relevant economic literature most explanations of product diversification can be classified appropriately. Observing diversification activities, one would prima facie infer a positive relationship between product diversification and firm performance, but both theory and empirical evidence yield ambiguous results. The empirical part provides a list of existing studies, classified according to the theoretical categorization; in an overview, some stylised facts are filtered out and discussed in turn. Most notably, related diversification strategies were found to significantly outperform strategies of unrelated diversification. At the end of the empirical section, econometric methods are applied to agricultural and industrial-economics (telecommunication-market) data sets. For the agricultural studies, a significantly positive relationship between product diversification and firm performance was found; in contrast, no significant results were obtained for the telecommunication markets. (author)

  11. Demystification of empirical concepts on radioactivity

    International Nuclear Information System (INIS)

    Júnior, Cláudio L.R.; Silva, Islane C.S.

    2017-01-01

    Ionizing radiation has been used for clinical diagnostic purposes since the last century, with the advancement of nuclear physics, which enabled the determination and control of doses. Nuclear Medicine is a medical specialty that uses safe, painless, and non-invasive methods to provide information that other diagnostic and therapeutic exams cannot, through the use of open radionuclide sources. Its basic principle is the acquisition of scintigraphic images, which rely on the ability to detect the gamma radiation emitted by radioactive material. This paper aims to demystify the empirical concepts about radioactivity held in society. The knowledge of 300 people, including non-radiological professionals and people who live and work in the region surrounding the nuclear medicine services (NMS) of the metropolitan region of Recife, was surveyed and evaluated. For the evaluation, a questionnaire was developed containing questions about the main doubts and fears held by professionals working with ionizing radiation. The questionnaire also raised questions about the activities performed in the NMS, in order to determine whether respondents knew about the procedures carried out in the workplace. It can be concluded that, despite being present in daily life and responsible for numerous benefits to society, ionizing radiation causes considerable fear, and doubts about its use persist. It was also observed that the vast majority of people had information about the activities carried out in the evaluated services and that, even when mistaken, respondents held concepts about the issues highlighted

  12. Demand Response in U.S. Electricity Markets: Empirical Evidence

    OpenAIRE

    Cappers, Peter

    2009-01-01

    Empirical evidence concerning demand response (DR) resources is needed in order to establish baseline conditions, develop standardized methods to assess DR availability and performance, and to build confidence among policymakers, utilities, system operators, and stakeholders that DR resources do offer a viable, cost-effective alternative to supply-side investments. This paper summarizes the existing contribution of DR resources in U.S. electric power markets. In 2008, customers enrolled in ...

  13. Structural properties of silicon clusters: An empirical potential study

    International Nuclear Information System (INIS)

    Gong, X.G.; Zheng, Q.Q.; He Yizhen

    1993-09-01

    By using our newly proposed empirical interatomic potential for silicon, the structure and some dynamical properties of silicon clusters Si_n (10 ≤ n ≤ 24) have been studied. The obtained results are found to be close to those from ab-initio methods. The present results provide new insight into the experimental data on Si_n clusters. (author). 20 refs, 6 figs

  14. Globalization and Governance: A Critical Contribution to the Empirics

    OpenAIRE

    Asongu, Simplice; Efobi, Uchenna; Tchamyou, Vanessa

    2016-01-01

    This study assesses the effect of globalisation on governance in 51 African countries for the period 1996-2011. Ten bundled and unbundled governance indicators and four globalisation variables are used. The empirical evidence is based on Generalised Method of Moments. The following findings are established. First, on political governance, only social globalisation improves political stability while only economic globalisation does not increase voice & accountability and political governance....

  15. A simple semi-empirical approximation for bond energy

    International Nuclear Information System (INIS)

    Jorge, F.E.; Giambiagi, M.; Giambiagi, M.S. de.

    1985-01-01

    A simple semi-empirical expression for bond energy, related with a generalized bond index, is proposed and applied within the IEH framework. The correlation with experimental data is good for the intermolecular bond energy of base pairs of nucleic acids and other hydrogen bonded systems. The intramolecular bond energies for a sample of molecules containing typical bonds and for hydrides are discussed. The results are compared with those obtained by other methods. (Author) [pt

  16. Supply chain strategy: empirical case study in Europe and Asia:

    OpenAIRE

    Sillanpää, Ilkka; Sillanpää, Sebastian

    2014-01-01

    The purpose of this case study research is to present a literature review of supply chain strategy approaches, to develop a supply chain strategy framework, and to validate the framework in an empirical case study. Literature review and case study are the research methods used. This study presents a supply chain strategy framework which merges together business environment, corporate strategy, supply chain demand and supply chain strategy. Research argues that all the different c...

  17. The need of data harmonization to derive robust empirical relationships between soil conditions and vegetation.

    NARCIS (Netherlands)

    Bartholomeus, R.P.; Witte, J.P.M.; van Bodegom, P.M.; Aerts, R.

    2008-01-01

    Question: Is it possible to improve the general applicability and significance of empirical relationships between abiotic conditions and vegetation by harmonization of temporal data? Location: The Netherlands. Methods: Three datasets of vegetation, recorded after periods with different

  18. Empirical estimates of the NAIRU

    DEFF Research Database (Denmark)

    Madsen, Jakob Brøchner

    2005-01-01

    equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact and suggests a new method which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...

  19. Introducing Empirical Exercises into Principles of Economics.

    Science.gov (United States)

    McGrath, Eileen L.; Tiemann, Thomas K.

    1985-01-01

    A rationale for requiring undergraduate students to become familiar with the empirical side of economics is presented, and seven exercises that can be used in an introductory course are provided. (Author/RM)

  20. Review essay: empires, ancient and modern.

    Science.gov (United States)

    Hall, John A

    2011-09-01

    This essay draws attention to two books on empires, written by historians, which deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth-century empires, noting that their demise was caused most of all by the failure of balance-of-power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.

  1. Inland empire logistics GIS mapping project.

    Science.gov (United States)

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade, and it seems that this growth will continue well into the future. Where are these facilities located? How large are the facilitie...

  2. Protein-Ligand Empirical Interaction Components for Virtual Screening.

    Science.gov (United States)

    Yan, Yuna; Wang, Weijun; Sun, Zhaoxi; Zhang, John Z H; Ji, Changge

    2017-08-28

    A major shortcoming of empirical scoring functions is that they often fail to predict binding affinity properly. Removing false positives from docking results is one of the most challenging tasks in structure-based virtual screening. Postdocking filters, making use of all kinds of experimental structure and activity information, may help in solving the issue. We describe a new method based on detailed protein-ligand interaction decomposition and machine learning. Protein-ligand empirical interaction components (PLEIC) are used as descriptors for support vector machine learning to develop a classification model (PLEIC-SVM) to discriminate false positives from true positives. Experimentally derived activity information is used for model training. An extensive benchmark study on 36 diverse data sets from the DUD-E database has been performed to evaluate the performance of the new method. The results show that the new method performs much better than standard empirical scoring functions in structure-based virtual screening. The trained PLEIC-SVM model is able to capture important interaction patterns between ligand and protein residues for one specific target, which is helpful in discarding false positives in postdocking filtering.
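The classification step of such a post-docking filter can be sketched as below. This is not the published PLEIC-SVM pipeline: the descriptors here are synthetic placeholders for the per-residue interaction components, and the classifier is a minimal linear SVM trained by sub-gradient descent rather than the kernel SVM used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for PLEIC descriptors: one row per docked pose, one
# column per (residue, energy-component) pair. Real descriptors come from a
# per-residue decomposition of the protein-ligand interaction energy; here we
# simply synthesize two separable classes.
n, d = 200, 40
X_pos = rng.normal(-0.5, 1.0, size=(n, d))   # true binders
X_neg = rng.normal(0.3, 1.0, size=(n, d))    # docking false positives
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * n + [-1.0] * n)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Minimal linear SVM via sub-gradient descent on the regularized hinge
    loss (a sketch; the published model is a kernel SVM)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1.0            # margin violators
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / len(X))
        b += lr * y[viol].sum() / len(X)
    return w, b

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy of the post-docking filter: {acc:.2f}")
```

The design point carried over from the abstract is that the filter is target-specific: one model is trained per target, on experimentally derived activity labels, and then applied to rerank or discard docked poses.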

  3. Principles Involving Marketing Policies: An Empirical Assessment

    OpenAIRE

    JS Armstrong; Randall L. Schultz

    2005-01-01

    We examined nine marketing textbooks, published since 1927, to see if they contained useful marketing principles. Four doctoral students found 566 normative statements about pricing, product, place, or promotion in these texts. None of these statements were supported by empirical evidence. Four raters agreed on only twenty of these 566 statements as providing meaningful principles. Twenty marketing professors rated whether the twenty meaningful principles were correct, supported by empirical...

  4. Sources of Currency Crisis: An Empirical Analysis

    OpenAIRE

    Weber, Axel A.

    1997-01-01

    Two types of currency crisis models coexist in the literature: first generation models view speculative attacks as being caused by economic fundamentals which are inconsistent with a given parity. Second generation models claim self-fulfilling speculation as the main source of a currency crisis. Recent empirical research in international macroeconomics has attempted to distinguish between the sources of currency crises. This paper adds to this literature by proposing a new empirical approach ...

  5. Agency Theory and Franchising: Some Empirical Results

    OpenAIRE

    Francine Lafontaine

    1992-01-01

    This article provides an empirical assessment of various agency-theoretic explanations for franchising, including risk sharing, one-sided moral hazard, and two-sided moral hazard. The empirical models use proxies for factors such as risk, moral hazard, and franchisors' need for capital to explain both franchisors' decisions about the terms of their contracts (royalty rates and up-front franchise fees) and the extent to which they use franchising. In this article, I exploit several new sources...

  6. Gun Laws and Crime: An Empirical Assessment

    OpenAIRE

    Matti Viren

    2012-01-01

    This paper deals with the effect of gun laws on crime. Several empirical analyses are carried out to investigate the relationship between five different crime rates and alternative law variables. The tests are based on cross-section data from US states. Three different law variables are used in the analysis, together with a set of control variables for income, poverty, unemployment and ethnic background of the population. Empirical analysis does not lend support to the notion that crime laws would...

  7. Empirical direction in design and analysis

    CERN Document Server

    Anderson, Norman H

    2001-01-01

    The goal of Norman H. Anderson's new book is to help students develop skills of scientific inference. To accomplish this he organized the book around the "Experimental Pyramid" -- six levels that represent a hierarchy of considerations in empirical investigation: conceptual framework, phenomena, behavior, measurement, design, and statistical inference. To facilitate conceptual and empirical understanding, Anderson de-emphasizes computational formulas and null hypothesis testing. Other features include: *emphasis on visual inspection as a basic skill in experimental analysis to help student

  8. An Empirical Taxonomy of Crowdfunding Intermediaries

    OpenAIRE

    Haas, Philipp; Blohm, Ivo; Leimeister, Jan Marco

    2014-01-01

    Due to the recent popularity of crowdfunding, a broad range of crowdfunding intermediaries has emerged, while research on crowdfunding intermediaries has been largely neglected. As a consequence, existing classifications of crowdfunding intermediaries are conceptual, lack theoretical grounding, and are not empirically validated. Thus, we develop an empirical taxonomy of crowdfunding intermediaries, which is grounded in the theories of two-sided markets and financial intermediation. Integr...

  9. An empirical analysis of Diaspora bonds

    OpenAIRE

    AKKOYUNLU, Şule; STERN, Max

    2018-01-01

    This study is the first to investigate theoretically and empirically the determinants of Diaspora Bonds for eight developing countries (Bangladesh, Ethiopia, Ghana, India, Lebanon, Pakistan, the Philippines, and Sri Lanka) and one developed country, Israel, for the period 1951-2008. Empirical results are consistent with the predictions of the theoretical model. The most robust variables are the closeness indicator and the sovereign rating, both on the demand side. The spread is ...

  10. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be classified as physical, mathematical or empirical. The latter class uses mathematical equations that are independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models, and conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where gauging networks have continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models was needed with which to generate data and use them to estimate the flow. The estimated design floods illustrate the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method proved to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes, ...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design floods for different return periods.
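The frequency-analysis core of empirical design-flood estimation can be sketched with a Gumbel (EV1) fit. Everything numeric here is illustrative: the discharge series is synthetic and the site parameters are invented, not the Anseghmir data; only the fitting and quantile formulas are standard.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic annual maximum discharges (m^3/s). In practice these come from a
# gauged record or, in ungauged basins, from a calibrated climate-hydrological
# model as described in the abstract.
q_max = rng.gumbel(loc=120.0, scale=35.0, size=60)

# Method-of-moments fit of the Gumbel (EV1) distribution, a classic choice
# for empirical flood-frequency analysis.
EULER_GAMMA = 0.5772156649
beta = q_max.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = q_max.mean() - EULER_GAMMA * beta

def design_flood(T):
    """Discharge quantile for return period T years (non-exceedance
    probability F = 1 - 1/T)."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

for T in (10, 50, 100):
    print(f"{T:>4}-yr design flood: {design_flood(T):7.1f} m^3/s")
```

The same quantile function gives any return-period estimate once mu and beta are fitted, which is why frequency analysis pairs naturally with simulated (rather than gauged) flow series in data-scarce basins.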

  11. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) has been extended to color images. In addition, this paper deals with low contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, while this isn’t the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector gradient based objective measure that is proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD and DWT based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.
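The decompose-select-reconstruct pattern behind such fusion methods can be sketched as follows. This is only an illustration of the fusion rule, not of EMD itself: a simple smoothed gradient-energy focus measure stands in for the first-level 2D EMD component, and the toy images are random textures rather than real photographs.

```python
import numpy as np

def focus_measure(gray, reps=3):
    """Per-pixel focus measure: gradient-magnitude energy, lightly smoothed.
    It plays the role of the first-level EMD detail component in the
    decompose -> select sharper source -> reconstruct fusion scheme."""
    gy, gx = np.gradient(gray.astype(float))
    e = gx * gx + gy * gy
    for _ in range(reps):  # crude 4-neighbour smoothing of the energy map
        e = (e + np.roll(e, 1, 0) + np.roll(e, -1, 0)
               + np.roll(e, 1, 1) + np.roll(e, -1, 1)) / 5.0
    return e

def fuse_multifocus(a, b):
    """Choose-max fusion of two color images (H, W, 3) focused on different
    depth planes, using a shared luminance-based decision map so that the
    three channels stay consistent."""
    ga, gb = a.mean(axis=2), b.mean(axis=2)          # luminance proxies
    mask = (focus_measure(ga) >= focus_measure(gb))[..., None]
    return np.where(mask, a, b)

# Toy example: left half sharp in `a`, right half sharp in `b`.
rng = np.random.default_rng(1)
sharp = rng.uniform(0.0, 1.0, size=(64, 64, 3))
blur = (sharp + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 3.0
a = np.concatenate([sharp[:, :32], blur[:, 32:]], axis=1)
b = np.concatenate([blur[:, :32], sharp[:, 32:]], axis=1)
fused = fuse_multifocus(a, b)   # should recover the sharp half of each source
```

Applying one decision map to all channels, rather than fusing each channel independently, is what keeps color artifacts out of the result; the paper's EMD-based measure slots into `focus_measure` unchanged.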

  12. Empirical methods for systematic reviews and evidence-based medicine

    NARCIS (Netherlands)

    van Enst, W.A.

    2014-01-01

    Evidence-Based Medicine is the integration of best research evidence with clinical expertise and patient values. Systematic reviews have become the cornerstone of evidence-based medicine, which is reflected in the position systematic reviews have in the pyramid of evidence-based medicine. Systematic

  13. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph. Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  14. Functional Design of Breakwaters for Shore Protection: Empirical Methods

    Science.gov (United States)

    1990-09-01

    prepared by the Principal Investigator of the work unit, Ms. Julie Dean Rosati, Hydraulic Engineer, EAU, CSEB. COL Larry B. Fulton, EN, was Commander and...transmissibility, wave climate, etc.), morphological beach response may be either a salient or tombolo. Reef breakwaters are a type of detached breakwaters... climate chosen for design (USAED, Buffalo 1975; Pope and Dean 1986), as waves from the northwest were inappropriately weighted. Pope and Dean (1986) 26

  15. Reporting research methods of empirical studies | Korb | Journal of ...

    African Journals Online (AJOL)


  16. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
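The empirical likelihood machinery this abstract builds on can be sketched for the simplest case: a single estimating function for a mean. The weights and Lagrange-multiplier equation below are the standard construction; the paper's setting, with multiple working models combined in an over-identified system, is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.0, size=100)

def el_log_ratio(theta):
    """Empirical log-likelihood ratio for the mean, with the single
    estimating function g(x, theta) = x - theta. The weights are
    p_i = 1 / (n * (1 + lam * g_i)), with lam chosen so that the
    weighted estimating equation sum_i p_i * g_i = 0 holds."""
    g = x - theta
    if g.min() >= 0 or g.max() <= 0:
        return -np.inf                    # theta outside the data's convex hull
    # score(lam) = sum g_i / (1 + lam g_i) is strictly decreasing in lam,
    # so the multiplier can be found by bisection on a bracketing interval.
    lo = -1.0 / g.max() + 1e-9
    hi = -1.0 / g.min() - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.sum(g / (1.0 + mid * g)) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return -np.sum(np.log1p(lam * g))     # log prod(n * p_i); maximum is 0

# In this just-identified case the maximum empirical likelihood estimator is
# the sample mean, where lam = 0 and the log ratio attains its maximum of 0.
print(el_log_ratio(x.mean()))             # ~0 at the MELE
print(el_log_ratio(x.mean() + 0.5))       # strictly negative away from it
```

With more estimating functions than parameters, g becomes a vector, lam a vector multiplier, and maximizing the same profile log ratio over theta is what optimally combines the unbiased estimating equations described in the abstract.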

  17. Evaluation of empirical methods to estimate reference evapotranspiration in Uberaba, State of Minas Gerais, Brazil Avaliação de métodos empíricos na estimativa de evapotranspiração de referência para Uberaba - MG

    Directory of Open Access Journals (Sweden)

    Giovani L. de Melo

    2012-10-01

    Full Text Available Evapotranspiration is the process of water loss from vegetated soil due to evaporation and transpiration, and it may be estimated by various empirical methods. This study evaluated the performance of the following methods: Blaney-Criddle, Jensen-Haise, Linacre, Solar Radiation, Hargreaves-Samani, Makkink, Thornthwaite, Camargo, Priestley-Taylor and Original Penman in estimating potential evapotranspiration, compared with the standard Penman-Monteith method (FAO56), for the climatic conditions of Uberaba, state of Minas Gerais, Brazil. A set of 21 years of monthly data (1990 to 2010) was used, comprising the climatic elements temperature, relative humidity, wind speed and insolation. The empirical methods for estimating reference evapotranspiration were compared with the standard method using linear regression, simple statistical analysis, the Willmott agreement index (d) and the performance index (c). The Makkink and Camargo methods showed the best performance, with "c" values of 0.75 and 0.66, respectively. The Hargreaves-Samani method presented the best linear relation with the standard method, with a correlation coefficient (r) of 0.88.
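
The agreement and performance statistics used to rank the methods are straightforward to compute. Below is a sketch of Willmott's agreement index d and the performance index c = r·d (the combination of correlation and agreement commonly used in this evapotranspiration literature); the monthly values are invented for illustration, not the study's data.

```python
import numpy as np

def willmott_d(obs, est):
    """Willmott (1981) agreement index: 1 means perfect agreement."""
    obs = np.asarray(obs, float)
    est = np.asarray(est, float)
    o_bar = obs.mean()
    num = np.sum((est - obs) ** 2)
    den = np.sum((np.abs(est - o_bar) + np.abs(obs - o_bar)) ** 2)
    return 1.0 - num / den

def performance_c(obs, est):
    """Performance index c = r * d, where r is the Pearson correlation."""
    r = np.corrcoef(obs, est)[0, 1]
    return r * willmott_d(obs, est)

# Hypothetical monthly ET0 (mm/day): reference method vs. an empirical method
eto_ref = np.array([3.1, 3.4, 3.0, 2.6, 2.2, 2.0, 2.1, 2.7, 3.2, 3.6, 3.5, 3.3])
eto_emp = np.array([3.0, 3.5, 3.1, 2.5, 2.1, 2.1, 2.0, 2.6, 3.3, 3.5, 3.6, 3.2])
c = performance_c(eto_ref, eto_emp)
```

Ranking methods by c rather than by r alone penalizes a method that correlates well with the standard but is systematically biased.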

  18. An update on the "empirical turn" in bioethics: analysis of empirical research in nine bioethics journals.

    Science.gov (United States)

    Wangmo, Tenzin; Hauri, Sirin; Gennet, Eloise; Anane-Sarpong, Evelyn; Provoost, Veerle; Elger, Bernice S

    2018-02-07

    A review of literature published a decade ago noted a significant increase in empirical papers across nine bioethics journals. This study provides an update on the presence of empirical papers in the same nine journals. It first evaluates whether the empirical trend noted in the previous study is continuing, and second, how it is changing, that is, what the characteristics of the empirical works published in these nine bioethics journals are. A review of the same nine journals (Bioethics; Journal of Medical Ethics; Journal of Clinical Ethics; Nursing Ethics; Cambridge Quarterly of Healthcare Ethics; Hastings Center Report; Theoretical Medicine and Bioethics; Christian Bioethics; and Kennedy Institute of Ethics Journal) was conducted for a 12-year period from 2004 to 2015. Data obtained were analysed descriptively and using a non-parametric chi-square test. Of the total number of original papers (N = 5567) published in the nine bioethics journals, 18.1% (n = 1007) collected and analysed empirical data. Journal of Medical Ethics and Nursing Ethics led the empirical publications, accounting for 89.4% of all empirical papers. The former published significantly more quantitative papers than qualitative, whereas the latter published more qualitative papers. Our analysis reveals no significant difference (χ2 = 2.857; p = 0.091) between the proportion of empirical papers published in 2004-2009 and 2010-2015. However, the increasing empirical trend has continued in these journals, with the proportion of empirical papers increasing from 14.9% in 2004 to 17.8% in 2015. This study presents the current state of affairs regarding empirical research published in nine bioethics journals. In the quarter century of data that is available about the nine bioethics journals studied in two reviews, the proportion of empirical publications continues to increase, signifying a trend towards empirical research in bioethics. The growing volume is mainly attributable to two
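
The test the authors report (a chi-square comparison of the proportion of empirical papers in 2004-2009 vs. 2010-2015) amounts to a 2x2 contingency-table test. The counts below are invented placeholders purely to show the computation; the abstract reports only the resulting statistics (χ2 = 2.857, p = 0.091).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: publication period; columns: (empirical, non-empirical) paper counts.
# Hypothetical counts -- the study's raw counts are not given in the abstract.
table = np.array([
    [450, 2300],   # 2004-2009
    [557, 2260],   # 2010-2015
])
chi2, p, dof, expected = chi2_contingency(table)
```

A p-value above 0.05, as in the paper, means the two periods' proportions are statistically indistinguishable even though the raw trend is upward.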

  19. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    Science.gov (United States)

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  20. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  1. Constructive Verification, Empirical Induction, and Falibilist Deduction: A Threefold Contrast

    Directory of Open Access Journals (Sweden)

    Julio Michael Stern

    2011-10-01

    Full Text Available This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing scientific hypotheses, respectively: (a) Decision-theoretic Bayesian statistics and Bayes factors; (b) Frequentist statistics and p-values; (c) Constructive Bayesian statistics and e-values. This article examines with special care the Zero Probability Paradox (ZPP), related to the verification of sharp or precise hypotheses. Finally, this article makes some remarks on Lakatos’ view of mathematics as a quasi-empirical science.

  2. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, were presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can actually be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)
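
The diagnosis step described above (principal components analysis of model sensitivities to find which parameter combinations the available data can actually test) can be sketched as follows. The toy model, its parameter names, and the finite-difference scheme are all invented for illustration; the point is that a near-zero singular value flags a parameter combination the data cannot identify.

```python
import numpy as np

# Toy 'building' response: output depends on three parameters, but only on
# the ratio ua/cap and on gain -- so one direction is unidentifiable.
def model(p, t):
    ua, cap, gain = p
    return gain * (1.0 - np.exp(-ua * t / cap))

t = np.linspace(0.5, 48.0, 60)          # hourly-ish measurement times
p0 = np.array([2.0, 30.0, 5.0])         # nominal parameter values

# Finite-difference sensitivity matrix: column j approximates the
# derivative of the output with respect to log p_j (relative perturbation).
eps = 1e-4
S = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = p0.copy()
    dp[j] *= 1.0 + eps
    S[:, j] = (model(dp, t) - model(p0, t)) / eps

# PCA of the sensitivities: tiny singular values mark parameter
# combinations that cannot be tested on these data.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
```

Here the ua and cap sensitivities are (nearly) exact negatives of each other, so the third singular value collapses: only two parameter combinations are testable, which is exactly the kind of preliminary diagnosis the abstract refers to.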

  3. Empirical Estimates in Stochastic Optimization via Distribution Tails

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2010-01-01

    Roč. 46, č. 3 (2010), s. 459-471 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR GA402/07/1113; GA ČR(CZ) GA402/08/0107; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic programming problems * Stability * Wasserstein metric * L_1 norm * Lipschitz property * Empirical estimates * Convergence rate * Exponential tails * Heavy tails * Pareto distribution * Risk functional * Empirical quantiles Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010

  4. Essays on Empirical Asset Pricing

    DEFF Research Database (Denmark)

    Gormsen, Niels Joachim

    that the expected return to the distant-future cash flows increases by more in bad times than the expected return to near-future cash flows does. This new stylized fact is important for understanding why the expected return on the market portfolio as a whole varies over time. In addition, it has strong implications...... for which economic model drives the return to stocks. Indeed, I find that none of the canonical asset pricing models can explain this new stylized fact while also explaining the previously documented facts about stock returns. The second chapter, called Conditional Risk, studies how the expected return...... on individual stocks is influenced by the fact that their riskiness varies over time. We introduce a new "conditional-risk factor", which is a simple method for determining how much of the expected return to individual stocks can be explained by time variation in their market risk, i.e. market betas. Using...

  5. Empirical studies of regulatory restructuring and incentives

    Science.gov (United States)

    Knittel, Christopher Roland

    This dissertation examines the actions of firms when faced with regulatory restructuring. Chapter I examines the equilibrium pricing behavior of local exchange telephone companies under a variety of market structures. In particular, the pricing behavior of three services are analyzed: residential local service, business local service, and intraLATA toll service. Beginning in 1984, a variety of market structure changes have taken place in the local telecommunications industry. I analyze differences in the method of price-setting regulation and the restrictions on entry. Specifically, the relative pricing behavior under rate of return and price cap regulation is analyzed, as well as the impact of entry in the local exchange and intraLATA toll service markets. In doing so, I estimate an empirical model that accounts for the stickiness of rates in regulated industries that is based on firm and regulator decision processes in the presence of adjustment costs. I find that, faced with competitive pressures that reduce rates in one service, incumbent firm rates increase in other services, thereby reducing the benefits from competition. In addition, the findings suggest that price cap regulation leads to higher rates relative to rate-of-return regulation. Chapter 2 analyzes the pricing and investment behavior of electricity firms. Electricity and natural gas markets have traditionally been serviced by one of two market structures. In some markets, electricity and natural gas are sold by a dual-product regulated monopolist, while in other markets, electricity and natural gas are sold by separate single-product regulated monopolies. This paper analyzes the relative pricing and investment decisions of electricity firms operating in the two market structures. The unique relationship between these two products imply that the relative incentives of single and dual-product firms are likely to differ. 
Namely, electricity and natural gas are substitutes in consumption while natural

  6. Strategic Orientation of SMEs: Empirical Research

    Directory of Open Access Journals (Sweden)

    Jelena Minović

    2016-04-01

    Full Text Available The main objective of the paper is to identify the sources of competitive advantage of small and medium-sized enterprises in Serbia. Gaining a competitive advantage is the key priority of market-oriented enterprises regardless of their size and sector. Since the business environment in Serbia is not stimulating enough for enterprises’ growth and development, the paper highlights the role of strategic orientation in business promotion and development. In order to identify the sources of competitive advantage, empirical research was conducted using the survey method. The research sample was created using a selective approach; namely, the sample includes enterprises with more than ten employees and enterprises identified to have the potential for growth and development. The research results indicate that small and medium-sized enterprises in Serbia are generally focused on costs as a source of competitive advantage, i.e., they gain competitive advantage in a selected market segment by offering low-price and average-quality products/services. In addition, the results of the research point out that Serbian small and medium-sized enterprises are innovation-oriented. Organizations qualifying as middle-sized enterprises are predominantly focused on process innovations, while small businesses are primarily oriented towards product innovations. One limitation of the research is the small number of middle-sized enterprises in the sample. The smaller-than-planned sample is mostly due to the lack of managers’ willingness to participate in the research, as well as to the fact that these enterprises account for a smaller share of the total number of enterprises in the small- and medium-sized enterprises’ sector. Taking into account that the sector of small and medium-sized enterprises generates around 30% of the country’s GDP, we consider the research results to be

  7. An empirical, integrated forest biomass monitoring system

    Science.gov (United States)

    Kennedy, Robert E.; Ohmann, Janet; Gregory, Matt; Roberts, Heather; Yang, Zhiqiang; Bell, David M.; Kane, Van; Hughes, M. Joseph; Cohen, Warren B.; Powell, Scott; Neeti, Neeti; Larrue, Tara; Hooper, Sam; Kane, Jonathan; Miller, David L.; Perkins, James; Braaten, Justin; Seidl, Rupert

    2018-02-01

    The fate of live forest biomass is largely controlled by growth and disturbance processes, both natural and anthropogenic. Thus, biomass monitoring strategies must characterize both the biomass of the forests at a given point in time and the dynamic processes that change it. Here, we describe and test an empirical monitoring system designed to meet those needs. Our system uses a mix of field data, statistical modeling, remotely-sensed time-series imagery, and small-footprint lidar data to build and evaluate maps of forest biomass. It ascribes biomass change to specific change agents, and attempts to capture the impact of uncertainty in methodology. We find that: • A common image framework for biomass estimation and for change detection allows for consistent comparison of both state and change processes controlling biomass dynamics. • Regional estimates of total biomass agree well with those from plot data alone. • The system tracks biomass densities up to 450-500 Mg ha-1 with little bias, but begins underestimating true biomass as densities increase further. • Scale considerations are important. Estimates at the 30 m grain size are noisy, but agreement at broad scales is good. Further investigation to determine the appropriate scales is underway. • Uncertainty from methodological choices is evident, but much smaller than uncertainty based on choice of allometric equation used to estimate biomass from tree data. • In this forest-dominated study area, growth and loss processes largely balance in most years, with loss processes dominated by human removal through harvest. In years with substantial fire activity, however, overall biomass loss greatly outpaces growth. Taken together, our methods represent a unique combination of elements foundational to an operational landscape-scale forest biomass monitoring program.

  8. Integrated empirical ethics: loss of normativity?

    Science.gov (United States)

    van der Scheer, Lieke; Widdershoven, Guy

    2004-01-01

    An important discussion in contemporary ethics concerns the relevance of empirical research for ethics. Specifically, two crucial questions pertain, respectively, to the possibility of inferring normative statements from descriptive statements, and to the danger of a loss of normativity if normative statements should be based on empirical research. Here we take part in the debate and defend integrated empirical ethical research: research in which normative guidelines are established on the basis of empirical research and in which the guidelines are empirically evaluated by focusing on observable consequences. We argue that in our concrete example normative statements are not derived from descriptive statements, but are developed within a process of reflection and dialogue that goes on within a specific praxis. Moreover, we show that the distinction in experience between the desirable and the undesirable precludes relativism. The normative guidelines so developed are both critical and normative: they help in choosing the right action and in evaluating that action. Finally, following Aristotle, we plead for a return to the view that morality and ethics are inherently related to one another, and for an acknowledgment of the fact that moral judgments have their origin in experience which is always related to historical and cultural circumstances.

  9. [Anesthesia in the Inca empire].

    Science.gov (United States)

    Fairley, H Barrie

    2007-11-01

    The Incas had no written language and their chroniclers say little about their surgery and nothing about their methods for relieving the pain it caused. It is possible that they did have some form of anesthesia. Available plants that had central effects include maize (which they used in different ways to prepare an alcoholic beverage called chicha), Datura, espingo, tobacco, San Pedro cactus, and coca. The Incas used chicha to induce unconsciousness during minor surgical operations and it was still being used in those regions in the 19th century to perform female circumcision. Datura, espingo, tobacco, and San Pedro cactus can produce a deep trance and, in all probability, anesthesia. There is evidence that they used Datura as a total or partial anesthetic. The Incas chewed coca leaves with lime and swallowed the resulting juice, and this allowed them to work long hours without eating or drinking. Modern-day Peruvian Indians say that coca only numbs the mouth, though it was observed in the 19th century that coca leaves placed in wounds provided pain relief. It is possible that the Incas used chicha - probably in combination with another narcotic - to achieve the total or partial anesthesia needed for their surgery. A decoction of coca leaves may have been used as a topical anesthetic.

  10. Empirical Investigation of Industrial Management

    Directory of Open Access Journals (Sweden)

    Elenko Zahariev

    2014-07-01

    Full Text Available The paper is devoted to an aspect of the sphere of management – the business priorities of industrial management in the XXI century. Today, the relevance of the problems treated lies mainly in the needs of real management practice in industrial organizations and in the need for theoretical and applied knowledge that would allow that practice to implement, in a methodologically and methodically correct way, the corresponding changes in the management of a concrete organization. The objects of analysis and evaluation are some fragmented approbations of theses using the corresponding instruments. The characteristic features of the profiles of the organizations and of the persons interviewed who participated in the investigation are summarized. The determining approaches for Bulgarian organizations are considered too. On the basis of the critical analysis, the fundamental tasks inherent to contemporary industrial managers are drawn. Attention is paid to key management functions for an effective managerial process. An analysis of the managers reaching the best results in industrial management, and of when those results are reached, is presented. Also outlined are specific peculiarities of industrial management in the Republic of Bulgaria and parameters of the level of productiveness in conditions of business globalization, as well as priority forms of marketing the finished product/service in the XXI century. The results of the launched idea of the necessity to create a new International Management Architecture (NIMA) are determined – its structure and structure-defining parameters. The results of the investigation of the main business priorities in industrial management are commented on, as well as expected problems in the functioning of industrial organizations in the XXI century. At the end, the corresponding conclusions are made with respect to the techniques used in determining the effectiveness of industrial management in Bulgarian organizations.

  11. Pluvials, Droughts, Energetics, and the Mongol Empire

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    The success of the Mongol Empire, the largest contiguous land empire the world has ever known, is a historical enigma. At its peak in the late 13th century, the empire influenced areas from Hungary to southern Asia and Persia. Powered by domesticated herbivores, the Mongol Empire grew at the expense of agriculturalists in Eastern Europe, Persia, and China. What environmental factors contributed to the rise of the Mongols? What factors influenced the disintegration of the empire by 1300 CE? Until now, little high-resolution environmental data have been available to address these questions. We use tree-ring records of past temperature and water to illuminate the role of energy and water in the evolution of the Mongol Empire. The study of energetics has long been applied to biological and ecological systems but has only recently become a theme in understanding modern coupled natural and human systems (CNH). Because water and energy are tightly linked in human and natural systems, studying their synergies and interactions makes it possible to integrate knowledge across disciplines and human history, yielding important lessons for modern societies. We focus on the role of energy and water in the trajectory of an empire, including its rise, development, and demise. Our research is focused on the Orkhon Valley, seat of the Mongol Empire, where recent paleoenvironmental and archeological discoveries allow high-resolution reconstructions of past human and environmental conditions for the first time. Our preliminary records indicate that the period 1210-1230 CE, the height of Chinggis Khan's reign, is one of the longest and most consistent pluvials in our tree-ring reconstruction of interannual drought. Reconstructed temperature derived from five millennium-long records from subalpine forests in Mongolia documents warm temperatures beginning in the early 1200's and ending with a plunge into cold temperatures in 1260. Abrupt cooling in central Mongolia at this time is

  12. Symbiotic empirical ethics: a practical methodology.

    Science.gov (United States)

    Frith, Lucy

    2012-05-01

    Like any discipline, bioethics is a developing field of academic inquiry; and recent trends in scholarship have been towards more engagement with empirical research. This 'empirical turn' has provoked extensive debate over how such 'descriptive' research carried out in the social sciences contributes to the distinctively normative aspect of bioethics. This paper will address this issue by developing a practical research methodology for the inclusion of data from social science studies into ethical deliberation. This methodology will be based on a naturalistic conception of ethical theory that sees practice as informing theory just as theory informs practice - the two are symbiotically related. From this engagement with practice, the ways that such theories need to be extended and developed can be determined. This is a practical methodology for integrating theory and practice that can be used in empirical studies, one that uses ethical theory both to explore the data and to draw normative conclusions. © 2010 Blackwell Publishing Ltd.

  13. Reframing Serial Murder Within Empirical Research.

    Science.gov (United States)

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  14. Wireless and empire geopolitics radio industry and ionosphere in the British Empire 1918-1939

    CERN Document Server

    Anduaga, Aitor

    2009-01-01

    Although the product of consensus politics, the British Empire was based on communications supremacy and the knowledge of the atmosphere. Focusing on science, industry, government, the military, and education, this book studies the relationship between wireless and Empire throughout the interwar period.

  15. Empirical psychology, common sense, and Kant's empirical markers for moral responsibility.

    Science.gov (United States)

    Frierson, Patrick

    2008-12-01

    This paper explains the empirical markers by which Kant thinks that one can identify moral responsibility. After explaining the problem of discerning such markers within a Kantian framework, I briefly explain Kant's empirical psychology. I then argue that Kant's empirical markers for moral responsibility--linked to higher faculties of cognition--are not sufficient conditions for moral responsibility, primarily because they are empirical characteristics subject to natural laws. Next, I argue that these markers are not necessary conditions of moral responsibility. Given Kant's transcendental idealism, even an entity that lacks these markers could be free and morally responsible, although as a matter of fact Kant thinks that none are. Given that they are neither necessary nor sufficient conditions, I discuss the status of Kant's claim that higher faculties are empirical markers of moral responsibility. Drawing on connections between Kant's ethical theory and 'common rational cognition' (4:393), I suggest that Kant's theory of empirical markers can be traced to ordinary common-sense beliefs about responsibility. This suggestion helps explain both why empirical markers are important and what the limits of empirical psychology are within Kant's account of moral responsibility.

  16. An empirical formula for scattered neutron components in fast neutron radiography

    International Nuclear Information System (INIS)

    Dou Haifeng; Tang Bin

    2011-01-01

    Scattered neutrons are one of the key factors that may affect the images in fast neutron radiography. In this paper, a mathematical model for scattered neutrons is developed for a cylindrical sample, and an empirical formula for scattered neutrons is obtained. The parameters in the empirical formula are obtained by curve fitting to results given by Monte Carlo methods, which confirms the soundness of the empirical formula. The curve-fitted parameters of common materials such as 6LiD are given. (authors)
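
The abstract describes fitting the parameters of an empirical formula to Monte Carlo results. The actual functional form is not given in the abstract, so the sketch below fits a generic saturating-exponential form (`scatter_model` and its parameters are hypothetical) to synthetic stand-in "Monte Carlo" data, purely to illustrate the curve-fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def scatter_model(t, a, mu):
    """Hypothetical scattered-neutron fraction vs. sample thickness t:
    grows with thickness and saturates (illustrative form only)."""
    return a * (1.0 - np.exp(-mu * t))

# Synthetic stand-in for Monte Carlo results (true a = 0.35, mu = 0.8)
t = np.linspace(0.2, 6.0, 25)
rng = np.random.default_rng(1)
y = scatter_model(t, 0.35, 0.8) + rng.normal(0.0, 0.002, t.size)

# Least-squares fit of the empirical formula's parameters
(a_fit, mu_fit), cov = curve_fit(scatter_model, t, y, p0=[0.3, 1.0])
```

The diagonal of `cov` gives the variances of the fitted parameters, which is one way to judge whether the chosen empirical form is well constrained by the simulated data.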

  17. Evaluation of empirical atmospheric diffusion data

    International Nuclear Information System (INIS)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.

  18. Evaluation of empirical atmospheric diffusion data

    Energy Technology Data Exchange (ETDEWEB)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.
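
The dispersion estimates compared in this report belong to the Gaussian-plume family. As a generic illustration (the textbook ground-reflected form, not the specific model the report recommends), the concentration downwind of an elevated point source can be computed as:

```python
import numpy as np

def gaussian_plume(Q, u, H, sigma_y, sigma_z, y=0.0, z=0.0):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    Q: emission rate (g/s); u: wind speed (m/s); H: effective release
    height (m); sigma_y, sigma_z: lateral/vertical dispersion
    coefficients (m) evaluated at the receptor's downwind distance;
    y, z: crosswind offset and receptor height (m).
    """
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image source
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 g/s release at 50 m in a 5 m/s wind, sigma_y = 80 m, sigma_z = 40 m
chi = gaussian_plume(Q=1.0, u=5.0, H=50.0, sigma_y=80.0, sigma_z=40.0)
```

The comparison in the report reduces to how sigma_y and sigma_z are estimated from on-site meteorological data; the plume kernel itself is common to both the recommended and the NRC models.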

  19. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques. Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these models…

  20. Intuition in Decision Making – Theoretical and Empirical Aspects

    Directory of Open Access Journals (Sweden)

    Kamila Malewska

    2015-11-01

    Full Text Available In an economy dominated by information and knowledge, analysis ceases to be the sole and sufficient source of knowledge. Managers seek alternative ways of obtaining and interpreting information and knowledge. Here, managerial intuitive potential begins to play an important role. The aim of this paper is to present the issue of intuition in decision making in both theoretical and empirical terms. The first part presents the essence of intuition and its role in management, especially in decision making. Then, the empirical part attempts to identify the intuitive potential of managers and the extent of its use in practical decision making. The case study method was used in order to achieve this goal. The analysis involved a Polish food company “Fawor” that employs more than 300 workers. The literature review and empirical studies in the area of intuition were conducted within the research project “The impact of managerial intuitive potential on the effectiveness of decision making processes”, financed by the National Science Centre, Poland (funds allocated on the basis of decision No. DEC-2014/13/D/HS4/01750).

  1. PROBLEMS WITH WIREDU'S EMPIRICALISM Martin Odei Ajei ...

    African Journals Online (AJOL)

    In his “Empiricalism: The Empirical Character of an African Philosophy”,. Kwasi Wiredu sets out ... others, that an empirical metaphysical system contains both empirical ..... realms which multiple categories of existents inhabit and conduct their being in .... to a mode of reasoning that conceives categories polarized by formal.

  2. Empirical evaluation of three machine learning methods for automatic classification of neoplastic diagnoses Evaluación empírica de tres métodos de aprendizaje automático para clasificar automáticamente diagnósticos de neoplasias

    Directory of Open Access Journals (Sweden)

    José Luis Jara

    2011-12-01

    Full Text Available Diagnoses are a valuable source of information for evaluating a health system. However, they are not used extensively by information systems because diagnoses are normally written in natural language. This work empirically evaluates three machine learning methods to automatically assign codes from the International Classification of Diseases (10th Revision to 3,335 distinct diagnoses of neoplasms obtained from UMLS®. This evaluation is conducted on three different types of preprocessing. The results are encouraging: a well-known rule induction method and maximum entropy models achieve 90% accuracy in a balanced cross-validation experiment.Los diagnósticos médicos son una fuente valiosa de información para evaluar el funcionamiento de un sistema de salud. Sin embargo, su utilización en sistemas de información se ve dificultada porque éstos se encuentran normalmente escritos en lenguaje natural. Este trabajo evalúa empíricamente tres métodos de Aprendizaje Automático para asignar códigos de acuerdo a la Clasificación Internacional de Enfermedades (décima versión a 3.335 diferentes diagnósticos de neoplasias extraídos desde UMLS®. Esta evaluación se realiza con tres tipos distintos de preprocesamiento. Los resultados son alentadores: un conocido método de inducción de reglas de decisión y modelos de entropía máxima obtienen alrededor de 90% accuracy en una validación cruzada balanceada.

  3. Review of empirical results concerning the problem of acceptance of nuclear power

    International Nuclear Information System (INIS)

    Beker, G.; Berg, I. v.; Coenen, R.

    1980-05-01

    In this report, the results of empirical surveys are presented which can contribute to the explanation of the problem of public acceptance of nuclear power. Main emphasis is laid on hypotheses about the factors underlying the nuclear power controversy formulated in scientific literature and political discussions. The appendix to the report contains a documentation of the empirical surveys considered, giving information on sample, method, and main results. (orig.)

  4. Downside Risk And Empirical Asset Pricing

    NARCIS (Netherlands)

    P. van Vliet (Pim)

    2004-01-01

    Currently, the Nobel prize winning Capital Asset Pricing Model (CAPM) celebrates its 40th birthday. Although widely applied in financial management, this model does not fully capture the empirical risk-return relation of stocks; witness the beta, size, value and momentum effects. These…

  5. Empirical analysis of uranium spot prices

    International Nuclear Information System (INIS)

    Morman, M.R.

    1988-01-01

    The objective is to empirically test a market model of the uranium industry that incorporates the notion that, if the resource is viewed as an asset by economic agents, then its own rate of return along with the own rate of return of a competing asset would be a major factor in formulating the price of the resource. The model tested is based on a market model of supply and demand. The supply model incorporates the notion that the decision criterion used by uranium mine owners is to select the extraction rate that maximizes the net present value of their extraction receipts. The demand model uses a concept that allows for explicit recognition of the prospect of arbitrage between a natural-resource market and the market for other capital goods. The empirical approach used for estimation was a recursive or causal model. The empirical results were consistent with the theoretical models. The coefficients of the demand and supply equations had the appropriate signs. Tests for causality were conducted to validate the use of the causal model. The results obtained were favorable. The implications of the findings for future studies of exhaustible resources are: (1) in some cases causal models are the appropriate specification for empirical analysis; (2) supply models should incorporate a measure to capture depletion effects.

  6. Trade costs in empirical New Economic Geography

    NARCIS (Netherlands)

    Bosker, E.M.; Garretsen, J.H.

    Trade costs are a crucial element of New Economic Geography (NEG) models. Without trade costs there is no role for geography. In empirical NEG studies the unavailability of direct trade cost data creates the need to approximate these trade costs by introducing a trade cost function. In doing so,…

  7. Characterizing Student Expectations: A Small Empirical Study

    Science.gov (United States)

    Warwick, Jonathan

    2016-01-01

    This paper describes the results of a small empirical study (n = 130), in which undergraduate students in the Business Faculty of a UK university were asked to express views and expectations relating to the study of mathematics. Factor analysis is used to identify latent variables emerging from clusters of the measured variables and these are…

  8. EVOLVING AN EMPIRICAL METHODOLOGY FOR DETERMINING ...

    African Journals Online (AJOL)

    The uniqueness of this approach is that it can be applied to any forest or dynamic feature on the earth, and can enjoy universal application as well. KEY WORDS: Evolving empirical methodology, innovative mathematical model, appropriate interval, remote sensing, forest environment planning and management. Global Jnl ...

  9. Caught between Empires: Ambivalence in Australian Films ...

    African Journals Online (AJOL)

    Caught between Empires: Ambivalence in Australian Films. Greg McCarthy.

  10. Spitsbergen - Imperialists beyond the British Empire

    NARCIS (Netherlands)

    Kruse, Frigga; Hacquebord, Louwrens

    2012-01-01

    This paper looks at the relationship between Spitsbergen in the European High Arctic and the global British Empire in the first quarter of the twentieth century. Spitsbergen was an uninhabited no man's land and comprised an unknown quantity of natural resources. The concepts of geopolitics and New…

  11. An Empirical Investigation into Nigerian ESL Learners ...

    African Journals Online (AJOL)

    General observations indicate that ESL learners in Nigeria tend to manifest fear and anxiety in grammar classes, which could influence their performance negatively or positively. This study examines empirically some of the reasons for some ESL learners' apprehension of grammar classes. The data for the study were ...

  12. Air pollutant taxation: an empirical survey

    International Nuclear Information System (INIS)

    Cansier, D.; Krumm, R.

    1997-01-01

    An empirical analysis of the current taxation of the air pollutants sulphur dioxide, nitrogen oxides and carbon dioxide in the Scandinavian countries, the Netherlands, France and Japan is presented. Political motivation and technical factors such as tax base, rate structure and revenue use are compared. The general concepts of the current policies are characterised.

  13. Empirical research on constructing Taiwan's ecoenvironmental ...

    African Journals Online (AJOL)

    In this paper, the material flow indicators and the ecological footprint approach are adopted to construct eco-environmental stress indicators. We use relevant data to proceed with the empirical analyses of environmental stress and ecological impacts in Taiwan between the years 1998 and 2007. Analysis of ...

  14. Empirical Bayes Approaches to Multivariate Fuzzy Partitions.

    Science.gov (United States)

    Woodbury, Max A.; Manton, Kenneth G.

    1991-01-01

    An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)

  15. Empirically Exploring Higher Education Cultures of Assessment

    Science.gov (United States)

    Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.

    2016-01-01

    Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…

  16. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    Science.gov (United States)

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. The article suggests narrowing ATI research with replications and estimates of effect…

  17. Classification of Marital Relationships: An Empirical Approach.

    Science.gov (United States)

    Snyder, Douglas K.; Smith, Gregory T.

    1986-01-01

    Derives an empirically based classification system of marital relationships, employing a multidimensional self-report measure of marital interaction. Spouses' profiles on the Marital Satisfaction Inventory for samples of clinic and nonclinic couples were subjected to cluster analysis, resulting in separate five-group typologies for husbands and…

  18. Empirical scaling for present ohmic heated tokamaks

    International Nuclear Information System (INIS)

    Daughney, C.

    1975-06-01

    Empirical scaling laws are given for the average electron temperature and electron energy confinement time as functions of plasma current, average electron density, effective ion charge, toroidal magnetic field, and major and minor plasma radii. The ohmic heating is classical, and the electron energy transport is anomalous. The present scaling indicates that ohmic heating becomes ineffective in larger experiments. (U.S.)

  19. Developing empirical relationship between interrill erosion, rainfall ...

    African Journals Online (AJOL)

    In order to develop an empirical relationship for interrill erosion based on rainfall intensity, slope steepness and soil types, an interrill erosion experiment was conducted using laboratory rainfall simulator on three soil types (Vertisols, Cambisols and Leptosols) for the highlands of North Shewa Zone of Oromia Region.

  20. Governance and Human Development: Empirical Evidence from ...

    African Journals Online (AJOL)

    This study empirically investigates the effects of governance on human development in Nigeria. Using annual time series data covering the period 1998 to 2010, obtained from various sources, and employing the classical least squares estimation technique, the study finds that corruption, foreign aid and government ...

  1. Software Development Management: Empirical and Analytical Perspectives

    Science.gov (United States)

    Kang, Keumseok

    2011-01-01

    Managing software development is a very complex activity because it must deal with people, organizations, technologies, and business processes. My dissertation consists of three studies that examine software development management from various perspectives. The first study empirically investigates the impacts of prior experience with similar…

  2. The Italian Footwear Industry: an Empirical Analysis

    OpenAIRE

    Pirolo, Luca; Giustiniano, Luca; Nenni, Maria Elena

    2013-01-01

    This paper aims to provide readers with a deep empirical analysis on the Italian footwear industry in order to investigate the evolution of its structure (trends in sales and production, number of firms and employees, main markets, etc.), together with the identification of the main drivers of competitiveness in order to explain the strategies implemented by local actors.

  3. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    …residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully…

  4. Quantitative analyses of empirical fitness landscapes

    International Nuclear Information System (INIS)

    Szendro, Ivan G; Franke, Jasper; Krug, Joachim; Schenk, Martijn F; De Visser, J Arjan G M

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well. (paper)
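The rough Mt. Fuji model mentioned in the review is simple enough to sketch: fitness is an additive slope toward a reference genotype plus an i.i.d. random term, and ruggedness can be summarized by counting local maxima. The landscape size, slope and noise scale below are arbitrary illustration values, not parameters from the reviewed studies:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
L = 4          # biallelic loci -> 2**L = 16 genotypes
c = 1.0        # additive slope toward the reference (peak) genotype
sigma = 0.5    # scale of the random (rough) component

genotypes = list(product((0, 1), repeat=L))

# Rough Mt. Fuji: F(g) = -c * d(g, g0) + noise, with peak genotype
# g0 = (1,1,1,1) and d the Hamming distance to g0.
fitness = {g: -c * (L - sum(g)) + sigma * rng.standard_normal() for g in genotypes}

def neighbors(g):
    """All genotypes one point mutation away."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

# Ruggedness summary: number of local maxima (1 = smooth, single-peaked).
n_peaks = sum(all(fitness[g] > fitness[h] for h in neighbors(g)) for g in genotypes)
```

Raising `sigma` relative to `c` makes the landscape more rugged, which is exactly the kind of epistasis measure the review compares across empirical landscapes.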

  5. Qualitative Case Study Research as Empirical Inquiry

    Science.gov (United States)

    Ellinger, Andrea D.; McWhorter, Rochell

    2016-01-01

    This article introduces the concept of qualitative case study research as empirical inquiry. It defines and distinguishes what a case study is, the purposes, intentions, and types of case studies. It then describes how to determine if a qualitative case study is the preferred approach for conducting research. It overviews the essential steps in…

  6. The problem analysis for empirical studies

    NARCIS (Netherlands)

    Groenland, E.A.G.

    2014-01-01

    This article proposes a systematic methodology for the development of a problem analysis for cross-sectional, empirical research. This methodology is referred to as the 'Annabel approach'. It is suitable both for academic studies and applied (business) studies. In addition it can be used for both…

  7. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to…

  8. An Empirical Investigation into Programming Language Syntax

    Science.gov (United States)

    Stefik, Andreas; Siebert, Susanna

    2013-01-01

    Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming…

  9. Self-Published Books: An Empirical "Snapshot"

    Science.gov (United States)

    Bradley, Jana; Fulton, Bruce; Helm, Marlene

    2012-01-01

    The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…

  10. Empirical Differential Balancing for Nonlinear Systems

    NARCIS (Netherlands)

    Kawano, Yu; Scherpen, Jacquelien M.A.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    In this paper, we consider empirical balancing of nonlinear systems by using its prolonged system, which consists of the original nonlinear system and its variational system. For the prolonged system, we define differential reachability and observability Gramians, which are matrix-valued functions…

  11. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c^2 ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
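A generic version of the regression step can be sketched as follows; the quadratic basis in pressure ratio and corrected speed, and the synthetic efficiency map, are stand-in assumptions for illustration, not the paper's Taylor-series model or its measured data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic efficiency map: eta as a function of pressure ratio (pr) and
# corrected speed (n). This quadratic surface is an illustrative stand-in
# for a measured turbine map, not the paper's data or model form.
pr = rng.uniform(1.5, 4.0, 50)
n = rng.uniform(0.6, 1.2, 50)
eta = 0.85 - 0.03 * (pr - 2.5) ** 2 - 0.10 * (n - 0.9) ** 2

# Regression basis [1, pr, n, pr^2, n^2], fitted by ordinary least squares.
X = np.column_stack([np.ones_like(pr), pr, n, pr ** 2, n ** 2])
beta, *_ = np.linalg.lstsq(X, eta, rcond=None)

# Goodness of fit (coefficient of determination).
eta_hat = X @ beta
r2 = 1.0 - np.sum((eta - eta_hat) ** 2) / np.sum((eta - eta.mean()) ** 2)
```

The paper's actual model derives its basis functions from a Taylor expansion in the polytropic exponent; the point here is only the least-squares finalization step.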

  12. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of methods to implement models, the models can normally be classified into two categories in terms of their origins: using physical principles or historical observations. I have focused on the latter method (sometimes referred to as the empirical model based on statistical learning) because of some practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can accept not only statistical correlation but also some extent of physical knowledge of a system. Detailed examples of the problems are as follows: (1) missing important signals in a group because of a lack of observations, (2) problems with time-delayed signals, and (3) the problem of choosing an optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy, and case studies illustrating the performance of the method, are presented.
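The kernel-bandwidth problem listed above can be illustrated with a Nadaraya-Watson kernel regression, a standard statistical-learning model of the kind the abstract discusses; the signal and bandwidth values below are synthetic choices, not the thesis's monitoring system:

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# A clean periodic "sensor signal" (illustrative only).
x = np.linspace(0.0, 1.0, 101)
y = np.sin(2 * np.pi * x)

# A narrow bandwidth tracks the signal; a wide one oversmooths it away.
err_narrow = np.abs(nw_predict(x, y, x, h=0.01) - y).max()
err_wide = np.abs(nw_predict(x, y, x, h=0.5) - y).max()
```

With noisy data the trade-off reverses direction: too narrow a bandwidth then chases the noise, which is why bandwidth selection matters for empirical monitoring models.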

  13. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models, and their underlying techniques, that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint is required: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the value of the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. An adjusted square root of the number of bad clients seems to be a reasonable compromise.
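The decile-based empirical estimate that the paper takes as its baseline can be written down directly. The score distributions below are synthetic illustrations; the binning and the nonzero-count constraint follow the description above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scores: good clients score higher on average (illustrative data).
good = rng.normal(0.6, 0.15, 5000)
bad = rng.normal(0.4, 0.15, 5000)
scores = np.concatenate([good, bad])
is_bad = np.concatenate([np.zeros(5000, bool), np.ones(5000, bool)])

# Decile bins over all scores -- the common empirical estimate.
cuts = np.quantile(scores, np.linspace(0.1, 0.9, 9))
bins = np.digitize(scores, cuts)              # bin index 0..9 per client

iv = 0.0
for b in range(10):
    in_bin = bins == b
    p_good = (in_bin & ~is_bad).sum() / (~is_bad).sum()
    p_bad = (in_bin & is_bad).sum() / is_bad.sum()
    assert p_good > 0 and p_bad > 0           # the nonzero-count constraint
    iv += (p_good - p_bad) * np.log(p_good / p_bad)
```

The paper's supervised interval selection replaces the fixed decile cuts with intervals guaranteed to contain at least k good and k bad scores each; the accumulation formula stays the same.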

  14. Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-05-01

    Full Text Available Abstract Background The Bayesian shrinkage technique has been applied to multiple quantitative trait loci (QTLs mapping to estimate the genetic effects of QTLs on quantitative traits from a very large set of possible effects including the main and epistatic effects of QTLs. Although the recently developed empirical Bayes (EB method significantly reduced computation compared with the fully Bayesian approach, its speed and accuracy are limited by the fact that numerical optimization is required to estimate the variance components in the QTL model. Results We developed a fast empirical Bayesian LASSO (EBLASSO method for multiple QTL mapping. The fact that the EBLASSO can estimate the variance components in a closed form along with other algorithmic techniques renders the EBLASSO method more efficient and accurate. Compared with the EB method, our simulation study demonstrated that the EBLASSO method could substantially improve the computational speed and detect more QTL effects without increasing the false positive rate. Particularly, the EBLASSO algorithm running on a personal computer could easily handle a linear QTL model with more than 100,000 variables in our simulation study. Real data analysis also demonstrated that the EBLASSO method detected more reasonable effects than the EB method. Compared with the LASSO, our simulation showed that the current version of the EBLASSO implemented in Matlab had similar speed to the LASSO implemented in Fortran, and that the EBLASSO detected the same number of true effects as the LASSO but a much smaller number of false positive effects. Conclusions The EBLASSO method can handle a large number of effects possibly including both the main and epistatic QTL effects, environmental effects and the effects of gene-environment interactions. It will be a very useful tool for multiple QTL mapping.
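For contrast with the EBLASSO, the plain LASSO that the paper benchmarks against can be sketched with cyclic coordinate descent and soft-thresholding. This is the standard generic algorithm, not the EBLASSO's closed-form variance-component updates, and the data and penalty below are synthetic:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain LASSO, minimising 0.5*||y - Xb||^2 + lam*||b||_1,
    via cyclic coordinate descent with soft-thresholding."""
    b = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding feature j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_b = np.zeros(p)
true_b[:3] = [2.0, -1.5, 1.0]                     # only three true effects
y = X @ true_b + 0.1 * rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=5.0)
selected = np.flatnonzero(b_hat)                  # indices of nonzero effects
```

Like the QTL setting in the paper, most candidate effects are zero and the L1 penalty sets the estimated coefficients of irrelevant variables exactly to zero.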

  15. Empirical Formulae for The Calculation of Austenite Supercooled Transformation Temperatures

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2015-04-01

    Full Text Available The paper presents empirical formulae for the calculation of austenite supercooled transformation temperatures, based on the chemical composition, austenitising temperature and cooling rate. The multiple regression method was used. Four equations were established, allowing calculation of the start temperatures of the ferrite, pearlite, bainite and martensite regions at a given cooling rate. The calculation results obtained do not allow determination of the cooling-rate ranges of the ferritic, pearlitic, bainitic and martensitic transformations. Classifiers based on logistic regression or neural networks were established to solve this problem.
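The multiple-regression step can be sketched with ordinary least squares. The composition variables, coefficients and data below are synthetic illustrations, not the paper's fitted formulae:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic dataset: a martensite-start temperature Ms (deg C) vs. composition
# (wt.%) and cooling rate. The linear form and every coefficient below are
# illustrative assumptions, not the paper's fitted equations.
n = 60
C = rng.uniform(0.1, 0.6, n)           # carbon content
Mn = rng.uniform(0.3, 1.5, n)          # manganese content
log_v = rng.uniform(0.0, 2.0, n)       # log10 of cooling rate
Ms = 520.0 - 320.0 * C - 30.0 * Mn - 5.0 * log_v + rng.normal(0.0, 2.0, n)

# Multiple linear regression by ordinary least squares.
X = np.column_stack([np.ones(n), C, Mn, log_v])
beta, *_ = np.linalg.lstsq(X, Ms, rcond=None)
```

The paper fits one such equation per transformation (ferrite, pearlite, bainite, martensite), each with the chemical composition, austenitising temperature and cooling rate as regressors.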

  16. Palm vein recognition based on directional empirical mode decomposition

    Science.gov (United States)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to large scales. A DEMD-based two-directional linear discriminant analysis (2LDA) approach to palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  17. Temporal structure of neuronal population oscillations with empirical mode decomposition

    International Nuclear Information System (INIS)

    Li Xiaoli

    2006-01-01

    Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of disorders in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) are obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo; the results show that the neuronal oscillations have different time-frequency structure during the pre-ictal, seizure-onset and ictal periods of the epileptic EEG in different frequency bands. This new method is very helpful for providing a view of the temporal structure of neural oscillations.
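The Hilbert step described above (IMF to analytic signal to instantaneous frequency) can be sketched on a synthetic tone standing in for one IMF; the sampling rate and frequency are arbitrary illustrative choices, not values from the study:

```python
import numpy as np

fs = 500.0                          # sampling rate in Hz (arbitrary)
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 8.0 * t)   # synthetic 8 Hz tone standing in for an IMF

# Analytic signal via the FFT (the discrete Hilbert-transform construction).
n = len(imf)
h = np.zeros(n)
h[0] = h[n // 2] = 1.0              # n is even here
h[1:n // 2] = 2.0
analytic = np.fft.ifft(np.fft.fft(imf) * h)

# Instantaneous frequency from the unwrapped phase of the analytic signal.
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
```

In the paper's pipeline each IMF from the EMD stage is passed through this transform, so that the time course of frequency can be tracked separately per oscillatory mode.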

  18. Empirical PPGIS/PGIS mapping of ecosystem services

    DEFF Research Database (Denmark)

    Brown, Gregory G; Fagerholm, Nora

    2015-01-01

    …demonstrate high potential for the identification of ecosystem services, especially cultural services, there has been no review to evaluate the methods to identify best practice. Through examination of peer-reviewed, empirical PPGIS/PGIS studies, we describe the types of ecosystem services mapped, the spatial mapping methods, the sampling approaches and range of participants, the types of spatial analyses performed, and the methodological trade-offs associated with each PPGIS/PGIS mapping approach. We found that multiple methods were implemented in nearly 30 case studies worldwide with the mapping of cultural… …of experimental design and long-term case studies where the influence of mapped ecosystem services on land use decisions can be assessed.

  19. Porphyry of Russian Empires in Paris

    Science.gov (United States)

    Bulakh, Andrey

    2014-05-01

    Porphyry of the Russian Empire in Paris. A. G. Bulakh (St Petersburg State University, Russia). The so-called "Schokhan porphyry" from Lake Onega, Russia, surely belongs to the stones of World cultural heritage. One can see this "porphyry" in the facades of a lovely palace of Paul I and in the pedestal of the monument to Nicholas I in St Petersburg. There are many other cases of the use of this stone in Russia. In Paris, the sarcophagus of Napoleon I Bonaparte is constructed of blocks of this stone. In reality, it is a Proterozoic quartzite. The geological situation, petrography and mineralogical characteristics will be reported as well. A comparison with antique porphyry from the Egyptian province of the Roman Empire is given. References: 1) A.G. Bulakh, N.B. Abakumova, J.V. Romanovsky. St Petersburg: a History in Stone. 2010. Print House of St Petersburg State University. 173 p.

  20. Empirically Examining Prostitution through a Feminist Perspective

    OpenAIRE

    Child, Shyann

    2009-01-01

    The purpose of this thesis is to empirically explore prostitution through a feminist perspective. Several background factors are explored on a small sample of women in the northeastern United States. Some of these women have been involved in an act of prostitution in their lifetime; some have not. This research will add to the body of knowledge on prostitution, as well as highlight the unique experiences of women. The goal is to understand whether or not these life experiences have had a h...

  1. Theoretical and Empirical Descriptions of Thermospheric Density

    Science.gov (United States)

    Solomon, S. C.; Qian, L.

    2004-12-01

    The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of change in the ephemeris of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.

  2. Insurability of Cyber Risk: An Empirical Analysis

    OpenAIRE

    Biener, Christian; Eling, Martin; Wirfs, Jan Hendrik

    2015-01-01

    This paper discusses the adequacy of insurance for managing cyber risk. To this end, we extract 994 cases of cyber losses from an operational risk database and analyse their statistical properties. Based on the empirical results and recent literature, we investigate the insurability of cyber risk by systematically reviewing the set of criteria introduced by Berliner (1982). Our findings emphasise the distinct characteristics of cyber risks compared with other operational risks and bring to li...

  3. Conducting empirical research in virtual worlds

    OpenAIRE

    Minocha, Shailey

    2011-01-01

    We will focus on the following aspects of conducting empirical research in virtual worlds: the toolbox of techniques for data collection; selection of technique(s) for the research questions; tips on how the techniques need to be adapted for conducting research in virtual worlds; guidance for developing research materials such as the consent form, project summary sheet, and how to address the possible concerns of an institution’s ethics committee who may not be familiar with the avatar-based ...

  4. Empirical solar/stellar cycle simulations

    Directory of Open Access Journals (Sweden)

    Santos Ângela R. G.

    2015-01-01

    Full Text Available As a result of the magnetic cycle, the properties of the solar oscillations vary periodically. With the recent discovery of manifestations of activity cycles in the seismic data of other stars, the understanding of the different contributions to such variations becomes even more important. With this in mind, we built an empirical parameterised model able to reproduce the properties of the sunspot cycle. The resulting simulations can be used to estimate the magnetic-induced frequency shifts.

  5. The empirical turn in international legal scholarship

    Directory of Open Access Journals (Sweden)

    Gregory Shaffer

    2015-07-01

    Full Text Available This article presents and assesses a new wave of empirical research on international law. Recent scholarship has moved away from theoretical debates over whether international law "matters," and focuses instead on exploring the conditions under which international law is created and produces effects. As this empirical research program has matured, it has allowed for new, midlevel theorizing that we call "conditional international law theory".

  6. Compassion: An Evolutionary Analysis and Empirical Review

    OpenAIRE

    Goetz, Jennifer L.; Keltner, Dacher; Simon-Thomas, Emiliana

    2010-01-01

    What is compassion? And how did it evolve? In this review, we integrate three evolutionary arguments that converge on the hypothesis that compassion evolved as a distinct affective experience whose primary function is to facilitate cooperation and protection of the weak and those who suffer. Our empirical review reveals compassion to have distinct appraisal processes attuned to undeserved suffering, distinct signaling behavior related to caregiving patterns of touch, posture, and vocalization...

  7. Services outsourcing and innovation: an empirical investigation

    OpenAIRE

    Görg, Holger; Hanley, Aoife

    2009-01-01

    We provide a comprehensive empirical analysis of the links between international services outsourcing, domestic outsourcing, profits and innovation using plant level data. We find a positive effect of international outsourcing of services on innovative activity at the plant level. Such a positive effect can also be observed for domestic outsourcing of services, but the magnitude is smaller. This makes intuitive sense, as international outsourcing allows more scope for exploiting international...

  8. The value of replicating the data analysis of an empirical evaluation

    African Journals Online (AJOL)

    The aim of this research was to determine whether the results of an empirical evaluation could be confirmed using a different evaluation method. In this investigation, the Qualitative Weight and Sum method, used by the researchers Graf and List to evaluate several free and open source e-learning software platforms, was …

  9. An empirical study of the information premium on electricity markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Biegler-König, Richard; Kiesel, Rüdiger

    2013-01-01

    Due to the non-storability of electricity and the resulting lack of arbitrage-based arguments for pricing electricity forward contracts, these contracts exhibit a significant time-varying risk premium. Using EEX data during the introduction of emission certificates and the German “Atom Moratorium” we show that a significant part of the risk premium in electricity forwards is due to different information sets in spot and forward markets. In order to show the existence of the resulting information premium and to analyse its size we design an empirical method based on techniques relating to enlargement of filtrations and the structure of Hilbert spaces. - Highlights: ► Electricity is non-storable and the classical spot–forward relationship is invalid. ► Future information will cause an information premium for forward contracts. ► We model this premium mathematically using enlargement of filtrations. ► We develop a statistical method testing for the information premium empirically. ► We apply the test to the 2nd phase of the EU ETS and the German “Atom Moratorium”
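
    The information premium at the core of the paper can be sketched in generic enlargement-of-filtrations notation (the symbols below are illustrative, not necessarily the authors' exact ones): it is the difference between the forward price computed under an enlarged filtration, which contains future information such as announced policy changes, and the price under the ordinary market filtration.

```latex
% Information premium: extra value in the forward price when the pricing
% filtration \mathcal{G}_t carries future information beyond the market
% filtration \mathcal{F}_t (generic notation, for illustration only).
I^{\mathcal{G}}(t,T) \;=\; \mathbb{E}\!\left[\, S_T \mid \mathcal{G}_t \,\right]
  \;-\; \mathbb{E}\!\left[\, S_T \mid \mathcal{F}_t \,\right],
\qquad \mathcal{F}_t \subseteq \mathcal{G}_t ,
```

where \(S_T\) denotes the spot price at delivery; a nonzero \(I^{\mathcal{G}}\) is what the paper's statistical test is designed to detect.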

  10. The Role of Ethnographic Studies in Empirical Software Engineering

    DEFF Research Database (Denmark)

    Sharp, Helen; Dittrich, Yvonne; Souza, Cleidson R. B. de

    2016-01-01

    Ethnography is a qualitative research method used to study people and cultures. It is largely adopted in disciplines outside software engineering, including different areas of computer science. Ethnography can provide an in-depth understanding of the socio-technological realities surrounding everyday software development practice, i.e., it can help to uncover not only what practitioners do, but also why they do it. Despite its potential, ethnography has not been widely adopted by empirical software engineering researchers, and receives little attention in the related literature. The main goal … as a useful and usable approach to empirical software engineering research. Throughout the paper, relevant examples of ethnographic studies of software practice are used to illustrate the points being made.

  11. Statistical detection of EEG synchrony using empirical bayesian inference.

    Directory of Open Access Journals (Sweden)

    Archana K Singh

    Full Text Available There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimension. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that the locFDR can effectively control false positives without compromising on the power of PLV synchrony inference. Our results from applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.

  12. Statistical detection of EEG synchrony using empirical bayesian inference.

    Science.gov (United States)

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimension. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that the locFDR can effectively control false positives without compromising on the power of PLV synchrony inference. Our results from applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
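
    As a hedged illustration of the two-group local FDR idea the paper builds on (not the authors' exact pipeline), the posterior null probability of a statistic z can be approximated as pi0·f0(z)/f(z), with f0 the theoretical null density, f estimated from all statistics, and pi0 here simply assumed known:

```python
# Sketch of the two-group local FDR (Efron, 2001): locfdr(z) = pi0*f0(z)/f(z).
# f0 is the theoretical null N(0,1); the mixture density f is estimated by a
# kernel density over all z-scores; pi0 is assumed known for illustration.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
pi0 = 0.9
null_z = rng.normal(0.0, 1.0, 9000)          # true null statistics
alt_z = rng.normal(3.0, 1.0, 1000)           # true signal statistics
z = np.concatenate([null_z, alt_z])

f = gaussian_kde(z)                          # estimate of the mixture density

def locfdr(zi):
    """Posterior probability that statistic zi comes from the null."""
    return min(1.0, pi0 * norm.pdf(zi) / f(zi)[0])

print(round(locfdr(0.0), 2), round(locfdr(4.0), 2))
```

A statistic near zero gets a local FDR near 1 (almost surely null), while a large statistic gets a small local FDR and counts as a discovery at a chosen threshold.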

  13. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    … variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from …

  14. Evaluating guideline adherence regarding empirical vancomycin use in patients with neutropenic fever

    Directory of Open Access Journals (Sweden)

    Daniel B. Chastain

    2018-04-01

    Full Text Available Objective: The purpose of this study was to evaluate the use of empirical vancomycin for patients with neutropenic fever (NF) with regard to adherence to treatment guidelines. Methods: Adult patients with a diagnosis of neutropenia, who met the definition of NF as per treatment guidelines, were identified. Use of vancomycin was evaluated as part of empirical therapy and again after 72 h. Outcomes were assessed using descriptive statistics, the Chi-square or Fisher’s exact test, and univariate exact logistic regression analyses. Results: Sixty-four patients were included. Overall, inappropriate empirical vancomycin use was observed in more than 30% of patients. Of 35 patients with indications for empirical vancomycin, only 68% received it. At 72 h, appropriate vancomycin continuation, de-escalation, or discontinuation occurred in 21 of 33 patients. On univariate regression, hematological malignancy was associated with appropriate empirical vancomycin prescribing, whether initiating or withholding (odds ratio 4.0, 95% confidence interval 1.31–12.1). No variable was independently associated with inappropriate continuation at 72 h. Conclusions: There is poor guideline adherence to vancomycin prescribing as empirical therapy and at 72-h reassessment in patients with NF. Further efforts are needed to foster a more rational use of vancomycin in patients with NF. Keywords: Antibiotics, Neutropenia, Neutropenic fever, Vancomycin

  15. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi…

  16. Empirical prediction of optical transitions in metallic armchair SWCNTs

    Directory of Open Access Journals (Sweden)

    G. R. Ahmed Jamal

    2015-12-01

    Full Text Available In this work, a quick and effective method to calculate the second and third optical transition energies of metallic armchair single-wall carbon nanotubes (SWCNTs) is presented. In the proposed method, the transition energy of any armchair SWCNT can be predicted directly from one chiral index, since both of its chiral indices are the same. The predicted results are compared with recent experimental data and found to be accurate over a wide diameter range from 2 to 4.8 nm. The empirical equation proposed here is also compared with those proposed in earlier works. The proposed approach may help research or applications where information on the optical transitions of metallic armchair nanotubes is needed.

  17. Empirically Estimated Heats of Combustion of Oxygenated Hydrocarbon Bio-type Oils

    Directory of Open Access Journals (Sweden)

    Dmitry A. Ponomarev

    2015-04-01

    Full Text Available An empirical method is proposed by which the heats of combustion of oxygenated hydrocarbon oils, typically obtained from wood pyrolysis, may be calculated additively from empirically predicted heats of combustion of individual compounds. The predicted values are in turn based on four types of energetically inequivalent carbon and four types of energetically inequivalent hydrogen atomic energy values. A method is also given to estimate the condensation heats of oil mixtures based on the presence of four types of intermolecular forces. Agreement between predicted and experimental combustion heats was within ±2% for a typical mixture of known compounds and within 1% for a freshly prepared mixture of known compounds.
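
    The additive scheme can be sketched as follows. The per-atom-type contributions and the example molecule below are hypothetical placeholders chosen for illustration, not the paper's fitted values:

```python
# Sketch of an additive heat-of-combustion scheme: a compound's heat is the
# sum of per-atom contributions over energetically inequivalent carbon and
# hydrogen types. All numeric values below are hypothetical, for illustration.
C_TYPES = {"C_aromatic": 410.0, "C_aliphatic": 440.0}  # hypothetical kJ/mol
H_TYPES = {"H_aromatic": 105.0, "H_aliphatic": 110.0}  # hypothetical kJ/mol

def heat_of_combustion(atom_counts):
    """Sum per-atom contributions over all typed atoms in one molecule."""
    table = {**C_TYPES, **H_TYPES}
    return sum(table[t] * n for t, n in atom_counts.items())

# A toluene-like molecule: 6 aromatic C, 1 aliphatic C, 5 aromatic H, 3 aliphatic H.
toluene_like = {"C_aromatic": 6, "C_aliphatic": 1,
                "H_aromatic": 5, "H_aliphatic": 3}
print(heat_of_combustion(toluene_like))  # → 3755.0
```

A mixture's heat would then be the mole-fraction-weighted sum of such per-compound estimates, with condensation heats handled by an analogous additive term per intermolecular-force type.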

  18. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  19. Empirical knowledge in legislation and regulation : A decision making perspective

    NARCIS (Netherlands)

    Trautmann, S.T.

    2013-01-01

    This commentary considers the pros and cons of the empirical approach to legislation from the vantage point of empirical decision making research. It focuses on methodological aspects that are typically not considered by legal scholars. It points out weaknesses in the empirical approach that are

  20. Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

    Science.gov (United States)

    Christie, Christina A.

    2011-01-01

    Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…

  1. The empirical Gaia G-band extinction coefficient

    Science.gov (United States)

    Danielski, C.; Babusiaux, C.; Ruiz-Dern, L.; Sartoretti, P.; Arenou, F.

    2018-06-01

    Context. The first Gaia data release unlocked access to photometric information for 1.1 billion sources in the G-band. Yet, given the high level of degeneracy between extinction and spectral energy distribution for large passbands such as the Gaia G-band, a correction for the interstellar reddening is needed in order to exploit Gaia data. Aims: The purpose of this manuscript is to provide an empirical estimation of the Gaia G-band extinction coefficient kG for both red giants and main sequence stars, in order to be able to exploit the first data release DR1. Methods: We selected two samples of single stars: one for the red giants and one for the main sequence. Both samples are the result of a cross-match between the Gaia DR1 and 2MASS catalogues; they consist of high-quality photometry in the G-, J- and KS-bands. These samples were complemented by temperature and metallicity information retrieved from the APOGEE DR13 and LAMOST DR2 surveys, respectively. We implemented a Markov chain Monte Carlo method where we used (G - KS)0 versus Teff and (J - KS)0 versus (G - KS)0 calibration relations to estimate the extinction coefficient kG, and we quantified its corresponding confidence interval via bootstrap resampling. We tested our method on samples of red giants and main sequence stars, finding consistent solutions. Results: We present here the determination of the Gaia extinction coefficient through a completely empirical method. Furthermore we provide the scientific community with a formula for measuring the extinction coefficient as a function of stellar effective temperature, the intrinsic colour (G - KS)0, and absorption.
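
    A minimal sketch of the bootstrap-resampling step used to bracket a fitted coefficient. Here a generic least-squares slope on synthetic data stands in for kG; the data, model, and sample size are illustrative assumptions:

```python
# Sketch: percentile-bootstrap confidence interval for a fitted coefficient,
# analogous to bracketing an extinction coefficient. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.0, 1.0, n)
y = 0.6 * x + rng.normal(0.0, 0.05, n)      # synthetic data, true slope 0.6

def fit_slope(xs, ys):
    """Least-squares slope of ys against xs."""
    return float(np.polyfit(xs, ys, 1)[0])

# Refit on resampled (x, y) pairs, drawn with replacement.
boot = [fit_slope(x[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]

lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% bootstrap interval
print(round(lo, 2), round(hi, 2))
```

The 2.5th and 97.5th percentiles of the resampled fits give a nonparametric 95% interval around the point estimate, with no need for an analytic covariance.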

  2. Dissociative identity disorder: An empirical overview.

    Science.gov (United States)

    Dorahy, Martin J; Brand, Bethany L; Sar, Vedat; Krüger, Christa; Stavropoulos, Pam; Martínez-Taboas, Alfonso; Lewis-Fernández, Roberto; Middleton, Warwick

    2014-05-01

    Despite its long and auspicious place in the history of psychiatry, dissociative identity disorder (DID) has been associated with controversy. This paper aims to examine the empirical data related to DID and outline the contextual challenges to its scientific investigation. The overview is limited to DID-specific research in which one or more of the following conditions are met: (i) a sample of participants with DID was systematically investigated, (ii) psychometrically-sound measures were utilised, (iii) comparisons were made with other samples, (iv) DID was differentiated from other disorders, including other dissociative disorders, (v) extraneous variables were controlled or (vi) DID diagnosis was confirmed. Following an examination of challenges to research, data are organised around the validity and phenomenology of DID, its aetiology and epidemiology, the neurobiological and cognitive correlates of the disorder, and finally its treatment. DID was found to be a complex yet valid disorder across a range of markers. It can be accurately discriminated from other disorders, especially when structured diagnostic interviews assess identity alterations and amnesia. DID is aetiologically associated with a complex combination of developmental and cultural factors, including severe childhood relational trauma. The prevalence of DID appears highest in emergency psychiatric settings and affects approximately 1% of the general population. Psychobiological studies are beginning to identify clear correlates of DID associated with diverse brain areas and cognitive functions. They are also providing an understanding of the potential metacognitive origins of amnesia. Phase-oriented empirically-guided treatments are emerging for DID. The empirical literature on DID is accumulating, although some areas remain under-investigated. Existing data show DID as a complex, valid and not uncommon disorder, associated with developmental and cultural variables, that is amenable to

  3. Casual Empire: Video Games as Neocolonial Praxis

    Directory of Open Access Journals (Sweden)

    Sabine Harrer

    2018-01-01

    Full Text Available As a media form entwined in the U.S. military-industrial complex, video games continue to celebrate imperialist imagery and Western-centric narratives of the great white explorer (Breger, 2008; Dyer-Witheford & de Peuter, 2009; Geyser & Tshalabala, 2011; Mukherjee, 2016). While much ink has been spilt on the detrimental effects of colonial imagery on those it objectifies and dehumanises, the question is why these games still get made, and what mechanisms are at work in the enjoyment of empire-themed play experiences. To explore this question, this article develops the concept of ‘casual empire’, suggesting that the wish to play games as a casual pastime expedites the incidental circulation of imperialist ideology. Three examples – 'Resident Evil V' (2009), 'The Conquest: Colonization' (2015) and 'Playing History: Slave Trade' (2013) – are used to demonstrate the production and consumption of casual empire across multiple platforms, genres and player bases. Following a brief contextualisation of postcolonial (game) studies, this article addresses casual design, by which I understand game designers’ casual reproduction of inferential racism (Hall, 1995) for the sake of entertainment. I then look at casual play, and players’ attitudes to games as rational commodities continuing a history of commodity racism (McClintock, 1995). Finally, the article investigates the casual involvement of formalist game studies in the construction of imperial values. These three dimensions of the casual – design, play and academia – make up the three pillars of the casual empire that must be challenged to undermine video games’ neocolonialist praxis.

  4. Salmonella typhi time to change empiric treatment

    DEFF Research Database (Denmark)

    Gade, C.; Engberg, J.; Weis, N.

    2008-01-01

    In the present case series report we describe seven recent cases of typhoid fever. All the patients were travellers returning from Pakistan, where typhoid is endemic. Salmonella typhi isolated from the patients by blood culture was reported as intermediately susceptible to fluoroquinolones in six out of seven cases. We recommend that empiric treatment of suspected cases of typhoid fever include a third-generation cephalosporin such as ceftriaxone. Furthermore, the present report stresses the importance of typhoid vaccination for travellers to areas where typhoid is endemic. Publication date: 29 September 2008

  5. International Joint Venture Termination: An Empirical Investigation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik B.; Rasmussen, Erik Stavnsager; Siersbæk, Nikolaj

    … for the article stems from data from the project portfolio of a Danish Investment Fund for Developing Countries with a total of 773 investments. A number of hypotheses are established from the literature review and tested against the empirical data. The results indicate that the most important factor in successful IJV termination is the length of the investment and, to some extent, the size of the investment. Psychic distance plays a negative role for investments in the African region, while a general recession will lead to a lower success rate.

  6. Unemployment and Mental Disorders - An Empirical Analysis

    DEFF Research Database (Denmark)

    Agerbo, Esben; Eriksson, Tor Viking; Mortensen, Preben Bo

    1998-01-01

    … The purpose of this paper is also to analyze the importance of unemployment and other social factors as risk factors for impaired mental health. It departs from previous studies in that we make use of information about first admissions to a psychiatric hospital or ward as our measure of mental … from the Psychiatric case register. Secondly, we estimate conditional logistic regression models for case-control data on first admissions to a psychiatric hospital. The explanatory variables in the empirical analysis include age, gender, education, marital status, income, wealth, and unemployment (and …

  7. Is nondistributivity for microsystems empirically founded

    International Nuclear Information System (INIS)

    Selleri, F.; Tarozzi, G.

    1978-01-01

    Some authors have proposed nondistributive logic as a way out of the difficulties usually met in trying to describe typical quantum phenomena (e.g. the double-slit experiment). It is shown, however, that, if one takes seriously the wave-corpuscle dualism, which was after all the central fact around which quantum theory was developed, ordinary (distributive) logic can fully account for the empirical observations. Furthermore, it is pointed out that there are unavoidable physical difficulties connected with the adoption of a nondistributive corpuscular approach. (author)

  8. The empirical equilibrium structure of diacetylene

    OpenAIRE

    Thorwirth, S.; Harding, M. E.; Muders, D.; Gauss, J.

    2008-01-01

    High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, HCCCCH. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almlöf and Taylor. An empirical equilibrium structure based on experimental rotational constants for thirteen isotopic species of diacetylene …

  9. 30. L’empire de la raison

    OpenAIRE

    2018-01-01

    The political thought of Stanislas Leszczynski (1677–1766), King of Poland and later Duke of Lorraine, combines pragmatism and idealism: to live in peace with its neighbours, a state must know how to make itself feared; but it will exercise lasting dominion only through the wisdom of its laws and the virtue of its sovereign. In the Entretien d'un Européen avec un insulaire du Royaume de Dumocala, he stages a dialogue with a traveller whose ship has been wrecked on an unknown southern land …

  10. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the …
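
    The idea of letting actual residuals set the overall uncertainty can be sketched as follows. This is an illustrative variant, not the paper's exact derivation: the formal weighted-least-squares covariance is scaled by the average weighted residual variance, so unmodeled error sources inflate the covariance.

```python
# Sketch: residual-informed ("empirical") state error covariance for a
# weighted least squares fit. The formal covariance (A^T W A)^-1 reflects
# only the assumed measurement noise; scaling it by the average weighted
# residual variance folds in the errors the residuals actually contain.
# Illustrative only, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 2
A = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])  # design matrix
x_true = np.array([1.0, 2.0])
sigma_assumed = 0.1                           # assumed measurement noise
y = A @ x_true + rng.normal(0.0, 0.3, m)      # actual noise is larger

W = np.eye(m) / sigma_assumed**2              # weights from assumed noise
N = A.T @ W @ A                               # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)       # WLS state estimate

P_formal = np.linalg.inv(N)                   # maps only assumed noise
r = y - A @ x_hat                             # actual measurement residuals
s2 = (r @ W @ r) / (m - n)                    # average weighted residual variance
P_empirical = s2 * P_formal                   # residual-informed covariance

print(P_empirical[1, 1] > P_formal[1, 1])
```

Because the actual noise (0.3) exceeds the assumed noise (0.1), s2 comes out well above 1 and the empirical covariance is correspondingly larger than the formal one.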

  11. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex.
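    The Poisson-Gamma conjugate update at the core of such a model can be sketched as follows; the prior hyperparameters and time units here are illustrative placeholders, not values used in the study.

```python
import math

def landslide_rate_posterior(n_events, t_obs, alpha0=1.0, beta0=1.0):
    """Poisson-Gamma conjugate update for an event rate (sketch):
    n_events landslides observed over t_obs time units, with a
    Gamma(alpha0, beta0) prior on the rate lambda."""
    alpha = alpha0 + n_events       # posterior shape
    beta = beta0 + t_obs            # posterior rate
    mean = alpha / beta             # posterior mean rate
    var = alpha / beta ** 2         # posterior variance (rate uncertainty)
    return alpha, beta, mean, var

def prob_at_least_one(alpha, beta, t):
    """Posterior predictive probability of >= 1 event in time t
    (negative-binomial predictive of the Poisson-Gamma model)."""
    return 1.0 - (beta / (beta + t)) ** alpha
```

    The posterior variance is what expresses the "range of uncertainty" in λ that the abstract emphasizes; it shrinks as the observed record lengthens.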

  12. Physico-empirical approach for mapping soil hydraulic behaviour

    Directory of Open Access Journals (Sweden)

    G. D'Urso

    1997-01-01

    Pedo-transfer functions are widely used in the soil hydraulic characterisation of large areas. The use of physico-empirical approaches for deriving soil hydraulic parameters from disturbed-sample data can be greatly enhanced if a characterisation performed on undisturbed cores of the same type of soil is available. In this study, an experimental procedure for deriving maps of soil hydraulic behaviour is discussed with reference to its application in an irrigation district (30 km2) in southern Italy. The main steps of the proposed procedure are: (i) the precise identification of soil hydraulic functions from undisturbed sampling of main horizons in representative profiles for each soil map unit; (ii) the determination of pore-size distribution curves from larger disturbed sampling data sets within the same soil map unit; (iii) the calibration of physico-empirical methods for retrieving soil hydraulic parameters from particle-size data and undisturbed soil sample analysis; (iv) the definition of functional hydraulic properties from water balance output; and (v) the delimitation of soil hydraulic map units based on functional properties.

  13. Experimental determination of the empirical formula and energy content of unknown organics in waste streams

    Energy Technology Data Exchange (ETDEWEB)

    Shizas, I. [Univ. of Toronto, Dept. of Civil Engineering, Toronto, Ontario (Canada); Kosmatos, A. [Ontario Power Generation, Toronto, Ontario (Canada); Bagley, D.M. [Univ. of Toronto, Dept. of Civil Engineering, Toronto, Ontario (Canada)

    2002-06-15

    Two experimental methods are described in this paper: one for determining the empirical formula, and one for determining the energy content of unknown organics in waste streams. The empirical formula method requires volatile solids (VS), chemical oxygen demand (COD), total organic carbon (TOC), and total Kjeldahl nitrogen (TKN) to be measured for the waste; the formula can then be calculated from these values. To determine the energy content of the organic waste, bomb calorimetry was used with benzoic acid as a combustion aid. The results for standard compounds (glucose, propionic acid, L-arginine, and benzoic acid) were relatively good. The energy content measurement for wastewater and sludges had good reproducibility (i.e. 1.0 to 3.2% relative standard deviation for triplicate samples). Trouble encountered in the measurement of the empirical formulae of the waste samples was possibly due to difficulties with the TOC test; further analysis of this is required. (author)
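    The mole-balance idea behind such an empirical formula method can be sketched as follows, assuming the organic is CcHhOoNn, that COD oxidizes C to CO2 and H to H2O while leaving N as NH3, and that ash is negligible. This is an illustration of the principle, not the paper's exact procedure.

```python
def empirical_formula(vs, cod, toc, tkn):
    """Estimate moles of C, H, O, N in an organic waste from bulk
    measurements (all in consistent mass units, e.g. g/L):
    volatile solids (vs), chemical oxygen demand (cod, as O2),
    total organic carbon (toc), total Kjeldahl nitrogen (tkn)."""
    c = toc / 12.0                    # mol C from TOC
    n = tkn / 14.0                    # mol N from TKN
    # mass balance:  12c + h + 16o + 14n = vs
    # COD balance:   32c + 8h - 16o - 24n = cod  (N ends as NH3)
    h = (vs + cod - 44.0 * c + 10.0 * n) / 9.0   # sum of the two balances
    o = (vs - 12.0 * c - 14.0 * n - h) / 16.0    # back-substitute
    return c, h, o, n
```

    As a sanity check, glucose on a per-mole basis (VS = 180, COD = 192, TOC = 72, TKN = 0 g/mol) recovers C6H12O6.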

  14. Experimental determination of the empirical formula and energy content of unknown organics in waste streams

    International Nuclear Information System (INIS)

    Shizas, I.; Kosmatos, A.; Bagley, D.M.

    2002-01-01

    Two experimental methods are described in this paper: one for determining the empirical formula, and one for determining the energy content of unknown organics in waste streams. The empirical formula method requires volatile solids (VS), chemical oxygen demand (COD), total organic carbon (TOC), and total Kjeldahl nitrogen (TKN) to be measured for the waste; the formula can then be calculated from these values. To determine the energy content of the organic waste, bomb calorimetry was used with benzoic acid as a combustion aid. The results for standard compounds (glucose, propionic acid, L-arginine, and benzoic acid) were relatively good. The energy content measurement for wastewater and sludges had good reproducibility (i.e. 1.0 to 3.2% relative standard deviation for triplicate samples). Trouble encountered in the measurement of the empirical formulae of the waste samples was possibly due to difficulties with the TOC test; further analysis of this is required. (author)

  15. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
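    The notion of empirical statistical power — the fraction of simulated data sets in which the test detects a true effect — can be illustrated with a much simpler stand-in for the Gibbs-sampler test: a two-sided z-test on a two-group mean difference. All names and defaults below are illustrative.

```python
import math
import random

def empirical_power(effect, sigma, n, n_sim=2000, seed=7):
    """Monte Carlo estimate of empirical power for detecting a mean
    difference `effect` between two groups of size n (normal noise,
    two-sided z-test at the 5% level). A toy stand-in for the
    posterior-based test used in the paper."""
    rng = random.Random(seed)
    z_crit = 1.959964                    # two-sided 5% critical value
    se = sigma * math.sqrt(2.0 / n)      # std. error of the mean difference
    hits = 0
    for _ in range(n_sim):
        m0 = sum(rng.gauss(0.0, sigma) for _ in range(n)) / n
        m1 = sum(rng.gauss(effect, sigma) for _ in range(n)) / n
        if abs(m1 - m0) / se > z_crit:
            hits += 1
    return hits / n_sim
```

    Under the null (effect = 0) the estimate converges to the nominal 5% level; for a large effect it approaches 1, which is the behavior the simulation study tabulates across designs.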

  16. Semi-empirical formulas for sputtering yield

    International Nuclear Information System (INIS)

    Yamamura, Yasumichi

    1994-01-01

    When charged particles, electrons, light and so on irradiate solid surfaces, material is lost from the surfaces; this phenomenon is called sputtering. In order to understand the sputtering phenomenon, the bond energy of atoms on surfaces, the energy given to the vicinity of surfaces and the process of converting the given energy to the energy for releasing atoms must be known. The theories of sputtering and the semi-empirical formulas for evaluating the dependence of sputtering yield on incident energy are explained. The mechanisms of sputtering are collision cascades in the case of heavy ion incidence and surface atom recoil in the case of light ion incidence. The formulas for the sputtering yield of low energy heavy ion sputtering, high energy light ion sputtering and the general case between these extremes, and the Matsunami formula, are shown. At the stage of the publication of Atomic Data and Nuclear Data Tables in 1984, the data up to 1983 were collected; about 30 papers published thereafter have been added. Experimental data for low-Z materials, for example Be, B and C, and light ion sputtering data were reported. The combinations of ions and target atoms in the collected sputtering data are shown. A new semi-empirical formula, obtained by slightly adjusting the Matsunami formula, was decided upon. (K.I.)

  17. Empirical seasonal forecasts of the NAO

    Science.gov (United States)

    Sanchez-Gomez, E.; Ortiz-Bevia, M.

    2003-04-01

    We present here seasonal forecasts of the North Atlantic Oscillation (NAO) issued from ocean predictors with an empirical procedure. The Singular Value Decomposition (SVD) of the cross-correlation matrix between predictor and predictand fields, at the lag used for the forecast lead, is at the core of the empirical model. The main predictor field is sea surface temperature anomalies, although sea ice cover anomalies are also used. Forecasts are issued in probabilistic form. The model is an improvement over a previous version (1), where sea level pressure anomalies were first forecast and the NAO index was built from this forecast field. Both the correlation skill between forecast and observed fields and the number of forecasts that hit the correct NAO sign are used to assess the forecast performance, which is usually above that of forecasts issued assuming persistence. For certain seasons and/or leads, values of the skill are above the 0.7 usefulness threshold. References (1) Sanchez-Gomez, E. and Ortiz Bevia M., 2002, Estimacion de la evolucion pluviometrica de la Espana Seca atendiendo a diversos pronosticos empiricos de la NAO, in 'El Agua y el Clima', Publicaciones de la AEC, Serie A, N 3, pp 63-73, Palma de Mallorca, Spain
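    The SVD-based core of such an empirical model can be sketched as follows: couple lagged predictor and predictand anomaly fields through the SVD of their cross-covariance matrix, then regress in the reduced mode space. This is a deterministic toy version under simplifying assumptions (cross-covariance rather than cross-correlation; no probabilistic output); all names are illustrative.

```python
import numpy as np

def svd_empirical_forecast(X, Y, lag, k=3):
    """Build a forecast function: predictor anomalies X (time x space)
    lead predictand anomalies Y (time x space) by `lag` steps;
    k coupled SVD modes are retained."""
    Xp, Yp = X[:-lag], Y[lag:]                    # lagged pairing
    C = Xp.T @ Yp / (Xp.shape[0] - 1)             # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    a = Xp @ U[:, :k]                             # predictor mode coefficients
    b = Yp @ Vt[:k].T                             # predictand mode coefficients
    G, *_ = np.linalg.lstsq(a, b, rcond=None)     # regress b on a
    def forecast(x_new):
        return (x_new @ U[:, :k]) @ G @ Vt[:k]    # back to predictand space
    return forecast
```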

  18. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
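    The stochastic (Monte Carlo) propagation mentioned above can be sketched generically: sample model parameters from their assumed distribution, evaluate the model, and take the sample covariance of the outputs. The toy linear `model` below stands in for a reaction-code calculation; everything here is illustrative.

```python
import numpy as np

def monte_carlo_covariance(model, p_mean, p_cov, n_samples=1000, seed=0):
    """Propagate Gaussian parameter uncertainties (p_mean, p_cov)
    through `model` by sampling; returns the covariance of the
    model outputs (the observable covariance matrix)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(p_mean, p_cov, size=n_samples)
    outputs = np.array([model(p) for p in samples])   # (n_samples, n_obs)
    return np.cov(outputs, rowvar=False)

# toy "cross section" model: linear in its two parameters
energies = np.linspace(1.0, 5.0, 4)
def toy_model(p):
    return p[0] * energies + p[1]
```

    For a linear model the result converges to J P Jᵀ (with J the Jacobian), which is exactly what a deterministic (Kalman-style) sandwich propagation would give, illustrating why the two procedures yield comparable results.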

  19. Practice management: observations, issues, and empirical evidence.

    Science.gov (United States)

    Wong, H M; Braithwaite, J

    2001-02-01

    The primary objective of this study is to provide objective, empirical, evidence-based practice management information. This is a hitherto under-researched area of considerable interest for both the practitioner and educator. A questionnaire eliciting a mix of structured and free text responses was administered to a random sample of 480 practitioners who are members of the American Academy of Periodontology. Potential respondents not in private practice were excluded and the next listed person substituted. The results provide demographic and descriptive information about some of the main issues and problems facing practice managers, central to which are information technology (IT), finance, people management, and marketing. Human resource and marketing management appear to represent the biggest challenges. Periodontists running practices would prefer more information, development, and support in dealing with IT, finance, marketing, and people management. The empirical evidence reported here suggests that although tailored educational programs on key management issues at both undergraduate and postgraduate levels have become ubiquitous, nevertheless some respondents seek further training opportunities. Evidence-based practice management information will be invaluable to the clinician considering strategic and marketing planning, and also for those responsible for the design and conduct of predoctoral and postdoctoral programs.

  20. An empirical framework for tropical cyclone climatology

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Nam-Young [Korea Meteorological Administration, Seoul (Korea, Republic of); Florida State University, Tallahassee, FL (United States); Elsner, James B. [Florida State University, Tallahassee, FL (United States)

    2012-08-15

    An empirical approach for analyzing tropical cyclone climate is presented. The approach uses lifetime-maximum wind speed and cyclone frequency to induce two orthogonal variables labeled "activity" and "efficiency of intensity". The paired variations of activity and efficiency of intensity, along with the opposing variations of frequency and intensity, configure a framework for evaluating tropical cyclone climate. Although cyclone activity as defined in this framework is highly correlated with commonly used exponent indices like accumulated cyclone energy, it does not contain cyclone duration. Empirical quantiles are used to determine threshold intensity levels, and variant year ranges are used to find consistent trends in tropical cyclone climatology. In the western North Pacific, cyclone activity is decreasing despite increases in lifetime-maximum intensity. This is due to overwhelming decreases in cyclone frequency. These changes are also explained by an increasing efficiency of intensity. The North Atlantic shows different behavior. Cyclone activity is increasing due to increasing frequency and, to a lesser extent, increasing intensity. These changes are also explained by a decreasing efficiency of intensity. Tropical cyclone trends over the North Atlantic basin are more consistent over different year ranges than those over the western North Pacific. (orig.)

  1. Non-empirical energy density functional for the nuclear structure

    International Nuclear Information System (INIS)

    Rotival, V.

    2008-09-01

    The energy density functional (EDF) formalism is the tool of choice for large-scale low-energy nuclear structure calculations, both for stable, experimentally known nuclei whose properties are accurately reproduced and for systems that are only theoretically predicted. We highlight in the present dissertation the capability of EDF methods to tackle exotic phenomena appearing at the very limits of stability, namely the formation of nuclear halos. We devise a new quantitative and model-independent method that characterizes the existence and properties of halos in medium- to heavy-mass nuclei, and quantifies the impact of pairing correlations and the choice of the energy functional on the formation of such systems. These results are found to be limited by the predictive power of currently used EDFs, which rely on fitting to known experimental data. In the second part of this dissertation, we initiate the construction of non-empirical EDFs that make use of the new paradigm for vacuum nucleon-nucleon interactions set by so-called low-momentum interactions generated through the application of renormalization group techniques. These soft-core vacuum potentials are used as a stepping stone in a long-term strategy which connects modern many-body techniques and EDF methods. We provide guidelines for designing several non-empirical models that include in-medium many-body effects at various levels of approximation and can be handled in state-of-the-art nuclear structure codes. In the present work, the first step is taken through the adjustment of an operator representation of low-momentum vacuum interactions using a custom-designed parallel evolutionary algorithm. The first results highlight the possibility to grasp most of the relevant physics for low-energy nuclear structure using this numerically convenient Gaussian vertex. (author)

  2. Empirical Bayes conditional independence graphs for regulatory network recovery

    Science.gov (United States)

    Mahdi, Rami; Madduri, Abishek S.; Wang, Guoqing; Strulovici-Barel, Yael; Salit, Jacqueline; Hackett, Neil R.; Crystal, Ronald G.; Mezey, Jason G.

    2012-01-01

    Motivation: Computational inference methods that make use of graphical models to extract regulatory networks from gene expression data can have difficulty reconstructing dense regions of a network, a consequence of both computational complexity and unreliable parameter estimation when sample size is small. As a result, identification of hub genes is of special difficulty for these methods. Methods: We present a new algorithm, Empirical Light Mutual Min (ELMM), for large network reconstruction that has properties well suited for recovery of graphs with high-degree nodes. ELMM reconstructs the undirected graph of a regulatory network using empirical Bayes conditional independence testing with a heuristic relaxation of independence constraints in dense areas of the graph. This relaxation allows only one gene of a pair with a putative relation to be aware of the network connection, an approach that is aimed at easing multiple testing problems associated with recovering densely connected structures. Results: Using in silico data, we show that ELMM has better performance than commonly used network inference algorithms including GeneNet, ARACNE, FOCI, GENIE3 and GLASSO. We also apply ELMM to reconstruct a network among 5492 genes expressed in human lung airway epithelium of healthy non-smokers, healthy smokers and individuals with chronic obstructive pulmonary disease assayed using microarrays. The analysis identifies dense sub-networks that are consistent with known regulatory relationships in the lung airway and also suggests novel hub regulatory relationships among a number of genes that play roles in oxidative stress and secretion. Availability and implementation: Software for running ELMM is made available at http://mezeylab.cb.bscb.cornell.edu/Software.aspx. Contact: ramimahdi@yahoo.com or jgm45@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22685074

  3. Understanding similarity of groundwater systems with empirical copulas

    Science.gov (United States)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims at identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-system similarity. Copulas are an emerging method in the hydrological sciences that make it possible to model the dependence structure of two groundwater level time series independently of the effects of their marginal distributions. This study builds on Samaniego et al. (2010), which described an approach for calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series; streamflow is subsequently predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper-corner cumulated probabilities and the copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs, EGU General Assembly 2016.
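    Two of the copula-based measures named above can be sketched directly from rank-transformed data; the quantile level q below is an illustrative choice, not one taken from the study.

```python
import numpy as np

def pseudo_observations(x):
    """Rank-transform a series to (0, 1): the margins-free part of the data."""
    r = np.argsort(np.argsort(x))
    return (r + 1) / (len(x) + 1)

def copula_dissimilarity(x, y, q=0.2):
    """Copula-based similarity measures for two time series (sketch):
    Spearman's rank correlation and the lower-corner cumulated
    probability C(q, q) of the empirical copula."""
    u, v = pseudo_observations(x), pseudo_observations(y)
    spearman = np.corrcoef(u, v)[0, 1]            # rank correlation
    lower_corner = np.mean((u <= q) & (v <= q))   # empirical C(q, q)
    return spearman, lower_corner
```

    Comparing these quantities across well pairs, rather than the raw series, isolates the dependence structure from the marginal distributions, which is the motivation for using copulas here.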

  4. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Barabási–Albert preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of the model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree, betweenness, and closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
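    The entropy-based comparison can be sketched in plain Python: compute the Shannon entropy of a graph's degree distribution and compare it with the entropy obtained from a model realization (an Erdős–Rényi graph below; other models would be handled the same way). This illustrates only the degree-centrality case, under assumed parameters.

```python
import math
import random
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a degree distribution."""
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def erdos_renyi_degrees(n, p, seed=1):
    """Degree sequence of a G(n, p) random graph (plain-Python sketch)."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

# compare an empirical graph's degree entropy to the ER model's
model_H = degree_entropy(erdos_renyi_degrees(200, 0.05))
```

    Classification then reduces to picking the model whose realization entropies lie closest to the empirical graph's entropy, a scalar comparison rather than a full distributional test.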

  5. Joint production and corporate pricing: An empirical analysis of joint products in the petroleum industry

    International Nuclear Information System (INIS)

    Karimnejad, H.

    1990-01-01

    This dissertation investigates the pricing mechanism of joint products in large multi-plant and multi-product corporations. The primary objective of this dissertation is to show the consistency of classical theories of production with corporate pricing of joint products. This dissertation has two major parts. Part One provides a theoretical framework for joint production and corporate pricing. In this part, joint production is defined and its historical treatment by classical and contemporary economists is analyzed. Part Two conducts an empirical analysis of joint products in the US petroleum industry. Methods of cost allocation are used in the pricing of each individual petroleum product. Three methods are employed to distribute joint production costs to individual petroleum products. These methods are the sales value method, the barrel gravity method and the average unit cost method. The empirical findings of the dissertation provide useful guidelines for the pricing policies of large multi-product corporations.
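    As an illustration of the first of these allocation rules, the sales value method distributes the joint cost in proportion to each product's sales value at the split-off point. The product names and figures below are made up for the example.

```python
def allocate_joint_cost(joint_cost, sales_values):
    """Sales value method (sketch): split a joint production cost
    across co-products in proportion to their sales values."""
    total = sum(sales_values.values())
    return {p: joint_cost * v / total for p, v in sales_values.items()}
```

    For example, a joint cost of 100 split over sales values of 60/30/10 allocates 60, 30 and 10 to the respective products; the barrel gravity and average unit cost methods differ only in the weighting basis.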

  6. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.

  7. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE)-based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
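    The POD step can be sketched via the SVD of a snapshot matrix; the energy threshold used to truncate the basis is an illustrative choice, and the functions below are a generic sketch rather than the paper's implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of a snapshot matrix
    (space x time): keep the fewest left singular vectors that
    capture the requested fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k], s[:k]

def galerkin_coeffs(snapshots, Phi):
    """Project snapshots onto the POD basis (reduced coordinates)."""
    return Phi.T @ snapshots
```

    Projecting the governing equations onto Phi yields the nonlinear Galerkin model mentioned above; empirical balanced truncation is then applied to that reduced model.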

  8. Empirically derived climate predictability over the extratropical northern hemisphere

    Directory of Open Access Journals (Sweden)

    J. B. Elsner

    1994-01-01

    A novel application of a technique developed from chaos theory is used in describing seasonal to interannual climate predictability over the Northern Hemisphere (NH). The technique is based on an empirical forecast scheme - local approximation in a reconstructed phase space - for time-series data. Data are monthly 500 hPa heights on a latitude-longitude grid covering the NH from 20° N to the equator. Predictability is estimated based on the linear correlation between actual and predicted heights averaged over a forecast range of one- to twelve-month lead. The method is capable of extracting the major climate signals on this time scale, including ENSO and the North Atlantic Oscillation.
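    A minimal version of local approximation in a reconstructed phase space: delay-embed the scalar series, find the nearest historical analogues of the current state, and average their successors. The embedding and neighbour parameters below are illustrative, not those of the study.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional
    reconstructed phase space (Takens-style)."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_approx_forecast(x, dim=3, tau=1, k=4):
    """One-step local-approximation forecast: average the successors
    of the k nearest historical neighbours of the current state."""
    E = delay_embed(x, dim, tau)
    current, history = E[-1], E[:-1]
    # successor of each historical state = next observed scalar value
    succ = x[(dim - 1) * tau + 1:(dim - 1) * tau + 1 + len(history)]
    d = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(succ[nearest]))
```

    Iterating the one-step forecast (appending each prediction and repeating) extends this to the multi-month leads assessed in the study.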

  9. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  10. Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates

    Science.gov (United States)

    Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida

    2015-02-01

    This paper studied the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA) model. The MYR was pegged to the USD during the Asian financial crisis, fixing the exchange rate at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen in this paper are the post-July 2005 data, from August 2005 to July 2010. The comparative study using root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random walk benchmark model.
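    The two comparison criteria are standard and easy to state precisely:

```python
import math

def rmse(actual, forecast):
    """Root mean square error: penalizes large misses quadratically."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error: average magnitude of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
```

    Lower values of both on a held-out period are what "outperformed" means in the comparison above; RMSE weights occasional large errors more heavily than MAE.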

  11. Knowledge-oriented strategies in the metal industry (empirical studies

    Directory of Open Access Journals (Sweden)

    A. Krawczyk-Sołtys

    2016-07-01

    The aim of this article is to determine which knowledge-oriented strategies can give metal industry enterprises the best results in achieving and maintaining a competitive advantage. To establish which of the knowledge-oriented strategies discussed in the literature and implemented in various organizations may prove the most effective in the metal industry, empirical research has been undertaken. The chosen knowledge management strategy and its supporting strategies form the basis for selecting the methods and means of implementation. The choice of a specific knowledge management strategy may also result in the need for changes in an organization, particularly in its information system, internal communication, work organization and human resource management.

  12. Dielectric response of molecules in empirical tight-binding theory

    Science.gov (United States)

    Boykin, Timothy B.; Vogl, P.

    2002-01-01

    In this paper we generalize our previous approach to electromagnetic interactions within empirical tight-binding theory to encompass molecular solids and isolated molecules. In order to guarantee physically meaningful results, we rederive the expressions for relevant observables using commutation relations appropriate to the finite tight-binding Hilbert space. In carrying out this generalization, we examine in detail the consequences of various prescriptions for the position and momentum operators in tight binding. We show that attempting to fit parameters of the momentum matrix directly generally results in a momentum operator which is incompatible with the underlying tight-binding model, while adding extra position parameters results in numerous difficulties, including the loss of gauge invariance. We have applied our scheme, which we term the Peierls-coupling tight-binding method, to the optical dielectric function of the molecular solid PPP, showing that this approach successfully predicts its known optical properties even in the limit of isolated molecules.

  13. An empirical study on empowering private bank workers using EFQM

    Directory of Open Access Journals (Sweden)

    Jafar Beikzad

    2012-01-01

    Full Text Available Empowering workers plays an essential role in increasing productivity in any organization. Service industries such as insurance companies and banks rely mostly on their own people to retain their customers and incomes. The recent increase in the number of private banks in Iran has intensified competition among existing banks. The banking industry strives to empower its employees as much as possible in an attempt to maintain market share by not losing customers. In this paper, we present an empirical study to detect the most important factors in empowering bank employees. The study was implemented at a recently established private bank with 228 employees, using a 32-item questionnaire in which 15 questions focus on empowerment. The results are analyzed using statistical tests and descriptive methods, and indicate that leadership, academic qualification, appropriate policy and strategy, cooperation and processes play an important role in empowering and enabling the bank's employees.

  14. Development of an empirical typology of African American family functioning.

    Science.gov (United States)

    Mandara, Jelani; Murray, Carolyn B

    2002-09-01

    This study empirically identified types of African American families. Adolescents (N = 111) were assessed on family functioning. With cluster analytic methods, 3 types of families were identified. The cohesive-authoritative type was above average on parental education and income, averaged about 2 children, exhibited a high quality of family functioning and high self-esteem in adolescents. The conflictive-authoritarian type had average parental education and income, an average of 2.7 children, exhibited controlling and rigid discipline, and placed a high emphasis on achievement. The defensive-neglectful type was predominately headed by single mothers with below average education and income and averaged about 3 children. Such families displayed chaotic family processes, and adolescents tended to suffer from low self-esteem. The typology exhibited good reliability. The implications of the typology are discussed.
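    Cluster-analytic typologies like the one above can be sketched with a minimal k-means run. The three "family functioning" dimensions, group means, and scores below are synthetic illustrations invented for this sketch, not the study's data or its exact clustering algorithm.

    ```python
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Minimal k-means: assign points to nearest center, recompute means."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
            # Keep the old center if a cluster happens to empty out
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    # Synthetic scores on three dimensions (cohesion, conflict, structure),
    # drawn around three hypothetical family types
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(30, 3))
                   for m in ([2, 0, 2], [0, 2, 2], [0, 0, 0])])
    labels, centers = kmeans(X, k=3)
    print(np.round(centers, 1))
    ```

    With well-separated groups, the recovered centers land near the three generating means, which is the sense in which a typology is "empirically identified".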

  15. Empirical research on drive mechanism of firms' environmental management

    Institute of Scientific and Technical Information of China (English)

    Cao Jingshan; Qin Ying

    2007-01-01

    Firms' transformation from passive environmental management to active environmental management is the key to solving environmental problems. This paper empirically studies the impact of environmental management incentives on environmental management through model construction. Based on data from China, we build a concept model of the driving mechanism of environmental management and put forward testable theoretical hypotheses: we take 13 environmental management behaviors (EMBs) as a substitute for comprehensiveness, introduce count variables, and use the negative binomial (NB) model, the Poisson model and the ordered probit model for the regression analysis. The theory and methods put forward in this paper provide references for firms in China seeking to further implement voluntary environmental management, and offer advice and countermeasures for leaders to implement environmental management effectively.
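    As a sketch of the count-data regressions mentioned (Poisson here; the NB and ordered probit variants follow the same pattern with different likelihoods), the snippet below fits a Poisson model by iteratively reweighted least squares on simulated data. The variables and coefficients are invented for illustration and are not the paper's data.

    ```python
    import numpy as np

    def poisson_irls(X, y, iters=25):
        """Poisson regression (log link) via iteratively reweighted least squares."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            mu = np.exp(X @ beta)            # conditional mean of the count
            W = mu                           # Poisson variance equals the mean
            z = X @ beta + (y - mu) / mu     # working response for the linear fit
            beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        return beta

    # Hypothetical data: number of EMBs adopted vs. a regulatory-pressure index
    rng = np.random.default_rng(0)
    pressure = rng.normal(size=200)
    y = rng.poisson(np.exp(0.5 + 0.8 * pressure))
    X = np.column_stack([np.ones(200), pressure])
    beta = poisson_irls(X, y)
    print(np.round(beta, 2))   # estimates near the generating values (0.5, 0.8)
    ```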

  16. Empirical Essays in Economics of Education and Labor

    DEFF Research Database (Denmark)

    Skibsted, Marie Kruse

    are addressed using econometric methods applied on Danish micro data. All four chapters are empirical studies and combine data from different sources. The main source of data is an administrative data set obtained from Copenhagen Business School (CBS) that contains detailed educational information on students...... Sørensen from Copenhagen Business School) estimates the wage premium of those with a master’s degree in business economics and management when compared to the wages of those with master’s degrees in other fields in the social sciences. By means of an Instrumental Variable (IV) approach, we identify...... the returns to a business education by addressing the endogenous selection of master’s programs. Using season of birth as an exogenous determinant of master’s degree choice, we find that a master’s degree in business economics and management results in a wage premium of around 12% compared to other master...
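    The IV logic described here can be illustrated with a two-stage least squares sketch on simulated data: a binary instrument (standing in for season of birth) shifts the endogenous degree choice, while an unobserved confounder biases naive OLS. All numbers are fabricated; only the mechanics of the estimator match the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    z = rng.integers(0, 2, size=n).astype(float)  # binary instrument (e.g. season of birth)
    u = rng.normal(size=n)                        # unobserved ability: confounds choice and wage
    d = (0.5 * z + u + rng.normal(size=n) > 0.5).astype(float)  # endogenous degree choice
    y = 0.12 * d + 0.5 * u + rng.normal(scale=0.1, size=n)      # log wage; true premium 12%

    def tsls(y, d, z):
        """Two-stage least squares with a single instrument."""
        Z = np.column_stack([np.ones(len(z)), z])
        d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]   # first stage: predict choice
        X = np.column_stack([np.ones(len(z)), d_hat])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]     # second stage: premium estimate

    ols = np.linalg.lstsq(np.column_stack([np.ones(n), d]), y, rcond=None)[0][1]
    iv = tsls(y, d, z)
    print(f"naive OLS: {ols:.2f}  2SLS: {iv:.2f}")
    ```

    OLS picks up the ability confounder and overstates the premium; 2SLS recovers an estimate near the true 12% because the instrument is (by construction) independent of ability.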

  17. Environmental management in Slovenian industrial enterprises - Empirical study

    Directory of Open Access Journals (Sweden)

    Vesna Čančer

    2002-01-01

    Full Text Available Stimulated by the firm belief, expressed by a majority of managers in the sample enterprises, that environmental management helps enterprises achieve business success, we present the results of an empirical study in the Slovene processing industry. The purpose of our research is to identify, analyse and present the importance of the environment in business decision-making; the role of environmental management in strategic decision-making and its distribution across the business functions; environmental performance in business processes; the use of methods for environmentally oriented business decision-making; and the developmental tendencies of environmental management in Slovene enterprises of the processing industry. We define the key drivers of environmental management and their effect on the environmental behaviour of these enterprises. We present and interpret data indicating that environmental management is driven not only by compliance and regulation, but also by competition and enterprises' own initiative.

  18. An Empirical Analysis of the Budget Deficit

    Directory of Open Access Journals (Sweden)

    Ioan Talpos

    2007-11-01

    Full Text Available Economic policies and, particularly, fiscal policies are not designed and implemented in an "empty space": the structural characteristics of economic systems, the institutional architecture of societies, the cultural paradigm and the power relations between different social groups define the borders of these policies. This paper deals with these borders, describing their nature and the implications of their existence for the quality and impact of fiscal policies at both a theoretical and an empirical level. The main results of the proposed analysis support the idea that the mentioned variables matter both for the social mandate entrusted by society to the state, and thus for the role and functions of the state, and for economic growth as supported by the resources collected and distributed by the public authorities.

  19. Mobile Systems Development: An Empirical Study

    DEFF Research Database (Denmark)

    Hosbond, J. H.

    As part of an ongoing study on mobile systems development (MSD), this paper presents preliminary findings of research-in-progress. The debate on mobility in research has so far been dominated by mobile HCI, technological innovations, and socio-technical issues related to new and emerging mobile...... work patterns. This paper is about the development of mobile systems.Based on an on-going empirical study I present four case studies of companies each with different products or services to offer and diverging ways of establishing and sustaining a successful business in the mobile industry. From...... the case studies I propose a five-layered framework for understanding the structure and segmentation of the industry. This leads to an analysis of the different modes of operation within the mobile industry, exemplified by the four case studies.The contribution of this paper is therefore two-fold: (1) I...

  20. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.
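    A toy version of fitting an empirical performance model to test data might look like the following. The power setpoints and efficiencies are fabricated for illustration; they are not GridSaver measurements or the report's actual model form.

    ```python
    import numpy as np

    # Round-trip efficiency measured at several charge/discharge power setpoints
    # (made-up values standing in for protocol test results)
    P   = np.array([100, 250, 500, 750, 1000])      # test power setpoints (kW)
    eta = np.array([0.93, 0.92, 0.90, 0.87, 0.84])  # measured round-trip efficiency

    coeffs = np.polyfit(P, eta, 2)   # quadratic empirical loss model
    model = np.poly1d(coeffs)
    print(f"predicted efficiency at 600 kW: {model(600):.3f}")
    ```

    The fitted polynomial then predicts performance at operating points between the test conditions, which is the basic use of such an empirical model on the grid.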

  1. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B, so that the atom with the larger electron density expands and the atom with the smaller one contracts. If the expansion and contraction of the atomic radii of A and B are equal in magnitude, Vegard's law holds true. In most situations, however, the expansion and contraction of the two component atoms are not equal; the magnitude of the variation depends on the cohesive energy of the corresponding elemental crystals. An empirical atom model of Vegard's law is proposed to account for the sign of the deviations, based on the electron density at the Wigner–Seitz cell obtained from the Thomas–Fermi–Dirac–Cheng model
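    For reference, Vegard's law itself is just a linear interpolation of the lattice parameter between the two pure components; real solid solutions deviate from this line as the abstract describes. The Cu-Ni numbers below are approximate textbook fcc lattice parameters used purely for illustration.

    ```python
    # Vegard's law: a(x) = x * a_A + (1 - x) * a_B for a binary solid solution
    a_Cu, a_Ni = 3.615, 3.524   # approximate fcc lattice parameters (angstroms)

    def vegard(x):
        """Lattice parameter predicted by Vegard's law; x = atomic fraction of Cu."""
        return x * a_Cu + (1 - x) * a_Ni

    print(f"{vegard(0.5):.4f}")  # → 3.5695
    ```

    A measured lattice parameter above or below this prediction is the positive or negative deviation whose sign the empirical atom model aims to explain.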

  2. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  3. Chemistry and metallurgy in the Portuguese Empire

    Energy Technology Data Exchange (ETDEWEB)

    Habashi, F. [Laval Univ., Sainte-Foy, Quebec City, PQ (Canada)

    2000-10-01

    The foundation and expansion of the Portuguese Empire is sketched, with emphasis on the development of a new type of ship by Prince Henrique the Navigator (AD 1385-1460), known as the caravel. By virtue of its advanced design, it was capable of sailing the stormy seas at high speeds, and thereby was instrumental in extending Portuguese influence over vast territories in South America, Asia and Africa, extending Portuguese know-how in mining, metallurgy, chemistry and trade along with Christianity. The role played by the University of Coimbra, founded in 1306, and the contribution of the Brazilian Geological Survey, established in 1875, and of the School of Mines in Ouro Preto in Brazil in 1876, in the exploitation of the mineral wealth of the Portuguese colonies is chronicled.

  4. Architecture Between Mind and Empirical Experience

    Directory of Open Access Journals (Sweden)

    Shatha Abbas Hassan

    2016-10-01

    Full Text Available The research aims to identify the level of balance in architectural thought between the rational type of human consciousness and the materialistic, empirical type based on human experience as a source of knowledge. This is reflected in architecture in the specialized view that the mind is the source of the knowledge that explains the phenomena of life. The rational approach is based on objectivity and methodology in form production (Form Production), while the other approach is based on subjectivity in form production (Form Inspiration). The research problem is an imbalance in the relationship between the rational side and human experience in architecture, which has led to an imbalance between theory and application in architecture across architectural movements.

  5. Empirical scaling for present Ohmically heated tokamaks

    International Nuclear Information System (INIS)

    Daughney, C.

    1975-01-01

    Experimental results from the Adiabatic Toroidal Compressor (ATC) tokamak are used to obtain empirical scaling laws for the average electron temperature and electron energy confinement time as functions of the average electron density, the effective ion charge, and the plasma current. These scaling laws are extended to include dependence upon minor and major plasma radius and toroidal field strength through a comparison of the various tokamaks described in the literature. Electron thermal conductivity is the dominant loss process for the ATC tokamak. The parametric dependences of the observed electron thermal conductivity are not explained by present theoretical considerations. The electron temperature obtained with Ohmic heating is shown to be a function of current density - which will not be increased in the next generation of large tokamaks. However, the temperature dependence of the electron energy confinement time suggests that significant improvement in confinement time will be obtained with supplementary electron heating. (author)

  6. An empirical examination of restructured electricity prices

    International Nuclear Information System (INIS)

    Knittel, C.R.; Roberts, M.R.

    2005-01-01

    We present an empirical analysis of restructured electricity prices. We study the distributional and temporal properties of the price process in a non-parametric framework, after which we parametrically model the price process using several common asset price specifications from the asset-pricing literature, as well as several less conventional models motivated by the peculiarities of electricity prices. The findings reveal several characteristics unique to electricity prices, including several deterministic components of the price series at different frequencies. An 'inverse leverage effect' is also found, where positive shocks to the price series result in larger increases in volatility than negative shocks. We find that forecasting performance is dramatically improved when we incorporate features of electricity prices not commonly modelled in other asset prices. Our findings have implications for how empiricists model electricity prices, as well as how theorists specify models of energy pricing. (author)

  7. Empirical Design Considerations for Industrial Centrifugal Compressors

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2012-01-01

    Full Text Available Computational Fluid Dynamics (CFD) has been extensively used in centrifugal compressor design. CFD provides further optimisation opportunities for an existing compressor design rather than producing the design itself, so the experience-based design process still plays an important role in new compressor developments. The wide variety of design subjects represents a very complex design world for centrifugal compressor designers, and some basic information for centrifugal design therefore remains very important. The impeller is the key part of the centrifugal stage: designing a highly efficient impeller with a wide operating range can ensure the success of the overall stage design. This paper provides some empirical information for designing industrial centrifugal compressors, with a focus on the impeller. A basic design guideline for ported shroud compressors is also discussed for improving the compressor range.

  8. A Proposed Methodology for the Conceptualization, Operationalization, and Empirical Validation of the Concept of Information Need

    Science.gov (United States)

    Afzal, Waseem

    2017-01-01

    Introduction: The purpose of this paper is to propose a methodology to conceptualize, operationalize, and empirically validate the concept of information need. Method: The proposed methodology makes use of both qualitative and quantitative perspectives, and includes a broad array of approaches such as literature reviews, expert opinions, focus…

  9. Semi-empirical calculations for the ranges of fast ions in silicon

    Science.gov (United States)

    Belkova, Yu. A.; Teplova, Ya. A.

    2018-04-01

    A semi-empirical method is proposed to calculate the ion ranges in energy region E = 0.025-10 MeV/nucleon. The dependence of ion ranges on the projectile nuclear charge, mass and velocity is analysed. The calculations presented for ranges of ions with nuclear charges Z = 2-10 in silicon are compared with SRIM results and experimental data.
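    As an illustration of the kind of fit involved in semi-empirical range calculations, ranges in this energy region roughly follow a power law R ≈ a·E^b, whose parameters can be extracted by a linear fit in log-log space. The pseudo-measurements below are fabricated, not SRIM output or the authors' data.

    ```python
    import numpy as np

    # Synthetic range "data" following an assumed power law with a small wiggle
    E = np.array([0.1, 0.5, 1.0, 5.0, 10.0])        # energy (MeV/nucleon)
    R = 2.3 * E**1.7 * (1 + 0.05 * np.cos(E))       # pseudo-measured ranges

    # Power-law fit: log R = log a + b * log E
    b, log_a = np.polyfit(np.log(E), np.log(R), 1)
    print(f"R ~ {np.exp(log_a):.2f} * E^{b:.2f}")
    ```

    The fitted exponent and prefactor recover the generating values to within the injected perturbation, which is the sense in which such dependences on projectile charge, mass and velocity are extracted from data.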

  10. Using Mental Health Consultation to Decrease Disruptive Behaviors in Preschoolers: Adapting an Empirically-Supported Intervention

    Science.gov (United States)

    Williford, Amanda P.; Shelton, Terri L.

    2008-01-01

    Background: This study examined the effectiveness of an adaptation of an empirically-supported intervention delivered using mental health consultation to preschoolers who displayed elevated disruptive behaviors. Method: Ninety-six preschoolers, their teachers, and their primary caregivers participated. Children in the intervention group received…

  11. Comparison of nuisance parameters in pediatric versus adult randomized trials: a meta-epidemiologic empirical evaluation

    NARCIS (Netherlands)

    Vandermeer, Ben; van der Tweel, Ingeborg; Jansen-van der Weide, Marijke C.; Weinreich, Stephanie S.; Contopoulos-Ioannidis, Despina G.; Bassler, Dirk; Fernandes, Ricardo M.; Askie, Lisa; Saloojee, Haroon; Baiardi, Paola; Ellenberg, Susan S.; van der Lee, Johanna H.

    2018-01-01

    Background: We wished to compare the nuisance parameters of pediatric vs. adult randomized trials (RCTs) and determine if the latter can be used in sample size computations of the former. Methods: In this meta-epidemiologic empirical evaluation we examined meta-analyses from the Cochrane Database of

  12. VizieR Online Data Catalog: A framework for empirical galaxy phenomenology (Munoz+, 2015)

    Science.gov (United States)

    Munoz, J. A.; Peeples, M. S.

    2017-11-01

    In this study, we develop a cohesive theoretical formalism for translating empirical relations into an understanding of the variations in galactic star formation histories. We achieve this goal by incorporating into the Main Sequence Integration (MSI) method the scatter suggested by the evolving fraction of quiescent galaxies and the spread in the observed stellar mass-star formation rate relation. (2 data files).

  13. From micro data to causality: Forty years of empirical labor economics

    NARCIS (Netherlands)

    van der Klaauw, B.

    2014-01-01

    This overview describes the development of methods for empirical research in the field of labor economics during the past four decades. This period is characterized by the use of micro data to answer policy-relevant research questions. Prominent in the literature is the search for exogenous variation

  14. Simple preconditioning technique: empirical formula for condition number reduction at a junction of several wires

    CSIR Research Space (South Africa)

    Lysko, AA

    2011-08-01

    Full Text Available The condition number for a method-of-moments impedance matrix resulting from a junction of several wires is frequency dependent and can be minimized at a given frequency using several approaches. An empirical formula for an optimum, condition...

  15. Exploring Advertising in Higher Education: An Empirical Analysis in North America, Europe, and Japan

    Science.gov (United States)

    Papadimitriou, Antigoni; Blanco Ramírez, Gerardo

    2015-01-01

    This empirical study explores higher education advertising campaigns displayed in five world cities: Boston, New York, Oslo, Tokyo, and Toronto. The study follows a mixed-methods research design relying on content analysis and multimodal semiotic analysis and employs a conceptual framework based on the knowledge triangle of education, research,…

  16. Speculative Carryover: An Empirical Examination of the U.S. Refined Copper Market

    OpenAIRE

    Walter N. Thurman

    1988-01-01

    This article develops and estimates an empirically tractable model of equilibrium storage. The method bridges the gap between theoretical rational expectations models and applied commodity market work. The application to the U.S. refined copper market provides estimates of structural supply and demand, rational price forecasts, and the risk of copper storage that is consistent with modern portfolio theory.

  17. Freedom from Racial Barriers: The Empirical Evidence on Vouchers and Segregation. School Choice Issues in Depth

    Science.gov (United States)

    Forster, Greg

    2006-01-01

    This report collects the results of all available studies using valid empirical methods to compare segregation in public and private schools, both in general and in the context of school voucher programs. Examining the widespread claims that private schools have high segregation levels and vouchers will lead to greater segregation, this report…

  18. Early Child Disaster Mental Health Interventions: A Review of the Empirical Evidence

    Science.gov (United States)

    Pfefferbaum, Betty; Nitiéma, Pascal; Tucker, Phebe; Newman, Elana

    2017-01-01

    Background: The need to establish an evidence base for early child disaster interventions has been long recognized. Objective: This paper presents a descriptive analysis of the empirical research on early disaster mental health interventions delivered to children within the first 3 months post event. Methods: Characteristics and findings of the…

  19. Empirical and dynamic primary energy factors

    International Nuclear Information System (INIS)

    Wilby, Mark Richard; Rodríguez González, Ana Belén; Vinagre Díaz, Juan José

    2014-01-01

    Current legislation, standards, and scientific research in the field of energy efficiency often make use of PEFs (primary energy factors). The measures employed are usually fixed and based on theoretical calculations. However given the intrinsically variable nature of energy systems, these PEFs should rely on empirical data and evolve in time. Otherwise the obtained efficiencies may not be representative of the actual energy system. In addition, incorrect PEFs may cause a negative effect on the energy efficiency measures. For instance, imposing a high value on the PEF of electricity may discourage the use of renewable energy sources, which have an actual value close to 1. In order to provide a solution to this issue, we propose an application of the Energy Networks (ENs), described in a previous work, to calculate dynamic PEFs based on empirical data. An EN represents an entire energy system both numerically and graphically, from its primary energy sources to their final energy forms, and consuming sectors. Using ENs we can calculate the PEF of any energy form and depict it in a simple and meaningful graph that shows the details of the contribution of each primary energy and the efficiency of the associated process. The analysis of these PEFs leads to significant conclusions regarding the energy models adopted among countries, their evolution in time, the selection of viable ways to improve efficiency, and the detection of best practices that could contribute to the overall energy efficiency targets. - Highlights: • Primary Energy Factors (PEFs) are foundation of much energy legislation and research. • Traditionally, they have been treated as geotemporally invariant. • This work provides a systematic and transparent methodology for adding variability. • It also shows the variability between regions due to market, policy, and technology. • Finally it demonstrates the utility of extended PEFs as a tool in their own right
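    For concreteness, a PEF can be computed as primary energy input per unit of delivered final energy, aggregated over the generation mix. The mix shares and conversion efficiencies below are hypothetical, and renewables are counted with a factor near 1 by convention, as the abstract notes.

    ```python
    # Hypothetical electricity mix: (share of delivered energy, conversion efficiency)
    mix = {
        "coal":       (0.40, 0.38),
        "gas":        (0.30, 0.55),
        "renewables": (0.30, 1.00),  # conventionally assigned a factor ~1
    }

    # PEF = sum over sources of (delivered share / conversion efficiency)
    pef = sum(share / eff for share, eff in mix.values())
    print(f"PEF of electricity: {pef:.2f}")  # → PEF of electricity: 1.90
    ```

    Recomputing this factor from empirical, time-varying shares and efficiencies, rather than fixing it by decree, is precisely the dynamic-PEF idea the paper argues for.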

  20. A research program in empirical computer science

    Science.gov (United States)

    Knight, J. C.

    1991-01-01

    During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.