WorldWideScience

Sample records for models empirical models

  1. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  2. Model uncertainty in growth empirics

    NARCIS (Netherlands)

    Prüfer, P.

    2008-01-01

    This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high

  3. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    suggestions for future research. 2. Empirical measurements of insect swarms. Unlike highly ordered animal aggregations such as bird flocks or fish schools, mating swarms of flying insects present a particular challenge for modelling as there is no simple average state of the swarm that can be used to benchmark a model.

  4. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Scott, Paul [TransPower, Poway, CA (United States)]

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  5. Empirical Model of Precipitating Ion Oval

    Science.gov (United States)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
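
The functional form described above can be sketched in a few lines. Every numerical coefficient below is an illustrative placeholder, not a value fitted in the report, and for brevity only the constant Fourier term is given a Kp dependence:

```python
import math

def ion_oval_flux(mlat_deg, mlt_hours, kp,
                  # Hypothetical coefficients for illustration only; the real
                  # model fits Fourier/linear-in-Kp coefficients to the
                  # published ion maps.
                  amp0=1.0e6, amp_kp=2.0e5,
                  lat0=67.0, lat0_kp=-1.5,
                  width0=3.0, width_kp=0.3,
                  n_harmonics=2):
    """Gaussian-in-latitude ion oval: amplitude, centroid latitude, and
    width are each Fourier series in magnetic local time, with coefficients
    varying linearly in Kp."""
    phi = 2.0 * math.pi * mlt_hours / 24.0
    amplitude = amp0 + amp_kp * kp + sum(
        0.1 * amp0 * math.cos(m * phi) for m in range(1, n_harmonics + 1))
    centroid = lat0 + lat0_kp * kp + sum(
        0.5 * math.cos(m * phi) for m in range(1, n_harmonics + 1))
    width = width0 + width_kp * kp + sum(
        0.2 * math.cos(m * phi) for m in range(1, n_harmonics + 1))
    return amplitude * math.exp(-((mlat_deg - centroid) / width) ** 2)
```

By construction the flux peaks at the (local-time- and Kp-dependent) centroid latitude and falls off as a Gaussian on either side.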

  6. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example in which modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
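
The single-stage conjugate update underlying this hierarchical scheme can be sketched as follows; the contaminated-gamma mixture and the secondary-parameter stage are omitted for brevity:

```python
def gamma_poisson_update(alpha, beta, n_events, exposure_time):
    """Conjugate Bayes update for a homogeneous Poisson intensity:
    a Gamma(alpha, beta) prior on the intensity, after observing n_events
    during exposure_time, becomes Gamma(alpha + n, beta + T)."""
    return alpha + n_events, beta + exposure_time

# Posterior mean of the intensity after 3 events in 10 time units:
a, b = gamma_poisson_update(0.5, 1.0, n_events=3, exposure_time=10.0)
posterior_mean = a / b  # (0.5 + 3) / (1.0 + 10.0)
```

In the hierarchical version, the hyperparameters (here `alpha`, `beta`, plus a contamination weight) would themselves carry a distribution updated from data across similar components.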

  7. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    prolonged period, which is quite expensive and time-consuming, certain predictive models are preferred. A predictive model uses parameters that are measurable, quantifiable variables determined through empirical relations. Four empirical models have...

  8. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of the generalization performance of neural network models by use of empirical techniques. We suggest using the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model. This enables the formulation of a bulk of new generalization performance measures. Numerical results demonstrate the viability of the approach compared to the standard technique of using algebraic estimates like the FPE. Moreover, we consider the problem of comparing the generalization performance of different competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1.
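
The resampled cross-validation scheme described above might be sketched like this, with a toy "model" (the training-set mean) standing in for a neural network:

```python
import random
import statistics

def cv_generalization_distribution(data, fit, loss, gamma=0.25,
                                   n_resamples=100, seed=0):
    """Repeatedly split a data set of size N into a training set of size
    N(1-gamma) and a cross-validation set of size N*gamma; the mean
    validation losses form an empirical distribution of the model's
    generalization performance."""
    rng = random.Random(seed)
    n_val = max(1, int(round(gamma * len(data))))
    dist = []
    for _ in range(n_resamples):
        shuffled = list(data)
        rng.shuffle(shuffled)
        val, train = shuffled[:n_val], shuffled[n_val:]
        model = fit(train)
        dist.append(statistics.mean(loss(model, x) for x in val))
    return dist

# Toy stand-in: the "model" is the training mean, the loss is squared error.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
dist = cv_generalization_distribution(
    data,
    fit=lambda train: statistics.mean(train),
    loss=lambda model, x: (x - model) ** 2)
```

Summary statistics of `dist` (mean, quantiles) then play the role of the new generalization performance measures the abstract mentions.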

  9. Empirical process modeling in fast breeder reactors

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Endou, A.

    1998-01-01

    A non-linear multi-input/single-output (MISO) empirical model is introduced for monitoring vital system parameters in a nuclear reactor environment. The proposed methodology employs a scheme of non-parametric smoothing that models the local dynamics of each fitting point individually, as opposed to global modeling techniques--such as multi-layer perceptrons (MLPs)--that attempt to capture the dynamics of the entire design space. The motivation for employing local models in monitoring arises from the desire to capture localized idiosyncrasies of the dynamic system using independent estimators. This approach alleviates the effect of negative interference between old and new observations, enhancing the model's prediction capabilities. Modeling the behavior of any given system comes down to a trade-off between variance and bias. The building blocks of the proposed approach are tailored to each data set through two separate, adaptive procedures in order to optimize the bias-variance reconciliation. Hetero-associative schemes of the presented technique exhibit insensitivity to sensor noise and provide the operator with accurate predictions of the actual process signals. A comparison between the local model and MLP prediction capabilities is performed, and the results favor the former method. The data used to demonstrate the potential of local regression were obtained during two startup periods of the Monju fast breeder reactor (FBR).

  10. Empirical atom model of Vegard's law

    Science.gov (United States)

    Zhang, Lei; Li, Shichun

    2014-02-01

    Vegard's law seldom holds true for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of electron density at the interface between component atoms A and B, so that the atom with the larger electron density expands while the atom with the smaller electron density contracts. If the expansion and contraction of the atomic radii of A and B are equal in magnitude, Vegard's law holds true. In most situations, however, the expansion and contraction of the two component atoms are not equal. The magnitude of the variation depends on the cohesive energy of the corresponding elemental crystals. An empirical atom model of Vegard's law is proposed to account for the sign of the deviation, based on the electron density at the Wigner-Seitz cell boundary from the Thomas-Fermi-Dirac-Cheng model.
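
Vegard's linear baseline and the signed deviation from it can be expressed compactly; the numbers below are illustrative, not measured data:

```python
def vegard_lattice_parameter(a_A, a_B, x_B):
    """Vegard's law: linear interpolation of the pure-component lattice
    parameters for the solid solution A(1-x)B(x)."""
    return (1.0 - x_B) * a_A + x_B * a_B

def vegard_deviation(a_measured, a_A, a_B, x_B):
    """Signed deviation from Vegard's law; its sign reflects the unequal
    expansion and contraction of the two component atoms."""
    return a_measured - vegard_lattice_parameter(a_A, a_B, x_B)

# Illustrative numbers (angstroms), not measured data:
ideal = vegard_lattice_parameter(3.52, 3.89, 0.5)  # midpoint of 3.52, 3.89
```

A measured lattice parameter below `ideal` gives a negative deviation (net contraction); above it, positive (net expansion).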

  11. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of electron density at the interface between component atoms A and B, so that the atom with the larger electron density expands while the atom with the smaller electron density contracts. If the expansion and contraction of the atomic radii of A and B are equal in magnitude, Vegard's law holds true. In most situations, however, the expansion and contraction of the two component atoms are not equal. The magnitude of the variation depends on the cohesive energy of the corresponding elemental crystals. An empirical atom model of Vegard's law is proposed to account for the sign of the deviation, based on the electron density at the Wigner–Seitz cell boundary from the Thomas–Fermi–Dirac–Cheng model.

  12. An empirical model of decadal ENSO variability

    Energy Technology Data Exchange (ETDEWEB)

    Kravtsov, S. [University of Wisconsin-Milwaukee, Department of Mathematical Sciences, Atmospheric Sciences Group, P. O. Box 413, Milwaukee, WI (United States)

    2012-11-15

    This paper assesses the potential predictability of decadal variations in El Niño/Southern Oscillation (ENSO) characteristics by constructing and performing simulations using an empirical nonlinear stochastic model of an ENSO index. The model employs decomposition of global sea-surface temperature (SST) anomalies into the modes that maximize the ratio of interdecadal-to-subdecadal SST variance to define low-frequency predictors called the canonical variates (CVs). When the whole available SST time series is so processed, the leading canonical variate (CV-1) is found to be well correlated with the area-averaged SST time series, which exhibits a non-uniform warming trend, while the next two (CV-2 and CV-3) describe secular variability arguably associated with a combination of Atlantic Multidecadal Oscillation (AMO) and Pacific Decadal Oscillation (PDO) signals. The corresponding ENSO model that uses either all three (CVs 1-3) or only the AMO/PDO-related (CVs 2 and 3) predictors captures well the observed autocorrelation function, probability density function, seasonal dependence of ENSO, and, most importantly, the observed interdecadal modulation of ENSO variance. The latter modulation, and its dependence on CVs, is shown to be inconsistent with the null hypothesis of random decadal ENSO variations simulated by multivariate linear inverse models. Cross-validated hindcasts of ENSO variance suggest potentially useful skill at decadal lead times. These findings thus argue that decadal modulations of ENSO variability may be predictable, subject to our ability to forecast AMO/PDO-type climate modes; the latter forecasts may need to be based on simulations of dynamical models rather than on a purely statistical scheme as in the present paper. (orig.)

  13. A note on: an empirical comparison of forgetting models

    OpenAIRE

    Sikström, Sverker; Jaber, Mohamad Y.

    2004-01-01

    In the above paper, Nembhard and Osothsilp (2001) empirically compared several forgetting models against empirical data on production breaks. Among the models compared was the learn–forget curve model (LFCM) developed by Jaber and Bonney (1996). Several previous studies have shown that the LFCM compares favorably with some of the models investigated; however, Nembhard and Osothsilp (2001) found that the LFCM showed the largest deviation from the empirical data. In this commentary, we...

  14. Forecasting Inflation through Econometrics Models: An Empirical ...

    African Journals Online (AJOL)

    This article aims at modeling and forecasting inflation in Pakistan. For this purpose a number of econometric approaches are implemented and their results are compared. In ARIMA models, adding additional lags for p and/or q necessarily reduced the sum of squares of the estimated residuals. When a model is estimated ...
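
The observation that adding lags cannot worsen the in-sample residual sum of squares can be illustrated with a plain least-squares autoregression; the series below is invented for illustration, not Pakistani inflation data:

```python
import statistics

def ar1_fit(series):
    """Least-squares fit of x_t = c + phi * x_{t-1}; returns (c, phi)."""
    x_prev, x_next = series[:-1], series[1:]
    mx, my = statistics.mean(x_prev), statistics.mean(x_next)
    phi = sum((a - mx) * (b - my) for a, b in zip(x_prev, x_next)) / \
          sum((a - mx) ** 2 for a in x_prev)
    return my - phi * mx, phi

def sse(series, c, phi):
    """Sum of squared one-step-ahead residuals for x_t = c + phi*x_{t-1}."""
    return sum((b - (c + phi * a)) ** 2
               for a, b in zip(series[:-1], series[1:]))

# Toy inflation-like series (illustrative numbers only):
infl = [2.1, 2.4, 2.2, 2.8, 3.0, 2.9, 3.3, 3.1, 3.6, 3.8]
c, phi = ar1_fit(infl)
# The AR(1) fit cannot do worse in-sample than the lag-free (mean-only)
# model, mirroring the point that extra lags reduce the residual SSE:
sse_mean_only = sse(infl, statistics.mean(infl[1:]), 0.0)
assert sse(infl, c, phi) <= sse_mean_only
```

This is exactly why in-sample fit alone cannot choose p and q in an ARIMA model, and information criteria or out-of-sample comparisons are used instead.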

  15. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    2015-02-04

    Feb 4, 2015 ... The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, ...

  16. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, the performance of the Baranyi model was compared with those of the empirical modified Gompertz, logistic, and Huang models. Higher values of R², modeling efficiency, and lower absolute values of mean bias error, root mean square error, mean percentage error and chi-square were obtained with ...
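
The modified Gompertz and logistic curves referred to above are commonly written in the Zwietering reparameterisation (asymptote A, maximum specific growth rate mu_max, lag time lam); a sketch, with RMSE as one of the goodness-of-fit measures mentioned:

```python
import math

def modified_gompertz(t, A, mu_max, lam):
    """Modified Gompertz growth curve (Zwietering form): log(N/N0) vs time."""
    return A * math.exp(-math.exp(mu_max * math.e * (lam - t) / A + 1.0))

def logistic(t, A, mu_max, lam):
    """Logistic growth curve in the same reparameterisation."""
    return A / (1.0 + math.exp(4.0 * mu_max * (lam - t) / A + 2.0))

def rmse(observed, predicted):
    """Root mean square error between observed and fitted growth values."""
    return math.sqrt(sum((o - p) ** 2
                         for o, p in zip(observed, predicted)) / len(observed))
```

Both curves rise sigmoidally from near zero after the lag time and saturate at A; fitting A, mu_max, and lam to data and comparing RMSE (and R², bias, chi-square) is the comparison the abstract describes.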

  17. Empirical soot formation and oxidation model

    Directory of Open Access Journals (Sweden)

    Boussouara Karima

    2009-01-01

    Modelling internal combustion engines can follow different approaches, depending on the type of problem to be simulated. A diesel combustion model has been developed and implemented in a full-cycle engine simulation; the model accounts for transient fuel spray evolution, fuel-air mixing, ignition, combustion, and soot pollutant formation. Models of turbulent diffusion-flame combustion apply to the diffusion flames encountered in industry, typically in diesel engines. Particulate emission represents one of the most deleterious pollutants generated during diesel combustion. Stringent standards on particulate emission, along with specific emphasis on the size of emitted particulates, have resulted in increased interest in a fundamental understanding of the mechanisms of soot particulate formation and oxidation in internal combustion engines. A phenomenological numerical model which can predict the particle size distribution of the emitted soot will be very useful in explaining the observed results and will also help develop better particulate control techniques. The diesel engine chosen for simulation is a version of the Caterpillar 3406. We employ a standard finite-volume computational fluid dynamics code, KIVA3V-RELEASE2.
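
Empirical soot models of this kind are often of the Hiroyasu two-step type: an Arrhenius formation term from fuel vapour minus an Arrhenius oxidation term. A minimal sketch with placeholder rate constants and pressure exponents, not the coefficients of this paper's model:

```python
import math

def soot_rate(m_fuel_vapor, m_soot, p, T, x_O2,
              # Placeholder pre-exponential factors and activation
              # energies (J/mol), chosen for illustration only:
              A_f=150.0, E_f=5.2e4, A_o=1.4e6, E_o=5.8e4, R=8.314):
    """Two-step empirical soot model:
    dm_soot/dt = Arrhenius formation from fuel vapour
               - Arrhenius oxidation of existing soot by O2."""
    formation = A_f * m_fuel_vapor * math.sqrt(p) * math.exp(-E_f / (R * T))
    oxidation = A_o * m_soot * x_O2 * math.sqrt(p) * math.exp(-E_o / (R * T))
    return formation - oxidation

# Explicit Euler integration over a short interval (illustrative inputs):
m_soot, dt = 0.0, 1e-5
for _ in range(1000):
    m_soot += dt * soot_rate(1e-4, m_soot, p=60.0, T=1800.0, x_O2=0.05)
```

In a CFD code such as KIVA, rates of this shape would be evaluated per cell per time step with the local pressure, temperature, and species fractions.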

  18. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    representation by ODEs straightforward. This kind of model has been successfully fit to animal groups that display strongly ordered motion, including marching locusts [8], floating ducks [9], and schooling fish [10,11]. But in systems like our midge swarms that are stationary, effective social forces have not been clearly ...

  19. The Role of Empirical Evidence in Modeling Speech Segmentation

    Science.gov (United States)

    Phillips, Lawrence

    2015-01-01

    Choosing specific implementational details is one of the most important aspects of creating and evaluating a model. In order to properly model cognitive processes, choices for these details must be made based on empirical research. Unfortunately, modelers are often forced to make decisions in the absence of relevant data. My work investigates the…

  20. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us...

  21. Empirical Bayes model comparisons for differential methylation analysis.

    Science.gov (United States)

    Teng, Mingxiang; Wang, Yadong; Kim, Seongho; Li, Lang; Shen, Changyu; Wang, Guohua; Liu, Yunlong; Huang, Tim H M; Nephew, Kenneth P; Balch, Curt

    2012-01-01

    A number of empirical Bayes models (each with different statistical distribution assumptions) have now been developed to analyze differential DNA methylation using high-density oligonucleotide tiling arrays. However, it remains unclear which model performs best. For example, for analysis of differentially methylated regions for conservative and functional sequence characteristics (e.g., enrichment of transcription factor-binding sites (TFBSs)), the sensitivity of such analyses, using various empirical Bayes models, remains unclear. In this paper, five empirical Bayes models were constructed, based on either a gamma distribution or a log-normal distribution, for the identification of differentially methylated loci and their cell division-(1, 3, and 5) and drug-treatment-(cisplatin) dependent methylation patterns. While differential methylation patterns generated by log-normal models were enriched with numerous TFBSs, we observed almost no TFBS-enriched sequences using gamma assumption models. Statistical and biological results suggest the log-normal, rather than gamma, empirical Bayes model distribution to be a highly accurate and precise method for differential methylation microarray analysis. In addition, we presented one of the log-normal models for differential methylation analysis and tested its reproducibility by simulation study. We believe this research to be the first extensive comparison of statistical modeling for the analysis of differential DNA methylation, an important biological phenomenon that precisely regulates gene transcription.

  22. Empirical Bayes Model Comparisons for Differential Methylation Analysis

    Directory of Open Access Journals (Sweden)

    Mingxiang Teng

    2012-01-01

    A number of empirical Bayes models (each with different statistical distribution assumptions) have now been developed to analyze differential DNA methylation using high-density oligonucleotide tiling arrays. However, it remains unclear which model performs best. For example, for analysis of differentially methylated regions for conservative and functional sequence characteristics (e.g., enrichment of transcription factor-binding sites (TFBSs)), the sensitivity of such analyses, using various empirical Bayes models, remains unclear. In this paper, five empirical Bayes models were constructed, based on either a gamma distribution or a log-normal distribution, for the identification of differentially methylated loci and their cell division-(1, 3, and 5) and drug-treatment-(cisplatin) dependent methylation patterns. While differential methylation patterns generated by log-normal models were enriched with numerous TFBSs, we observed almost no TFBS-enriched sequences using gamma assumption models. Statistical and biological results suggest the log-normal, rather than gamma, empirical Bayes model distribution to be a highly accurate and precise method for differential methylation microarray analysis. In addition, we presented one of the log-normal models for differential methylation analysis and tested its reproducibility by simulation study. We believe this research to be the first extensive comparison of statistical modeling for the analysis of differential DNA methylation, an important biological phenomenon that precisely regulates gene transcription.

  23. On the geometric modeling approach to empirical null distribution estimation for empirical Bayes modeling of multiple hypothesis testing.

    Science.gov (United States)

    Wu, Baolin

    2013-04-01

    We study the geometric modeling approach to estimating the null distribution for the empirical Bayes modeling of multiple hypothesis testing. The commonly used method is a nonparametric approach based on Poisson regression, which, however, can be unduly affected by dependence among test statistics and perform very poorly under strong dependence. In this paper, we explore a finite mixture model based geometric modeling approach to empirical null distribution estimation and multiple hypothesis testing. Through simulations and applications to two public microarray data sets, we illustrate its competitive performance. Copyright © 2012 Elsevier Ltd. All rights reserved.

  24. Empirical agent-based modelling: challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  25. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models,

  26. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  27. Psychological models of art reception must be empirically grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable...

  28. Semi-empirical model for estimating molar fraction of syngas ...

    African Journals Online (AJOL)

    The gasification energy efficiency also improved with temperature and residence time, as the quantity of tar was reduced during cracking. The model was validated by comparing the experimental and numerical data. Keywords: Syngas, Gasification, Pyrolysis, Residence time, Thermal cracking of tar, Semi-empirical model ...

  29. Bankruptcy risk model and empirical tests.

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M; Urosevic, Branko; Stanley, H Eugene

    2010-10-26

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor--the debt-to-asset ratio R--in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes's theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees--although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers.
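
A Zipf exponent of the kind analyzed here can be estimated from a rank-size plot by ordinary least squares; a minimal sketch:

```python
import math
import statistics

def zipf_exponent(values):
    """Estimate a Zipf exponent by OLS on the rank-size plot:
    regress log(value) on log(rank) for values sorted in descending
    order; the exponent is the negative of the slope."""
    ordered = sorted(values, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(ordered) + 1)]
    ys = [math.log(v) for v in ordered]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Exact Zipf data (value = 1/rank) has exponent 1:
data = [1.0 / r for r in range((1), 101)]
```

A shift of this exponent over time, as in the crash periods discussed above, indicates a change in the shape of the tail of the R distribution.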

  30. Endogenizing technological change: Matching empirical evidence to modeling needs

    Energy Technology Data Exchange (ETDEWEB)

    Pizer, William A. [Resources for the Future, 1616 P Street NW, Washington, DC, 20009 (United States); Popp, David [Department of Public Administration, Center for Policy Research, The Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020 (United States); National Bureau of Economic Research (United States)

    2008-11-15

    Given that technologies to significantly reduce fossil fuel emissions are currently unavailable or only available at high cost, technological change will be a key component of any long-term strategy to reduce greenhouse gas emissions. In light of this, the amount of research on the pace, direction, and benefits of environmentally-friendly technological change has grown dramatically in recent years. This research includes empirical work estimating the magnitude of these effects, and modeling exercises designed to simulate the importance of endogenous technological change in response to climate policy. Unfortunately, few attempts have been made to connect these two streams of research. This paper attempts to bridge that gap. We review both the empirical and modeling literature on technological change. Our focus includes the research and development process, learning by doing, the role of public versus private research, and technology diffusion. Our goal is to provide an agenda for how both empirical and modeling research in these areas can move forward in a complementary fashion. In doing so, we discuss both how models used for policy evaluation can better capture empirical phenomena, and how empirical research can better address the needs of models used for policy evaluation. (author)

  31. Endogenizing technological change: Matching empirical evidence to modeling needs

    Energy Technology Data Exchange (ETDEWEB)

    Pizer, William A. [Resources for the Future, 1616 P Street NW, Washington, DC, 20009 (United States)], E-mail: pizer@rff.org; Popp, David [Department of Public Administration, Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020 (United States); National Bureau of Economic Research (United States)], E-mail: dcpopp@maxwell.syr.edu

    2008-11-15

    Given that technologies to significantly reduce fossil fuel emissions are currently unavailable or only available at high cost, technological change will be a key component of any long-term strategy to reduce greenhouse gas emissions. In light of this, the amount of research on the pace, direction, and benefits of environmentally-friendly technological change has grown dramatically in recent years. This research includes empirical work estimating the magnitude of these effects, and modeling exercises designed to simulate the importance of endogenous technological change in response to climate policy. Unfortunately, few attempts have been made to connect these two streams of research. This paper attempts to bridge that gap. We review both the empirical and modeling literature on technological change. Our focus includes the research and development process, learning by doing, the role of public versus private research, and technology diffusion. Our goal is to provide an agenda for how both empirical and modeling research in these areas can move forward in a complementary fashion. In doing so, we discuss both how models used for policy evaluation can better capture empirical phenomena, and how empirical research can better address the needs of models used for policy evaluation.

  32. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    Lin, C. S.; Cable, S. B.; Sutton, E. K.

    2014-12-01

    Accurate orbit prediction of space objects relies critically on modeling of the thermospheric neutral density that determines the drag force. In a trade study we investigated a methodology for assessing the performance of neutral density models in orbit prediction against a baseline orbit trajectory. We use a metric defined as the along-track error, accumulated over one day, between the orbit predicted with a given neutral density model and the satellite's GPS positions. A set of ground-truth data, including Gravity Recovery and Climate Experiment (GRACE) accelerometer and GPS data, the solar radio F10.7 proxy, and magnetic activity measurements, is used to calculate the baseline orbit. This approach is applied to compare the daily along-track errors among the HASDM, JB08, MSISE-00 and DTM-2012 neutral density models. The dynamically calibrated HASDM model yields a daily along-track error close to the baseline error and lower than the other models. Among the three empirical models (JB08, MSISE-00 and DTM-2012), MSISE-00 produced the smallest daily along-track error. The results suggest that the developed metric and methodology can be used to assess the overall orbit-prediction errors expected from empirical density models. They have also been incorporated into an analysis tool, the Satellite Orbital Drag Error Estimator (SODEE), to estimate orbit prediction errors.
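
The along-track error might be computed as the projection of the position error onto the velocity direction; whether the daily figure is a maximum or an average over the day's samples is an assumption of this sketch (the maximum is used below):

```python
import math

def along_track_error(r_pred, r_gps, v_gps):
    """Project the position error (predicted minus GPS) onto the GPS
    velocity direction to obtain the signed along-track error."""
    speed = math.sqrt(sum(c * c for c in v_gps))
    unit_v = [c / speed for c in v_gps]
    dr = [p - g for p, g in zip(r_pred, r_gps)]
    return sum(d * u for d, u in zip(dr, unit_v))

def daily_along_track_error(pred_positions, gps_positions, gps_velocities):
    """Daily metric (assumed here): max |along-track error| over one day."""
    return max(abs(along_track_error(rp, rg, vg))
               for rp, rg, vg in zip(pred_positions, gps_positions,
                                     gps_velocities))
```

Cross-track and radial errors project to zero in this metric, which is why drag (an along-track force) dominates it.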

  33. New Keynesian DSGE models: theory, empirical implementation, and specification

    OpenAIRE

    Röhe, Oke

    2012-01-01

    The core of the dissertation consists of three chapters. Chapter 2 provides a graphical and formal representation of a basic dynamic stochastic general equilibrium (DSGE) economy and discusses the prerequisites needed for an empirical implementation. The aim of this chapter is to present the core features of the models used in chapters 3 and 4 of this work and to introduce the estimation techniques employed in the remainder of the thesis. In chapter 3 we estimate a New Keynesian DSGE model...

  14. Empirical modeling of solar radiation exergy for Turkey

    International Nuclear Information System (INIS)

    Arslanoglu, Nurullah

    2016-01-01

    Highlights: • Solar radiation exergy is an important parameter in solar energy applications. • Empirical models are developed to estimate solar radiation exergy for Turkey. • The accuracy of the models is evaluated on the basis of statistical indicators. • The new models can be used to predict global solar radiation exergy. - Abstract: In this study, three different empirical models are developed to predict the monthly average daily global solar radiation exergy on a horizontal surface for provinces in different regions of Turkey, using meteorological data from the Turkish State Meteorological Service. To assess the performance of the models, the following statistical tests are used: the coefficient of determination (R²), mean bias error (MBE), mean absolute bias error (MABE), mean percent error (MPE), mean absolute percent error (MAPE), root mean square error (RMSE) and the t-statistic method (t_sta). The empirical models developed in this paper do not require the exergy-to-energy ratio (ψ) or the monthly average daily global solar radiation to calculate solar radiation exergy. The average exergy-to-energy ratio (ψ) over all provinces is found to be 0.93 for Turkey. The highest and lowest monthly average daily values of solar radiation exergy are 23.4 MJ/m² day in June and 4 MJ/m² day in December, respectively. The best-performing empirical models can be reliably used to predict solar radiation exergy in Turkey and in other locations with similar climatic conditions in the world. Predictions of solar radiation exergy from the regression models could enable scientists to design solar-energy systems precisely.
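The statistical indicators named above are standard in the solar-radiation literature and are easy to compute; here is a minimal sketch (the data values are hypothetical, and the t-statistic follows the common definition t = sqrt((n−1)·MBE² / (RMSE² − MBE²))):

```python
import numpy as np

def error_statistics(measured, predicted):
    """Goodness-of-fit indicators commonly used in solar radiation modelling."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(predicted, dtype=float)
    n = len(m)
    diff = p - m
    mbe = diff.mean()                       # mean bias error
    mabe = np.abs(diff).mean()              # mean absolute bias error
    mpe = (diff / m).mean() * 100.0         # mean percent error
    mape = np.abs(diff / m).mean() * 100.0  # mean absolute percent error
    rmse = np.sqrt((diff ** 2).mean())      # root mean square error
    ss_res = (diff ** 2).sum()
    ss_tot = ((m - m.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
    # t-statistic as commonly defined in the solar-radiation literature
    t_sta = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    return {"MBE": mbe, "MABE": mabe, "MPE": mpe, "MAPE": mape,
            "RMSE": rmse, "R2": r2, "t": t_sta}

# hypothetical measured vs. model-predicted daily exergy values (MJ/m^2 day)
stats = error_statistics([10.0, 15.0, 20.0, 12.0], [10.5, 14.0, 20.5, 12.5])
```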

  15. Empirical Model for Predicting Rate of Biogas Production

    African Journals Online (AJOL)

    users

    Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...
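The modified Gompertz equation referred to above has the standard form P(t) = A·exp(−exp(Rm·e/A·(λ−t) + 1)). A minimal sketch of fitting it to cumulative biogas data (synthetic, made-up values) by brute-force least squares, standing in for the Solver-based non-linear regression used in the paper:

```python
import numpy as np

def modified_gompertz(t, A, Rm, lam):
    """Cumulative biogas yield: A = potential (L), Rm = max rate (L/day), lam = lag (day)."""
    return A * np.exp(-np.exp(Rm * np.e / A * (lam - t) + 1.0))

# synthetic "measured" cumulative biogas data from a batch digester (hypothetical)
t = np.linspace(0, 30, 31)
observed = modified_gompertz(t, A=12.0, Rm=1.1, lam=3.0)

# crude non-linear fit by grid search (a stand-in for Solver / curve_fit)
best = min(
    ((A, Rm, lam)
     for A in np.linspace(10, 14, 21)
     for Rm in np.linspace(0.8, 1.4, 13)
     for lam in np.linspace(2, 4, 9)),
    key=lambda p: np.sum((modified_gompertz(t, *p) - observed) ** 2),
)
```

In practice a gradient-based optimiser (e.g. `scipy.optimize.curve_fit`) would replace the grid search; the grid keeps the sketch dependency-free.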

  17. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) over a three-year interval (2010 – 2012), measured using various instruments at Yola, from recorded data collected from the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  18. Empirical validity for a comprehensive model on educational effectiveness

    NARCIS (Netherlands)

    Reezigt, G.J.; Guldemond, H.; Creemers, B.P.M.

    Educational effectiveness research is often criticised because of the absence of a theoretical background. In our study we started out from an educational effectiveness model which was developed on the basis of educational theories and empirical evidence. We have tested the main assumptions of the

  19. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

    Full Text Available Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility by the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.

  20. Two empirical models for short-term forecast of Kp

    Science.gov (United States)

    Luo, B.; Liu, S.; Gong, J.

    2017-03-01

    In this paper, two empirical models are developed for short-term forecast of the Kp index, taking advantage of solar wind-magnetosphere coupling functions proposed by the research community. Both models are based on the data for years 1995 to 2004. Model 1 mainly uses solar wind parameters as the inputs, while model 2 also utilizes the previously measured Kp value. Model 1 predicts Kp with a linear correlation coefficient (r) of 0.91, a prediction efficiency (PE) of 0.81, and a root-mean-square (RMS) error of 0.59. Model 2 gives an r of 0.92, a PE of 0.84, and an RMS error of 0.57. The two models are validated through an out-of-sample test for years 2005 to 2013, which also yields high forecast accuracy. Unlike other models reported in the literature, we explicitly account for the response time of the magnetosphere to the external solar wind at the Earth in the modeling. Statistically, the time delay in the models turns out to be about 30 min. By introducing this term, both the accuracy and lead time of the model forecast are improved. Having been verified and validated, the models can be used in operational geomagnetic storm warnings with reliable performance.
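The skill scores quoted above (r, PE, RMS) are straightforward to compute; a minimal sketch with made-up observed/predicted Kp series, where PE is the usual prediction efficiency 1 − MSE/Var(observed):

```python
import numpy as np

def forecast_scores(observed, predicted):
    """Standard verification scores for a scalar index forecast."""
    o = np.asarray(observed, float)
    p = np.asarray(predicted, float)
    r = np.corrcoef(o, p)[0, 1]    # linear correlation coefficient
    mse = np.mean((o - p) ** 2)
    pe = 1.0 - mse / np.var(o)     # prediction efficiency
    rms = np.sqrt(mse)             # root-mean-square error
    return r, pe, rms

# hypothetical 3-hourly Kp values, observed vs. model forecast
obs = [2.0, 3.3, 5.0, 6.7, 4.0, 2.7, 1.3, 3.0]
pred = [2.3, 3.0, 4.7, 6.0, 4.3, 3.0, 1.7, 2.7]
r, pe, rms = forecast_scores(obs, pred)
```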

  1. Empirically derived neighbourhood rules for urban land-use modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2012-01-01

    Interaction between neighbouring land uses is an important component in urban cellular automata. Nevertheless, this component is often calibrated through trial-and-error estimation. The aim of this project has been to develop an empirically derived landscape metric supporting cellular-automata-based land-use modelling. Through access to very detailed urban land-use data it has been possible to derive neighbourhood rules empirically, and to test their sensitivity to the land-use classification applied, the regional variability of the rules, and their time variance. The developed methodology can be implemented...

  2. Using Empirical Models for Communication Prediction of Spacecraft

    Science.gov (United States)

    Quasny, Todd

    2015-01-01

    A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, effects on radio frequency during high energy solar events while traveling through a solar array of a spacecraft can be difficult to model, and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.

  3. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    Science.gov (United States)

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled, as well as for humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling based on observations of real behaviors. Differences in the dynamic behaviors of individual PAM actuators are due not only to the structures of the PAM actuators themselves, but also to the variations of their material properties arising in manufacturing processes. To overcome these difficulties, the proposed empirical models are experimentally derived from the real physical behaviors of the PAM actuators as implemented. In case studies, the simulated results, in good agreement with experimental results, show that the proposed methodology can describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
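As one concrete example of the kind of leaf-level empirical model being tested, the classic Ball-Berry formulation predicts stomatal conductance from assimilation, humidity, and CO2. The drought down-regulation factor below is purely illustrative (hypothetical parameters and functional form), sketching the sort of water-potential term the study argues improves predictions:

```python
import math

def ball_berry(A, rh, cs, g0=0.01, g1=9.0):
    """Ball-Berry model: stomatal conductance gs (mol m^-2 s^-1).
    A: net assimilation, rh: relative humidity at the leaf surface (0-1),
    cs: CO2 mole fraction at the leaf surface; g0, g1: fitted parameters."""
    return g0 + g1 * A * rh / cs

def water_potential_factor(psi, psi_ref=-2.0, b=2.0):
    """Hypothetical multiplicative down-regulation of gs with leaf water
    potential psi (MPa, negative): equals 1 at psi = 0 and declines as psi drops."""
    return math.exp(-((psi / psi_ref) ** b))

gs_wet = ball_berry(A=12.0, rh=0.7, cs=400.0)
gs_dry = gs_wet * water_potential_factor(-1.5)
```

Without a term like `water_potential_factor`, the model returns `gs_wet` regardless of drought stress, which is the over-prediction bias the abstract describes.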

  5. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, the HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to rub out the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable distributions, power laws and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power laws model while remaining practical to implement for VaR measurement.

  6. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Variables capturing the small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find the motivation for a spatial Durbin model, estimate the model, and interpret the estimates for the summary measures of impacts. Through this analysis we show that the model structure makes it possible to model and detect small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.

  7. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow over an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
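The POD step described above amounts to a singular value decomposition of a snapshot matrix; a minimal sketch with random stand-in snapshot data (exact rank 3, mimicking a flow dominated by a few coherent structures), keeping the modes that capture 99% of the energy:

```python
import numpy as np

rng = np.random.default_rng(0)
# snapshot matrix: each column is one flow-field snapshot (stand-in random data)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))

# POD modes are the left singular vectors of the mean-subtracted snapshot matrix
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)  # modes capturing 99% of the energy
modes = U[:, :r]
coeffs = modes.T @ Xc                       # reduced (Galerkin) coordinates
```

Projecting the governing equations onto `modes` yields the low-order Galerkin model to which balanced truncation is then applied in the paper.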

  8. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises, and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us to explore the process model in its entirety, including different drivers of subsidiary absorptive capacity (organizational mechanisms and contextual drivers), the three original dimensions of absorptive capacity (recognition, assimilation, application), and related outcomes (implementation and internalization of the best practice). The study's findings reveal that managers have discretion in promoting absorptive capacity through the application of specific organizational mechanisms and that the impact of contextual drivers on subsidiary absorptive capacity is not direct, but mediated...

  9. Econophysics: Empirical facts and agent-based models

    OpenAIRE

    Anirban Chakraborti; Ioane Muni Toke; Marco Patriarca; Frederic Abergel

    2009-01-01

    This article aims at reviewing recent empirical and theoretical developments usually grouped under the term Econophysics. Since its name was coined in 1995 by merging the words Economics and Physics, this new interdisciplinary field has grown in various directions: theoretical macroeconomics (wealth distributions), microstructure of financial markets (order book modelling), econometrics of financial bubbles and crashes, etc. In the first part of the review, we discuss the emergence of Econ...

  10. Empirical validation data sets for double skin facade models

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance, and this in turn requires access to high quality experimental data. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within the International Energy Agency (IEA) SHC Task 34 / ECBCS Annex 43. This paper describes the full-scale outdoor experimental test facility, the experimental set-up and the measurement procedure...

  11. An empirical firn-densification model comprising ice lenses

    DEFF Research Database (Denmark)

    Reeh, Niels; Fisher, D.A.; Koerner, R.M.

    2005-01-01

    In the past, several empirical firn-densification models have been developed and fitted to measured density-depth profiles from Greenland and Antarctica. These models do not specifically deal with refreezing of meltwater in the firn. Ice lenses are usually indirectly taken into account by choosing a suitable value of the surface snow density. In the present study, a simple densification model is developed that specifically accounts for the content of ice lenses in the snowpack. An annual layer is considered to be composed of an ice fraction and a firn fraction. It is assumed that all meltwater formed at the surface in one year will refreeze in the corresponding annual layer, and that no additional melting or refreezing occurs in deeper layers. With this assumption, further densification is solely controlled by compaction of the firn fraction of the annual layer. Comparison of modelled and observed depth

  12. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements of N uptake in spring barley and ryegrass catch crops, supplemented with data from the literature on the temporal release of organic residues in soil. The model estimates a faster mineralisation rate for organic N in pig slurry compared with cattle slurry, and the description includes an initial N immobilisation phase for both manure types. The model estimates a cumulated net mineralisation of 71% and 51% of organic N in pig and cattle slurry respectively after 5 years. These estimates are in accordance with some other mineralisation studies and studies of the effects of manure residual N in other North...

  13. Business models of micro businesses: Empirical evidence from creative industries

    Directory of Open Access Journals (Sweden)

    Pfeifer Sanja

    2017-01-01

    Full Text Available A business model describes how a business identifies and creates value for customers and how it organizes itself to capture some of this value in a profitable manner. Previous studies of business models in creative industries have only recently identified the unresolved issues in this field of research. The main objective of this article is to analyse the structure and diversity of business models and to deduce how these components interact or change in the context of micro and small businesses in creative services such as advertising, architecture and design. The article uses a qualitative approach. Case studies and semi-structured, in-depth interviews with six owners/managers of micro businesses in Croatia provide rich data. Structural coding in data analysis has been performed manually. The qualitative analysis has indicative relevance for the assessment and comparison of business models; however, it provides insights into which components of business models seem to be consolidated and which seem to contribute to the diversity of business models in creative industries. The article contributes to the advancement of empirical evidence and conceptual constructs that might lead to more advanced methodological approaches and the proposition of core typologies or classifications of business models in creative industries. In addition, a more detailed mapping of the different choices available in managing value creation, value capture and value networking might be a valuable help for owners/managers who want to change or cross-fertilize their business models.

  14. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, Irina S.; Shprits, Yuri Y.; Spasojević, Maria

    2017-11-01

    We present the PINE (Plasma density in the Inner magnetosphere Neural network-based Empirical) model - a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of 1 October 2012 to 1 July 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2≤L≤6 and all local times. We validate and test the model by measuring its performance on independent data sets withheld from the training set and by comparing the model-predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). The optimal model is based on the 96 h time history of Kp, AE, SYM-H, and F10.7 indices. The model successfully reproduces erosion of the plasmasphere on the nightside and plume formation and evolution. We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in situ observations by using machine learning techniques.

  15. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of data into bins using deciles, in which case one constraint must be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are practical procedures for preserving finite results. As an alternative to the empirical estimates one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using a numerical method for integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
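The baseline decile estimate that the paper improves upon can be sketched as follows (synthetic scores; bins with no goods or no bads are simply skipped here, which is exactly the failure mode that motivates the supervised interval selection):

```python
import numpy as np

def information_value(score, bad, bins=10):
    """Empirical Information Value with equal-frequency (decile) binning.
    score: model scores; bad: 1 for bad clients, 0 for good clients."""
    score = np.asarray(score, float)
    bad = np.asarray(bad, int)
    edges = np.quantile(score, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    iv = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (score > lo) & (score <= hi)
        n_bad = int(bad[in_bin].sum())
        n_good = int(in_bin.sum()) - n_bad
        if n_bad == 0 or n_good == 0:
            continue  # empty cells break the estimate -- the paper's motivation
        pg = n_good / (bad == 0).sum()  # share of all goods falling in this bin
        pb = n_bad / bad.sum()          # share of all bads falling in this bin
        iv += (pg - pb) * np.log(pg / pb)
    return iv

# synthetic scores: good clients score higher than bad clients on average
rng = np.random.default_rng(1)
score = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(-1.0, 1.0, 500)])
bad = np.concatenate([np.zeros(500, int), np.ones(500, int)])
iv = information_value(score, bad)
```

The supervised variant proposed in the paper would merge intervals until each contains at least k goods and k bads instead of skipping them.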

  16. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper proposes a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
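The exponential-regression step can be sketched as follows: assuming a power-law (Plitt-type) relation between the corrected cut size and a lumped geometry/operating term, taking logarithms turns the constant-fitting into linear least squares. All data values below are made up for illustration:

```python
import numpy as np

# hypothetical simulated data: corrected cut size d50c (um) vs. a lumped
# geometry/operating term X, assumed to follow d50c = C * X**a
X = np.array([0.8, 1.0, 1.5, 2.0, 3.0])
d50c = np.array([12.3, 14.0, 17.9, 21.2, 27.1])

# exponential regression: ln(d50c) = ln(C) + a*ln(X), solved by least squares
A = np.vstack([np.ones_like(X), np.log(X)]).T
coef, *_ = np.linalg.lstsq(A, np.log(d50c), rcond=None)
C, a = np.exp(coef[0]), coef[1]
```

In the paper the same idea is applied per geometry (Rietema, Bradley, Krebs), with the regression adjusting only the model constant.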

  17. Development of an empirically based dynamic biomechanical strength model

    Science.gov (United States)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
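The reduction described above is ordinary least squares; a minimal sketch fitting maximum torque as a single-variable second-degree polynomial in angular velocity (the joint data below are invented for illustration):

```python
import numpy as np

# hypothetical isolated-joint measurements:
# maximum elbow torque (Nm) vs. angular velocity (deg/s), position held fixed
velocity = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
torque = np.array([68.0, 62.42, 54.68, 44.78, 32.72])

# single-variable second-degree polynomial, as in the strength model's table
c2, c1, c0 = np.polyfit(velocity, torque, deg=2)

def torque_model(v):
    """Predicted maximum torque (Nm) at angular velocity v (deg/s)."""
    return c2 * v**2 + c1 * v + c0
```

A table of such polynomials, one per joint, rotational plane, and position, is then composed to predict forces in compound motions like the ratchet wrench task.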

  18. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging, the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature. The results show that friction is strongly influenced by normal pressure and tool/work piece interface temperature, whereas the other process parameters investigated show minor influence on friction. Based on the experimental results a mathematical model has been established for friction as a function of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests.

  19. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

    Full Text Available The concept of the business model has been designed to aid exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm for these new activities. A business model encompasses those resources that a firm has access to and aids a firm's effort to create a superior 'innovation capability'. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts and poses significant challenges for the management of innovations. Although current business model conceptualisations elucidate resources, explicit consideration of the composition and structure of resource configurations has remained ambiguous. As a result, current business model conceptualisations fail in their core purpose of assisting the decision-making that must consider resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorised list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept and improves daily communication between managers and their subordinates. Future research could be targeted at linking these components of a business model with each other in order to obtain a model for assessing the performance of different business model configurations. Furthermore, different applications of the developed resource configuration may be envisioned.

  20. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

Full Text Available Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and the total economic and insured catastrophe losses in seven regions of the world in the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
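The Bühlmann model mentioned above weights each region's own loss experience against the collective mean through a credibility factor Z = n/(n + k). A minimal sketch of the balanced Bühlmann estimator, using hypothetical loss figures rather than the Sigma Swiss Re data:

```python
def buhlmann_premiums(losses_by_region):
    """Bühlmann credibility premiums: Z * own mean + (1 - Z) * collective mean."""
    regions = list(losses_by_region)
    n = len(losses_by_region[regions[0]])          # years of data per region
    means = {r: sum(losses_by_region[r]) / n for r in regions}
    collective = sum(means.values()) / len(regions)
    # expected process variance: average within-region sample variance
    epv = sum(
        sum((x - means[r]) ** 2 for x in losses_by_region[r]) / (n - 1)
        for r in regions
    ) / len(regions)
    # variance of hypothetical means: between-region variance, bias-corrected
    vhm = sum((means[r] - collective) ** 2 for r in regions) / (len(regions) - 1) - epv / n
    if vhm <= 0:                                    # no credible between-region signal
        return {r: collective for r in regions}
    z = n / (n + epv / vhm)                         # credibility factor
    return {r: z * means[r] + (1 - z) * collective for r in regions}

# Hypothetical annual catastrophe losses (USD bn) for three regions over 4 years:
premiums = buhlmann_premiums({
    "Asia":          [45.0, 60.0, 52.0, 58.0],
    "North America": [80.0, 75.0, 95.0, 88.0],
    "Europe":        [20.0, 25.0, 18.0, 22.0],
})
```

The Bühlmann-Straub variant used in the article generalises this by weighting each year with an exposure measure, which matters when regions differ in portfolio size.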

  1. Validity of empirical models of exposure in asphalt paving

    Science.gov (United States)

    Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H

    2002-01-01

Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined, and bias and precision were estimated. Results: The models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed an effect of re-paving similar to that expected and a larger than expected effect of tar use. Overall, the benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimating the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analysis of the exposure–response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236

  2. Adaptation of an empirical model for erythemal ultraviolet irradiance

    Directory of Open Access Journals (Sweden)

    I. Foyo-Moreno

    2007-07-01

Full Text Available In this work we adapt an empirical model to estimate ultraviolet erythemal irradiance (UVER) using experimental measurements carried out at seven stations in Spain during four years (2000–2003). The measurements were taken in the framework of the Spanish UVB radiometric network operated and maintained by the Spanish Meteorological Institute. The UVER observations are recorded as half-hour average values. The model is valid for all-sky conditions, estimating UVER from the ozone columnar content and parameters usually registered in radiometric networks, such as global broadband hemispherical transmittance and optical air mass. One data set was used to develop the model and another independent set was used to validate it. The model provides satisfactory results, with low mean bias error (MBE) for all stations. In fact, MBEs are less than 4% and root mean square errors (RMSE) are below 18% (except for one location). The model has also been evaluated to estimate the UV index. The percentage of cases with differences of 0 UVI units is in the range of 61.1% to 72.0%, while the percentage of cases with differences of ±1 UVI unit covers the range of 95.6% to 99.2%. This result confirms the applicability of the model to estimate UVER irradiance and the UV index at those locations in the Iberian Peninsula where there are no UV radiation measurements.
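The MBE and RMSE figures quoted in the abstract are conventionally reported as percentages of the mean observed irradiance; a minimal sketch of the two metrics (the UVER values below are hypothetical, not the network measurements):

```python
def mbe_rmse_percent(observed, predicted):
    """Mean bias error and root mean square error, as % of the mean observation."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mbe = sum(p - o for o, p in zip(observed, predicted)) / n
    rmse = (sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5
    return 100 * mbe / mean_obs, 100 * rmse / mean_obs

# Hypothetical half-hour average UVER values (W/m^2):
obs  = [0.10, 0.15, 0.20, 0.18, 0.12]
pred = [0.11, 0.14, 0.21, 0.17, 0.13]
mbe_pct, rmse_pct = mbe_rmse_percent(obs, pred)
```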

  3. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.

  4. A New Empirical Model to Estimate Landfill Gas Pollution

    Directory of Open Access Journals (Sweden)

    Hamidreza Kamalan

    2016-07-01

Full Text Available Background: Landfills are the largest anthropogenic source of methane, so prediction of landfill gas generation is a central concern of scientists, decision makers, and landfill owners as well as health authorities. Almost all currently used models are based on an empirically fitted first-order (Monod-type) decay rate, while the main purpose of this research is to develop a numerical model. Methods: A real-scale pilot landfill with 4500 tons of municipal solid waste was designed, constructed, and operated for two years. Measurements were made to provide data on the greenhouse gases emitted by the landfill and to monitor its status, including internal temperature, leachate content, and settlement, over the two years. Afterwards, the weighted residual method was used to develop the numerical model. The new mathematical model was then verified with data from another landfill. Results: Measurements showed that the minimum and maximum percentages of methane in the landfill gas were 22.3% and 46.1%, respectively; the corresponding landfill gas velocities were 0.3 and 0.48 meters per second. Conclusion: Since the model shows just a 0.6 percent error against real measurements from a landfill in California, whereas most models in current use show roughly ten percent error, this simple empirical numerical model is suggested for use by scientists, decision makers, and landfill owners.

  5. Empirical Modeling of Solar Radiation Pressure Forces Affecting GPS Satellites

    Science.gov (United States)

    Sibthorpe, A.; Weiss, J. P.; Harvey, N.; Kuang, D.; Bar-Sever, Y.

    2010-12-01

    At an altitude of approximately 20,000km above the Earth, Solar Radiation Pressure (SRP) forces on Global Positioning System (GPS) satellites are second in magnitude only to the gravitational attractive forces exerted by the Earth, Sun and Moon. As GPS orbit processing strategies reach unprecedented levels of precision and accuracy, subtle effects from different GPS SRP models are beginning to emerge above the noise floor. We present an updated approach to the empirical modeling of SRP forces on GPS satellites using 14 years of data. We assess the models via orbit prediction and orbit determination using a suite of internal and external metrics. Our new model results in >10% average improvement of 8th-day orbit prediction differences (3D RMS) for block IIA and IIR satellites against our best final orbit solutions. Internal orbit overlaps from precise orbit determination improve by 7%. We additionally assess the impacts of the updated SRP models on satellite laser range residuals, carrier phase ambiguity resolution, and estimation of earth orientation parameters.

  6. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
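The preferred (KL)E nesting described above can be written as a two-level CES function, Y = CES(CES(K, L), E). A sketch of that structure, with purely illustrative share and substitution-elasticity parameters rather than the paper's estimates:

```python
def ces(x1, x2, share, sigma):
    """One CES nest: distribution parameter `share`, elasticity of substitution `sigma`."""
    if sigma == 1.0:                        # Cobb-Douglas limit of the CES form
        return x1 ** share * x2 ** (1 - share)
    rho = (sigma - 1) / sigma               # substitution parameter
    return (share * x1 ** rho + (1 - share) * x2 ** rho) ** (1 / rho)

def output_kl_e(K, L, E, share_kl=0.7, share_k=0.4, sigma_kl=0.6, sigma_e=0.4):
    """(KL)E nesting: capital and labour combined first, then energy added."""
    kl = ces(K, L, share_k, sigma_kl)       # inner capital-labour nest
    return ces(kl, E, share_kl, sigma_e)    # outer nest with energy

y = output_kl_e(K=100.0, L=200.0, E=50.0)   # illustrative input quantities
```

The Cobb-Douglas special case corresponds to sigma = 1 in a nest, which is exactly the restriction the paper rejects in favour of considerably smaller elasticities.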

  7. Empirical model of atomic nitrogen in the upper thermosphere

    Science.gov (United States)

    Engebretson, M. J.; Mauersberger, K.; Kayser, D. C.; Potter, W. E.; Nier, A. O.

    1977-01-01

Atomic nitrogen number densities in the upper thermosphere measured by the open source neutral mass spectrometer (OSS) on Atmosphere Explorer-C during 1974 and part of 1975 have been used to construct a global empirical model at an altitude of 375 km based on a spherical harmonic expansion. The most evident features of the model are large diurnal and seasonal variations of atomic nitrogen and only a moderate and latitude-dependent density increase during periods of geomagnetic activity. Maximum and minimum N number densities at 375 km for periods of low solar activity are 3.6 × 10^6 cm^-3 at 1500 LST (local solar time) and low latitude in the summer hemisphere and 1.5 × 10^5 cm^-3 at 0200 LST at mid-latitudes in the winter hemisphere.

  8. An Empirical Ultra Wideband Channel Model for Indoor Laboratory Environments

    Directory of Open Access Journals (Sweden)

    A. Abolghasemi

    2009-04-01

Full Text Available Channel measurement and modeling is an important issue when designing ultra wideband (UWB) communication systems. In this paper, the results of some UWB time-domain propagation measurements performed in modern laboratory (Lab) environments are presented. The Labs are equipped with many electronic and measurement devices which make them different from other indoor locations like office and residential environments. The measurements have been performed for both line of sight (LOS) and non-LOS (NLOS) scenarios. The measurement results are used to investigate large-scale channel characteristics and temporal dispersion parameters. The clustering Saleh-Valenzuela (S-V) channel impulse response (CIR) parameters are investigated based on the measurement data. The small-scale amplitude fading statistics are also studied in the environment. Then, an empirical model is presented for UWB signal transmission in the Lab environment based on the obtained results.

  9. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach; (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
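Two of the ancillary quantities listed, the Froude number and stream power, follow from standard open-channel definitions; a sketch with hypothetical channel values:

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO = 1000.0    # water density, kg/m^3

def froude_number(velocity, depth):
    """Fr = V / sqrt(g * D); Fr < 1 is subcritical flow, Fr > 1 supercritical."""
    return velocity / (G * depth) ** 0.5

def stream_power_per_width(discharge, slope, width):
    """Unit stream power omega = rho * g * Q * S / w, in W/m^2."""
    return RHO * G * discharge * slope / width

# Hypothetical storm-flow section at a bridge crossing:
fr = froude_number(velocity=1.2, depth=0.8)
omega = stream_power_per_width(discharge=15.0, slope=0.002, width=12.0)
```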

  10. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.

  11. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  12. An empirical conceptual gully evolution model for channelled sea cliffs

    Science.gov (United States)

    Leyland, Julian; Darby, Stephen E.

    2008-12-01

Incised coastal channels are a specific form of incised channel that are found in locations where stream channels flowing to cliffed coasts have the excess energy required to cut down through the cliff to reach the outlet water body. The southern coast of the Isle of Wight, southern England, comprises soft cliffs that vary in height between 15 and 100 m and which are retreating at rates ≤ 1.5 m a^-1, due to a combination of wave erosion and landslides. In several locations, river channels have cut through the cliffs to create deeply (≤ 45 m) incised gullies, known locally as 'Chines'. The Chines are unusual in that their formation is associated with dynamic shoreline encroachment during a period of rising sea-level, whereas existing models of incised channel evolution emphasise the significance of base level lowering. This paper develops a conceptual model of Chine evolution by applying space for time substitution methods using empirical data gathered from Chine channel surveys and remotely sensed data. The model identifies a sequence of evolutionary stages, which are classified based on a suite of morphometric indices and associated processes. The extent to which individual Chines are in a state of growth or decay is estimated by determining the relative rates of shoreline retreat and knickpoint recession, the former via analysis of historical aerial images and the latter through the use of a stream power erosion model.

  13. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    Science.gov (United States)

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

The drying kinetics of thyme was analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10^-5 and 2.12 × 10^-4 s^-1). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. Additionally, the dependence of the parameters of each empirical model on the drying temperature was determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
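The reported Arrhenius dependence (activation energy 49.42 kJ/mol) fixes how the effective diffusivity scales between drying temperatures. In the sketch below, only the activation energy comes from the abstract; pairing the reference diffusivity 3.68 × 10^-5 s^-1 with 40°C is an assumption made for illustration:

```python
import math

R = 8.314       # universal gas constant, J/(mol K)
EA = 49.42e3    # activation energy from the fitted Arrhenius relationship, J/mol

def diffusivity(T_celsius, D_ref=3.68e-5, T_ref_celsius=40.0):
    """Arrhenius scaling of the effective diffusivity (s^-1) relative to a
    reference value; the (D_ref, T_ref) pairing is illustrative, not from the paper."""
    T = T_celsius + 273.15
    T_ref = T_ref_celsius + 273.15
    return D_ref * math.exp(-EA / R * (1.0 / T - 1.0 / T_ref))

d70 = diffusivity(70.0)   # predicted effective diffusivity at 70°C
```

With that pairing, the predicted value at 70°C lands near the upper end of the reported range, which is at least consistent with the abstract's figures.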

  14. Hybrid empirical--theoretical approach to modeling uranium adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W

    2004-05-01

An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
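The Freundlich isotherm q = K_f · C^n is linear in log-log space, so both parameters can be recovered by ordinary least squares on transformed data. A sketch using synthetic data (the K_f and n values below are illustrative, not the INEEL estimates):

```python
import math

def fit_freundlich(C, q):
    """Fit q = K_f * C**n by least squares on log q = log K_f + n * log C."""
    x = [math.log(c) for c in C]
    y = [math.log(v) for v in q]
    m = len(x)
    xbar = sum(x) / m
    ybar = sum(y) / m
    n = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    kf = math.exp(ybar - n * xbar)
    return kf, n

# Synthetic batch-isotherm data: solution concentration C vs sorbed amount q,
# generated from illustrative parameters K_f = 2.0, n = 0.7
C = [0.1, 0.5, 1.0, 5.0, 10.0]
q = [2.0 * c ** 0.7 for c in C]
kf, n = fit_freundlich(C, q)
```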

  15. Empirical Memory-Access Cost Models in Multicore NUMA Architectures

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Patrick S. [Los Alamos National Laboratory; Braithwaite, Ryan Karl [Los Alamos National Laboratory; Feng, Wu-chun [Virginia Tech

    2011-01-01

    Data location is of prime importance when scheduling tasks in a non-uniform memory access (NUMA) architecture. The characteristics of the NUMA architecture must be understood so tasks can be scheduled onto processors that are close to the task's data. However, in modern NUMA architectures, such as AMD Magny-Cours and Intel Nehalem, there may be a relatively large number of memory controllers with sockets that are connected in a non-intuitive manner, leading to performance degradation due to uninformed task-scheduling decisions. In this paper, we provide a method for experimentally characterizing memory-access costs for modern NUMA architectures via memory latency and bandwidth microbenchmarks. Using the results of these benchmarks, we propose a memory-access cost model to improve task-scheduling decisions by scheduling tasks near the data they need. Simple task-scheduling experiments using the memory-access cost models validate the use of empirical memory-access cost models to significantly improve program performance.

  16. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth

  17. Empirical Modeling of ICMEs Using ACE/SWICS Ionic Distributions

    Science.gov (United States)

    Rivera, Y.; Landi, E.; Lepri, S. T.; Gilbert, J. A.

    2017-12-01

Coronal Mass Ejections (CMEs) are some of the largest, most energetic events in the solar system releasing an immense amount of plasma and magnetic field into the Heliosphere. The Earth-bound plasma plays a large role in space weather, causing geomagnetic storms that can damage space and ground based instrumentation. As a CME is released, the plasma experiences heating, expansion and acceleration; however, the physical mechanism supplying the heating as it lifts out of the corona still remains uncertain. From previous work we know the ionic composition of solar ejecta undergoes a gradual transition to a state where ionization and recombination processes become ineffective rendering the ionic composition static along its trajectory. This property makes them a good indicator of thermal conditions in the corona, where the CME plasma likely receives most of its heating. We model this so-called 'freeze-in' process in Earth-directed CMEs using an ionization code to empirically determine the electron temperature, density and bulk velocity. 'Frozen-in' ions from an ensemble of independently modeled plasmas within the CME are added together to fit the full range of observational ionic abundances collected by ACE/SWICS during ICME events. The models derived using this method are used to estimate the CME energy budget to determine a heating rate used to compare with a variety of heating mechanisms that can sustain the required heating with a compatible timescale.

  18. Psychological profiling of sexual murders: an empirical model.

    Science.gov (United States)

    Kocsis, Richard N; Cooksey, Ray W; Irwin, Harvey J

    2002-10-01

Psychological profiling represents the investigative technique of analyzing crime behaviors for the identification of probable offender characteristics. Profiling has progressively been incorporated into police procedures despite a surprising lack of empirical research to support its validity. Indeed, in the study of sexual murder for the purpose of profiling, very few quantitative, academically reviewed studies exist. This article reports on the results of a 4-year study into Australian sexual murders for the development of psychological profiling. The study involved 85 cases of sexual murder sampled from all Australian police jurisdictions. The statistical procedure of multidimensional scaling was employed. This analysis produced a five-cluster model of sexual murder behavior. First, a central cluster of behaviors was identified that represents common behaviors to all patterns of sexual murder. Next, four distinct outlying patterns (predator, fury, perversion, and rape) were identified that each demonstrated distinct offense styles. Further analysis of these patterns also identified distinct offender characteristics that allow for the use of empirically robust offender profiles in future sexual murder investigations.

  19. Empirical Analysis and Modeling of the Global Economic System

    Science.gov (United States)

    Duan, Wen-Qi; Sun, Bo-Liang

    2009-09-01

    In the global economic system, each economy stimulates the growth of its gross domestic products (GDP) by increasing its international trade. Using a fluctuation analysis of the flux data of GDP and foreign trade, we find that both GDP and foreign trade are dominated by external force and driven by each other. By excluding the impact of the associated trade dependency degree, GDP and the total volume of foreign trade collapse well into a power-law function. The economy's total trade volume scales with the number of trade partners, and it is distributed among its trade partners in an exponential form. The model which incorporated these empirical results can integrate the growth dynamics of GDP and the interplay dynamics between GDP and weighted international trade networks simultaneously.

  20. Empirical fitness models for hepatitis C virus immunogen design.

    Science.gov (United States)

    Hart, Gregory R; Ferguson, Andrew L

    2015-11-24

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. HCV-hepatitis C virus, HLA-human leukocyte antigen, CTL-cytotoxic T lymphocyte, NS5B-nonstructural protein 5B, MSA-multiple sequence alignment, PEG-IFN-pegylated interferon.

  1. Empirical fitness models for hepatitis C virus immunogen design

    Science.gov (United States)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.

  2. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
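The functional form suggested by the abstract, linear in concentration and temperature with a non-linear wavelength term, can be sketched as a Barer-style formula extended with a temperature coefficient and a Cauchy-type dispersion term. Every coefficient below is an illustrative placeholder, not the paper's fit:

```python
# Hedged sketch: refractive index linear in concentration C (g/dL) and
# temperature T (deg C), with a Cauchy-style 1/lambda^2 wavelength term.
# All coefficients are illustrative placeholders, not the fitted values.

def refractive_index(C, T, wavelength_nm,
                     n0=1.3330, a=1.8e-3, b=-1.0e-4, c=3.0e3):
    """n = n0 + a*C + b*T + c / lambda^2, with lambda in nm."""
    return n0 + a * C + b * T + c / wavelength_nm ** 2

n_sample = refractive_index(C=10.0, T=25.0, wavelength_nm=589.0)
```

The placeholder signs encode plausible qualitative trends: n rises with hemoglobin concentration and, for normal dispersion, falls with wavelength; the actual signs and magnitudes would come from the regression described in the paper.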

  3. An empirical model of the quiet daily geomagnetic field variation

    Science.gov (United States)

    Yamazaki, Y.; Yumoto, K.; Cardinal, M.G.; Fraser, B.J.; Hattori, P.; Kakinami, Y.; Liu, J.Y.; Lynn, K.J.W.; Marshall, R.; McNamara, D.; Nagatsuma, T.; Nikiforov, V.M.; Otadoy, R.E.; Ruhimat, M.; Shevtsov, B.M.; Shiokawa, K.; Abe, S.; Uozumi, T.; Yoshikawa, A.

    2011-01-01

    An empirical model of the quiet daily geomagnetic field variation has been constructed based on geomagnetic data obtained from 21 stations along the 210 Magnetic Meridian of the Circum-pan Pacific Magnetometer Network (CPMN) from 1996 to 2007. Using the least squares fitting method for geomagnetically quiet days (Kp ≤ 2+), the quiet daily geomagnetic field variation at each station was described as a function of solar activity SA, day of year DOY, lunar age LA, and local time LT. After interpolation in latitude, the model can describe solar-activity dependence and seasonal dependence of solar quiet daily variations (S) and lunar quiet daily variations (L). We performed a spherical harmonic analysis (SHA) on these S and L variations to examine average characteristics of the equivalent external current systems. We found three particularly noteworthy results. First, the total current intensity of the S current system is largely controlled by solar activity while its focus position is not significantly affected by solar activity. Second, we found that seasonal variations of the S current intensity exhibit north-south asymmetry; the current intensity of the northern vortex shows a prominent annual variation while the southern vortex shows a clear semi-annual variation as well as annual variation. Third, we found that the total intensity of the L current system changes depending on solar activity and season; seasonal variations of the L current intensity show an enhancement during the December solstice, independent of the level of solar activity. Copyright 2011 by the American Geophysical Union.
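The least-squares fitting step can be illustrated with a toy harmonic fit in local time: express the daily variation as diurnal and semidiurnal harmonics and solve for the coefficients. The data below are synthetic, and the real model additionally depends on solar activity, day of year, and lunar age.

```python
import numpy as np

# Synthetic quiet-day variation: diurnal + semidiurnal harmonics plus noise
rng = np.random.default_rng(42)
lt = np.arange(0.0, 24.0, 0.5)                     # local time, hours
signal = 20 * np.sin(2 * np.pi * lt / 24) + 8 * np.cos(4 * np.pi * lt / 24)
obs = signal + rng.normal(0.0, 1.0, lt.size)       # "measured" field, nT

# Design matrix: constant, diurnal (sin/cos), semidiurnal (sin/cos)
X = np.column_stack([np.ones_like(lt),
                     np.sin(2 * np.pi * lt / 24), np.cos(2 * np.pi * lt / 24),
                     np.sin(4 * np.pi * lt / 24), np.cos(4 * np.pi * lt / 24)])

# Least-squares estimate of the harmonic coefficients
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
```

In the actual model, each harmonic coefficient would in turn be parameterised as a function of SA, DOY and LA, and the station-by-station fits interpolated in latitude.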

  4. Empirical modelling to predict the refractive index of human blood.

    Science.gov (United States)

    Yahya, M; Saghir, M Z

    2016-02-21

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient's condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.

  5. Empirical modelling to predict the refractive index of human blood

    International Nuclear Information System (INIS)

    Yahya, M; Saghir, M Z

    2016-01-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy. (paper)

  6. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through
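The multiple-linear-regression step can be sketched by regressing a synthetic Pc3 activity index on SW speed and cone angle, then checking how much adding a density term reduces the residual error. The data and coefficients below are synthetic; the real models use OMNI and MM100 measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
speed = rng.normal(450.0, 80.0, n)        # SW speed, km/s
cone = rng.uniform(0.0, 90.0, n)          # IMF cone angle, degrees
density = rng.lognormal(1.5, 0.5, n)      # SW particle density, cm^-3

# Synthetic activity index: rises with speed and density, falls with cone angle
pc3 = 0.01 * speed - 0.02 * cone + 0.5 * np.log(density) + rng.normal(0, 0.3, n)

def fit_rss(predictors):
    """Ordinary least squares; returns the residual sum of squares."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, pc3, rcond=None)
    resid = pc3 - X @ beta
    return float(resid @ resid)

rss_without_density = fit_rss([speed, cone])
rss_with_density = fit_rss([speed, cone, np.log(density)])
```

The drop in residual error when the density column is included mirrors the paper's finding that SW density (or density-related quantities such as dynamic pressure) significantly improves the models.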

  7. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  8. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa predicting methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann calculations, and discussed and compared with the current desolvation model in PROPKA, one of the most widely used empirical protein pKa predictors. The new desolvation model was found to remove artificial erratic behavior due to discontinuous jumps from man-made first-shell cutoffs, and thus improves...

  9. Empirical agent-based land market: Integrating adaptive economic behavior in urban land-use models

    NARCIS (Netherlands)

    Filatova, Tatiana

    2015-01-01

    This paper introduces an economic agent-based model of an urban housing market. The RHEA (Risks and Hedonics in Empirical Agent-based land market) model captures natural hazard risks and environmental amenities through hedonic analysis, facilitating empirical agent-based land market modeling. RHEA

  10. Design models as emergent features: An empirical study in communication and shared mental models in instructional design

    Directory of Open Access Journals (Sweden)

    Lucca Botturi

    2006-06-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs). Results indicate that the design and development model actually informs the activities of the group, but that it is interpreted and adapted by the team for the specific project. Thus, the actual practice model of each team can be regarded as an emergent feature. This analysis delivers insights concerning issues about team communication, shared understanding, individual perspectives and the implementation of prescriptive instructional design models.

  11. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: 1) attraction homophily causes individuals to form ties on the basis of attribute similarity, 2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and 3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  12. Empirical Models of Social Learning in a Large, Evolving Network.

    Science.gov (United States)

    Bener, Ayşe Başar; Çağlayan, Bora; Henry, Adam Douglas; Prałat, Paweł

    2016-01-01

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: 1) attraction homophily causes individuals to form ties on the basis of attribute similarity, 2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and 3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  13. Polycaprolactone thin-film drug delivery systems: Empirical and predictive models for device design.

    Science.gov (United States)

    Schlesinger, Erica; Ciaccio, Natalie; Desai, Tejal A

    2015-12-01

    To define empirical models and parameters based on theoretical equations to describe drug release profiles from two polycaprolactone thin-film drug delivery systems. Additionally, to develop a predictive model for empirical parameters based on drugs' physicochemical properties. Release profiles from a selection of drugs representing the standard pharmaceutical space in both polycaprolactone matrix and reservoir systems were determined experimentally. The proposed models were used to calculate empirical parameters describing drug diffusion and release. Observed correlations between empirical parameters and drug properties were used to develop equations to predict parameters based on drug properties. Predictive and empirical models were evaluated in the design of three prototype devices: a levonorgestrel matrix system for on-demand locally administered contraception, a timolol-maleate reservoir system for glaucoma treatment, and a primaquine-bisphosphate reservoir system for malaria prophylaxis. Proposed empirical equations accurately fit experimental data. Experimentally derived empirical parameters show significant correlations with LogP, molecular weight, and solubility. Empirical models based on predicted parameters accurately predict experimental release data for three prototype systems, demonstrating the accuracy and utility of these models. The proposed empirical models can be used to design polycaprolactone thin-film devices for target geometries and release rates. Empirical parameters can be predicted based on drug properties. Together, these models provide tools for preliminary evaluation and design of controlled-release delivery systems. Copyright © 2015. Published by Elsevier B.V.
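The empirical-parameter idea can be illustrated with a minimal fit: assume a first-order release profile M(t) = M∞(1 − e^(−kt)), log-linearise it, and recover the rate constant by least squares. The data are synthetic and M∞ is assumed known; the paper's actual model equations and fitted parameter values are not reproduced here.

```python
import numpy as np

# Synthetic cumulative-release data from a first-order (reservoir-like) profile
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 24.0])   # days (illustrative)
m_inf, k_true = 100.0, 0.15                       # total load (%), rate (1/day)
released = m_inf * (1.0 - np.exp(-k_true * t))

# Log-linearise ln(1 - M/M_inf) = -k*t and solve the one-parameter
# least-squares problem through the origin (M_inf assumed known here).
y = np.log(1.0 - released / m_inf)
k_fit = -np.sum(t * y) / np.sum(t * t)
```

Correlating fitted constants like `k_fit` against LogP, molecular weight, and solubility across many drugs is what yields the predictive model for unseen compounds described in the abstract.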

  14. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination Rc² ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
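The two goodness-of-fit measures quoted in the abstract can be computed as below. This is a generic sketch with toy efficiency values, and the plain (uncorrected) coefficient of determination is shown for brevity.

```python
import numpy as np

def mean_abs_pct_dev(measured, predicted):
    """Mean absolute percentage deviation, in percent."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(np.mean(np.abs((predicted - measured) / measured)) * 100.0)

def r_squared(measured, predicted):
    """Coefficient of determination (uncorrected form shown for brevity)."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy efficiency data, not the paper's turbine measurements
eta_meas = np.array([0.70, 0.74, 0.78, 0.80])
eta_pred = np.array([0.707, 0.739, 0.781, 0.796])
```

The corrected form reported in the paper additionally penalises the number of fitted regression terms, analogous to an adjusted R².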

  15. Modeling gallic acid production rate by empirical and statistical analysis

    Directory of Open Access Journals (Sweden)

    Bratati Kar

    2000-01-01

    For predicting the rate of enzymatic reaction, empirical correlations based on the experimental results obtained under various operating conditions have been developed. The models represent both the activation as well as deactivation conditions of enzymatic hydrolysis and the results have been analyzed by analysis of variance (ANOVA). The tannase activity was found maximum at incubation time 5 min, reaction temperature 40ºC, pH 4.0, initial enzyme concentration 0.12 v/v, initial substrate concentration 0.42 mg/ml, ionic strength 0.2 M and under these optimal conditions, the maximum rate of gallic acid production was 33.49 μmoles/ml/min.

  16. Empirical Test Of The Ohlson Model: Evidence From The Mauritian ...

    African Journals Online (AJOL)

    Journal of Business Research ... the existence of value relevance of accounting information in the Mauritius Stock Market using the Ohlson model (1995), which encourages the adoption of the historical price model in value relevance studies, ...

  17. Poisson-generalized gamma empirical Bayes model for disease ...

    African Journals Online (AJOL)

    In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...

  18. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as computational terms in an embodied conversational agent system by employing empirical and theoretical approaches. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures... With empirical data, we establish a parameterized network model that generates culture-specific non-verbal expressions in different social relationships.

  19. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data on the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference to the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with the standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that a good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with the normal errors and it can be used as an alternative GARCH model to those with other fat-tailed distributions
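The conditional-variance recursion is the same for both model variants; replacing the normal error density with the rational fat-tailed one changes only the likelihood used in the Bayesian inference, not the recursion itself. A minimal GARCH(1,1) sketch (the initialisation at the sample variance is a common convention, not necessarily the paper's choice):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialised at the sample variance (a common convention)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Toy return series, not Tokyo Stock Exchange data
r = np.array([0.5, -1.2, 0.8, -0.3])
s2 = garch11_variance(r, omega=0.05, alpha=0.1, beta=0.85)
```

In the paper's setup, the parameters (omega, alpha, beta, plus the error-distribution parameters) are sampled by Metropolis-Hastings, and model variants are compared via AIC/DIC.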

  20. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    ...hydrogen, which is difficult and energy consuming to store and transport. The models include thermal equilibrium models of the individual components of the system. Models of the heating and cooling of the gas flows between components are also modeled, and Adaptive Neuro-Fuzzy Inference System models of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output; the models take this into account using an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  1. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph. Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  2. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in the GIS-based map. First, the rationality is proved by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate various characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it is verified that the model is a typical scale-free network and small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the model construction and simulation of the system by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theory bases and experience for supply chain analysis of auto companies.
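One of the characteristic parameters mentioned, the mean clustering coefficient, can be computed directly from an adjacency structure. A minimal sketch on a toy graph (not the simulated supply-chain network):

```python
# Mean clustering coefficient of an undirected graph given as a dict mapping
# each node to its set of neighbours (toy example).
def mean_clustering(adj):
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)       # degree-0/1 nodes contribute zero
            continue
        # Count edges among this node's neighbours (each pair once)
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}   # fully clustered
path = {1: {2}, 2: {1, 3}, 3: {2}}             # no triangles
```

A high mean clustering coefficient together with a short mean distance is the usual small-world signature, and a power-law degree distribution the scale-free one, which is how the study classifies the simulated network.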

  3. An Empirical Comparison of Default Swap Pricing Models

    NARCIS (Netherlands)

    P. Houweling (Patrick); A.C.F. Vorst (Ton)

    2002-01-01

    Abstract: In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced form model with a constant recovery rate outperforms the market practice of directly comparing bonds' credit spreads to default swap premiums. We find that the

  4. Empirical evaluation of a forecasting model for successful facilitation ...

    African Journals Online (AJOL)

    The forecasting model identified 8 key attributes for facilitation success based on performance measures from the 1999 Facilitator Customer Service Survey. During 2000 the annual Facilitator Customer Satisfaction Survey was employed to validate the findings of the forecasting model. A total of 1910 questionnaires were ...

  5. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  6. Drugs and Crime: An Empirically Based, Interdisciplinary Model

    Science.gov (United States)

    Quinn, James F.; Sneed, Zach

    2008-01-01

    This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…

  7. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  8. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    The diagnostics of the energy conversion systems’ operation is realised as a result of collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state. It requires the use of thermal process models. Construction of the analytical model with the auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The worked-out mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  9. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved variables... Analysis of data from the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account, the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which

  10. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  11. Empirical justification of the elementary model of money circulation

    Science.gov (United States)

    Schinckus, Christophe; Altukhov, Yurii A.; Pokrovskii, Vladimir N.

    2018-03-01

    This paper proposes an elementary model describing the money circulation for a system, composed by a production system, the government, a central bank, commercial banks and their customers. A set of equations for the system determines the main features of interaction between the production and the money circulation. It is shown, that the money system can evolve independently of the evolution of production. The model can be applied to any national economy but we will illustrate our claim in the context of the Russian monetary system.

  12. Empirical assessment of a threshold model for sylvatic plague

    DEFF Research Database (Denmark)

    Davis, Stephen; Leirs, Herwig; Viljugrein, H.

    2007-01-01

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model... We examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate

  13. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility mediates the relationship between organizational learning capability and business model innovation; and interactions among strategic flexibility, explorative learning and exploitative learning play significant roles in both radical and incremental business model innovation.

  14. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

Regression analysis to construct a prediction model for surface roughness such that once the process parameters (cutting speed, feed, depth of cut, nose radius and speed) are given, the surface roughness can be predicted. The work piece material was EN8, which was processed by a carbide-inserted tool conducted on ...

  15. The Development of an Empirical Model for Estimation of the ...

    African Journals Online (AJOL)

    Nassiri P

    rate, daily water consumption, smoking habits, drugs that interfere with the thermoregulatory processes, and exposure to other harmful agents. Conclusions: Eventually, based on the criteria, a model for estimation of the workers' sensitivity to heat stress was presented for the first time, by which the sensitivity is estimated in ...

  16. An Empirical Model of Wage Dispersion with Sorting

    DEFF Research Database (Denmark)

    Bagger, Jesper; Lentz, Rasmus

    This paper studies wage dispersion in an equilibrium on-the-job-search model with endogenous search intensity. Workers differ in their permanent skill level and firms differ with respect to productivity. Positive (negative) sorting results if the match production function is supermodular (submodu...

  17. An auto-calibration procedure for empirical solar radiation models

    NARCIS (Netherlands)

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  18. A theoretical and empirical model for soil conservation using ...

    African Journals Online (AJOL)

    This paper illuminates the practice of indigenous soil conservation among Mamasani farmers in Fars province in Iran. Bos's decision making model was used as a conceptual framework for the study. A qualitative paradigm was used as research methodology. Qualitative techniques were: Mind Mapping, RRA ...

  19. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  20. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  1. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates...
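
A power model of the general shape this record describes (a baseline term plus contributions that grow with receive/transmit data rates, and a low DRX sleep floor) might be sketched as follows. All coefficients and the function name are invented for illustration, not the paper's measured values.

```python
# Hedged sketch of an empirical smartphone power model: active power grows
# with receive/transmit data rates; DRX sleep drops to a low floor.
# Every constant below is an assumption, not a measurement.
def phone_power_mw(rx_mbps, tx_mbps, drx_sleep=False):
    if drx_sleep:
        return 25.0                      # assumed DRX sleep floor (mW)
    base = 1200.0                        # assumed active baseline (mW)
    return base + 8.0 * rx_mbps + 15.0 * tx_mbps

print(phone_power_mw(50, 1), phone_power_mw(0, 0, drx_sleep=True))
```

A system-level simulator would evaluate such a function per scheduling interval to integrate energy and estimate battery life.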

  2. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  3. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes observed in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interactions between trading variables, and the time needed to restore price equilibrium after a perturbation in each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we use the impulse response function to compute the calendar time needed for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
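
The Location MEM result mentioned above — that the estimator of the location parameter is simply the sample minimum — is easy to illustrate. The simulated series below (a hypothetical lower bound plus exponential noise) is invented for demonstration; it is not the dissertation's data.

```python
import random

# Sketch of the Location MEM(p,q) location estimator: for a nonnegative
# series with positive lower bound c, the estimator is the sample minimum,
# which converges to c as the sample grows. All values are synthetic.
random.seed(42)
c = 0.5  # true location (lower bound); value is an assumption
series = [c + random.expovariate(2.0) for _ in range(10_000)]

c_hat = min(series)  # the consistent estimator described in the abstract
print(round(c_hat, 4))
```

With 10,000 draws the minimum sits just above the true bound, illustrating the consistency claim.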

  4. Validation of Empirical and Semi-empirical Net Radiation Models versus Observed Data for Cold Semi-arid Climate Condition

    Directory of Open Access Journals (Sweden)

    aliakbar sabziparvar

    2017-03-01

    Full Text Available Introduction: Solar net radiation (Rn) is one of the most important components influencing soil heat flux, evapotranspiration rate and the hydrological cycle. Rn is measured as the difference between the downward and upward shortwave (SW) and longwave (LW) irradiances reaching the Earth’s surface. Field measurements of Rn are scarce, expensive and difficult because of instrument maintenance. As a result, in most research, Rn is estimated by empirical, semi-empirical or physical radiation models. Almorox et al. (2008) suggested a net radiation model based on a linear regression using global solar radiation (Rs) and sunshine hours. Alados et al. (2003) evaluated the relation between Rn and Rs for Spain; they showed that models based on shortwave radiation work well in estimating solar net radiation. In another work, Irmak et al. (2003) presented two empirical Rn models that work with a minimum number of weather parameters. They evaluated their models for humid, dry, inland and coastal regions of the United States and concluded that both Rn models work better than the FAO-56 Penman-Monteith model. Sabziparvar et al. (2016) estimated the daily Rn for four climate types in Iran, examining several net radiation models, namely Wright, the Basic Regression Model (BRM), Linacre, Berliand, Irmak, and Monteith. Their results highlighted that, on regional averages, the linear BRM model has the best performance in generating the most accurate daily ET0. They also showed that for 70% of the study sites, linear Rn models can be reliable candidates instead of sophisticated nonlinear Rn models. Considering the importance of Rn in determining crop water requirements, the aim of this study is to identify the best-performing Rn model for the cold semi-arid climate of Hamedan. Materials and Methods: We employed hourly and daily weather data and Rn data, which were measured during December 2011 to June 2013 in
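
A linear regression model of the BRM type discussed above (Rn as a linear function of Rs) reduces to ordinary least squares with one predictor. The radiation values below are synthetic, purely to show the fitting step; the coefficients are not the study's results.

```python
# Minimal sketch of fitting a linear net-radiation model Rn = a*Rs + b
# by ordinary least squares. Data are made-up illustrative values.
rs = [5.0, 10.0, 15.0, 20.0, 25.0]   # global solar radiation (synthetic)
rn = [2.1, 5.9, 9.8, 14.1, 17.9]     # observed net radiation (synthetic)

n = len(rs)
mean_x = sum(rs) / n
mean_y = sum(rn) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(rs, rn)) / \
    sum((x - mean_x) ** 2 for x in rs)
b = mean_y - a * mean_x
print(round(a, 3), round(b, 3))
```

Validation against measured Rn (e.g. RMSE of predictions) is then what ranks this model against the semi-empirical alternatives.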

  5. An empirical spectral bandwidth model for superior conjunction. [spacecraft communication

    Science.gov (United States)

    Rockwell, R. S.

    1978-01-01

    The downlink signal from spacecraft in superior solar conjunction phases suffers a great reduction in signal-to-noise ratio. Responsible in large part for this effect is the line broadening of the signal spectrum. An analytic empirical expression was developed for spectral bandwidth as a function of heliocentric distance from 1 to 20 solar radii. The study is based on spectral broadening data obtained from the superior conjunctions of Helios 1 (1975), Helios 2 (1976), and Pioneer 6 (1968). The empirical fit is based in part on a function describing the electron content in the solar corona.
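
An analytic bandwidth-versus-distance fit of the kind described could take, for example, a power-law form. The scale and exponent below are invented placeholders, not the fit derived from the Helios and Pioneer 6 data.

```python
# Illustrative power-law form for spectral bandwidth as a function of
# heliocentric distance R (in solar radii). Constants are assumptions.
def bandwidth_hz(r_solar_radii, scale=100.0, exponent=-1.6):
    """Spectral broadening bandwidth, decreasing away from the Sun."""
    return scale * r_solar_radii ** exponent

# bandwidth shrinks as the ray path moves out of the dense corona
samples = {r: round(bandwidth_hz(r), 2) for r in (1, 5, 10, 20)}
print(samples)
```

The monotone decrease with distance mirrors the falling coronal electron content that the empirical fit is partly based on.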

  6. 3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss

    Science.gov (United States)

    Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.

    2012-12-01

    3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial and temporal distribution of the waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty in the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. The spatio-temporal correlations cannot be inferred from the
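
The stochastic-sampling step described above can be sketched simply: instead of taking the mean wave intensity for a given activity level, draw intensities from the binned empirical distribution and repeat to build an ensemble. The bins and probabilities below are invented; the real databases bin by AE, MLT, MLAT and L*.

```python
import random

# Hedged sketch: sample chorus wave intensities from a (hypothetical)
# empirical probability distribution for one activity bin, rather than
# using its mean. Repeating this yields an ensemble of model runs whose
# spread quantifies the wave-amplitude uncertainty.
random.seed(0)
intensity_bins = [1.0, 10.0, 100.0, 1000.0]   # wave intensity levels (invented)
probabilities = [0.4, 0.3, 0.2, 0.1]          # observed frequencies (invented)

# one possible intensity realization over 5 time steps of a storm
realization = random.choices(intensity_bins, weights=probabilities, k=5)
print(realization)

# an ensemble of 200 runs, each averaging 100 sampled intensities
ensemble_means = [
    sum(random.choices(intensity_bins, weights=probabilities, k=100)) / 100
    for _ in range(200)
]
```

The spread of `ensemble_means` plays the role of the spread in predicted phase space density across the collection of runs.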

  7. New empirically-derived solar radiation pressure model for GPS satellites

    Science.gov (United States)

    Bar-Sever, Y.; Kuang, D.

    2003-04-01

    We derive a new and improved GPS solar radiation pressure model by estimating model parameters with a least-squares fit to four and a half years of GPS precise orbit data. The new solar radiation model for Block IIR satellites provides a 90% improvement over the best pre-launch model, as measured by orbit fits and orbit prediction quality. The new model for Block II/IIA satellites realizes a more modest improvement over the previous JPL empirical model. The empirical model is constructed as a set of Fourier functions of the Earth-Probe-Sun angle, representing the solar radiation pressure forces in a coordinate system tied to the nominal solar panel surface orientation. The model derivation reveals a number of systematic patterns, some of which can be explained in terms of properties of the GPS attitude control system, and some of which are yet to be explained. Finally, we discuss the overall orbit determination improvements using the new models.
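
A "set of Fourier functions of the Earth-Probe-Sun angle" is a truncated Fourier series per force component. The sketch below shows that functional form with invented coefficients; the published model's coefficients and harmonic count are not reproduced here.

```python
import math

# Illustrative Fourier-series form for one SRP acceleration component as a
# function of the Earth-Probe-Sun angle eps. Coefficients are assumptions.
def srp_component(eps, coeffs):
    """a0 + sum over harmonics k of (ak*cos(k*eps) + bk*sin(k*eps))."""
    a0, harmonics = coeffs
    return a0 + sum(a * math.cos(k * eps) + b * math.sin(k * eps)
                    for k, (a, b) in enumerate(harmonics, start=1))

# hypothetical coefficients for one body-frame axis (units arbitrary)
x_coeffs = (-0.2, [(8.5, 0.0), (0.0, 0.3)])
print(srp_component(0.0, x_coeffs), srp_component(math.pi / 2, x_coeffs))
```

Fitting the `a0`, `ak`, `bk` coefficients to precise orbit data is the least-squares estimation step described in the abstract.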

  8. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  9. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

The present investigation evaluates the influence of three different stepped-chute geometries under skimming flow, with the aim of determining the aerated flow length, which is a significant factor when developing empirical equations for estimating the aeration efficiency of flow. Overall, forty experiments were ...

  10. An Empirical Test of the Generic Model of Psychotherapy

    OpenAIRE

    KOLDEN, GREGORY G.; HOWARD, KENNETH I.

    1992-01-01

    This study examined the propositions of Orlinsky and Howard’s generic model of psychotherapy with regard to self-relatedness, therapeutic bond, therapeutic realizations, session outcome, and termination outcome. Measures representing these constructs were derived from therapy session reports obtained from patients after sessions three and seven. A multiple-regression data analytic strategy was used that focused on the proportion of variance accounted for by single vari...

  11. AN EMPIRICAL MODEL OF ONLINE BUYING CONTINUANCE INTENTION

    OpenAIRE

    ORZAN Gheorghe; ICONARU Claudia; MACOVEI Octav-Ionut

    2012-01-01

    The aim of this paper is to propose, test and validate a model of consumers` continuance intention to buy online as a main function of affective attitude towards using the Internet for purchasing goods and services and the overall satisfaction towards the decision of buying online. The confirmation of initial expectations regarding online buying is the main predictor of online consumers` satisfaction and online consumers` perceived usefulness of online buying. Affective attitude is mediating ...

  12. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through the explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012, Molkov and Bragin, 2015), the European standard (EN 14994) and the US standard (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  13. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier-selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was developed and companies were contacted to indicate which factors carry more weight in their decisions when choosing suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that supports decision making in the supplier/partner selection process.
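
The linear weighting model described above scores each supplier as a weighted sum over the criteria. The weights and supplier scores below are hypothetical, not the survey results; they only illustrate the SMART-style computation.

```python
# Minimal sketch of a linear weighting (SMART-style) supplier-selection
# model. Weights reflect criterion importance and must sum to 1; all
# numbers here are invented for illustration.
weights = {"quality": 0.30, "financial": 0.15, "synergies": 0.15,
           "cost": 0.25, "production_system": 0.15}
assert abs(sum(weights.values()) - 1.0) < 1e-9

suppliers = {
    "A": {"quality": 8, "financial": 6, "synergies": 7,
          "cost": 5, "production_system": 6},
    "B": {"quality": 6, "financial": 8, "synergies": 5,
          "cost": 9, "production_system": 7},
}

scores = {name: sum(weights[c] * s[c] for c in weights)
          for name, s in suppliers.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

In the hierarchical (AHP) variant, the criterion weights themselves would be derived from pairwise comparisons rather than set directly.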

  14. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order-of-magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of noise in the training and testing sets, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and for slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive online validation of these models is still warranted.
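
What makes the recurrent perceptron "dynamic" is that its hidden state feeds back into the next step, so past inputs influence future outputs. The toy below is a single recurrent unit with invented weights, far smaller than the paper's multilayer architecture, purely to show that feedback mechanism.

```python
import math

# Toy sketch of recurrence: the previous hidden state h_prev re-enters the
# unit, so the output depends on input history. Weights are assumptions.
def step(u, h_prev, w_in=0.8, w_rec=0.5, bias=0.0):
    return math.tanh(w_in * u + w_rec * h_prev + bias)

h = 0.0
outputs = []
for u in [1.0, 1.0, 0.0, 0.0]:   # input pulse, then input removed
    h = step(u, h)
    outputs.append(round(h, 3))
print(outputs)
```

Note how the state decays gradually after the input is removed rather than dropping to zero at once; a static (feedforward) network would have no such memory.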

  15. Creative Accounting and Financial Reporting: Model Development and Empirical Testing

    OpenAIRE

    Fizza Tassadaq; Qaisar Ali Malik

    2015-01-01

    This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence the financial reporting like role of auditors, role of government regulations or international standards, impact of manipulative behaviors and impact of ethical values of an individual. Data has been collected through structured questionnaire from industrial sector. D...

  16. Creative Accounting & Financial Reporting: Model Development & Empirical Testing

    OpenAIRE

    Tassadaq, Fizza; Malik, Qaisar Ali

    2015-01-01

    This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence the financial reporting like role of auditors, role of govt. regulations or international standards, impact of manipulative behaviors and impact of ethical values of an individual. Data has been collected through structured questionnaire from industrial sector. Descri...

  17. The Empirical Economist's Toolkit: From Models to Methods

    OpenAIRE

    Panhans, Matthew T.; Singleton, John D.

    2015-01-01

While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward “quasi-experimental” methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a “credibility revolution” in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the meth...

  18. Modeling Active Aging and Explicit Memory: An Empirical Study.

    Science.gov (United States)

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  19. Toward an Empirically-based Parametric Explosion Spectral Model

    Science.gov (United States)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.
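
The three-parameter spectrum named above (long-period level, corner frequency, high-frequency slope) can be written as a simple function. The parameter values below are illustrative, not fitted to any event; classic Brune corresponds to a slope of 2.

```python
# Sketch of a generalized Brune-type source spectrum: flat at omega0 below
# the corner frequency fc, falling as f**(-p) above it. Values are invented.
def brune_spectrum(f, omega0, fc, p):
    """Displacement amplitude spectrum at frequency f (Hz)."""
    return omega0 / (1.0 + (f / fc) ** p)

omega0, fc, p = 1.0e-3, 2.0, 2.0   # hypothetical parameters; p=2 is classic Brune
low = brune_spectrum(0.01, omega0, fc, p)    # well below fc: ~omega0
high = brune_spectrum(20.0, omega0, fc, p)   # well above fc: ~omega0*(fc/f)**p
print(low, high)
```

Correlating fitted `p` with near-source gas porosity is the step the abstract describes: higher porosity associates with a steeper high-frequency slope.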

  20. An empirical test of the generic model of psychotherapy.

    Science.gov (United States)

    Kolden, G G; Howard, K I

    1992-01-01

    This study examined the propositions of Orlinsky and Howard's generic model of psychotherapy with regard to self-relatedness, therapeutic bond, therapeutic realizations, session outcome, and termination outcome. Measures representing these constructs were derived from therapy session reports obtained from patients after sessions three and seven. A multiple-regression data analytic strategy was used that focused on the proportion of variance accounted for by single variables as well as combinations of process and outcome variables. Self-relatedness and therapeutic bond accounted for significant proportions of variance in therapeutic realizations. In addition, therapeutic realizations, therapeutic bond, and self-relatedness accounted for significant proportions of variance in session outcome. Finally, session outcome accounted for a significant proportion of variance in termination outcome.
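
The "proportion of variance accounted for" in the regression strategy above is the familiar R-squared. The sketch below computes it for a single predictor using fabricated session-report scores; the variable names are hypothetical stand-ins for the study's measures.

```python
# Sketch of R-squared (proportion of variance accounted for) from a
# one-predictor least-squares regression. Data are fabricated ratings.
bond = [3.0, 4.0, 2.0, 5.0, 4.0, 3.0]       # therapeutic bond (invented)
outcome = [2.5, 4.2, 2.0, 4.8, 3.9, 3.1]    # session outcome (invented)

n = len(bond)
mx, my = sum(bond) / n, sum(outcome) / n
slope = sum((x - mx) * (y - my) for x, y in zip(bond, outcome)) / \
        sum((x - mx) ** 2 for x in bond)
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(bond, outcome))
ss_tot = sum((y - my) ** 2 for y in outcome)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

With several predictors, comparing R-squared between nested models isolates the variance contributed by each added process variable.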

  1. DIE Deflection Modeling: Empirical Validation and Tech Transfer

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2003-05-28

    This report summarizes computer modeling work that was designed to help understand how the die casting die and machine contribute to parting plane separation during operation. Techniques developed in earlier research (8) were applied to complete a large computational experiment that systematically explored the relationship between the stiffness of the machine platens and key dimensional and structural variables (platen area covered, die thickness, platen thickness, thickness of insert and the location of the die with respect to the platen) describing the die/machine system. The results consistently show that there are many significant interactions among the variables and it is the interactions, more than the individual variables themselves, which determine the performance of the machine/die system. That said, the results consistently show that it is the stiffness of the machine platens that has the largest single impact on die separation.

  2. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
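
The recognition task above reduces to scoring an observed action trace under each strategy's automaton and picking the most likely one. The sketch below uses degenerate single-state PFAs (i.i.d. action probabilities) with invented strategies and probabilities; the paper's automata are of course richer.

```python
import math

# Hedged sketch of Behavioral Recognition with PFAs: compare the
# log-likelihood of an observed action sequence under each strategy.
# Strategy names and probabilities are invented (single-state PFAs).
strategies = {
    "spiral":   {"clean": 0.8, "turn": 0.2},
    "wall_hug": {"clean": 0.3, "turn": 0.7},
}

def log_likelihood(sequence, emission):
    return sum(math.log(emission[action]) for action in sequence)

observed = ["clean", "clean", "turn", "clean"]
recognized = max(strategies,
                 key=lambda s: log_likelihood(observed, strategies[s]))
print(recognized)
```

Behavioral Cloning goes further: rather than choosing among fixed automata, the transition/emission probabilities themselves are learned from the training traces.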

  3. Empirical investigation on modeling solar radiation series with ARMA–GARCH models

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Yan, Dong; Zhao, Na; Zhou, Jianzhong

    2015-01-01

    Highlights: • Apply 6 ARMA–GARCH(-M) models to model and forecast solar radiation. • The ARMA–GARCH(-M) models produce more accurate radiation forecasts than conventional methods. • Show that ARMA–GARCH-M models are more effective for forecasting solar radiation mean and volatility. • The ARMA–EGARCH-M is robust and the ARMA–sGARCH-M is very competitive. - Abstract: Simulation of radiation is one of the most important issues in solar utilization. Time series models are useful tools for the estimation and forecasting of solar radiation series and their changes. In this paper, autoregressive moving average (ARMA) models with various generalized autoregressive conditional heteroskedasticity (GARCH) processes, namely ARMA–GARCH models, are evaluated for their effectiveness on radiation series. Six different GARCH approaches, comprising three different ARMA–GARCH models and the corresponding GARCH-in-mean (ARMA–GARCH-M) models, are applied to radiation data sets from two representative climate stations in China. Multiple evaluation metrics of modeling sufficiency are used to evaluate the performance of the models. The results show that the ARMA–GARCH(-M) models are effective in radiation series estimation. In both fitting and prediction of radiation series, the ARMA–GARCH(-M) models show better modeling sufficiency than traditional models; the ARMA–EGARCH-M models are robust at both sites and the ARMA–sGARCH-M models appear very competitive. Comparisons of statistical diagnostics and model performance clearly show that the GARCH-in-mean terms make the mean radiation equations more sufficient. The ARMA–GARCH(-M) models are recommended as the preferred method for modeling solar radiation series.
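
The conditional-variance recursion at the heart of these models is easy to simulate. The toy below generates a GARCH(1,1) shock series with invented parameters (not values fitted to radiation data) to show the volatility-clustering mechanism the ARMA part is combined with.

```python
import math
import random

# Toy GARCH(1,1) simulation: conditional variance h follows
# h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}. Parameters are assumptions.
random.seed(1)
omega, alpha, beta = 0.1, 0.1, 0.8   # alpha + beta < 1 -> stationary
h = omega / (1 - alpha - beta)       # start at the unconditional variance
series = []
for _ in range(1000):
    eps = math.sqrt(h) * random.gauss(0.0, 1.0)  # shock with variance h
    series.append(eps)
    h = omega + alpha * eps ** 2 + beta * h      # GARCH(1,1) recursion

sample_var = sum(x * x for x in series) / len(series)
print(round(sample_var, 2))  # unconditional variance is omega/(1-alpha-beta) = 1.0
```

In the -M ("GARCH-in-mean") variants, the conditional variance `h` additionally enters the mean equation as a regressor, which is why those models improve the mean radiation fit.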

  4. Temporal structure of neuronal population oscillations with empirical mode decomposition

    International Nuclear Information System (INIS)

    Li Xiaoli

    2006-01-01

Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of disorders in the brain. This Letter presents a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) is obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo. The results show that the neuronal oscillations have different characteristics during the pre-ictal, seizure-onset and ictal periods of the epileptic EEG at different frequency bands. This new method is very helpful for revealing the temporal structure of neural oscillations.

  5. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  6. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  7. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  8. An empirical model of collective household labour supply with non-participation

    NARCIS (Netherlands)

    Bloemen, H.G.

    2010-01-01

    I present a structural empirical model of collective household labour supply that includes the non-participation decision. I specify a simultaneous model for hours, participation and wages of husband and wife. I discuss the problems of identification and statistical coherency that arise in the

  9. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  10. A comparison of empirical and modeled nitrogen critical loads for Mediterranean forests and shrublands in California

    Science.gov (United States)

    M.E. Fenn; H.-D. Nagel; I. Koseva; J. Aherne; S.E. Jovan; L.H. Geiser; A. Schlutow; T. Scheuschner; A. Bytnerowicz; B.S. Gimeno; F. Yuan; S.A. Watmough; E.B. Allen; R.F. Johnson; T. Meixner

    2014-01-01

    Nitrogen (N) deposition is impacting a number of ecosystem types in California. Critical loads (CLs) for N deposition determined for mixed conifer forests and chaparral/oak woodlands in the Sierra Nevada Mountains of California and the San Bernardino Mountains in southern California using empirical and various modelling approaches were compared. Models used included...

  11. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  12. Financial power laws: Empirical evidence, models, and mechanisms

    International Nuclear Information System (INIS)

    Lux, Thomas; Alfarano, Simone

    2016-01-01

    Financial markets (share markets, foreign exchange markets and others) are all characterized by a number of universal power laws. The most prominent example is the ubiquitous finding of a robust, approximately cubic power law characterizing the distribution of large returns. A similarly robust feature is long-range dependence in volatility (i.e., hyperbolic decline of its autocorrelation function). The recent literature adds temporal scaling of trading volume and multi-scaling of higher moments of returns. Increasing awareness of these properties has recently spurred attempts at theoretical explanations of the emergence of these key characteristics from the market process. In principle, different types of dynamic processes could be responsible for these power laws. Examples to be found in the economics literature include multiplicative stochastic processes as well as dynamic processes with multiple equilibria. Though both types of dynamics are characterized by intermittent behavior which occasionally generates large bursts of activity, they can be based on fundamentally different perceptions of the trading process. The present paper reviews both the analytical background of the power laws emerging from the above data-generating mechanisms and the pertinent models proposed in the economics literature.
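    The "cubic law" of large returns can be checked numerically with the standard Hill estimator of the tail exponent. The sketch below uses Student-t(3) draws as a stand-in for heavy-tailed returns (an illustrative assumption; an actual return series would replace them):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=100_000)  # heavy-tailed stand-in for returns

x = np.sort(np.abs(returns))[::-1]  # order statistics, largest first
k = 1_000                           # number of tail observations used

# Hill estimator: inverse mean log-excess over the tail threshold x[k]
hill = 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))
print(round(hill, 2))  # tail exponent; expected near 3 for t(3) tails
```

    The choice of k trades bias against variance; in practice one inspects the estimate over a range of k (a Hill plot) before settling on a value.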

  13. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios
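    A minimal Elman-style recurrent layer illustrates the feedback idea: the hidden state re-enters the network at each step, so the model can represent time-dependent nonlinearities that a purely feedforward interpolator cannot. The sizes and random weights below are illustrative assumptions, not the steam-generator model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 3, 8, 1  # input, hidden, output widths (illustrative)
W_in = rng.normal(scale=0.3, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))  # recurrent (feedback) weights
W_out = rng.normal(scale=0.3, size=(n_out, n_hid))

def forward(u_seq):
    """Outputs for an input sequence, with hidden-state feedback between steps."""
    h = np.zeros(n_hid)
    outputs = []
    for u in u_seq:
        h = np.tanh(W_in @ u + W_rec @ h)  # feedback: previous h re-enters here
        outputs.append(W_out @ h)
    return np.array(outputs)

u_seq = rng.normal(size=(50, n_in))  # a 50-step input sequence
y = forward(u_seq)
print(y.shape)  # (50, 1)
```

    Training such a network for system identification (as in the paper, via a modified backpropagation scheme) would fit these weights to recorded plant input/output transients.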

  14. Screening enterprising personality in youth: an empirical model.

    Science.gov (United States)

    Suárez-Álvarez, Javier; Pedrosa, Ignacio; García-Cueto, Eduardo; Muñiz, José

    2014-02-20

    Entrepreneurial attitudes of individuals are determined by different variables, some of them related to the cognitive and personality characteristics of the person, and others focused on contextual aspects. The aim of this study is to review the essential dimensions of enterprising personality and develop a test that will permit their thorough assessment. Nine dimensions were identified: achievement motivation, risk taking, innovativeness, autonomy, internal locus of control, external locus of control, stress tolerance, self-efficacy and optimism. For the assessment of these dimensions, 161 items were developed which were applied to a sample of 416 students, 54% male and 46% female (M = 17.89 years old, SD = 3.26). After conducting several qualitative and quantitative analyses, the final test was composed of 127 items with acceptable psychometric properties. Alpha coefficients for the subscales ranged from .81 to .98. The validity evidence relative to the content was provided by experts (V = .71, 95% CI = .56 - .85). Construct validity was assessed using different factorial analyses, obtaining a dimensional structure in accordance with the proposed model of nine interdependent dimensions as well as a global factor that groups these nine dimensions (explained variance = 49.07%; χ2/df = 1.78; GFI= .97; SRMR = .07). Nine out of the 127 items showed Differential Item Functioning as a function of gender (p .035). The results obtained are discussed and future lines of research analyzed.

  15. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of payoffs or cash flows discounted for risk and time lags. The difficulty arising from the discounting process is that the relevant factors affecting the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  16. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  17. Institutions and foreign direct investment (FDI) in Malaysia: empirical evidence using ARDL model

    OpenAIRE

    Abdul Karim, Zulkefly; Zaidi, Mohd Azlan Shah; Ismail, Mohd Adib; Abdul Karim, Bakri

    2011-01-01

    Since the 1990s, institutional factors have been regarded as playing important roles in stimulating foreign direct investment (FDI). However, empirical studies on their importance in affecting FDI are still lacking, especially for small open economies. This paper investigates the role of institutions in the inflow of foreign direct investment (FDI) into the small open economy of Malaysia. Using the bounds testing approach (ARDL model), the empirical findings reveal that there exists a long ru...

  18. Estimation of Solar Radiation: An Empirical Model for Bangladesh

    Directory of Open Access Journals (Sweden)

    Mohammad Arif Sobhan Bhuiyan

    2013-04-01

    Full Text Available This study computes, empirically, the global, diffuse, and direct solar radiation on a horizontal surface for ten districts distributed evenly over Bangladesh (20°34′ and 26°34′ north latitude, 88°01′ and 92°41′ east longitude) and derives correlations for them. Meteorological data for 28 years (1980–2007), collected from the Bangladesh Meteorological Department, are used. The global radiation in Bangladesh is found to be maximum in April/May and minimum in November/December in all districts. The values of the correlation coefficients a, b, c, d, c', d', e, f, e', and f' for the ten stations are also evaluated. The coefficient "a" varies from 0.2296 to 0.2569, while "b" varies from 0.5112 to 0.5560; the overall means of the ten values of a and b are 0.2432 ± 0.0136 and 0.5336 ± 0.0224, respectively. The maximum and minimum values of the other correlation coefficients c, d, c', d', e, f, e', and f' are (1.5695 and 1.4357), (−1.7210 and −1.9986), (0.4011 and 0.376), (−0.2072 and −0.2510), (−0.3811 and −0.5464), (1.946 and 1.6456), (−0.1206 and −0.1684), and (0.7984 and 0.7000), respectively, and their variations with location are (1.5022 ± 0.0672), (−1.8598 ± 0.1388), (0.3885 ± 0.0125), (−0.2291 ± 0.0219), (−0.4637 ± 0.0826), (1.7958 ± 0.1502), (−0.1445 ± 0.0239), and (0.7492 ± 0.0492), respectively.
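    Sunshine-based correlations of this kind are typically of the Ångström–Prescott form H/H0 = a + b·(S/S0); assuming that form here, the mean coefficients reported above (a ≈ 0.2432, b ≈ 0.5336) give a one-line estimator of the clearness index:

```python
def clearness_index(sunshine_fraction, a=0.2432, b=0.5336):
    """Estimate H/H0 (global-to-extraterrestrial radiation ratio)
    from the relative sunshine duration S/S0, Angstrom-Prescott style.
    Default a, b are the mean Bangladesh coefficients from the abstract."""
    return a + b * sunshine_fraction

# e.g. a day with 60% of the possible sunshine hours
print(round(clearness_index(0.6), 4))  # 0.5634
```

    Multiplying the result by the extraterrestrial radiation H0 for the site and date yields the estimated global radiation on a horizontal surface.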

  19. A semi-empirical model of the direct methanol fuel cell performance. Part I. Model development and verification

    Science.gov (United States)

    Argyropoulos, P.; Scott, K.; Shukla, A. K.; Jackson, C.

    A model equation is developed to predict the cell voltage versus current density response of a liquid feed direct methanol fuel cell (DMFC). The equation is based on a semi-empirical approach in which methanol oxidation and oxygen reduction kinetics are combined with effective mass transport coefficients for the fuel cell electrodes. The model equation is validated against experimental data for a small-scale fuel cell and is applicable over a wide range of methanol concentration and temperatures.

  20. Model and Empirical Study on Several Urban Public Transport Networks in China

    Science.gov (United States)

    Ding, Yimin; Ding, Zhuo

    2012-07-01

    In this paper, we present empirical results on urban public transport networks (PTNs) and propose a model to explain them. We investigate several urban public transport networks in China, including those of Beijing, Guangzhou, and Wuhan. The empirical results for the big cities show that the accumulative act-degree distributions of PTNs take neither power-function nor exponential forms but are described by a shifted power function, and the accumulative act-degree distributions of PTNs in medium-sized and small cities follow the same law. Finally, we propose a model suggesting a possible evolutionary mechanism for the emergence of such networks. The analytic results obtained from this model are in good agreement with the empirical results.

  1. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles validated in numerous studies. Never before has a backscattering model been built and validated on as extensive a dataset as the one used in this study: it covers a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), is well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well across radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). The model is easy to invert and could improve the retrieval of soil parameters.

  2. Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models

    Science.gov (United States)

    Brown, Clifford A.

    2016-01-01

    The effect of finite span on the jet-surface interaction noise source and the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed that all surfaces are semi-infinite. This extended abstract briefly describes the experimental setup and data leaving the empirical modeling aspects for the final paper.

  3. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

  4. Empirical modelling of ENSO dynamics: construction of optimal complexity models from data

    Science.gov (United States)

    Mukhina, A.; Kondrashov, D.; Mukhin, D.

    2012-04-01

    One of the main problems arising in modelling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. The issue is that we usually deal with strongly high-dimensional behavior and are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially when the time series has the form of a spatial field depending on time. It is, in effect, a model selection problem: on the one hand, the transformation of data into some vector of phase variables can be considered part of the model; on the other hand, such an optimization of the phase space makes sense only in relation to the parameterization of the model we use, i.e., the representation of the evolution operator, so we should find the optimal structure of the model together with the vector of phase variables. In this work we suggest a Bayesian approach to this problem: a prior set of models of different complexity is defined, the posterior probability of each model in this set given the data is calculated, and the model with the largest probability is selected. The suggested approach is applied to optimization of the EMR model of the ENSO phenomenon elaborated by Kondrashov et al. This model operates with a number of principal EOFs constructed from the spatial field of SST in the Equatorial Pacific and has the form of a system of stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for both the number of EOFs and the order of the SDE system are estimated from the time series generated by the Jin & Neelin intermediate ENSO model.
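    The selection step — score candidate models of increasing complexity and keep the most probable — can be sketched with autoregressive order selection, where BIC stands in for (minus twice) the log model evidence. The AR(2) data generator below is an illustrative assumption, not the EMR/ENSO model itself.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3000
x = np.zeros(n)
for t in range(2, n):  # synthetic "truth": an AR(2) process
    x[t] = 0.9 * x[t - 1] - 0.6 * x[t - 2] + rng.normal()

def bic_ar(series, p):
    """BIC of a least-squares AR(p) fit; lower is better (approximates
    -2 log evidence, so the argmin mimics the maximum-posterior model)."""
    m = len(series)
    y = series[p:]
    X = np.column_stack([series[p - 1 - j : m - 1 - j] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / len(y)
    return len(y) * np.log(sigma2) + p * np.log(len(y))

orders = range(1, 7)
best_p = min(orders, key=lambda p: bic_ar(x, p))
print(best_p)  # expected to recover the generating order
```

    In the paper's setting the candidates differ in both the number of EOFs and the polynomial order of the SDE system, but the logic is the same: penalized fit as a proxy for posterior model probability.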

  5. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20(th) century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
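    The envelope idea can be sketched with synthetic data: generate an ensemble of unforced noise realizations, take percentile bounds as the EUN, and ask whether the observation minus a candidate forced signal stays inside it. Everything below (the AR(1) noise, the trend values) is an illustrative assumption, not the paper's reconstruction-based EUN.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_members = 100, 500
phi, sigma = 0.6, 0.1  # AR(1) persistence and noise scale (assumed)

# ensemble of unforced GMT-anomaly realizations
noise = np.zeros((n_members, n_years))
for t in range(1, n_years):
    noise[:, t] = phi * noise[:, t - 1] + rng.normal(scale=sigma, size=n_members)

# Envelope of Unforced Noise: central 95% range at each year
lower = np.percentile(noise, 2.5, axis=0)
upper = np.percentile(noise, 97.5, axis=0)

forced_signal = np.linspace(0.0, 0.8, n_years)  # hypothetical forced warming (degC)
observed = forced_signal + 0.05 * np.sin(np.arange(n_years) / 8.0)

# consistency check: does the unforced residual stay inside the envelope?
residual = observed - forced_signal
consistent = bool(np.all((residual >= lower) & (residual <= upper)))
print(consistent)  # True
```

    A forced signal whose residual repeatedly escapes the envelope would, by this criterion, be judged inconsistent with the observations — the logic applied to the emission scenarios above.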

  7. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

    While investigations into both theories and models has remained a major strand of engineering design research, current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models, and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: ·         significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; ·         current models of design, from a function behavior structure model to an integrated model; ·         important empirical research findings from studies into design; and ·         philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  8. Multiple-steady-state growth models explaining twin-peak empirics?

    OpenAIRE

    Ziesemer, T.H.W.

    2003-01-01

    The explanation of twin peak empirics through multiple-steady-state growth models has one serious implication: Whenever a model generates twin peaks in GDP per capita it also generates twin peaks in other variables. We check for some multiple steadystate models whether or not they have twin peaks in the other variables besides GDP per capita. It turns out that the required twin peaks do not exist for the textbook version of the population trap model but a modified version cannot be dismissed....

  9. Semi-empirical model for retrieval of soil moisture using RISAT-1 C ...

    Indian Academy of Sciences (India)

    Kishan Singh Rawat

    2018-03-02

    Mar 2, 2018 ... ric SM, where S and C are the volume fractions of sand and clay (wt.%) present in the soil. The soil of the study area comprises sand at 70–85% (average 78%) and clay at 12–16% (average 14%), according to a previous study. 2.3 Dielectric mixing semi-empirical model. Most of the models use the dielectric constant as ...

  10. Hospetitiveness – the Empirical Model of Competitiveness in Romanian Hospitality Industry

    OpenAIRE

    Radu Emilian; Claudia Elena Tuclea; Madalina Lavinia Tala; Catalina Nicoleta Brîndusoiu

    2009-01-01

    Our interest is focused on an important sector of the national economy: the hospitality industry. The paper is the result of a careful analysis of the literature and of a field research. According to the answers of hotels' managers, competitiveness is based mainly on service quality and cost control. The analyses of questionnaires and dedicated literature lead us to the design of a competitiveness model for hospitality industry, called "Hospetitiveness – The empirical model of competitiveness...

  11. Gatekeeper Training for Suicide Prevention: A Theoretical Model and Review of the Empirical Literature

    Science.gov (United States)

    2015-01-01

    …among exposed teens. Of the exposed group, 2.5 percent reported having made a (first) suicide attempt during the 18 months of follow-up, compared… Gatekeeper Training for Suicide Prevention: A Theoretical Model and Review of the Empirical Literature. Crystal Burnette, Rajeev Ramchand, Lynsay…

  12. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  13. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black-box object is used. However, when a process exists but empirical data do not indicate its existence, the result is a black hole. The object of this re...

  14. Semi-empirical model for retrieval of soil moisture using RISAT-1 C ...

    Indian Academy of Sciences (India)

    Kishan Singh Rawat

    2018-03-02

    Mar 2, 2018 ... developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE = 1.39) and can be used for operational applications. Keywords: soil moisture; SAR; RISAT-1; TDR; semi-empirical model. Supplementary material pertaining to this article is available on the Journal of Earth System ...

  15. Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C. W.; Hood, Raleigh R.; Long, Wen; Jacobs, John M.; Ramers, D. L.; Wazniak, C.; Wiggert, J. D.; Wood, R.; Xu, J.

    2013-09-01

    The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic empirical approach, whereby real-time output from the coupled physical biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic–empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.

  16. Model Selection for Equating Testlet-Based Tests in the NEAT Design: An Empirical Study

    Science.gov (United States)

    He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia

    2012-01-01

    For tests solely composed of testlets, the local item independence assumption tends to be violated. This study, using empirical data from a large-scale state assessment program, investigated the effects of using different models on equating results under the non-equivalent groups anchor-test (NEAT) design. Specifically, the…

  17. Prediction of inflows into Lake Kariba using a combination of physical and empirical models

    CSIR Research Space (South Africa)

    Muchuru, S

    2015-10-01

    Full Text Available …decision-making processes. This study investigates the use of a combination of physical and empirical models to predict seasonal inflows into Lake Kariba in southern Africa. Two prediction systems are considered. The first uses antecedent seasonal rainfall totals over...

  18. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

    An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ m³ to 2.0 × 10⁶ m³) along the coasts...
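The longshore-energy-flux relation this record builds on can be sketched as a CERC-type formula, in which transport scales with breaking wave height and breaker angle. A minimal sketch; the coefficient value and the folding of density/gravity constants into `k` are illustrative assumptions, not the model fitted for the Indian coast:

```python
import math

def longshore_transport_rate(hb, alpha_b_deg, k=0.39):
    """CERC-type longshore sediment transport estimate.

    hb          -- breaking wave height in metres
    alpha_b_deg -- breaker angle relative to the shoreline, in degrees
    k           -- dimensionless empirical coefficient (illustrative value)

    The longshore energy flux scales as H_b^(5/2) * sin(2*alpha_b);
    density and gravity constants are folded into k here.
    """
    alpha = math.radians(alpha_b_deg)
    return k * hb ** 2.5 * math.sin(2.0 * alpha)
```

Transport vanishes for waves approaching shore-normal (zero breaker angle) and peaks near a 45° approach, which is the qualitative behaviour such empirical models reproduce.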

  19. Integrating social science into empirical models of coupled human and natural systems

    Science.gov (United States)

    Jeffrey D. Kline; Eric M. White; A Paige Fischer; Michelle M. Steen-Adams; Susan Charnley; Christine S. Olsen; Thomas A. Spies; John D. Bailey

    2017-01-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social...

  20. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…
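The reliability check named in this record, Cronbach's alpha, can be computed in a few lines. A minimal pure-Python sketch (the item scores in the test below are illustrative, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.

    items -- list of k item-score lists, each of length n (one score per
             respondent). Returns k/(k-1) * (1 - sum(Var_i) / Var_total).
    """
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(col) for col in items) / var(totals))
```

Perfectly correlated items yield alpha = 1; values above roughly 0.7 are conventionally read as acceptable scale reliability.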

  1. A stochastic empirical model for heavy-metal balances in agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,

  2. Use of combined biogeochemical model approaches and empirical data to assess critical loads of nitrogen

    Science.gov (United States)

    Mark Fenn; Charles Driscoll; Quingtao Zhou; Leela Rao; Thomas Meixner; Edith Allen; Fengming Yuan; Timothy Sullivan

    2015-01-01

    Empirical and dynamic biogeochemical modelling are complementary approaches for determining the critical load (CL) of atmospheric nitrogen (N) or other constituent deposition that an ecosystem can tolerate without causing ecological harm. The greatest benefits are obtained when these approaches are used in combination. Confounding environmental factors can complicate...

  3. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  4. Understanding users’ motivations to engage in virtual worlds: A multipurpose model and empirical testing

    NARCIS (Netherlands)

    Verhagen, T.; Feldberg, J.F.M.; van den Hooff, B.J.; Meents, S.; Merikivi, J.

    2012-01-01

    Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds

  5. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

    Machining of steel with a hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. Numerous models should be scrutinized and implemented to obtain optimum performance in hard turning. Various models of hard turning by cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces due to oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be selected according to user requirements in hard turning.

  6. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated by neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative to train an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model

  7. Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union

    Science.gov (United States)

    Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.

    2015-09-01

    How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict it. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether the state is in a stable or chaotic condition and predicts its future condition. The central model, which we test, is that the growth and collapse of states is reflected by changes in their territories, populations and budgets. The model was simulated within the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches well with the historical events. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
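The sign test on the largest Lyapunov exponent that this record applies to state trajectories can be illustrated on a textbook system. A minimal sketch using the logistic map (not the authors' state data): the exponent is estimated as the orbit average of log|f'(x)|, and its sign separates stable from chaotic regimes.

```python
import math

def lyapunov_logistic(r, x0=0.4, n=2000, burn=200):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of log|f'(x)| after a burn-in.
    A positive value signals chaos; a negative value, a stable regime.
    """
    x, total = x0, 0.0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= burn:
            total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / (n - burn)
```

At r = 3.2 the map settles on a stable period-2 cycle (negative exponent); at r = 4 it is chaotic (positive exponent, analytically ln 2).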

  8. The agony of choice: different empirical mortality models lead to sharply different future forest dynamics.

    Science.gov (United States)

    Bircher, Nicolas; Cailleret, Maxime; Bugmann, Harald

    2015-07-01

    Dynamic models are pivotal for projecting forest dynamics in a changing climate, from the local to the global scale. They encapsulate the processes of tree population dynamics with varying resolution. Yet, almost invariably, tree mortality is modeled based on simple, theoretical assumptions that lack a physiological and/or empirical basis. Although this has been widely criticized and a growing number of empirically derived alternatives are available, they have not been tested systematically in models of forest dynamics. We implemented an inventory-based and a tree-ring-based mortality routine in the forest gap model ForClim v3.0. We combined these routines with a stochastic and a deterministic approach for the determination of tree status (alive vs. dead). We tested the four new model versions for two Norway spruce forests in the Swiss Alps, one of which was managed (inventory time series spanning 72 years) and the other unmanaged (41 years). Furthermore, we ran long-term simulations (~400 years) into the future under three climate scenarios to test model behavior under changing environmental conditions. The tests against inventory data showed an excellent match of simulated basal area and stem numbers at the managed site and a fair agreement at the unmanaged site for three of the four empirical mortality models, thus rendering the choice of one particular model difficult. However, long-term simulations under the current climate revealed very different behavior of the mortality models in terms of simulated changes of basal area and stem numbers, both in timing and magnitude, thus indicating high sensitivity of simulated forest dynamics to assumptions about tree mortality. Our results underpin the potential of using empirical mortality routines in forest gap models. However, further tests are needed that span other climatic conditions and mixed forests. Short-term simulations to benchmark model behavior against empirical data are insufficient; long-term tests are

  9. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    Science.gov (United States)

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer only used for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices, articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting the development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real-world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  10. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    International Nuclear Information System (INIS)

    Elskens, Marc; Gourgue, Olivier; Baeyens, Willy; Chou, Lei; Deleersnijder, Eric; Leermakers, Martine

    2014-01-01

    Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (K_d) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
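The partitioning step in this record can be sketched as follows: an empirical function maps salinity to K_d, and K_d together with SPM splits the total concentration into dissolved and particulate fractions. The log-linear form and the coefficients `a`, `b` below are illustrative assumptions, not the fitted Scheldt relations:

```python
def partition_metal(total, salinity, spm, a=4.0, b=-0.02):
    """Split a total metal concentration into dissolved and particulate
    parts using an assumed empirical relation log10(Kd) = a + b*salinity
    (coefficients a, b are illustrative, not the fitted Scheldt values).

    spm is the suspended particulate matter concentration in kg/L and
    Kd is in L/kg, so dissolved/total = 1 / (1 + Kd*SPM).
    """
    kd = 10.0 ** (a + b * salinity)
    f_dissolved = 1.0 / (1.0 + kd * spm)
    return f_dissolved * total, (1.0 - f_dissolved) * total
```

With a negative salinity coefficient, K_d decreases seaward, so the dissolved (more bioavailable) fraction grows toward the estuary mouth.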

  11. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

    Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (K{sub d}) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach, compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.

  12. Empirical sewer water quality model for generating influent data for WWTP modelling

    OpenAIRE

    Langeveld, Jeroen; Van Daal, Petra; Schilperoort, Remy; Nopens, Ingmar; Flameling, Tony; Weijers, Stefan

    2017-01-01

    Wastewater treatment plants (WWTP) typically have a service life of several decades. During this service life, external factors, such as changes in the effluent standards or the loading of the WWTP may change, requiring WWTP performance to be optimized. WWTP modelling is widely accepted as a means to assess and optimize WWTP performance. One of the challenges for WWTP modelling remains the prediction of water quality at the inlet of a WWTP. Recent applications of water quality sensors have re...

  13. Modeling Dynamics of Wikipedia: An Empirical Analysis Using a Vector Error Correction Model

    Directory of Open Access Journals (Sweden)

    Liu Feng-Jun

    2017-01-01

    Full Text Available In this paper, we constructed a system dynamics model of Wikipedia based on co-evolution theory, and investigated the interrelationships among topic popularity, group size, collaborative conflict, coordination mechanism, and information quality by using the vector error correction model (VECM). This study provides a useful framework for analyzing the dynamics of Wikipedia and presents a formal exposition of the VECM methodology in information systems research.
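The error-correction mechanism at the heart of a VECM can be illustrated with a single-equation sketch: changes in one variable respond to last period's deviation from a long-run (cointegrating) relation. All coefficient names and values below are illustrative, not the paper's estimates:

```python
def error_correction_step(y_prev, x_prev, dx_lag, alpha=-0.5, beta=1.0, gamma=0.2):
    """One-equation error-correction update of the kind underlying a VECM:

        dy_t = alpha * (y_{t-1} - beta * x_{t-1}) + gamma * dx_{t-1}

    The bracketed term is the deviation from the long-run relation
    y = beta * x; a negative alpha pulls y back toward equilibrium.
    """
    return alpha * (y_prev - beta * x_prev) + gamma * dx_lag
```

When y sits above its long-run relation to x, the predicted change is negative (and vice versa), which is what makes the correction "error correcting".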

  14. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone

    International Nuclear Information System (INIS)

    Lee, Kang Il; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

    The Biot and the modified Biot-Attenborough (MBA) models have been found useful for understanding ultrasonic wave propagation in cancellous bone. However, neither of the models, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical way of using the parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of the phase velocity of the fast wave with direction. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s⁻¹ (angle-dependent Biot model) and 36.1 m s⁻¹ (angle-dependent MBA model). They also predicted the fact that the slow wave is nearly independent of propagation angle for angles about 50°, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s⁻¹ (angle-dependent Biot model) and 240.8 m s⁻¹ (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone

  15. Application of GIS to Empirical Windthrow Risk Model in Mountain Forested Landscapes

    Directory of Open Access Journals (Sweden)

    Lukas Krejci

    2018-02-01

    Full Text Available Norway spruce dominates mountain forests in Europe. Natural variations in the mountainous coniferous forests are strongly influenced by all the main components of forest and landscape dynamics: species diversity, the structure of forest stands, nutrient cycling, carbon storage, and other ecosystem services. This paper deals with an empirical windthrow risk model based on the integration of logistic regression into GIS to assess forest vulnerability to wind-disturbance in the mountain spruce forests of Šumava National Park (Czech Republic. It is an area where forest management has been the focus of international discussions by conservationists, forest managers, and stakeholders. The authors developed the empirical windthrow risk model, which involves designing an optimized data structure containing dependent and independent variables entering logistic regression. The results from the model, visualized in the form of map outputs, outline the probability of risk to forest stands from wind in the examined territory of the national park. Such an application of the empirical windthrow risk model could be used as a decision support tool for the mountain spruce forests in a study area. Future development of these models could be useful for other protected European mountain forests dominated by Norway spruce.
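The core of the windthrow risk model described in this record, logistic regression evaluated per raster cell, reduces to a short function. A minimal sketch; the predictor names, coefficients and values are illustrative, not the fitted Šumava model:

```python
import math

def windthrow_probability(intercept, coefs, predictors):
    """Logistic-regression risk score for one raster cell:

        p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))

    predictors -- stand/terrain variables for the cell (e.g. stand
    height, exposure); coefficients here are purely illustrative.
    """
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))
```

Applied cell by cell over the GIS raster, the output is exactly the kind of wind-risk probability map the record describes.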

  16. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  17. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  18. Alternative Specifications for the Lévy Libor Market Model: An Empirical Investigation

    DEFF Research Database (Denmark)

    Skovmand, David; Nicolato, Elisa

    This paper introduces and analyzes specifications of the Lévy Market Model originally proposed by Eberlein and Özkan (2005). An investigation of the term structure of option implied moments rules out the Brownian motion and homogeneous Lévy processes as suitable modeling devices, and consequently a variety of more appropriate models is proposed. Besides a diffusive component the models have jump structures with low or high frequency combined with constant or stochastic volatility. The models are subjected to an empirical analysis using a time series of data for Euribor caps. The results of the estimation show that pricing performances are improved when a high frequency jump component is incorporated. Specifically, excellent results are achieved with the 4 parameter Sato-Variance Gamma model, which is able to fit an entire surface of caps with an average absolute percentage pricing error of less…

  19. Empirical validation of an agent-based model of wood markets in Switzerland

    Science.gov (United States)

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Own surveys closed gaps where data was not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  20. A semi empirical model of the direct methanol fuel cell. Part II. Parametric analysis

    Science.gov (United States)

    Scott, K.; Jackson, C.; Argyropoulos, P.

    A parametric analysis of a model equation developed to predict the cell voltage versus current density response of a liquid feed direct methanol fuel cell is presented. The equation is based on a semi-empirical approach in which methanol oxidation and oxygen reduction kinetics are combined with effective mass transport coefficients for the fuel cell electrodes. The model equation is applied to experimental data for a small-scale fuel cell and produces electrochemical parameters generally consistent with those expected for the individual components of the fuel cell MEA. The parameters thus determined are also used in the model to predict the performance of a DMFC with a new membrane electrode assembly.
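A semi-empirical polarisation equation of the general form used for liquid-feed DMFC modelling combines Tafel-type kinetics, an ohmic loss and a mass-transport term. The functional form and every parameter value below are illustrative stand-ins, not the paper's fitted equation:

```python
import math

def cell_voltage(j, e0=0.8, b=0.09, j0=1e-4, r=0.25, c=0.05, j_lim=0.3):
    """Generic semi-empirical fuel-cell polarisation curve:

        V(j) = E0 - b*log10(j/j0) - R*j + C*ln(1 - j/j_lim)

    i.e. Tafel kinetics, an ohmic term and a mass-transport limit.
    j in A/cm^2; all parameter values here are illustrative only.
    """
    if not 0.0 < j < j_lim:
        raise ValueError("current density outside the model's valid range")
    return e0 - b * math.log10(j / j0) - r * j + c * math.log(1.0 - j / j_lim)
```

A parametric analysis of the kind this record describes amounts to sweeping `b`, `r`, `c` and `j_lim` and observing how the simulated voltage-current curve shifts.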

  1. Factors of innovative activity in Russian regions: modeling and empirical analysis

    Directory of Open Access Journals (Sweden)

    O. S. Mariev

    2010-09-01

    Full Text Available Considering innovations as a key factor of economic growth, in this paper we identify the main instruments stimulating innovative activity in Russian regions. Since the number of potential factors of enterprises' innovative activity and respective hypotheses is large, the process of model selection becomes a crucial part of the empirical implementation. A new efficient solution to this problem is suggested, applying optimization heuristics. The model selection is based on information criteria and the Sargan test within the framework of a log-linear panel data model.
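Model selection by information criterion, as used in this record, trades fit against parsimony. A minimal sketch using BIC (the candidate names and likelihood values in the test are illustrative):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*lnL; lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def select_model(candidates, n_obs):
    """candidates -- iterable of (name, log_likelihood, n_params) tuples.
    Returns the name of the candidate with the lowest BIC."""
    return min(candidates, key=lambda c: bic(c[1], c[2], n_obs))[0]
```

A richer model wins only if its likelihood gain outweighs the `k*ln(n)` complexity penalty, which is why BIC-style criteria prune weak factors from large candidate sets.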

  2. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

    The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) for explaining individuals' continuance of mobile data service usage. After reviewing the ECM and technology readiness, an integrated model was demonstrated via empirical data. Compared with the original ECM, the findings of this study show that the integrated model may offer an ameliorated way to clarify what factors and how they influence the continuous intention toward mobile services. Finally, the major findings are summarized, and future research directions are suggested.

  3. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data established at the Laboratory of Structural Engineering at Aalborg University, the AUC-data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by a discrete-space Markov theory, is applicable to constant as well as random loading. It is shown
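The discrete-state Markov view of crack propagation underlying the FMF-model can be illustrated with a toy simulation: in each load cycle the damage state either advances with some probability or stays put. This is a generic sketch of the idea, not the FMF-model's actual transition structure:

```python
import random

def mean_cycles_to_failure(p_advance, n_states, trials=2000, seed=1):
    """Toy discrete-state Markov model of fatigue crack growth.

    Each load cycle the damage state advances with probability
    p_advance, otherwise it stays put. Returns the mean number of
    cycles needed to pass through n_states (the failure threshold);
    the expected value is n_states / p_advance.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state = cycles = 0
        while state < n_states:
            cycles += 1
            if rng.random() < p_advance:
                state += 1
        total += cycles
    return total / trials
```

The simulated lifetimes scatter around n_states/p_advance, mimicking the crack-growth-curve variability seen in data sets such as Virkler's.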

  4. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by strong non-linear learning capability of support vector regression (SVR, this paper presents a SVR model hybridized with the empirical mode decomposition (EMD method and auto regression (AR for electric load forecasting. The electric load data of the New South Wales (Australia market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.

  5. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models in which a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model... and hence to reduce the Sharpe ratio; a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of government bond returns without compromising the ability of the price-dividend ratio to predict excess returns.

  6. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

    Full Text Available The aim of a price index for road freight transportation (RFT) is to reflect price changes in the road transport market. First, a price index model for RFT is built from sample data from the Alibaba logistics platform. The model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate them. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. To demonstrate the correctness and validity of the index model, a comparative analysis against port throughput and the PMI index is carried out.
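
    The Laspeyres method referenced here weights current prices by base-period quantities. A minimal sketch; the freight rates and route classes are invented for illustration:

```python
def laspeyres_index(p0, pt, q0):
    """Laspeyres price index: current prices weighted by base-period quantities,
    scaled so the base period equals 100."""
    cost_now = sum(p * q for p, q in zip(pt, q0))
    cost_base = sum(p * q for p, q in zip(p0, q0))
    return 100.0 * cost_now / cost_base

# invented freight rates (price per tonne-km) for two route classes
p0 = [2.0, 3.0]   # base-period prices
pt = [3.0, 3.0]   # current-period prices
q0 = [10.0, 5.0]  # base-period shipped quantities
print(round(laspeyres_index(p0, pt, q0), 1))  # 128.6
```

    In a three-level system like the one described, the same formula is applied per individual route, per classification, and over the whole sample.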

  7. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    An empirical validation of building simulation software (..., TRNSYS-TUD and BSim) was carried out in the framework of IEA SHC Task 34/ECBCS Annex 43 "Testing and Validation of Building Energy Simulation Tools". The experimental data for the validation were gathered in a full-scale outdoor test facility. The empirical data sets comprise the key functioning modes of a DSF: 1. thermal buffer mode (closed DSF cavity) and 2. external air curtain mode (naturally ventilated DSF cavity with the top and bottom openings open to outdoors). From the empirical tests it was concluded that all models experience difficulties in predictions during peak solar loads...

  8. A model of therapist competencies for the empirically supported interpersonal psychotherapy for adolescent depression.

    Science.gov (United States)

    Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A

    2012-06-01

    In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably from CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a procedure similar to that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.

  9. A Comprehensive Comparison Study of Empirical Cutting Transport Models in Inclined and Horizontal Wells

    Directory of Open Access Journals (Sweden)

    Asep Mohamad Ishaq Shiddiq

    2017-07-01

    Full Text Available In deviated and horizontal drilling, hole cleaning is a common and complex problem. This study explored how various drilling parameters affect the flow rate required for effective cutting transport. Three empirically developed models were employed: Rudi-Shindu's, Hopkins', and Tobenna's; Rudi-Shindu's model requires iteration in the calculation. First, the three models were compared through a sensitivity analysis of the drilling parameters affecting cutting transport. The results show that the models have similar trends but different values for minimum flow velocity. An analysis was then conducted to examine the feasibility of using Rudi-Shindu's, Hopkins', and Tobenna's models. It showed that Hopkins' model is limited by cutting size and revolutions per minute (RPM). The minimum flow rate from Tobenna's model is affected only by well inclination, drilling fluid weight and drilling fluid rheology, while Rudi-Shindu's model is limited to inclinations above 45°. The study showed that the investigated models are not suitable for horizontal wells because they do not include the effect of the lateral section.

  10. Empirical Sewer Water Quality Model for Generating Influent Data for WWTP Modelling

    NARCIS (Netherlands)

    Langeveld, J.G.; van Daal-Rombouts, P.M.M.; Schilperoort, Remy; Nopens, Ingmar; Flameling, Tony; Weijers, Stefan

    2017-01-01

    Wastewater treatment plants (WWTP) typically have a service life of several decades. During this service life, external factors, such as changes in the effluent standards or the loading of the WWTP may change, requiring WWTP performance to be optimized. WWTP modelling is widely accepted as a means

  11. Incorporating Mental Representations in Discrete Choice Models of Travel Behaviour : Modelling Approach and Empirical Application

    NARCIS (Netherlands)

    T.A. Arentze (Theo); B.G.C. Dellaert (Benedict); C.G. Chorus (Casper)

    2013-01-01

    We introduce an extension of the discrete choice model to take into account individuals' mental representation of a choice problem. We argue that, especially in daily activity and travel choices, the activated needs of an individual have an influence on the benefits he or she pursues in...

  12. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. The empirical validation methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed, and a set of parameters of the thermal model was identified for optimization. The generic optimization program GenOpt®, coupled to the building simulation code, was used to determine the adequate parameter set. We first present the empirical validation methodology and the main results of previous work, then give an overview of GenOpt® and its coupling with the building simulation code. Finally, the optimized thermal predictions are compared with measurements and the agreement is found to be acceptable.
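
    The apparent heat capacity method mentioned here folds the latent heat of fusion into an effective, temperature-dependent specific heat, often shaped as a Gaussian peak around the melting temperature. A hedged sketch with invented PCM parameters; integrating the apparent capacity across the melting range should recover sensible plus latent heat:

```python
import numpy as np

# illustrative PCM parameters (assumed values, not taken from the paper)
c_s   = 2000.0     # sensible specific heat, J/(kg K)
L     = 150_000.0  # latent heat of fusion, J/kg
T_m   = 27.0       # melting temperature, deg C
sigma = 1.5        # half-width of the melting range, deg C

def c_apparent(T):
    """Apparent heat capacity: sensible part plus a Gaussian latent-heat peak."""
    gauss = np.exp(-((T - T_m) ** 2) / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return c_s + L * gauss

# sanity check: the integral of c_app over the melting range equals
# the sensible heat plus the full latent heat
T = np.linspace(T_m - 10.0, T_m + 10.0, 2001)
c = c_apparent(T)
dT = T[1] - T[0]
enthalpy = float(np.sum(0.5 * (c[1:] + c[:-1])) * dT)   # trapezoidal rule
expected = c_s * (T[-1] - T[0]) + L
print(f"enthalpy ≈ {enthalpy:.0f} J/kg (expected ≈ {expected:.0f})")
```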

  13. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process
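
    The paper's dynamic gradient descent algorithm is too involved to reproduce here, but the setup it accelerates, training a small recurrent network to emulate a dynamic system one step ahead, can be sketched. The gradient below is computed by naive central finite differences purely for illustration (a deliberately slow stand-in for the paper's learning algorithm); the network size and task are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
seq = np.sin(np.linspace(0.0, 6.0 * np.pi, 120))   # toy "plant" signal
H = 4                                              # hidden units

params = [rng.normal(0.0, 0.5, (H, 1)),   # input weights
          rng.normal(0.0, 0.5, (H, H)),   # recurrent weights
          rng.normal(0.0, 0.5, (1, H))]   # output weights

def loss(params):
    """Mean squared one-step-ahead prediction error of the recurrent net."""
    Wx, Wh, Wo = params
    h, err = np.zeros((H, 1)), 0.0
    for x, target in zip(seq[:-1], seq[1:]):
        h = np.tanh(Wx * x + Wh @ h)
        err += ((Wo @ h)[0, 0] - target) ** 2
    return err / (seq.size - 1)

lr, eps = 0.02, 1e-4
loss_start = loss(params)
for _ in range(60):                       # gradient descent epochs
    grads = []
    for W in params:                      # central finite-difference gradient
        g = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            old = W[idx]
            W[idx] = old + eps; up = loss(params)
            W[idx] = old - eps; down = loss(params)
            W[idx] = old
            g[idx] = (up - down) / (2.0 * eps)
        grads.append(g)
    for W, g in zip(params, grads):
        W -= lr * g

loss_end = loss(params)
print(f"MSE: {loss_start:.4f} -> {loss_end:.4f}")
```

    The paper's point is precisely that smarter gradient computation (reusing past gradient information, two parallel forward passes) cuts the iteration count of this kind of training by orders of magnitude.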

  14. An empirically based model for knowledge management in health care organizations.

    Science.gov (United States)

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research describing KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM, with the intention of providing insight for future studies and model development for effective KM implementation in HCOs. A qualitative approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify the types of knowledge important in these roles as well as current information-seeking behaviors/needs and (b) an in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire, as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external) and of knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and its potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of...

  15. Statistical Performance Measures of the HWM-93 And MSISE-90 Empirical Atmospheric Models and the Relation to Infrasonic CTBT Monitoring

    Science.gov (United States)

    2000-09-01

    The purpose of this paper is to document a statistical performance measures study of the Naval Research Laboratory's (NRL) empirical upper atmospheric models. These models, known as the MSISE-90 and HWM-93 models, were originally developed at the NASA Goddard Space Flight Center and made freely... versions of these empirical upper atmospheric models for use in verification and compliance with the CTBT. This study identifies weak areas in the current...

  16. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Møldrup, Per

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models have previously been proposed to describe... sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present a validation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 for 207 soils widely varying in texture and organic carbon content. In addition, the potential applicability of the models for predicting sorption isotherms from known clay content was investigated. While in general all investigated models described measured adsorption and desorption isotherms...
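
    As a hedged illustration of fitting one of the simpler empirical isotherm equations, the sketch below fits a Freundlich-type curve, w = K·a_w^(1/n), to synthetic sorption data using SciPy (assumed available). The parameter values are invented; the paper evaluates nine different models, not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(aw, K, n):
    """Freundlich isotherm: water content w as a function of water activity aw."""
    return K * aw ** (1.0 / n)

# synthetic sorption data over the study's activity range (0.03 to 0.93)
rng = np.random.default_rng(1)
aw = np.linspace(0.03, 0.93, 15)
w_obs = freundlich(aw, 0.08, 1.8) + rng.normal(0.0, 0.001, aw.size)

(K_hat, n_hat), _ = curve_fit(freundlich, aw, w_obs, p0=(0.05, 1.0))
print(f"K ≈ {K_hat:.3f}, n ≈ {n_hat:.2f}")
```

    Predicting isotherms from clay content, as the abstract describes, would amount to regressing fitted parameters such as K and n on clay fraction across the soil sample set.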

  17. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we choose a suitable indoor-to-outdoor propagation model from the existing models, considering path loss and distance as parameters. Path loss is measured empirically by placing emitter nodes inside a building; a receiver placed outdoors is a quadrocopter (QC) that receives beacon messages from the indoor nodes. Based on our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model were chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
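
    Of the candidate models, the log-distance path loss model has the simplest form, PL(d) = PL(d0) + 10·n·log10(d/d0), and its exponent n can be fitted to measured path loss by least squares. A hedged sketch on synthetic measurements (all values invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic "measurements": true exponent n = 2.7, PL(d0) = 40 dB at d0 = 1 m
d0, pl0, n_true = 1.0, 40.0, 2.7
d = np.linspace(5.0, 120.0, 60)          # emitter-receiver distances, m
pl = pl0 + 10.0 * n_true * np.log10(d / d0) + rng.normal(0.0, 1.0, d.size)

# least-squares fit of the exponent: (PL - PL0) = n * (10 * log10(d/d0))
x = 10.0 * np.log10(d / d0)
n_hat = float(np.sum(x * (pl - pl0)) / np.sum(x * x))
print(f"fitted path-loss exponent: {n_hat:.2f}")
```

    Comparing models as in the paper then reduces to computing the standard deviation of each model's residuals against the empirical path loss and picking the smallest.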

  18. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gharagozloo, Patricia E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Second, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede hydrogen extraction.
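
    Fitting rate data compiled over a temperature range, as in the empirical-model step, typically reduces to an Arrhenius fit, ln k = ln A - Ea/(RT), which is linear in 1/T. A sketch on noise-free synthetic data (A and Ea are invented, not the UH3 values):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# synthetic rate constants from an assumed Arrhenius law
A_true, Ea_true = 1.0e7, 80_000.0          # 1/s, J/mol
T = np.linspace(450.0, 900.0, 10)          # temperatures, K
k = A_true * np.exp(-Ea_true / (R * T))

# linear least squares on ln k = ln A - (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_hat = -slope * R
A_hat = np.exp(intercept)
print(f"Ea ≈ {Ea_hat/1000:.1f} kJ/mol, A ≈ {A_hat:.2e} 1/s")
```

    With real scattered literature data, the spread of the fitted (A, Ea) pairs across studies is what drives the quantified uncertainty the abstract describes.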

  19. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, only a few research efforts have been devoted to studying what drives employee energy saving behavior in organizations. To fill this gap, drawing on the norm activation model (NAM), we built a research model of the antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior, which is understudied in the current literature. Based on the empirical results, implications for promoting employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on the norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employees' electricity saving behavior. • Electricity saving climate negatively moderates the effect of personal norm. • This research enhances our understanding of employee electricity saving behavior

  20. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected-return models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way; empirical studies on Brazilian stock market data are generally concentrated only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology with two steps – time-series and cross-section regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly due to the originality of the methodology in the local market and the fact that this subject is still incipient and polemic in the Brazilian academic environment.
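
    The two-step methodology with Fama-MacBeth (1973) standard errors can be sketched compactly: time-series regressions give each asset's factor loading, then period-by-period cross-sectional regressions of returns on those loadings give a series of premium estimates whose mean and standard error are reported. Synthetic single-factor data; all numbers invented.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 240, 25                      # months, assets
lam_true = 0.5                      # true factor risk premium, % per month
f = rng.normal(lam_true, 2.0, T)    # factor realizations
beta = rng.uniform(0.5, 1.5, N)     # true loadings
r = beta[None, :] * f[:, None] + rng.normal(0.0, 1.0, (T, N))

# step 1: time-series regressions -> estimated beta per asset
X = np.column_stack([np.ones(T), f])
beta_hat = np.linalg.lstsq(X, r, rcond=None)[0][1]

# step 2: period-by-period cross-sectional regressions of returns on betas
Z = np.column_stack([np.ones(N), beta_hat])
lams = np.array([np.linalg.lstsq(Z, r[t], rcond=None)[0][1] for t in range(T)])

# Fama-MacBeth estimate and standard error
lam_hat, lam_se = float(lams.mean()), float(lams.std(ddof=1) / np.sqrt(T))
print(f"risk premium ≈ {lam_hat:.2f} (s.e. {lam_se:.2f})")
```

    A 3- or 4-factor version simply adds columns to both design matrices, one per additional factor.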

  1. TS07D Empirical Geomagnetic Field Model as a Space Weather Tool

    Science.gov (United States)

    Sharp, N. M.; Stephens, G. K.; Sitnov, M. I.

    2011-12-01

    Empirical modeling and forecasting of the geomagnetic field is a key element of space weather research. A dramatic increase in the number of data available for the terrestrial magnetosphere required a new generation of empirical models with large numbers of degrees of freedom and sophisticated data-mining techniques. A set of the corresponding data binning, fitting and visualization procedures, known as the TS07D model, is now available at http://geomag_field.jhuapl.edu/model/ and is used for detailed investigation of storm-scale phenomena in the magnetosphere. However, transforming this research model into a practical space weather application, which implies extensive runs for validation and interaction with other space weather codes, requires its presentation in the form of a single state-of-the-art code, well documented and optimized for the highest performance. To this end, the model is implemented in the Java programming language with an extensive self-sufficient library and a set of optimization tools, including multi-thread operations for use on multi-core computers and clusters. The results of validating the new code and optimizing its binning, fitting and visualization parts are presented, and some examples of processed storms are discussed.

  2. Regression models for analyzing radiological visual grading studies--an empirical comparison.

    Science.gov (United States)

    Saffari, S Ehsan; Löve, Áskell; Fredrikson, Mats; Smedby, Örjan

    2015-10-30

    For optimizing and evaluating image quality in medical imaging, one can use visual grading experiments, where observers rate some aspect of image quality on an ordinal scale. Several regression methods are available for analyzing the grading data, and this study aimed to compare such techniques empirically, in particular when including random effects in the models, which is appropriate for observers and patients. Data were taken from a previous study in which 6 observers graded or ranked, in 40 patients, the image quality of four imaging protocols differing in radiation dose and image reconstruction method. The models tested included linear regression, the proportional odds model for ordinal logistic regression, the partial proportional odds model, the stereotype logistic regression model and rank-order logistic regression (for ranking data). In the first two models, random effects as well as fixed effects could be included; in the remaining three, only fixed effects. In general, the goodness of fit (AIC and McFadden's pseudo-R2) showed small differences between the models with fixed effects only. For the mixed-effects models, higher AIC and lower pseudo-R2 were obtained, which may be related to the different number of parameters in these models. The estimated potential for dose reduction by new image reconstruction methods varied only slightly between models. The authors suggest that the most suitable approach may be ordinal logistic regression, which can handle ordinal data and random effects appropriately.
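
    The proportional odds (ordinal logistic) model the authors favour models the cumulative probabilities P(y ≤ k) = logistic(θ_k − x·β) with ordered cut-points θ_k. A hedged maximum-likelihood sketch on synthetic gradings using SciPy (three grades, one predictor; a real analysis would add the random observer and patient effects discussed in the abstract):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)

# synthetic gradings: latent image quality = b*dose + logistic noise,
# cut into three ordinal grades by thresholds at -1 and +1
n, b_true = 600, 1.5
x = rng.normal(0.0, 1.0, n)
latent = b_true * x + rng.logistic(0.0, 1.0, n)
y = np.digitize(latent, [-1.0, 1.0])            # grades 0, 1, 2

def nll(params):
    """Negative log-likelihood of the proportional odds model."""
    b, t1, log_gap = params
    thresholds = np.array([t1, t1 + np.exp(log_gap)])  # enforce ordered cut-points
    cum = expit(thresholds[None, :] - b * x[:, None])  # P(y <= k | x)
    probs = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1.0 - cum[:, 1]])
    return -np.sum(np.log(probs[np.arange(n), y] + 1e-12))

res = minimize(nll, x0=np.array([0.0, -0.5, 0.0]), method="Nelder-Mead")
b_hat, t1_hat = float(res.x[0]), float(res.x[1])
print(f"effect ≈ {b_hat:.2f}, first cut-point ≈ {t1_hat:.2f}")
```

    The log-gap parameterization of the second threshold is one simple way to keep the cut-points ordered during optimization.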

  3. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan; the question, however, is to what extent these total savings are due to selection effects rather than efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who were continuously enrolled in compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, over the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.
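
    Separating an efficiency effect from selection requires within-person variation. The sketch below simulates a random-intercept (selection) structure and recovers the plan effect by within-person demeaning, a simple fixed-effects stand-in for the paper's mixed-effects estimation; every number is invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# synthetic panel: 500 insurees observed over 4 years
n_people, n_years = 500, 4
person = np.repeat(np.arange(n_people), n_years)
alpha = rng.normal(0.0, 0.4, n_people)     # person-level (selection) effect
# a random half of insurees switch into an integrated-care plan from year 2 on
in_plan = (np.repeat(rng.random(n_people) < 0.5, n_years)
           & (np.tile(np.arange(n_years), n_people) >= 2))
effect_true = -0.15                        # plan cuts log spending by 15%
log_cost = (8.0 + alpha[person] + effect_true * in_plan
            + rng.normal(0.0, 0.2, n_people * n_years))

# within-person demeaning removes alpha, isolating the plan (efficiency) effect
def demean(v):
    means = np.bincount(person, weights=v) / n_years
    return v - means[person]

y_w, d_w = demean(log_cost), demean(in_plan.astype(float))
effect_hat = float(np.sum(d_w * y_w) / np.sum(d_w * d_w))
print(f"estimated plan effect on log spending: {effect_hat:.3f}")
```

    A naive cross-sectional comparison of plan vs. non-plan spending would instead absorb the alpha term and overstate the savings, which is exactly the selection problem the paper quantifies.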

  5. Analysis of model implied volatility for jump diffusion models. Empirical evidence from the Nordpool market

    International Nuclear Information System (INIS)

    Nomikos, Nikos K.; Soldatos, Orestes A.

    2010-01-01

    In this paper we examine the importance of mean reversion and spikes in the stochastic behaviour of the underlying asset when pricing options on power. We propose a model that is flexible in its formulation and captures the stylized features of power prices in a parsimonious way. The main feature of the model is that it incorporates two different speeds of mean reversion to capture the differences in price behaviour between normal and spiky periods. We derive semi-closed form solutions for European option prices using transform analysis and then examine the properties of the implied volatilities that the model generates. We find that the presence of jumps generates prominent volatility skews which depend on the sign of the mean jump size. We also show that mean reversion reduces the volatility smile as time to maturity increases. In addition, mean reversion induces volatility skews particularly for ITM options, even in the absence of jumps. Finally, jump size volatility and jump intensity mainly affect the kurtosis and thus the curvature of the smile with the former having a more important role in making the volatility smile more pronounced and thus increasing the kurtosis of the underlying price distribution. (author)
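
    The model's two-speed mean reversion with jumps can be illustrated by a simple Euler-scheme simulation of the log spot price. This is a hedged sketch: single-factor, with invented parameters rather than the calibrated Nordpool values, and a crude rule for when the fast (post-spike) reversion speed applies.

```python
import numpy as np

rng = np.random.default_rng(11)

# Euler scheme for log-price X: dX = kappa * (mu - X) dt + sigma dW + J dN
mu, sigma = np.log(40.0), 0.4
dt, n = 1.0 / 365.0, 365 * 4
kappa_norm, kappa_spike = 5.0, 80.0        # slow normal / fast post-spike reversion
jump_prob, jump_mean, jump_sd = 0.01, 1.0, 0.3

x = np.empty(n)
x[0], spike = mu, False
for t in range(1, n):
    kappa = kappa_spike if spike else kappa_norm
    jump = rng.normal(jump_mean, jump_sd) if rng.random() < jump_prob else 0.0
    x[t] = (x[t - 1] + kappa * (mu - x[t - 1]) * dt
            + sigma * np.sqrt(dt) * rng.normal() + jump)
    # stay in the spiky regime while the price remains far from its mean level
    spike = jump != 0.0 or (spike and abs(x[t] - mu) > 0.5)

price = np.exp(x)
print(f"mean price ≈ {price.mean():.1f}, max spike ≈ {price.max():.1f}")
```

    The fast reversion after a jump is what produces short-lived spikes rather than persistent level shifts, the feature the paper links to volatility skews in power options.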

  6. Radicalization into Violent Extremism II: A Review of Conceptual Models and Empirical Research

    Directory of Open Access Journals (Sweden)

    Randy Borum

    2011-01-01

    Full Text Available Over the past decade, analysts have proposed several frameworks to explain the process of radicalization into violent extremism (RVE). These frameworks are based primarily on rational, conceptual models which are neither guided by theory nor derived from systematic research. This article reviews recent (post-9/11) conceptual models of the radicalization process and recent (post-9/11) empirical studies of RVE. It emphasizes the importance of distinguishing between ideological radicalization and terrorism involvement, though both issues deserve further empirical inquiry. Finally, it summarizes some recent RVE-related research efforts, identifies seven things that social science researchers and operational personnel still need to know about violent radicalization, and offers a set of starting assumptions to move forward with a research agenda that might help to thwart tomorrow's terrorists.

  7. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Roč. 17, č. 3 (2008), s. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  8. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Roč. 34, č. 9 (2004), s. 1998-2003 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Institutional research plan: CEZ:AV0Z3042911 Keywords : plasma temperatures * topside ionosphere * empirical models Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.548, year: 2004

  9. A model of psychological evaluation of educational environment and its empirical indicators

    OpenAIRE

    E. B. Laktionova

    2013-01-01

    The topic of the study is to identify ways of complex psychological assessment of educational environment quality and the nature of conditions that affect positive personal development of its members. The solution to this problem is to develop science-based content and technology support for psychological evaluation of the educational environment. The purpose of the study was theoretical rationale and empirical testing of a model of psychological examination of the educational environment. The st...

  10. An empirical topside electron density model for calculation of absolute ion densities in IRI

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan

    2006-01-01

    Roč. 37, č. 5 (2006), s. 928-934 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Grant - others:National Science Foundation(US) 0245457 Institutional research plan: CEZ:AV0Z30420517 Keywords : Plasma density * Topside ionosphere * Ion composition * Empirical models Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.706, year: 2005

  11. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    Science.gov (United States)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both a) as a stand-alone case as well as b) being driven by the US forward interest rates plus a spread - having its own term structure - above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar - showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case and the Malaysian forward interest rates are shown to have anomalies absent for the US and Singapore case. The model's prediction for a Malaysian interest rate swap is obtained.

  12. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  13. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 to 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ~1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  14. FLOOD FORECASTING MODEL USING EMPIRICAL METHOD FOR A SMALL CATCHMENT AREA

    Directory of Open Access Journals (Sweden)

    CHANG L. JUN

    2016-05-01

    Full Text Available The two most destructive natural disasters in Malaysia are monsoonal and flash floods. Malaysia is located in the tropical area and receives, on average, around 2800 mm of rainfall every year. Due to this high amount, a reliable and timely flood forecasting system is necessary to provide early warning to minimize the destruction caused by flash floods. This study developed and checked the adaptability and adequacy of a flood forecasting model for a 93 km2 catchment area, Kampung Kasipillay, in Kuala Lumpur. The Empirical Unit Hydrograph Model was used in this study, and past rainfall data, water levels and the stage-discharge curve were used as inputs. A Rainfall-Runoff Model (RRM), which transforms rainfall into a runoff hydrograph, was developed using Excel. Since some data, such as properties of the watershed, are not always complete and precise, some model parameters were calibrated through trial and error to fine-tune the model and obtain reliable estimates. The simulated unit hydrograph model was computed in prior runs of the flood forecasting model to estimate the model parameters. These calibrated parameters are used as constant variables for the flood forecasting model when the runoff hydrograph is regenerated. The comparison between the observed and simulated hydrographs was investigated for the selected flood events and the performance error was determined. The performance error achieved in this study of 15 flood events ranged from -2.06% to 5.82%.
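The rainfall-to-runoff transformation at the heart of such a unit hydrograph model is a discrete convolution of the rainfall excess series with the unit hydrograph ordinates. A minimal sketch (the ordinates and rainfall values below are illustrative placeholders, not the calibrated Kampung Kasipillay parameters):

```python
import numpy as np

def runoff_hydrograph(rain_excess_mm, uh_ordinates):
    """Convolve rainfall excess with unit-hydrograph ordinates to
    produce the direct-runoff hydrograph (discrete convolution)."""
    return np.convolve(rain_excess_mm, uh_ordinates)

# Hypothetical 1-hour unit hydrograph ordinates (m^3/s per mm of excess)
uh = np.array([0.1, 0.5, 0.3, 0.1])
# Hypothetical excess rainfall per hour (mm)
rain = np.array([2.0, 5.0, 1.0])

q = runoff_hydrograph(rain, uh)  # runoff ordinates, length 3 + 4 - 1
```

Because convolution conserves volume, total runoff equals total rainfall excess times the unit hydrograph volume, which gives a quick sanity check during trial-and-error calibration.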

  15. Empirical Study of Homogeneous and Heterogeneous Ensemble Models for Software Development Effort Estimation

    Directory of Open Access Journals (Sweden)

    Mahmoud O. Elish

    2013-01-01

    Full Text Available Accurate estimation of software development effort is essential for effective management and control of software development projects. Many software effort estimation methods have been proposed in the literature including computational intelligence models. However, none of the existing models proved to be suitable under all circumstances; that is, their performance varies from one dataset to another. The goal of an ensemble model is to manage each of its individual models’ strengths and weaknesses automatically, leading to the best possible decision being taken overall. In this paper, we have developed different homogeneous and heterogeneous ensembles of optimized hybrid computational intelligence models for software development effort estimation. Different linear and nonlinear combiners have been used to combine the base hybrid learners. We have conducted an empirical study to evaluate and compare the performance of these ensembles using five popular datasets. The results confirm that individual models are not reliable as their performance is inconsistent and unstable across different datasets. Although none of the ensemble models was consistently the best, many of them were frequently among the best models for each dataset. The homogeneous ensemble of support vector regression (SVR, with the nonlinear combiner adaptive neurofuzzy inference systems-subtractive clustering (ANFIS-SC, was the best model when considering the average rank of each model across the five datasets.

  16. Creep-fatigue modelling in structural steels using empirical and constitutive creep methods implemented in a strip-yield model

    Science.gov (United States)

    Andrews, Benjamin J.

    The phenomena of creep and fatigue have each been thoroughly studied. More recently, attempts have been made to predict the damage evolution in engineering materials due to combined creep and fatigue loading, but these formulations have been strictly empirical and have not been used successfully outside of a narrow set of conditions. This work proposes a new creep-fatigue crack growth model based on constitutive creep equations (adjusted to experimental data) and Paris law fatigue crack growth. Predictions from this model are compared to experimental data in two steels: modified 9Cr-1Mo steel and AISI 316L stainless steel. Modified 9Cr-1Mo steel is a high-strength steel used in the construction of pressure vessels and piping for nuclear and conventional power plants, especially for high temperature applications. Creep-fatigue and pure creep experimental data from the literature are compared to model predictions, and they show good agreement. Material constants for the constitutive creep model are obtained for AISI 316L stainless steel, an alloy steel widely used for temperature and corrosion resistance for such components as exhaust manifolds, furnace parts, heat exchangers and jet engine parts. Model predictions are compared to pure creep experimental data, with satisfactory results. Assumptions and constraints inherent in the implementation of the present model are examined. They include: spatial discretization, similitude, plane stress constraint and linear elasticity. It is shown that the implementation of the present model had a non-trivial impact on the model solutions in 316L stainless steel, especially the spatial discretization. Based on these studies, the following conclusions are drawn: 1. The constitutive creep model consistently performs better than the Nikbin, Smith and Webster (NSW) model for predicting creep and creep-fatigue crack extension. 2. 
Given a database of uniaxial creep test data, a constitutive material model such as the one developed for
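The fatigue component of such a creep-fatigue model is Paris-law crack growth, da/dN = C(ΔK)^m. A hedged sketch using per-cycle Euler integration; the constants C, m and the geometry factor Y below are illustrative only, not values fitted to modified 9Cr-1Mo or 316L data:

```python
import math

def grow_crack(a0_m, dsigma_mpa, C, m, cycles, Y=1.0):
    """Per-cycle Euler integration of Paris-law fatigue growth
    da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a)."""
    a = a0_m
    for _ in range(cycles):
        dK = Y * dsigma_mpa * math.sqrt(math.pi * a)
        a += C * dK ** m
    return a

# Illustrative constants only -- not fitted 9Cr-1Mo or 316L values
a_final = grow_crack(a0_m=1e-3, dsigma_mpa=100.0, C=1e-11, m=3.0, cycles=10_000)
```

A full creep-fatigue model would add a creep crack-extension increment per hold time to each cycle's fatigue increment.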

  17. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocities back to the corona. Using simple mass flux conservation, we show very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations which arise from these empirical observational models.

  18. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electron-hole pairs along the trajectory, and in a space charge zone the collected charge is considerably increased. The chronology of this charge funneling is described in a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to evaluate directly the transient current, the collected charge and the length of funneling with good agreement. The model can be extrapolated to more complex structures.

  19. Empirical concentration bounds for compressive holographic bubble imaging based on a Mie scattering model.

    Science.gov (United States)

    Chen, Wensheng; Tian, Lei; Rehman, Shakil; Zhang, Zhengyun; Lee, Heow Pueh; Barbastathis, George

    2015-02-23

    We use compressive in-line holography to image air bubbles in water and investigate the effect of bubble concentration on reconstruction performance by simulation. Our forward model treats bubbles as finite spheres and uses Mie scattering to compute the scattered field in a physically rigorous manner. Although no simple analytical bounds on maximum concentration can be derived within the classical compressed sensing framework due to the complexity of the forward model, the receiver operating characteristic (ROC) curves in our simulation provide an empirical concentration bound for accurate bubble detection by compressive holography at different noise levels, resulting in a maximum tolerable concentration much higher than that of the traditional back-propagation method.
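The ROC analysis behind such an empirical concentration bound reduces to ranking detection scores of true bubbles against background locations; the area under the ROC curve (AUC) summarizes detection accuracy at a given concentration and noise level. A minimal sketch with synthetic scores (not the paper's simulation outputs):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (bubble, background) score pairs
    ranked correctly, counting ties as half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Synthetic detection scores standing in for reconstruction amplitudes
auc = roc_auc([0.9, 0.8, 0.7], [0.4, 0.3, 0.8])
```

Sweeping the simulated concentration and recording where AUC drops below an acceptance threshold yields the kind of empirical bound the abstract describes.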

  20. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
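The Gelman-Rubin potential scale reduction factor used above as the convergence criterion compares within-chain and between-chain variance across parallel MCMC chains; values near 1 indicate convergence. A sketch on synthetic, well-mixed chains (not the EGF model's posterior samples):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m equal-length
    chains of one scalar parameter; values near 1 suggest convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()       # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    var_hat = (n - 1) / n * W + B / n           # pooled variance estimate
    return float(np.sqrt(var_hat / W))

# Four synthetic chains drawn from the same distribution
rhat = gelman_rubin(np.random.default_rng(0).normal(size=(4, 500)))
```

For chains stuck in different modes, B dominates W and R-hat rises well above 1, flagging non-convergence.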

  1. Empirical evaluation of the conceptual model underpinning a regional aquatic long-term monitoring program using causal modelling

    Science.gov (United States)

    Irvine, Kathryn M.; Miller, Scott; Al-Chokhachy, Robert K.; Archer, Erik; Roper, Brett B.; Kershner, Jeffrey L.

    2015-01-01

    Conceptual models are an integral facet of long-term monitoring programs. Proposed linkages between drivers, stressors, and ecological indicators are identified within the conceptual model of most mandated programs. We empirically evaluate a conceptual model developed for a regional aquatic and riparian monitoring program using causal models (i.e., Bayesian path analysis). We assess whether data gathered for regional status and trend estimation can also provide insights on why a stream may deviate from reference conditions. We target the hypothesized causal pathways for how anthropogenic drivers of road density, percent grazing, and percent forest within a catchment affect instream biological condition. We found instream temperature and fine sediments in arid sites and only fine sediments in mesic sites accounted for a significant portion of the maximum possible variation explainable in biological condition among managed sites. However, the biological significance of the direct effects of anthropogenic drivers on instream temperature and fine sediments were minimal or not detected. Consequently, there was weak to no biological support for causal pathways related to anthropogenic drivers’ impact on biological condition. With weak biological and statistical effect sizes, ignoring environmental contextual variables and covariates that explain natural heterogeneity would have resulted in no evidence of human impacts on biological integrity in some instances. For programs targeting the effects of anthropogenic activities, it is imperative to identify both land use practices and mechanisms that have led to degraded conditions (i.e., moving beyond simple status and trend estimation). Our empirical evaluation of the conceptual model underpinning the long-term monitoring program provided an opportunity for learning and, consequently, we discuss survey design elements that require modification to achieve question driven monitoring, a necessary step in the practice of

  2. Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data

    Science.gov (United States)

    Hedin, A. E.; Spencer, N. W.; Killeen, T. L.

    1988-01-01

    Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to above approximately 220 km where the data coverage is best and wind variations with height are reduced by viscosity. The data base is not adequate to detect solar cycle (F10.7) effects at this time but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not to full detail.

  3. An empirical model of the Earth's bow shock based on an artificial neural network

    Science.gov (United States)

    Pallocchia, Giuseppe; Ambrosino, Danila; Trenchi, Lorenzo

    2014-05-01

    All of the past empirical models of the Earth's bow shock shape were obtained by best-fitting given surfaces to sets of observed crossings. However, the issue of bow shock modeling can be addressed by means of artificial neural networks (ANN) as well. In this regard, we present a perceptron, a simple feedforward network, which computes the bow shock distance along a given direction using the two angular coordinates of that direction, the bow shock distance RF79 predicted by Formisano's model (F79), and the upstream Alfvénic Mach number Ma. After a brief description of the ANN architecture and training method, we discuss the results of a statistical comparison, performed over a test set of 1140 IMP8 crossings, between the prediction accuracies of the ANN and F79 models.
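Such a perceptron maps four scalar inputs (the two angular coordinates, RF79 and Ma) through a hidden layer to a single distance output. A forward-pass sketch with random illustrative weights, since the trained weights are not reproduced here; the layer size and activation are assumptions:

```python
import numpy as np

def perceptron_forward(x, W1, b1, w2, b2):
    """One-hidden-layer feedforward pass: tanh hidden units,
    linear output giving the bow shock distance."""
    h = np.tanh(W1 @ x + b1)
    return float(w2 @ h + b2)

# Random illustrative weights -- stand-ins for the trained values
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=5), 0.0

# Inputs: two angular coordinates, F79-predicted distance RF79, Mach number Ma
x = np.array([0.3, 1.2, 14.5, 8.0])
d = perceptron_forward(x, W1, b1, w2, b2)
```

Feeding the F79 prediction in as an input lets the network learn a correction to the analytic model rather than the full shape from scratch.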

  4. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  5. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for wireless indoor short-range office environments in the 4.3–7.3 GHz band is presented. The model is developed based on experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes path loss versus distance with a Gaussian random variable X accounting for shadow fading, using linear regression. The path-loss exponent n is fitted to frequency using a power function and modeled as a frequency-dependent Gaussian variable, as is the standard deviation σ of X. The presented work should be applicable to research on wireless channel characteristics in general indoor short-distance environments in the Internet of Things (IoT).
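The model family described here is the classic log-distance path-loss law with lognormal shadowing, PL(d) = PL(d0) + 10·n·log10(d/d0) + X_σ. A sketch with illustrative defaults (PL(d0) = 40 dB at d0 = 1 m and n = 2 are free-space-like placeholders, not the paper's fitted, frequency-dependent values):

```python
import math
import random

def mean_path_loss_db(d_m, n, pl0_db=40.0, d0_m=1.0):
    """Deterministic log-distance term: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

def sampled_path_loss_db(d_m, n, sigma_db, rng, **kw):
    """Adds the zero-mean Gaussian shadow-fading term X with std sigma."""
    return mean_path_loss_db(d_m, n, **kw) + rng.gauss(0.0, sigma_db)

pl = mean_path_loss_db(10.0, n=2.0)  # 40 + 20*log10(10) = 60 dB
rng = random.Random(42)
pl_shadowed = sampled_path_loss_db(10.0, n=2.0, sigma_db=3.0, rng=rng)
```

In the paper's formulation both n and σ become frequency-dependent quantities obtained by regression over the 4.3–7.3 GHz measurements.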

  6. An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects

    Science.gov (United States)

    Brown, Cliff

    2015-01-01

    An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and the resulting model evaluated. The model covers unheated and hot jet conditions (1 ≤ jet total temperature ratio ≤ 2.7) in the subsonic range (0.5 ≤ Ma ≤ 0.9), surface lengths 0.6 ≤ (axial distance from jet exit to surface trailing edge (inches)/nozzle exit diameter) ≤ 10, and surface standoff distances 0 ≤ (radial distance from jet lipline to surface (inches)/axial distance from jet exit to surface trailing edge (inches)) ≤ 1, using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system-level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.

  7. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    Full Text Available The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signals from SNR data while extracting the frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically by approximating the direct and reflected signals by second-order and fourth-order polynomials, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and allows soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed how noise affects the receiver SNR estimation and thus the model performance through simulations under the bare soil assumption. Results showed that reconstructed signals with a grazing angle of 5°–15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information. However, we found that the improvement in phase estimation was negligible. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model ensures good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground truth measurements. A deeper insight into the Larson model and the proposed model is given, which forms a possible explanation of this fact. Furthermore, frequency and phase information
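The first step of such curve fitting is separating the slowly varying direct-signal trend from the reflected-signal oscillation in SNR versus sin(elevation). A simplified sketch on synthetic data, using only the second-order direct-signal polynomial (the paper additionally fits a fourth-order polynomial to the reflected part):

```python
import numpy as np

def split_snr(sin_e, snr_db):
    """Remove a second-order polynomial trend (direct-signal proxy)
    from SNR vs. sin(elevation); the residual carries the reflected,
    soil-moisture-sensitive oscillation."""
    direct = np.polyval(np.polyfit(sin_e, snr_db, 2), sin_e)
    return direct, snr_db - direct

# Synthetic SNR: quadratic trend plus an interference oscillation
sin_e = np.linspace(0.1, 0.5, 200)
snr = 30 + 20 * sin_e - 5 * sin_e**2 + 2 * np.cos(40 * np.pi * sin_e)
direct, reflected = split_snr(sin_e, snr)
```

The frequency and phase of the residual oscillation are then estimated (e.g., by Lomb-Scargle or least-squares sinusoid fitting) and related to reflector height and soil moisture.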

  8. The gravity model specification for modeling international trade flows and free trade agreement effects: a 10-year review of empirical studies

    OpenAIRE

    Kepaptsoglou, Konstantinos; Karlaftis, Matthew G.; Tsamboulas, Dimitrios

    2010-01-01

    The gravity model has been extensively used in international trade research for the last 40 years because of its considerable empirical robustness and explanatory power. Since their introduction in the 1960s, gravity models have been used for assessing trade policy implications and, particularly recently, for analyzing the effects of Free Trade Agreements on international trade. The objective of this paper is to review the recent empirical literature on gravity models, highlight best practic...

  9. Semi-empirical model for carbon steel corrosion in long term geological nuclear waste disposal

    International Nuclear Information System (INIS)

    Foct, F.; Gras, J.M.

    2003-01-01

    In France and other countries, carbon and low alloy steels have been proposed as suitable materials for nuclear waste containers for long term geological disposal since, for such types of steels, general and localised corrosion can be fairly well predicted in geological environments (mainly argillaceous and granitic conditions) during the initial oxic and the following anoxic stages. This paper presents a model developed for the long term estimation of general and localised corrosion of carbon steel in argillaceous and granitic environments. In the case of localised corrosion, the model assumes that pitting and crevice corrosion propagation rates are similar. The estimations are based on numerous data coming from various experimental programmes conducted by the following laboratories: UKAEA (United Kingdom); NAGRA (Switzerland); SCK-CEN (Belgium); JNC (Japan) and ANDRA-CEA-EDF (France). From these data, the corrosion rates measured over long periods (from six months to several years) and derived from mass loss measurements have been selected to construct the proposed models. For general corrosion, the model takes into account an activation energy deduced from the experimental results (Arrhenius law) and proposes three equations for the corrosion rate: one for the oxic conditions, one for the early stage of the anoxic conditions and one for the long term anoxic corrosion. Concerning localised corrosion, a semi-empirical model, based on the evolution of the pitting factor (ratio between the maximum pit depth and the average general corrosion depth) as a function of the general corrosion depth, is proposed. This model is compared to other approaches where the maximum pit depth is directly calculated as a function of time, temperature and oxic or anoxic conditions. Finally, the presented semi-empirical models for long term corrosion estimation are applied to the case of nuclear waste storage. 
The results obtained by the different methods are then discussed and compared.

  10. A new empirical model for estimating the hydraulic conductivity of low permeability media

    Science.gov (United States)

    Qi, S.; Wen, Z.; Lu, C.; Shu, L.; Shao, J.; Huang, Y.; Zhang, S.; Huang, Y.

    2015-05-01

    Hydraulic conductivity (K) is one of the significant soil characteristics in terms of flow movement and solute transport. It has been recognized that K is statistically related to the grain-size distribution. Numerous models have been developed to reveal the relationship between K and the grain-size distribution of soil, but most of these are inappropriate for fine-grained media. Therefore, a new empirical model for estimating K of low permeability media was proposed in this study. In total, the values of K of 30 soil samples collected in the Jiangning District of Nanjing were measured using the single-ring infiltrometer method. The new model was developed using the percentages of sand, silt and clay-sized particles, and the first and the second rank moment of the grain-size through the moment method as predictor variables. Multivariate nonlinear regression analysis yielded a coefficient of determination (R²) of 0.75, indicating that this empirical model seems to provide a new approach for the indirect determination of hydraulic conductivity of low permeability media.
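A multivariate nonlinear regression of this kind is often linearized by fitting in log space. The abstract does not give the paper's functional form, so the sketch below uses a hypothetical power-law relation (K as a power of clay fraction) purely to illustrate the fitting machinery:

```python
import numpy as np

def fit_log_linear(X, k_obs):
    """OLS fit of ln K = b0 + b1*ln(x1) + ... -- a hypothetical
    power-law stand-in for the paper's multivariate regression."""
    A = np.column_stack([np.ones(len(k_obs)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(k_obs), rcond=None)
    return coef

# Synthetic samples: K falls with clay fraction (illustrative only)
rng = np.random.default_rng(2)
clay = rng.uniform(0.2, 0.6, 50)
k = 1e-6 * clay**-2.0 * np.exp(rng.normal(0.0, 0.05, 50))
b = fit_log_linear(clay[:, None], k)  # b[1] should recover about -2
```

Extending X with columns for sand and silt percentages and the grain-size moments reproduces the multivariate structure described in the abstract.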

  11. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine the positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. Through an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's action. The simulations are performed for the case of a constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, comparable with the ones in the empirical system of popular posts. In view of pure emotion-driven agents' actions, this type of comparison provides a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative

  12. Acculturation and mental health--empirical verification of J.W. Berry's model of acculturative stress.

    Science.gov (United States)

    Koch, M W; Bjerregaard, P; Curtis, C

    2004-01-01

    Many studies of mental health among ethnic minorities have used the concept of acculturation as an explanatory model, in particular J.W. Berry's model of acculturative stress. But Berry's theory has been empirically verified only a few times. The aims of the study were to examine whether Berry's hypothesis about the connection between acculturation and mental health can be empirically verified for Greenlanders living in Denmark, and to analyse whether acculturation plays a significant role in mental health among Greenlanders living in Denmark. The study used data from the 1999 Health Profile for Greenlanders in Denmark. As a measure of mental health we applied the General Health Questionnaire (GHQ-12). Acculturation was assessed from answers to questions about how the respondents value the fact that children maintain their traditional cultural identity as Greenlanders and how well the respondents speak Greenlandic and Danish. The statistical methods included binary logistic regression. We found no connection between Berry's definition of acculturation and mental health among Greenlanders in Denmark. On the other hand, our findings showed a significant relation between mental health and gender, age, marital status, occupation and long-term illness. The findings indicate that acculturation as Berry defines it plays a lesser role in mental health among Greenlanders in Denmark than socio-demographic and socio-economic factors. Therefore we cannot empirically verify Berry's hypothesis.

  13. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wider project to determine a methodology for establishing the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report considers only the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved
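
    As an illustration of the kind of empirical correlation such ratings feed into, below is a minimal sketch using two classical relations for the rock mass deformation modulus (Bieniawski for RMR > 50, Serafim & Pereira otherwise). This is a generic textbook pairing, not necessarily the exact relations the report adopts:

```python
def deformation_modulus_gpa(rmr):
    """Rock mass deformation modulus (GPa) estimated from the RMR rating,
    using two classical empirical correlations. Illustrative only; the
    report evaluates several such relations against the Aespoe data."""
    if rmr > 50:
        return 2.0 * rmr - 100.0           # Bieniawski (1978)
    return 10.0 ** ((rmr - 10.0) / 40.0)   # Serafim & Pereira (1983)
```

    The two branches meet at RMR = 50 only approximately (0 GPa vs 10 GPa), which is one reason practitioners compare several correlations rather than relying on a single rating scheme.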

  14. An Empirical LTE Smartphone Power Model with a View to Energy Efficiency Evolution

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Sørensen, Troels Bundgaard

    2014-01-01

    Smartphone users struggle with short battery life, and this affects their device satisfaction level and usage of the network. To evaluate how chipset manufacturers and mobile network operators can improve the battery life, we propose a Long Term Evolution (LTE) smartphone power model. The idea...... manufacturers to identify the main power consumers when taking actual operating characteristics into account. The smartphone power consumption model includes the main power consumers in the cellular subsystem as a function of receive and transmit power and data rate, and is fitted to empirical power consumption...... measurements made on state-of-the-art LTE smartphones. Discontinuous Reception (DRX) sleep mode is also modeled, because it is one of the most effective methods to improve smartphone battery life. Energy efficiency has generally improved with each Radio Access Technology (RAT) generation, and to see...
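
    A power model of the structure described (cellular-subsystem consumption as a function of receive/transmit power and data rates, plus DRX duty-cycling) might be sketched as follows. Every coefficient here is a placeholder assumption, not a fitted value from the paper's measurements:

```python
def cellular_power_mw(rx_dbm, tx_dbm, dl_mbps, ul_mbps):
    """Cellular-subsystem power draw (mW) as a function of received
    power, transmit power and data rates. All coefficients are
    placeholder assumptions for illustration."""
    base = 1200.0                                   # active-state baseline
    rx_term = 2.0 * (rx_dbm + 100.0)                # placeholder linear Rx trend
    tx_term = 50.0 + 0.4 * 10.0 ** (tx_dbm / 10.0)  # PA cost grows with Tx mW
    rate_term = 5.0 * dl_mbps + 8.0 * ul_mbps       # baseband per-Mbps cost
    return base + rx_term + tx_term + rate_term

def drx_average_mw(active_mw, sleep_mw, duty_cycle):
    """Average power under DRX: the radio duty-cycles between an active
    state and a deep-sleep state."""
    return duty_cycle * active_mw + (1.0 - duty_cycle) * sleep_mw
```

    The DRX helper makes the abstract's point concrete: shrinking the duty cycle moves the average toward the sleep-state floor, which is why DRX is such an effective battery-life lever.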

  15. The Social Networking Application Success Model: An Empirical Study of Facebook and Twitter

    Directory of Open Access Journals (Sweden)

    Carol X. J. Ou

    2016-06-01

    Social networking applications (SNAs) are among the fastest-growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded in DeLone and McLean's updated information systems (IS) success model. In addition to their original three dimensions of quality, i.e., system quality, information quality and service quality, we propose that a fourth dimension, networking quality, contributes to SNA success. We empirically examined the proposed research model with a survey of 168 Facebook and 149 Twitter users. The data validate the significant role of networking quality in determining the focal SNA's success. The theoretical and practical implications are discussed.

  16. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor that modifies the point reactor kinetics equations toward the real scenario is then determined. (author)

  17. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor that modifies the point reactor kinetics equations toward the real scenario is then determined. (author)
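
    The classic equations referred to above, reduced to a single delayed-neutron group and integrated by explicit finite differences, can be sketched as a baseline (the paper's empirical adjustment factor is not reproduced here; kinetic parameters are typical textbook values):

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=1e-4,
                   dt=1e-4, steps=10000):
    """One-delayed-group point kinetics integrated by explicit finite
    differences. Returns the relative neutron density after steps*dt
    seconds, starting from equilibrium at n = 1."""
    n = 1.0
    c = beta * n / (lam * gen_time)   # precursors start at equilibrium
    for _ in range(steps):
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        n, c = n + dn, c + dc
    return n
```

    At zero reactivity the equilibrium initial condition keeps the neutron density constant, while a small positive step reactivity produces the familiar prompt jump followed by slow growth on the delayed-neutron timescale.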

  18. Modeling Toothpaste Brand Choice: An Empirical Comparison of Artificial Neural Networks and Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Tolga Kaya

    2010-11-01

    The purpose of this study is to compare the performance of Artificial Neural Network (ANN) and Multinomial Probit (MNP) approaches in modeling the choice decision within the fast-moving consumer goods sector. To do this, based on 2597 toothpaste purchases of a panel sample of 404 households, choice models are built and their performances are compared on the 861 purchases of a test sample of 135 households. Results show that the ANN's predictions are better, while MNP is useful in providing marketing insight.

  19. Experimental validation of new empirical models of the thermal properties of food products for safe shipping

    Science.gov (United States)

    Hamid, Hanan H.; Mitchell, Mark; Jahangiri, Amirreza; Thiel, David V.

    2018-04-01

    Temperature-controlled food transport is essential for human safety and to minimise food waste. The thermal properties of food are important for determining the heat transfer during the transient stages of transportation (door opening during loading and unloading). For example, the temperature of most dairy products must be confined to a very narrow range (3-7 °C). If a predefined critical temperature is exceeded, the food is deemed spoiled and unfit for human consumption. An improved empirical model for the thermal conductivity and specific heat capacity of a wide range of food products was derived based on the food composition (moisture, fat, protein, carbohydrate and ash). The models, developed using linear regression analysis, were compared with published measured parameters in addition to previously published theoretical and empirical models. It was found that the maximum variation in the predicted thermal properties leads to less than 0.3 °C temperature change. The correlation coefficient for these models was 0.96. The t-Stat test (P-value >0.99) demonstrated that the model results are an improvement on previous works. The transient heat transfer based on the food composition and the temperature boundary conditions was found for a Camembert cheese (short cylindrical shape) using a multi-dimensional finite difference method code. The result was verified using the Heat Transfer Today (HTT) educational software, which is based on the finite volume method. The rise of the core temperature from the initial temperature (2.7 °C) to the maximum safe temperature in ambient air (20.24 °C) was predicted to take about 35.4 ± 0.5 min. The simulation results agree very well (+0.2 °C) with the measured temperature data. This improved model impacts temperature estimation during loading and unloading of trucks and provides a clear direction for temperature control in all refrigerated transport applications.
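
    A common composition-based baseline of the kind such regression models refine is a mass-fraction-weighted mixture rule. The component conductivities below are indicative literature values near refrigeration temperatures, and the cheese composition is hypothetical; neither comes from the paper:

```python
# Indicative component thermal conductivities (W/(m.K)) near chill
# temperatures; typical literature values, not regression output.
K_COMPONENT = {"water": 0.57, "protein": 0.20, "fat": 0.18,
               "carbohydrate": 0.22, "ash": 0.33}

def thermal_conductivity(mass_fractions):
    """Mass-fraction-weighted (parallel) mixture rule for food thermal
    conductivity: the simple composition-based baseline that empirical
    regression models of this kind refine."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-6
    return sum(K_COMPONENT[c] * x for c, x in mass_fractions.items())

# e.g. a soft cheese, mostly water and fat (hypothetical composition)
k_cheese = thermal_conductivity({"water": 0.52, "protein": 0.20,
                                 "fat": 0.25, "carbohydrate": 0.02,
                                 "ash": 0.01})
```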

  20. The Development of an Empirical Model of Mental Health Stigma in Adolescents.

    Science.gov (United States)

    Silke, Charlotte; Swords, Lorraine; Heary, Caroline

    2016-08-30

    Research on mental health stigma in adolescents is hampered by a lack of empirical investigation into the theoretical conceptualisation of stigma, as well as by the lack of validated stigma measures. This research aims to develop a model of public stigma toward depression in adolescents and to use this model to empirically examine whether stigma is composed of three separate dimensions (Stereotypes, Prejudice and Discrimination), as is theoretically proposed. Adolescents completed self-report measures assessing their stigmatising responses toward a fictional peer with depression. An exploratory factor analysis (EFA; N=332) was carried out on 58 items proposed to measure aspects of stigma. A confirmatory factor analysis (CFA; N=236) was then carried out to evaluate the validity of the observed stigma model. Finally, higher-order CFAs were conducted in order to assess whether the observed model supported the tripartite conceptualisation of stigma. The EFA returned a seven-factor model of stigma. These factors were designated as Dangerousness, Warmth & Competency, Responsibility, Negative Attributes, Prejudice, Classroom Discrimination and Friendship Discrimination. The CFA supported the goodness-of-fit of this seven-factor model. The higher-order CFAs indicated that these seven factors represented the latent constructs of Stereotypes, Prejudice and Discrimination, which in turn represented Stigma. Overall, the results support the tripartite conceptualisation of stigma and suggest that measurements of mental health stigma in adolescents should include assessments of all three dimensions. These results also highlight the importance of establishing valid and reliable measures for assessing stigma in adolescents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.


  2. Empirical models of Total Electron Content based on functional fitting over Taiwan during geomagnetic quiet condition

    Directory of Open Access Journals (Sweden)

    Y. Kakinami

    2009-08-01

    Empirical models of Total Electron Content (TEC) based on functional fitting over Taiwan (120° E, 24° N) have been constructed using data of the Global Positioning System (GPS) from 1998 to 2007 during geomagnetically quiet conditions (Dst > −30 nT). The models provide TEC as functions of local time (LT), day of year (DOY) and solar activity (F), which is represented by 1-162-day means of F10.7 and EUV. Other models based on median values have also been constructed and compared with the models based on functional fitting. Under the same values of the F parameter, the models based on functional fitting show better accuracy than those based on median values in all cases. The functional fitting model using daily EUV is the most accurate, with a root mean square error (RMS) of 9.2 TECu, compared with 10.4 TECu RMS for the 15-day running median and 14.7 TECu RMS for the International Reference Ionosphere 2007 (IRI2007) model. IRI2007 overestimates TEC when solar activity is low and underestimates TEC when solar activity is high. Though the average of the 81-day centered running mean of F10.7 and daily F10.7 is often used as an indicator of EUV, our results suggest that the average of the F10.7 mean from 1 to 54 days prior plus the current day is better than the 81-day centered running mean for reproducing TEC. This paper compares, for the first time, the median-based model with the functional fitting model. Results indicate that the functional fitting model yields better performance than the median-based one. Meanwhile, we find that EUV radiation is essential for deriving an optimal TEC.
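
    A functional fit of this kind (harmonics in local time and day-of-year plus a solar-activity term) can be sketched with ordinary least squares. The basis below is a deliberately simplified stand-in for the paper's fitting functions, exercised on synthetic data:

```python
import numpy as np

def design(lt, doy, f107):
    """Harmonic basis in local time (24 h) and day-of-year (annual)
    plus a linear solar-activity term; a simplified stand-in for the
    paper's fitting functions."""
    w = 2.0 * np.pi
    return np.column_stack([
        np.ones_like(lt),
        np.cos(w * lt / 24.0), np.sin(w * lt / 24.0),
        np.cos(w * doy / 365.25), np.sin(w * doy / 365.25),
        f107,
    ])

# synthetic "observations" to exercise the fit
rng = np.random.default_rng(0)
lt = rng.uniform(0.0, 24.0, 500)
doy = rng.uniform(1.0, 366.0, 500)
f107 = rng.uniform(70.0, 200.0, 500)
true = np.array([20.0, -6.0, 3.0, 4.0, -2.0, 0.15])
tec = design(lt, doy, f107) @ true + rng.normal(0.0, 0.5, 500)

# least-squares estimate of the basis coefficients
coef, *_ = np.linalg.lstsq(design(lt, doy, f107), tec, rcond=None)
```

    With enough quiet-time samples the fitted coefficients recover the underlying diurnal and annual structure, which is the essential mechanism behind this class of empirical TEC model.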

  3. Technical Note: SWIFT - a fast semi-empirical model for polar stratospheric ozone loss

    Science.gov (United States)

    Rex, M.; Kremser, S.; Huck, P.; Bodeker, G.; Wohltmann, I.; Santee, M. L.; Bernath, P.

    2014-07-01

    An extremely fast model to estimate the degree of stratospheric ozone depletion during polar winters is described. It is based on a set of coupled differential equations that simulate the seasonal evolution of vortex-averaged hydrogen chloride (HCl), nitric acid (HNO3), chlorine nitrate (ClONO2), active forms of chlorine (ClOx = Cl + ClO + 2 ClOOCl) and ozone (O3) on isentropic levels within the polar vortices. Terms in these equations account for the chemical and physical processes driving the time rate of change of these species. Eight empirical fit coefficients associated with these terms are derived by iteratively fitting the equations to vortex-averaged satellite-based measurements of HCl, HNO3 and ClONO2 and observationally derived ozone loss rates. The system of differential equations is not stiff and can be solved with a time step of one day, allowing many years to be processed per second on a standard PC. The inputs required are the daily fractions of the vortex area covered by polar stratospheric clouds and the fractions of the vortex area exposed to sunlight. The resultant model, SWIFT (Semi-empirical Weighted Iterative Fit Technique), provides a fast yet accurate method to simulate ozone loss rates in polar regions. SWIFT's capabilities are demonstrated by comparing measured and modeled total ozone loss outside of the training period.
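
    The structure of such a model (coupled vortex-averaged equations driven by daily PSC and sunlight fractions, integrated with a one-day step) can be sketched as a toy two-variable version. The rate constants below are invented for illustration, whereas the real SWIFT fits eight coefficients to satellite data:

```python
def swift_like(days, psc_frac, sun_frac, o3_start=3.0):
    """Toy vortex-averaged chlorine-activation / ozone-loss box model
    integrated with a one-day time step, mimicking SWIFT's structure.
    All rate constants are invented placeholders."""
    cly = 3.2            # total available chlorine (ppbv), assumed constant
    clx = 0.0            # active chlorine (ppbv)
    o3 = o3_start        # vortex-averaged ozone (ppmv)
    for d in range(days):
        activation = 0.3 * psc_frac[d] * (cly - clx)  # PSCs activate chlorine
        deactivation = 0.05 * sun_frac[d] * clx       # reservoirs re-form
        clx += activation - deactivation
        o3 -= 0.005 * clx * sun_frac[d]               # catalytic loss in sunlight
    return o3, clx

# a PSC-rich first half of winter followed by a PSC-free spring
o3_end, clx_end = swift_like(120, [1.0] * 60 + [0.0] * 60, [0.5] * 120)
```

    Even in this caricature the key coupling survives: ozone loss requires both prior chlorine activation (PSCs) and sunlight, which is why the daily PSC and sunlit vortex fractions are the only inputs SWIFT needs.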

  4. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of the carbon atoms were analyzed as a function of the three potentials mentioned above. The ReaxFF potential is capable of modeling the change in the structure of amorphous carbon, and the MD results are in good agreement with experimental results and density functional theory (DFT) at low densities of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended for modeling amorphous carbon at high densities of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.

  6. A new model of Social Support in Bereavement (SSB): An empirical investigation with a Chinese sample.

    Science.gov (United States)

    Li, Jie; Chen, Sheying

    2016-01-01

    Bereavement can be an extremely stressful experience, while the protective effect of social support is expected to facilitate the adjustment after loss. The ingredients or elements of social support, as illustrated by a new model of Social Support in Bereavement (SSB), however, require empirical evidence. Who might be the most effective providers of social support in bereavement has also been understudied, particularly within specific cultural contexts. The present study uses both qualitative and quantitative analyses to explore these two important issues among bereaved Chinese families and individuals. The results show that the three major types of social support described by the SSB model were frequently acknowledged by the participants in this study. Aside from relevant books, family and friends were the primary sources of social support, who in turn received support from their workplaces. Helping professionals turned out to be the least significant source of social support in the Chinese cultural context. Differences by gender, age, and bereavement time were also found. The findings provide empirical evidence for the conceptual model of Social Support in Bereavement and also offer culturally relevant guidance for providing effective support to the bereaved.

  7. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. The response may depend on a function of a covariate rather than on the covariate directly. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
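
    The pool-adjacent-violators algorithm mentioned above is short enough to sketch directly. This is the standard weighted least-squares version for a non-decreasing fit, not the paper's empirical-likelihood extension:

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm for isotonic (monotone
    non-decreasing) least-squares regression. Returns the fitted
    values, one per observation."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [mean, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

    Each violating pair is pooled into a weighted average, so the output is the closest non-decreasing sequence to the data in the weighted least-squares sense.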

  8. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed by fitting monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of the annual mean wind and the amplitudes and phases of the annual and semiannual harmonics of wind variations, are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of a comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of an annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)

  9. Modelling of volumetric properties of binary and ternary mixtures by CEOS, CEOS/GE and empirical models

    Directory of Open Access Journals (Sweden)

    BOJAN D. DJORDJEVIC

    2007-12-01

    Although many cubic equations of state coupled with van der Waals one-fluid mixing rules, including temperature-dependent interaction parameters, are sufficient for representing phase equilibria and excess properties (excess molar enthalpy HE, excess molar volume VE, etc.), difficulties appear in the correlation and prediction of thermodynamic properties of complex mixtures at various temperature and pressure ranges. Great progress has been made by a new approach based on CEOS/GE models. This paper reviews the last six years of progress achieved in modelling the volumetric properties of complex binary and ternary systems of non-electrolytes by the CEOS and CEOS/GE approaches. In addition, the vdW1 and TCBT models were used to estimate the excess molar volume VE of the ternary systems methanol + chloroform + benzene and 1-propanol + chloroform + benzene, as well as the corresponding binaries methanol + chloroform, chloroform + benzene, 1-propanol + chloroform and 1-propanol + benzene at 288.15-313.15 K and atmospheric pressure. Prediction of VE for both ternaries by empirical models (Radojković, Kohler, Jacob-Fitzner, Colinet, Tsao-Smith, Toop, Scatchard, Rastogi) was also performed.
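
    As an example of the empirical models listed, a Radojković-type prediction sums the three binary Redlich-Kister contributions evaluated directly at the ternary mole fractions. This is a sketch: in practice the coefficient lists A_ij are fitted to the corresponding binary VE data, and the values used in the usage line are arbitrary:

```python
def redlich_kister(xi, xj, A):
    """Binary Redlich-Kister excess molar volume contribution at
    mole fractions (xi, xj), with polynomial coefficients A."""
    return xi * xj * sum(a * (xi - xj) ** k for k, a in enumerate(A))

def ternary_ve(x1, x2, x3, A12, A13, A23):
    """Radojkovic-type prediction: the ternary V^E is the sum of the
    three binary Redlich-Kister terms evaluated at the ternary mole
    fractions (A_ij fitted to binary data; arbitrary values below)."""
    return (redlich_kister(x1, x2, A12)
            + redlich_kister(x1, x3, A13)
            + redlich_kister(x2, x3, A23))

# equimolar ternary point with arbitrary single-coefficient binaries
ve = ternary_ve(1/3, 1/3, 1/3, [-1.0], [0.5], [0.2])
```

    A useful sanity check on any such combining rule is that it collapses to the correct binary expression when one mole fraction goes to zero.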

  10. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
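
    For reference, the traditional two-point semilogarithmic (exponential) interpolation that the study benchmarks against can be sketched as follows; it is exact only for a monoenergetic beam, which is precisely the limitation that motivates the Lambert W model for polyenergetic spectra:

```python
import math

def hvl_semilog(t1, y1, t2, y2):
    """Estimate the half-value layer (HVL) from two narrow-beam
    transmission measurements (thickness t, transmission y) by
    semilogarithmic (exponential) interpolation."""
    # fit ln(y) = b - mu * t through the two measured points
    mu = (math.log(y1) - math.log(y2)) / (t2 - t1)  # effective atten. coeff.
    b = math.log(y1) + mu * t1                      # ln(transmission) at t = 0
    return (b + math.log(2.0)) / mu                 # thickness where y = 0.5
```

    For a truly exponential beam (constant mu) the estimate is independent of which two points are chosen; for a hardening polyenergetic beam it drifts with the choice of points, which is the sensitivity the Lambert W interpolation reduces.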

  11. Empirical model of TEC response to geomagnetic and solar forcing over Balkan Peninsula

    Science.gov (United States)

    Mukhtarov, P.; Andonov, B.; Pancheva, D.

    2018-01-01

    An empirical total electron content (TEC) model response to external forcing over the Balkan Peninsula (35°N-50°N; 15°E-30°E) is built by using the Center for Orbit Determination in Europe (CODE) TEC data for 17 full years, January 1999 - December 2015. The external forcing includes geomagnetic activity described by the Kp index and solar activity described by the solar radio flux F10.7. The model describes the most probable spatial distribution and temporal variability of the externally forced TEC anomalies, assuming that they depend mainly on latitude, Kp index, F10.7 and LT. The anomalies are expressed by the relative deviation of the TEC from its 15-day mean, rTEC, where the mean is calculated from the 15 preceding days. The approach for building this regional model is similar to that of the global TEC model reported by Mukhtarov et al. (2013a); however, it includes two important improvements related to the short-term variability of solar activity and amended geomagnetic forcing using a "modified" Kp index. The quality assessment of the new model construction procedure, in terms of the modeling error calculated for the period 1999-2015, indicates a significant improvement over the global TEC model (Mukhtarov et al., 2013a). The short-term prediction capabilities of the model, based on the error calculations for 2016, are improved as well. To demonstrate how the model reproduces the rTEC response to external forcing, three geomagnetic storms, accompanied also by short-term solar activity variations, which occurred at different seasons and solar activity conditions, are presented.

  12. Modeling Potential Energy Surfaces: From First-Principle Approaches to Empirical Force Fields

    Directory of Open Access Journals (Sweden)

    Pietro Ballone

    2013-12-01

    Explicit or implicit expressions of potential energy surfaces (PES) represent the basis of our ability to simulate condensed matter systems, possibly understanding and sometimes predicting their properties by purely computational methods. The paper provides an outline of the major approaches currently used to approximate and represent PESs and contains a brief discussion of what still needs to be achieved. The paper also analyses the relative roles of empirical and ab initio methods, which represent a crucial issue affecting the future of modeling in chemical physics and materials science.

  13. Emerge - An empirical model for the formation of galaxies since z ˜ 10

    Science.gov (United States)

    Moster, Benjamin P.; Naab, Thorsten; White, Simon D. M.

    2018-03-01

    We present EMERGE, an Empirical ModEl for the foRmation of GalaxiEs, describing the evolution of individual galaxies in large volumes from z ˜ 10 to the present day. We assign a star formation rate to each dark matter halo based on its growth rate, which specifies how much baryonic material becomes available, and the instantaneous baryon conversion efficiency, which determines how efficiently this material is converted to stars, thereby capturing the baryonic physics. Satellites are quenched following the delayed-then-rapid model, and they are tidally disrupted once their subhalo has lost a significant fraction of its mass. The model is constrained with observed data extending out to high redshift. The empirical relations are very flexible, and the model complexity is increased only if required by the data, assessed by several model selection statistics. We find that, for the same final halo mass, galaxies can have very different star formation histories. Galaxies that are quenched at z = 0 typically have a higher peak star formation rate compared to their star-forming counterparts. EMERGE predicts stellar-to-halo mass ratios for individual galaxies and introduces scatter self-consistently. We find that at fixed halo mass, passive galaxies have a higher stellar mass on average. The intra-cluster mass in massive haloes can be up to 8 times larger than the mass of the central galaxy. Clustering for star-forming and quenched galaxies is in good agreement with observational constraints, indicating a realistic assignment of galaxies to haloes.

  14. Proposed core competencies and empirical validation procedure in competency modeling: confirmation and classification

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Baczynska

    2016-03-01

    Full Text Available Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from .60 to .83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies, we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
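As an illustration of the reliability step reported here (scale alphas of .60 to .83), a minimal Cronbach's alpha computation might look like this; the use of sample (ddof=1) variances is an assumption, and the function is a generic sketch rather than the authors' code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) /
    variance of the total score). Values near 1 indicate high internal
    consistency of the scale.
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)
```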

  15. An empirical Bayes model using a competition score for metabolite identification in gas chromatography mass spectrometry

    Directory of Open Access Journals (Sweden)

    Kim Seongho

    2011-10-01

    Full Text Available Abstract Background Mass spectrometry (MS)-based metabolite profiling has become increasingly popular for scientific and biomedical studies, primarily due to recent technological developments such as comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS). Nevertheless, the identification of metabolites from complex samples is subject to errors. Statistical/computational approaches to improve identification accuracy and false positive estimation are in great need. We propose an empirical Bayes model which accounts for a competition score in addition to the similarity score to tackle this problem. The competition score characterizes the propensity of a candidate metabolite to be matched to some spectrum, based on the metabolite's similarity scores with other spectra in the library searched against. The competition score allows the model to properly assess the evidence on the presence/absence status of a metabolite based on whether or not the metabolite is matched to some sample spectrum. Results With a mixture of metabolite standards, we demonstrated that our method has better identification accuracy than four other existing methods. Moreover, our method provides a reliable false discovery rate estimate. We also applied our method to data collected from the plasma of a rat and identified some metabolites from the plasma with the false discovery rate controlled. Conclusions We developed an empirical Bayes model for metabolite identification and validated the method through a mixture of metabolite standards and rat plasma. The results show that our hierarchical model improves identification accuracy as compared with methods that do not structurally model the involved variables. The improvement in identification accuracy is likely to facilitate downstream analysis such as peak alignment and biomarker identification. Raw data and result matrices can be found at http

  16. Monthly and Fortnightly Tidal Variations of the Earth's Rotation Rate Predicted by a TOPEX/POSEIDON Empirical Ocean Tide Model

    Science.gov (United States)

    Desai, S.; Wahr, J.

    1998-01-01

    Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.

  17. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating precipitable water vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0), derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (from 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C-1 to 0.106 °C-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
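The linear PWV-Td relationship reported above (slopes of 0.073 to 0.106 per °C) amounts to an ordinary least-squares fit; a minimal sketch, with variable names that are assumptions rather than the paper's notation:

```python
import numpy as np

def fit_pwv_dewpoint(td, pwv):
    """Least-squares slope/intercept/correlation of a PWV vs. Td relation.

    Fits PWV = a * Td + b and returns (a, b, R), mirroring the kind of
    linear parameterization described in the abstract.
    """
    td = np.asarray(td, dtype=float)
    pwv = np.asarray(pwv, dtype=float)
    a, b = np.polyfit(td, pwv, 1)          # degree-1 fit: slope first
    r = np.corrcoef(td, pwv)[0, 1]         # Pearson correlation coefficient
    return a, b, r
```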

  18. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and an empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire on rural general practitioner competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed with the questionnaire. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on work performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions were basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ2/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, suggesting a good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-directed cultivation of rural medical students, and competency-based performance management of rural general practitioners.

  19. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model...... parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...... families of prior distributions. The regional method is applied to flood records from 48 New Zealand catchments. In the case of a strongly heterogeneous intersite correlation structure, the GLS procedure provides a more efficient estimate of the regional GP shape parameter as compared to the usually...

  20. Influence of the empirical coefficients of cavitation model on predicting cavitating flow in the centrifugal pump

    Directory of Open Access Journals (Sweden)

    Hou-lin Liu

    2014-03-01

    Full Text Available The phenomenon of cavitation is an unsteady flow that is nearly inevitable in pumps. It degrades pump performance, produces vibration and noise, and can even damage the pump. Hence, improving the accuracy of numerical predictions of pump cavitation performance is highly desirable. In the present work, a homogeneous model, the Zwart-Gerber-Belamri cavitation model, is used to investigate the influence of the empirical coefficients on predicting the cavitation performance of a centrifugal pump. Three coefficients are analyzed, namely the nucleation site radius and the evaporation and condensation coefficients. Experiments are also carried out to validate the numerical simulations. The results indicate that, to obtain a precise prediction, decreasing the initial bubble radius, decreasing the condensation coefficient, or increasing the evaporation coefficient are all feasible approaches, with decreasing the condensation coefficient being the most effective.
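The Zwart-Gerber-Belamri model named here has a standard published form whose evaporation and condensation source terms contain exactly the three coefficients under study. A minimal sketch, assuming the commonly quoted default constants (the defaults and the sign convention are assumptions, not values from this study):

```python
import math

# Commonly quoted ZGB defaults (assumptions for illustration):
ALPHA_NUC = 5e-4   # nucleation site volume fraction
R_B = 1e-6         # nucleation site (bubble) radius, m
F_VAP, F_COND = 50.0, 0.01  # empirical evaporation/condensation coefficients

def zgb_mass_transfer(p, p_v, alpha_v, rho_v, rho_l,
                      f_vap=F_VAP, f_cond=F_COND,
                      r_b=R_B, alpha_nuc=ALPHA_NUC):
    """Zwart-Gerber-Belamri interphase mass-transfer rate (kg m^-3 s^-1).

    Evaporation is active where local pressure p < vapor pressure p_v,
    condensation where p > p_v; positive return = vapor production.
    """
    if p < p_v:
        return (f_vap * 3.0 * alpha_nuc * (1.0 - alpha_v) * rho_v / r_b
                * math.sqrt(2.0 / 3.0 * (p_v - p) / rho_l))
    return (-f_cond * 3.0 * alpha_v * rho_v / r_b
            * math.sqrt(2.0 / 3.0 * (p - p_v) / rho_l))
```

Raising `f_vap` (or lowering `f_cond` or `r_b`) changes the predicted vapor production, which is the sensitivity the abstract investigates.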

  2. An extended technology acceptance model for detecting influencing factors: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Mohamd Hakkak

    2013-11-01

    Full Text Available The rapid diffusion of the Internet has radically changed the delivery channels of the financial services industry. The aim of this study is to identify the factors that encourage customers to adopt online banking in Khorramabad. The research constructs are developed based on the technology acceptance model (TAM) and incorporate some additional important control variables. The model is empirically verified to study the factors influencing the online banking adoption behavior of 210 customers of Tejarat Bank in Khorramabad. The findings suggest that the quality of the internet connection, awareness of online banking and its benefits, social influence and computer self-efficacy have significant impacts on the perceived usefulness (PU) and perceived ease of use (PEOU) of online banking acceptance. Trust and resistance to change also have significant impacts on the attitude towards the likelihood of adopting online banking.

  3. Multimission empirical ocean tide modeling for shallow waters and polar seas

    DEFF Research Database (Denmark)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2011-01-01

    A new global ocean tide model named DTU10 (developed at Technical University of Denmark) representing all major diurnal and semidiurnal tidal constituents is proposed based on an empirical correction to the global tide model FES2004 (Finite Element Solutions), with residual tides determined using...... to recover twice the spatial variations of the tidal signal which is particularly important in shallow waters where the spatial scale of the tidal signal is scaled down. Outside the +/- 66 degrees parallel combined Envisat, GEOSAT Follow-On, and ERS-2, data sets have been included to solve for the tides up...... to the +/- 82 degrees parallel. A new approach to removing the annual sea level variations prior to estimating the residual tides significantly improved tidal determination of diurnal constituents from the Sun-synchronous satellites (e. g., ERS-2 and Envisat) in the polar seas. Extensive evaluations with six...

  4. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain results in a significant spread in the single-bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch’s position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios with different peak and integrated luminosities. The most important options for after the long shutdown (LS) 1 and 2 are evaluated and compared.

  5. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enables the high-throughput survey of mRNA expression profiles at the genomic level; however, the data presents a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene-by-gene models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure which can be regarded as an ’evidence-based’ weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, fβ(yt|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10-5) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model
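The core weighting idea described above (each observation's weight is a power of its likelihood, so outliers are downweighted) can be sketched for a Gaussian model; the Gaussian choice and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def beta_weights(y, mu, sigma, beta=0.1):
    """Outlier-downweighting in the beta-likelihood spirit.

    Each observation's weight is w_t = f(y_t | theta)^beta for a Gaussian
    density f, so low-likelihood (outlying) observations receive small
    weights and exert little influence on the inference.
    """
    y = np.asarray(y, dtype=float)
    f = (np.exp(-0.5 * ((y - mu) / sigma) ** 2)
         / (sigma * np.sqrt(2.0 * np.pi)))   # Gaussian density at each y
    return f ** beta
```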

  6. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  7. A model of psychological evaluation of educational environment and its empirical indicators

    Directory of Open Access Journals (Sweden)

    E. B. Laktionova

    2013-04-01

    Full Text Available The aim of the study is to identify ways of performing a complex psychological assessment of educational environment quality and of the conditions that affect the positive personal development of its members. The solution to this problem is to develop science-based content and technological support for psychological evaluation of the educational environment. The purpose of the study was the theoretical rationale for, and empirical testing of, a model of psychological examination of the educational environment. The study is based on the assumption that, in order to assess the quality of the educational environment in terms of its personality-developing potential, we need to create a model of psychological examination as a special developmental system, reflected in terms of the personal characteristics of its subjects. The empirical material is based on a study sample of 717 students and 438 teachers from 28 educational institutions that participated in the program of urban pilot sites of the Department of Education of Moscow. In total, 1,155 people took part in the study.

  8. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Natalia MOROIANU-DUMITRESCU

    2015-06-01

    Full Text Available During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the new member states from Central and Eastern Europe. The most usual empirical approaches use β- and σ-convergence, originally developed in a series of neo-classical models. To date, the EU integration process has proven to be accompanied by an increase in regional inequalities. In order to determine whether a similar increase in inequalities exists between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides an empirical modelling of economic convergence that evaluates the level and evolution of inter-county inequalities over the period 1995-2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representations, including non-linear regression, and the associated tables summarizing the numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasised convergence in the middle time subinterval can be correlated with the drastic pre-accession changes at the economic, political and social level, and with the opening of the Schengen borders for the Romanian labor force in 2002.
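The weighted coefficient of variation used in such σ-convergence studies has a standard form (a Williamson-type index); a minimal sketch, with the normalization of population shares an assumption for safety:

```python
import numpy as np

def weighted_cv(y, weights):
    """Population-weighted coefficient of variation across regions.

    cv_w = sqrt(sum_i w_i * (y_i - ybar)^2) / ybar, where ybar is the
    weighted mean of per-capita GDP y_i and w_i are population shares.
    A falling cv_w over time indicates sigma-convergence.
    """
    y = np.asarray(y, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to population shares
    ybar = np.sum(w * y)
    return np.sqrt(np.sum(w * (y - ybar) ** 2)) / ybar
```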

  9. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffers changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  10. Comparing mechanistic and empirical approaches to modeling the thermal niche of almond

    Science.gov (United States)

    Parker, Lauren E.; Abatzoglou, John T.

    2017-09-01

    Delineating locations that are thermally viable for cultivating high-value crops can help to guide land use planning, agronomics, and water management. Three modeling approaches were used to identify the potential distribution of, and key thermal constraints on, almond cultivation across the southwestern United States (US): two empirical species distribution models (SDMs), one using commonly used bioclimatic variables (traditional SDM) and the other using more physiologically relevant climate variables (nontraditional SDM), and a mechanistic model (MM) developed using published thermal limitations from field studies. While the models showed comparable results over the majority of the domain, including over existing croplands with high almond density, the MM suggested the greatest potential for geographic expansion of almond cultivation, with frost susceptibility and insufficient heat accumulation being the primary thermal constraints in the southwestern US. The traditional SDM over-predicted almond suitability in locations shown by the MM to be limited by frost, whereas the nontraditional SDM showed greater agreement with the MM in these locations, indicating that incorporating physiologically relevant variables in SDMs can improve predictions. Finally, opportunities for geographic expansion of almond cultivation under current climatic conditions in the region may be limited, suggesting that increasing production may rely on agronomic advances and densifying almond plantations in existing locations.

  11. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.

    2013-04-27

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as the El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short-memory processes exemplified by an AR(1) model to long-memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
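The short-memory end of the stochastic models described above is the AR(1) process; a minimal simulation sketch (the long-memory, fractionally differenced alternative is not reproduced here):

```python
import numpy as np

def simulate_ar1(n, phi, sigma=1.0, seed=0):
    """Simulate an AR(1) internal-variability null model.

    x_t = phi * x_{t-1} + eps_t, with eps_t ~ N(0, sigma^2).
    |phi| < 1 gives a stationary, short-memory process whose lag-1
    autocorrelation approaches phi.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x
```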

  12. EMPIRICAL MODELS FOR PERFORMANCE OF DRIPPERS APPLYING CASHEW NUT PROCESSING WASTEWATER

    Directory of Open Access Journals (Sweden)

    KETSON BRUNO DA SILVA

    2016-01-01

    Full Text Available The objective of this work was to develop empirical models for the hydraulic performance of drippers operating with cashew nut processing wastewater, as a function of operating time, operating pressure and effluent quality. The experiment consisted of two factors: dripper type (D1 = 1.65 L h-1, D2 = 2.00 L h-1 and D3 = 4.00 L h-1) and operating pressure (70, 140, 210 and 280 kPa), with three replications. The flow variation coefficient (FVC), distribution uniformity coefficient (DUC) and the physicochemical and biological characteristics of the effluent were evaluated every 20 hours until completing 160 hours of operation. Data were interpreted through simple and multiple linear stepwise regression models. The regression models fitted to FVC and DUC as a function of operating time were square root, linear and quadratic, in 17%, 17% and 8%, and 17%, 17% and 0% of cases, respectively. The regression models fitted to FVC and DUC as a function of operating pressure were square root, linear and quadratic, in 11%, 22% and 0%, and 0%, 22% and 11% of cases, respectively. Multiple linear regression showed that the dissolved solids content is the main wastewater characteristic interfering with the FVC and DUC values of drip units D1 (1.65 L h-1) and D3 (4.00 L h-1) operating at a working pressure of 70 kPa (P1).

  13. Empirical Model to Extrapolate Aerosol Optical Depth (AOD) Over Cryospheric Portion of Nepal Himalaya

    Science.gov (United States)

    Bhattarai, B. C.; Burkhart, J. F.; Xu, C. Y.; Stordal, F.

    2017-12-01

    A multivariate regression model to estimate the aerosol optical depth (AOD) over the cryospheric portion of the Nepalese Himalaya is introduced. The prediction uses five parameters: three geophysical parameters (altitude, longitude and latitude) and two meteorological variables (total columnar water vapor and surface pressure). The geophysical parameters were acquired from a 30 m resolution ASTER digital elevation model (DEM), and the meteorological parameters were extracted from daily ERA-Interim datasets. Seasonal and interannual variability in aerosol optical depth is investigated using the MODIS (Moderate Resolution Imaging Spectroradiometer) product over Nepal during 2000-2015. The results show that AOD is higher in winter, followed by autumn, than in summer, and is elevation dependent. The empirical model developed from spatially averaged data (2000-2015) is able to predict AOD with a coefficient of determination of 0.93. The model presented in this paper could potentially be applied to other mountain regions in mountain climate research.
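The record does not give the regression's functional form or coefficients, so the sketch below simply fits a five-predictor ordinary least-squares model in the spirit of the abstract; all variable names are assumptions:

```python
import numpy as np

def fit_aod_regression(alt, lon, lat, tcwv, sp, aod):
    """Five-predictor linear AOD model via ordinary least squares.

    Solves AOD ~ b0 + b1*alt + b2*lon + b3*lat + b4*tcwv + b5*sp and
    returns the coefficient vector [b0..b5].
    """
    X = np.column_stack([np.ones_like(alt), alt, lon, lat, tcwv, sp])
    coef, *_ = np.linalg.lstsq(X, aod, rcond=None)
    return coef
```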

  14. LOCAL INDEPENDENCE FEATURE SCREENING FOR NONPARAMETRIC AND SEMIPARAMETRIC MODELS BY MARGINAL EMPIRICAL LIKELIHOOD

    Science.gov (United States)

    Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao

    2015-01-01

    We consider an independence feature screening technique for identifying explanatory variables that locally contribute to the response variable in high-dimensional regression analysis. Without requiring a specific parametric form of the underlying data model, our approach accommodates a wide spectrum of nonparametric and semiparametric model families. To detect the local contributions of explanatory variables, our approach constructs empirical likelihood locally in conjunction with marginal nonparametric regressions. Since our approach actually requires no estimation, it is advantageous in scenarios such as the single-index models where even specification and identification of a marginal model is an issue. By automatically incorporating the level of variation of the nonparametric regression and directly assessing the strength of data evidence supporting local contribution from each explanatory variable, our approach provides a unique perspective for solving feature screening problems. Theoretical analysis shows that our approach can handle data dimensionality growing exponentially with the sample size. With extensive theoretical illustrations and numerical examples, we show that the local independence screening approach performs promisingly. PMID:27242388

  15. A mathematical and empirical model of the performance of a membrane-coupled anaerobic fermentor.

    Science.gov (United States)

    Kim, Jong-Oh; Chung, Jinwook

    2011-09-01

    A mathematical model was developed to describe the performance of a membrane-coupled anaerobic fermentor (MCAF)-based process. In our experimental results, higher volatile fatty acid (VFA) recovery ratios were obtained at greater filtration ratios. The VFA recovery ratio peaked at an HRT of 12 h and a membrane filtration ratio of 0.95 at a constant SRT. Based on our simulation, the HRT and filtration ratio should be maintained at less than 1 day and above 0.9, respectively, to exceed an organic materials recovery ratio of 35% at a constant SRT of 10 days. Our empirical model, which predicts the effluent VFA concentration (Co), described the performance of the MCAF adequately. The model demonstrated that the outlet VFA concentration was a function of three independent parameters: HLR, input organic concentration (Ci), and membrane filtration ratio (Φ). Multiple regression analyses were conducted using 50 measurements of the MCAF, yielding the following relationship: Co = 0.278 Φ^1.13 Ci^1.93 HLR^0.11. The correlation coefficient (R^2) was 0.90. The simulation results were consistent with the observed data; therefore, due to its simplicity, this model predicts the effluent VFA concentration of an MCAF adequately.
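
The fitted power-law relationship can be evaluated directly; the input values below are illustrative, not measurements from the study:

```python
def effluent_vfa(phi: float, c_in: float, hlr: float) -> float:
    """Empirical power-law model from the abstract: Co = 0.278 * phi^1.13 * Ci^1.93 * HLR^0.11."""
    return 0.278 * phi**1.13 * c_in**1.93 * hlr**0.11

# Illustrative inputs: filtration ratio 0.95, assumed influent concentration and loading rate
co = effluent_vfa(0.95, 2.0, 1.5)
print(f"predicted effluent VFA concentration: {co:.3f}")
```

Note the strong sensitivity to the influent concentration (exponent 1.93) compared with the loading rate (exponent 0.11), which is what makes the model useful despite its simplicity.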

  16. On the Data Mining Technology Applied to Active Marketing Model of International Luxury Marketing Strategy in China— An Empirical Analysis

    OpenAIRE

    Qishen Zhou; Shanhui Wang; Zuowei Yin

    2013-01-01

     This paper emphasizes the importance of active marketing in the customer relationship management. Especially, the data mining technology is applied to establish an active marketing model to empirically analyze the condition of the AH Jewelry Company. Michael Porter's Five Forces Model is employed to assess and calculate the similarity in the active marketing model. Then, the questionnaire analysis on the customer relationship management model is carried out to explain the target market and t...

  17. How to kill a tree: empirical mortality models for 18 species and their performance in a dynamic forest model.

    Science.gov (United States)

    Hülsmann, Lisa; Bugmann, Harald; Cailleret, Maxime; Brang, Peter

    2018-03-01

    Dynamic Vegetation Models (DVMs) are designed to be suitable for simulating forest succession and species range dynamics under current and future conditions based on mathematical representations of the three key processes regeneration, growth, and mortality. However, mortality formulations in DVMs are typically coarse and often lack an empirical basis, which increases the uncertainty of projections of future forest dynamics and hinders their use for developing adaptation strategies to climate change. Thus, sound tree mortality models are highly needed. We developed parsimonious, species-specific mortality models for 18 European tree species using >90,000 records from inventories in Swiss and German strict forest reserves along a considerable environmental gradient. We comprehensively evaluated model performance and incorporated the new mortality functions in the dynamic forest model ForClim. Tree mortality was successfully predicted by tree size and growth. Only a few species required additional covariates in their final model to consider aspects of stand structure or climate. The relationships between mortality and its predictors reflect the indirect influences of resource availability and tree vitality, which are further shaped by species-specific attributes such as maximum longevity and shade tolerance. Considering that the behavior of the models was biologically meaningful, and that their performance was reasonably high and not impacted by changes in the sampling design, we suggest that the mortality algorithms developed here are suitable for implementation and evaluation in DVMs. In the DVM ForClim, the new mortality functions resulted in simulations of stand basal area and species composition that were generally close to historical observations. However, ForClim performance was poorer than when using the original, coarse mortality formulation. 
The difficulties of simulating stand structure and species composition, which were most evident for Fagus sylvatica L

  18. Modeling regional cropland GPP by empirically incorporating sun-induced chlorophyll fluorescence into a coupled photosynthesis-fluorescence model

    Science.gov (United States)

    Zhang, Y.; Guanter, L.; Van der Tol, C.; Joiner, J.; Berry, J. A.

    2015-12-01

    Global sun-induced chlorophyll fluorescence (SIF) retrievals are currently available from several satellites. SIF is intrinsically linked to photosynthesis, so the new data sets allow linking of remotely sensed vegetation parameters to the actual photosynthetic activity of plants. In this study, we used space measurements of SIF together with the Soil-Canopy Observation of Photosynthesis and Energy balance (SCOPE) model in order to simulate the regional photosynthetic uptake of croplands in the US corn belt. SCOPE couples fluorescence and photosynthesis at leaf and canopy levels. To do this, we first retrieved a key parameter of the photosynthesis model, the maximum rate of carboxylation (Vcmax), from field measurements of CO2 and water flux during 2007-2012 at several crop eddy covariance flux sites in the Midwestern US. We then empirically calibrated Vcmax against apparent fluorescence yield, which is SIF divided by PAR. The SIF retrievals are from the European GOME-2 instrument onboard the MetOp-A platform. The resulting apparent fluorescence yield shows a stronger relationship with Vcmax during the growing season than the widely used vegetation indices EVI and NDVI. New seasonal and regional Vcmax maps were derived based on the calibration model for the cropland of the corn belt. The uncertainties of Vcmax were also estimated through Gaussian error propagation. With the newly derived Vcmax maps, we modeled regional cropland GPP during the growing season for the Midwestern USA, with meteorological data from the MERRA reanalysis and LAI from the MODIS product (MCD15A2). The results show improvement in the seasonal and spatial patterns of cropland productivity in comparison with both flux tower and agricultural inventory data.
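
The calibration of Vcmax against apparent fluorescence yield (SIF/PAR) can be sketched as a simple linear regression; the linear form and all data values below are illustrative assumptions, not the study's calibration:

```python
import numpy as np

# Illustrative synthetic data: apparent fluorescence yield = SIF / PAR
rng = np.random.default_rng(1)
sif = rng.uniform(0.5, 2.5, 40)   # SIF retrievals (illustrative units)
par = rng.uniform(300, 600, 40)   # photosynthetically active radiation (illustrative units)
afy = sif / par                   # apparent fluorescence yield

# Hypothetical linear relation standing in for tower-derived Vcmax
vcmax = 20.0 + 2.0e4 * afy + rng.normal(0, 2.0, 40)

# Least-squares linear calibration: Vcmax = a + b * AFY
b, a = np.polyfit(afy, vcmax, 1)
vcmax_hat = a + b * afy
r = np.corrcoef(vcmax, vcmax_hat)[0, 1]
print(f"slope={b:.1f}, intercept={a:.1f}, r={r:.2f}")
```

Once calibrated, the fit can be inverted to map satellite SIF retrievals into regional Vcmax fields, which is the step the abstract describes.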

  19. A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.

    Science.gov (United States)

    Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E

    2017-12-01

    The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.

  20. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Full Text Available Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated positive and significant linear correspondence (R2 = 0.75) between the two sets of estimates. Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  1. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Science.gov (United States)

    E, A.; Zhang, D.-H.; Xiao, Z.; Hao, Y.-Q.; Ridley, A. J.; Moldwin, M.

    2011-08-01

    A similar-parameters interpolation method and an empirical orthogonal function (EOF) analysis are used to construct empirical models for the ionospheric foF2, using observational data from three ground-based ionosonde stations in Japan: Wakkanai (geographic 45.4° N, 141.7° E), Kokubunji (geographic 35.7° N, 140.1° E), and Yamagawa (geographic 31.2° N, 130.6° E) during the years 1971-1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that missing values in the original foF2 data can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term variation. The 1st-order EOF coefficient A1 reflects the components with solar cycle variation. A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, neither of which is as pronounced as the solar cycle variation. The 2nd-order coefficient A2 contains mainly annual variation components. The 3rd-order coefficient A3 and 4th-order coefficient A4 contain both annual and semi-annual variation components. The seasonal variation, solar rotation oscillation, and small-scale irregularities are also included in the 4th-order coefficient A4. The amplitude range and developing tendency of all these coefficients depend on the level of solar and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the Sea of Japan
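
An EOF model of this kind can be sketched via singular value decomposition of the data matrix: the left singular vectors are the base functions and the scaled right singular vectors are the associated coefficients (A1, A2, ...). The synthetic foF2-like data below are illustrative assumptions:

```python
import numpy as np

# Illustrative synthetic foF2-like data: rows = local time (24 h), columns = days
rng = np.random.default_rng(2)
hours = np.arange(24)
days = np.arange(365)
diurnal = 6 + 3 * np.sin(2 * np.pi * (hours - 14) / 24)  # diurnal shape (MHz-like)
seasonal = 1 + 0.3 * np.sin(2 * np.pi * days / 365)      # annual modulation
data = np.outer(diurnal, seasonal) + rng.normal(0, 0.1, (24, 365))

# EOF analysis: remove the time mean, then decompose the anomalies
mean = data.mean(axis=1, keepdims=True)
anom = data - mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

base_functions = U               # EOF base functions (diurnal patterns)
coefficients = s[:, None] * Vt   # associated coefficient time series (A1, A2, ...)

# Fraction of variance explained by the leading EOF
var_frac = s[0] ** 2 / np.sum(s ** 2)
print(f"variance explained by EOF-1: {var_frac:.3f}")

# Reconstruction with the leading mode only
recon1 = mean + np.outer(U[:, 0], coefficients[0])
rmse = np.sqrt(np.mean((recon1 - data) ** 2))
print(f"RMSE of rank-1 reconstruction: {rmse:.3f}")
```

In a real application the coefficient series, not the base functions, would then be modeled against solar and geomagnetic activity proxies, as the abstract describes.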

  2. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an In Silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an In Silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model are seen to disagree while the
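
Generic similarity-based modeling reconstructs each observation from a memory of known-normal states and flags large residuals as deterioration. The kernel form, variable choices, and all numbers below are illustrative assumptions, not the SBM implementation tested in the study:

```python
import numpy as np

def sbm_estimate(D: np.ndarray, x: np.ndarray, h: float = 1.0) -> np.ndarray:
    """Generic similarity-based estimate: weight memory exemplars (columns of D)
    by a Gaussian kernel similarity to observation x, then reconstruct x."""
    d2 = np.sum((D - x[:, None]) ** 2, axis=0)  # squared distance to each exemplar
    w = np.exp(-d2 / (2 * h ** 2))
    w /= w.sum()
    return D @ w                                 # similarity-weighted reconstruction

# Illustrative memory of "normal" states (rows = variables, columns = exemplars)
rng = np.random.default_rng(3)
D = np.array([[70.0, 75.0, 80.0, 72.0, 78.0],       # heart rate (bpm)
              [120.0, 118.0, 125.0, 122.0, 119.0]])  # systolic pressure (mmHg)
D = D + rng.normal(0, 0.5, D.shape)

normal = np.array([74.0, 121.0])     # consistent with the memory
abnormal = np.array([95.0, 90.0])    # hypothetical decompensation pattern

r_normal = np.linalg.norm(normal - sbm_estimate(D, normal, h=5.0))
r_abnormal = np.linalg.norm(abnormal - sbm_estimate(D, abnormal, h=5.0))
print(f"residual (normal) = {r_normal:.2f}, residual (abnormal) = {r_abnormal:.2f}")
```

A small residual means the observation is well explained by normal states; a large residual is the early-warning signal that threshold-based monitors would miss until individual variables cross fixed limits.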

  3. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    Science.gov (United States)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate earthquakes to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with an increase in hypocentral distance and to increase with an increase in earthquake magnitude. The significant duration was found to increase with increases in both magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
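
A duration predictive relationship of this general kind can be sketched as a log-linear least-squares fit in magnitude and distance; the functional form and all coefficients below are illustrative assumptions, not the NLME models developed in the study:

```python
import numpy as np

# Illustrative synthetic records: moment magnitude and hypocentral distance (km),
# spanning roughly the ranges quoted in the abstract
rng = np.random.default_rng(4)
n = 300
mag = rng.uniform(3.0, 6.5, n)
dist = rng.uniform(4.0, 1000.0, n)

# Hypothetical generating relation: ln(D_sig) = c0 + c1*M + c2*ln(R) + noise
ln_dur = -2.0 + 0.8 * mag + 0.3 * np.log(dist) + rng.normal(0, 0.3, n)

# Least-squares fit of the same log-linear form
A = np.column_stack([np.ones(n), mag, np.log(dist)])
coef, *_ = np.linalg.lstsq(A, ln_dur, rcond=None)
c0, c1, c2 = coef
print(f"c0={c0:.2f}, c1={c1:.2f}, c2={c2:.2f}")

# With positive c1 and c2, significant duration increases with magnitude and distance,
# matching the qualitative trend reported in the abstract
d_small = np.exp(c0 + c1 * 4.0 + c2 * np.log(50.0))
d_large = np.exp(c0 + c1 * 6.0 + c2 * np.log(500.0))
print(f"D(M4, 50 km) = {d_small:.1f} s, D(M6, 500 km) = {d_large:.1f} s")
```

The study's actual models additionally use mixed effects (grouping records by event) and a logistic component for zero bracketed durations, which a plain least-squares fit does not capture.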

  4. Evaluation of Empirical Ray-Tracing Model for an Urban Outdoor Scenario at 73 GHz E-Band

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; R. MacCartney Jr., George; Thomas, Timothy

    2014-01-01

    the measurements for immediate model development and eventual site planning, this paper presents an empirical ray-tracing model, with the goal of finding a suitable approach that ray-tracing (RT) can fill in the gaps of the measurements. Here, we use the measured data to investigate the prediction capability...

  5. Linguistics from the Perspective of the Theory of Models in Empirical Sciences: From Formal to Corpus Linguistics

    Science.gov (United States)

    Grabinska, Teresa; Zielinska, Dorota

    2010-01-01

    The authors examine language from the perspective of models of empirical sciences, which discipline studies the relationship between reality, models, and formalisms. Such a perspective allows one to notice that linguistics approached within the classical framework share a number of problems with other experimental sciences studied initially…

  6. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

    There are linear accelerators (Sagittaire and Saturne accelerators produced by Compagnie Generale de Radiologie (CGR/MeV) Corporation) which produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the scanning electron beams. These were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside of the 50% isodose curve, but less than 8% outside of the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially-available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries

  7. Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model

    Directory of Open Access Journals (Sweden)

    Andreja Križman

    2010-05-01

    Full Text Available The purpose of this paper is to present the results of a study, conducted in the Slovene logistics market, of conflicts and opportunism as disturbing factors and their impact on cooperation in logistics outsourcing performance. Relationship variables are proposed that directly or indirectly affect logistics performance, and the hypotheses are conceptualized based on causal linkages among the constructs. On the basis of extant literature and new argumentations derived from in-depth interviews with logistics experts, including providers and customers, the measurement and structural models are empirically analyzed. Existing measurement scales for the constructs are slightly modified for this analysis. Purification testing and measurement for validity and reliability are performed. Multivariate statistical methods are utilized and hypotheses are tested. The results show that conflicts have a significantly negative impact on cooperation between customers and logistics service providers (LSPs), while opportunism does not play an important role in these relationships. The observed antecedents of logistics outsourcing performance in the model account for 58.4% of the variance of goal achievement and 36.5% of the variance of the exceeded goal. KEYWORDS: logistics outsourcing performance; logistics customer–provider relationships; conflicts and cooperation in logistics outsourcing; PLS path modelling

  8. MERGANSER: an empirical model to predict fish and loon mercury in New England lakes

    Science.gov (United States)

    Shanley, James B.; Moore, Richard; Smith, Richard A.; Miller, Eric K.; Simcox, Alison; Kamman, Neil; Nacci, Diane; Robinson, Keith; Johnston, John M.; Hughes, Melissa M.; Johnston, Craig; Evers, David; Williams, Kate; Graham, John; King, Susannah

    2012-01-01

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes. We modeled lakes larger than 8 ha (4404 lakes), using 3470 fish (12 species) and 253 loon Hg concentrations from 420 lakes. MERGANSER predictor variables included Hg deposition, watershed alkalinity, percent wetlands, percent forest canopy, percent agriculture, drainage area, population density, mean annual air temperature, and watershed slope. The model returns fish or loon Hg for user-entered species and fish length. MERGANSER explained 63% of the variance in fish and loon Hg concentrations. MERGANSER predicted that 32-cm smallmouth bass had a median Hg concentration of 0.53 μg g-1 (root-mean-square error 0.27 μg g-1) and exceeded EPA's recommended fish Hg criterion of 0.3 μg g-1 in 90% of New England lakes. Common loon had a median Hg concentration of 1.07 μg g-1 and was in the moderate or higher risk category of >1 μg g-1 Hg in 58% of New England lakes. MERGANSER can be applied to target fish advisories to specific unmonitored lakes, and for scenario evaluation, such as the effect of changes in Hg deposition, land use, or warmer climate on fish and loon mercury.
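
A least-squares model of this shape, predicting log-transformed Hg concentration from deposition and watershed covariates, can be sketched as follows; the predictor list echoes the abstract, but every coefficient and data value is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
# Illustrative watershed predictors (a subset of those listed in the abstract)
hg_dep = rng.uniform(5, 30, n)        # Hg deposition (illustrative units)
alkalinity = rng.uniform(0, 50, n)    # watershed alkalinity
pct_wetland = rng.uniform(0, 40, n)   # percent wetlands
fish_len = rng.uniform(15, 45, n)     # fish length (cm)

# Hypothetical generating relation for ln(fillet Hg, ug/g)
ln_hg = (-3.0 + 0.03 * hg_dep - 0.02 * alkalinity
         + 0.015 * pct_wetland + 0.04 * fish_len
         + rng.normal(0, 0.25, n))

# Multiple linear regression on the log scale
A = np.column_stack([np.ones(n), hg_dep, alkalinity, pct_wetland, fish_len])
coef, *_ = np.linalg.lstsq(A, ln_hg, rcond=None)

# Predict Hg for a 32 cm fish in a hypothetical unmonitored lake
x = np.array([1.0, 15.0, 10.0, 20.0, 32.0])
hg_pred = np.exp(x @ coef)
print(f"predicted fillet Hg: {hg_pred:.2f} ug/g")
```

Predicting on the log scale and exponentiating keeps concentrations positive, which is the usual practice for regression models of contaminant concentrations like this one.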

  9. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    Full Text Available To highlight the shortcomings in conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) toward bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power, and drying time. Kinetics was analyzed through empirical modeling of drying data against 10 conventional thin-layer drying equations that were further improvised through the incorporation of Arrhenius-, exponential- and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica reported similar yields although with a drastic reduction in extraction time (210 s) as against the average time of 10 h in the Soxhlet apparatus. Extract yield for MAE of C. citratus was higher than the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s, and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
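
The non-linear regression step can be sketched with the standard Midilli thin-layer drying equation, MR = a·exp(−k·tⁿ) + b·t; the synthetic drying curve and all parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: moisture ratio MR = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

# Illustrative synthetic drying curve (moisture ratio vs. drying time in minutes)
rng = np.random.default_rng(6)
t = np.linspace(0.1, 10, 50)
true = dict(a=1.0, k=0.25, n=1.3, b=-0.005)
mr = midilli(t, **true) + rng.normal(0, 0.005, t.size)

# Non-linear regression, as used to select the characteristic drying equation
popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.1, 1.0, 0.0])
a, k, n, b = popt

resid = mr - midilli(t, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
print(f"a={a:.3f}, k={k:.3f}, n={n:.3f}, b={b:.4f}, R^2={r2:.4f}")
```

In practice each candidate equation is fitted the same way and the characteristic model is chosen by comparing R², chi-square, and RMSE across fits.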

  10. Fiscal Decentralization and Regional Financial Efficiency: An Empirical Analysis of Spatial Durbin Model

    Directory of Open Access Journals (Sweden)

    Jianmin Liu

    2016-01-01

    Full Text Available Based on panel data covering the period from 2003 to 2012 in China’s 281 prefecture-level cities, we use superefficiency SBM model to measure regional financial efficiency and empirically test the spatial effects of fiscal decentralization on regional financial efficiency with SDM. The estimated results indicate that there exist significant spatial spillover effects among regional financial efficiency with the features of time inertia and spatial dependence. The positive promoting effect of fiscal decentralization on financial efficiency in local region depends on the symmetry between fiscal expenditure decentralization and revenue decentralization. Additionally, there exists inconsistency in the spatial effects of fiscal expenditure decentralization and revenue decentralization on financial efficiency in neighboring regions. The negative effect of fiscal revenue decentralization on financial efficiency in neighboring regions is more significant than that of fiscal expenditure decentralization.

  11. Energy levies and endogenous technology in an empirical simulation model for the Netherlands

    International Nuclear Information System (INIS)

    Den Butter, F.A.G.; Dellink, R.B.; Hofkes, M.W.

    1995-01-01

    The belief in beneficial green tax swaps has been particularly prevalent in Europe, where high levels of unemployment and strong preferences for a large public sector (and hence high tax levels) accentuate the desire for revenue-neutral, growth-enhancing reductions in labor income taxes. In this context an empirical simulation model is developed for the Netherlands, especially designed to reckon with the effects of changes in prices on the level and direction of technological progress. It appears that the so-called employment double dividend, i.e. increasing employment and decreasing energy use at the same time, can occur. A general levy yields stronger effects than a levy on household use only. However, the stronger effects of a general levy on employment and energy use are accompanied by shrinking production and, in the longer run, by decreasing disposable income of workers or non-workers. 1 fig., 4 tabs., 1 appendix, 20 refs

  12. Empirical models of the electron concentration of the ionosphere and their value for radio communications purposes

    International Nuclear Information System (INIS)

    Dudeney, J.R.; Kressman, R.I.

    1986-01-01

    Criteria for the development of empirical models of the ionospheric electron concentration vertical profile for radio communications purposes are discussed and used to evaluate and compare four contemporary schemes. Schemes must be optimized with respect to quality of profile match, availability and simplicity of the external data required for profile specification, and numerical complexity, depending on the application. It is found that the Dudeney (1978) scheme provides the best general performance, while the Booker (1977) technique is optimized for precision radio wave studies where an observed profile is available. The performance of the CCIR (Bradley and Dudeney, 1973) scheme is found to be inferior to the previous two, and it should be superseded except where mathematical simplicity is prioritized. The International Reference Ionosphere profile is seen to have significant disadvantages with respect to all three criteria. 17 references

  13. Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration

    Science.gov (United States)

    Revelle, D. O.; Ceplecha, Z.

    2002-11-01

    A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation including two types of fragmentation is outlined. The FM was applied to observational data (height as function of time and the light curve) of Lost City, Innisfree and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as function of time with ±13 m and to the observed light curve with ±0.17 magnitude. Corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of FM to the Lost City bolide we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.
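
For background, the classical single-body equations that such a fragmentation model extends can be integrated numerically. The sketch below omits fragmentation entirely, and all coefficients and initial values are illustrative assumptions, not the fitted values for Lost City, Innisfree, or Benešov:

```python
import numpy as np

# Classical single-body meteoroid equations (no fragmentation):
#   dv/dt = -K * rho * m**(-1/3) * v**2         (drag)
#   dm/dt = -sigma * K * rho * m**(2/3) * v**3  (ablation)
#   I     = -tau * (v**2 / 2) * dm/dt           (luminosity)
K = 2.5e-3       # shape-density coefficient, SI (assumed)
sigma = 1.4e-8   # ablation coefficient, s^2/m^2 (assumed)
tau = 0.04       # luminous efficiency (assumed)

v, m, h = 15_000.0, 10.0, 80_000.0  # speed (m/s), mass (kg), height (m)
angle = np.radians(45.0)            # entry angle from horizontal (assumed)
dt = 0.005                          # Euler time step (s)

heights, intensities = [], []
while h > 10_000 and m > 1e-4 and v > 1_000:
    rho = 1.225 * np.exp(-h / 7_160.0)  # simple exponential atmosphere
    dvdt = -K * rho * m ** (-1 / 3) * v ** 2
    dmdt = -sigma * K * rho * m ** (2 / 3) * v ** 3
    heights.append(h)
    intensities.append(-tau * 0.5 * v ** 2 * dmdt)
    v += dvdt * dt
    m = max(m + dmdt * dt, 0.0)
    h -= v * np.sin(angle) * dt

h_peak = heights[int(np.argmax(intensities))]
print(f"final: v={v:.0f} m/s, m={m:.3f} kg, peak brightness at {h_peak/1000:.1f} km")
```

The semi-empirical fragmentation model adds sudden and continuous mass-loss events on top of these equations, which is what lets it reproduce observed light curves to within a few tenths of a magnitude.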

  14. Tax design-tax evasion relationship in Serbia: New empirical approach to standard theoretical model

    Directory of Open Access Journals (Sweden)

    Ranđelović Saša

    2015-01-01

    Full Text Available This paper provides evidence on the impact of the change in income tax rates and the degree of their progressivity on the scale of labour tax evasion in Serbia, using a tax-benefit microsimulation model and econometric methods on 2007 Living Standard Measurement Survey data. The empirical analysis is based on the novel assumption that an individual's tax evasion decision depends on a change in disposable income, which is captured by the variation in their Effective Marginal Tax Rate (EMTR), rather than on a change in after-tax income. The results suggest that the elasticity of tax evasion to the EMTR equals -0.3, confirming Yitzhaki's theory, while the propensity to evade is decreasing in the level of wages and increasing in the level of self-employment income. The results also show that introduction of revenue-neutral, progressive taxation of labour income would lead to an increase in labour tax evasion by 1 percentage point.
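
The reported elasticity can be applied as a simple proportional rule; the percentage change used below is illustrative, not a scenario from the paper:

```python
# Elasticity of tax evasion with respect to the EMTR, as reported in the abstract
ELASTICITY = -0.3

def evasion_change(pct_change_emtr: float) -> float:
    """Approximate % change in labour tax evasion for a given % change in EMTR."""
    return ELASTICITY * pct_change_emtr

# Illustrative scenario: a 10% increase in the effective marginal tax rate
print(f"evasion change: {evasion_change(10.0):.1f}%")
```

An elasticity is only a local approximation, so this rule is meaningful for small EMTR changes around the observed baseline.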

  15. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the community. In this article, we present an empirical study on the relevant features produced by two recently developed discriminative learning algorithms: neighborhood approximation forests (NAF) and the relevance voxel machine (RVoxM). Specifically, we examine whether the sets of features these methods produce are exhaustive; that is, whether the features that are not marked as relevant carry disease-related information. We perform experiments on three different problems: image-based regression on a synthetic dataset for which the set of relevant features is known, regression of subject age as well...

  16. An empirical model for the study of employee participation and its influence on job satisfaction

    Directory of Open Access Journals (Sweden)

    Lucas Joan Pujol Cols

    2015-12-01

    Full Text Available This article analyzes the factors that influence the possibilities employees perceive for triggering meaningful participation at three levels: the Intra-group Level, the Institutional Level, and directly in the Leadership Team of the organization. Twelve (12) interviews were conducted with teachers from the Social and Economic Sciences School of the Mar del Plata (Argentina) University, holding different positions, areas, and working hours. Based on qualitative evidence, an empirical model was constructed that connects the different factors behind each manifestation of participation, establishing hypothetical relations between subgroups. Additionally, this article discusses the implications of participation, its relationship with job satisfaction, and the role of individual expectations regarding the participation opportunities each employee receives. Keywords: Participation, Job satisfaction, University, Expectations, Qualitative Analysis.

  17. Empirical tests of pre-main-sequence stellar evolution models with eclipsing binaries

    Science.gov (United States)

    Stassun, Keivan G.; Feiden, Gregory A.; Torres, Guillermo

    2014-06-01

    We examine the performance of standard pre-main-sequence (PMS) stellar evolution models against the accurately measured properties of a benchmark sample of 26 PMS stars in 13 eclipsing binary (EB) systems having masses 0.04-4.0 M⊙ and nominal ages ≈1-20 Myr. We provide a definitive compilation of all fundamental properties for the EBs, with a careful and consistent reassessment of observational uncertainties. We also provide a definitive compilation of the various PMS model sets, including physical ingredients and limits of applicability. No set of model isochrones is able to successfully reproduce all of the measured properties of all of the EBs. In the H-R diagram, the masses inferred for the individual stars by the models are accurate to better than 10% at ≳1 M⊙, but below 1 M⊙ they are discrepant by 50-100%. Adjusting the observed radii and temperatures using empirical relations for the effects of magnetic activity helps to resolve the discrepancies in a few cases, but fails as a general solution. We find evidence that the failure of the models to match the data is linked to the triples in the EB sample; at least half of the EBs possess tertiary companions. Excluding the triples, the models reproduce the stellar masses to better than ∼10% in the H-R diagram, down to 0.5 M⊙, below which the current sample is fully contaminated by tertiaries. We consider several mechanisms by which a tertiary might cause changes in the EB properties and thus corrupt the agreement with stellar model predictions. We show that the energies of the tertiary orbits are comparable to that needed to potentially explain the scatter in the EB properties through injection of heat, perhaps involving tidal interaction. It seems from the evidence at hand that this mechanism, however it operates in detail, has more influence on the surface properties of the stars than on their internal structure, as the lithium abundances are broadly in good agreement with model predictions. The

  18. An empirical Bayes model for gene expression and methylation profiles in antiestrogen resistant breast cancer

    Directory of Open Access Journals (Sweden)

    Huang Tim

    2010-11-01

    Full Text Available Abstract Background The nuclear transcription factor estrogen receptor alpha (ER-alpha) is the target of several antiestrogen therapeutic agents for breast cancer. However, many ER-alpha positive patients do not respond to these treatments from the beginning, or stop responding after being treated for a period of time. Because of the association of gene transcription alteration and drug resistance and the emerging evidence on the role of DNA methylation in transcription regulation, understanding these relationships can facilitate the development of approaches to re-sensitize breast cancer cells to treatment by restoring DNA methylation patterns. Methods We constructed a hierarchical empirical Bayes model to investigate the simultaneous change of gene expression and promoter DNA methylation profiles among wild type (WT) and OHT/ICI resistant MCF7 breast cancer cell lines. Results We found that, compared with the WT cell lines, almost all of the genes in OHT or ICI resistant cell lines either show no methylation change or are hypomethylated. Moreover, the correlations between gene expression and methylation are quite heterogeneous across genes, suggesting the involvement of other factors in regulating transcription. Analysis of our results in combination with H3K4me2 data on OHT resistant cell lines suggests a clear interplay between DNA methylation and H3K4me2 in the regulation of gene expression. For hypomethylated genes with altered gene expression, most (~80%) are up-regulated, consistent with the current view on the relationship between promoter methylation and gene expression. Conclusions We developed an empirical Bayes model to study the association between DNA methylation in the promoter region and gene expression. Our approach generates both global (across all genes) and local (individual gene) views of the interplay.
It provides important insight for future efforts to develop therapeutic agents that re-sensitize breast cancer cells to treatment.
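
The hierarchical empirical Bayes idea behind this record can be illustrated with a minimal normal-normal shrinkage sketch (this is not the authors' joint expression-methylation model; the method-of-moments prior estimate and the data in the example are illustrative):

```python
import numpy as np

def eb_shrink(gene_means, se):
    """Normal-normal empirical Bayes: shrink per-gene effect estimates
    toward the grand mean.

    gene_means: observed per-gene effect estimates
    se:         their known standard errors
    Returns posterior means under a N(mu, tau^2) prior, with (mu, tau^2)
    estimated from the data by the method of moments.
    """
    gene_means = np.asarray(gene_means, float)
    se = np.asarray(se, float)
    mu = gene_means.mean()
    # method-of-moments estimate of the prior variance tau^2 (floored at 0)
    tau2 = max(gene_means.var(ddof=1) - np.mean(se ** 2), 0.0)
    w = tau2 / (tau2 + se ** 2)  # per-gene shrinkage weight in [0, 1]
    return w * gene_means + (1.0 - w) * mu
```

Genes measured with larger standard errors are pulled more strongly toward the grand mean, which is the basic borrowing-of-strength mechanism such models exploit.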

  19. An empirical Bayes model for gene expression and methylation profiles in antiestrogen resistant breast cancer.

    Science.gov (United States)

    Jeong, Jaesik; Li, Lang; Liu, Yunlong; Nephew, Kenneth P; Huang, Tim Hui-Ming; Shen, Changyu

    2010-11-25

    The nuclear transcription factor estrogen receptor alpha (ER-alpha) is the target of several antiestrogen therapeutic agents for breast cancer. However, many ER-alpha positive patients do not respond to these treatments from the beginning, or stop responding after being treated for a period of time. Because of the association of gene transcription alteration and drug resistance and the emerging evidence on the role of DNA methylation in transcription regulation, understanding these relationships can facilitate the development of approaches to re-sensitize breast cancer cells to treatment by restoring DNA methylation patterns. We constructed a hierarchical empirical Bayes model to investigate the simultaneous change of gene expression and promoter DNA methylation profiles among wild type (WT) and OHT/ICI resistant MCF7 breast cancer cell lines. We found that, compared with the WT cell lines, almost all of the genes in OHT or ICI resistant cell lines either show no methylation change or are hypomethylated. Moreover, the correlations between gene expression and methylation are quite heterogeneous across genes, suggesting the involvement of other factors in regulating transcription. Analysis of our results in combination with H3K4me2 data on OHT resistant cell lines suggests a clear interplay between DNA methylation and H3K4me2 in the regulation of gene expression. For hypomethylated genes with altered gene expression, most (~80%) are up-regulated, consistent with the current view on the relationship between promoter methylation and gene expression. We developed an empirical Bayes model to study the association between DNA methylation in the promoter region and gene expression. Our approach generates both global (across all genes) and local (individual gene) views of the interplay. It provides important insight for future efforts to develop therapeutic agents that re-sensitize breast cancer cells to treatment.

  20. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed, by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases.
Thus if computable, scatterplots of the conditionally independent empirical Bayes

  1. 137Cs applicability to soil erosion assessment: theoretical and empirical model

    International Nuclear Information System (INIS)

    Andrello, Avacir Casanova

    2004-02-01

    The acceleration of soil erosion processes and the increase of soil erosion rates due to anthropogenic perturbation of the soil-weather-vegetation equilibrium have influenced soil quality and the environment. Thus, the ability to assess the amplitude and severity of the impact of soil erosion on the productivity and quality of soil is important at the local scale as well as at regional and global scales. Several models have been developed to assess soil erosion both qualitatively and quantitatively. 137 Cs, an anthropogenic radionuclide, has been widely used to assess superficial soil erosion processes. Empirical and theoretical models were developed on the basis of 137 Cs redistribution as an indicator of soil movement by erosive processes. These models incorporate many parameters that can influence the quantification of soil erosion rates from 137 Cs redistribution. A statistical analysis was performed on the models recommended by the IAEA to determine the influence that each parameter has on the calculated soil redistribution. It was verified that the most important parameter is the 137 Cs redistribution itself, indicating the need for a good determination of the 137 Cs inventory values with a minimum associated deviation. A 10% deviation was then assigned to the reference value of the 137 Cs inventory and 5% to the 137 Cs inventory of the sample, and the resulting deviation in the soil redistribution calculated by the models was determined. The soil redistribution results were compared to verify whether the models differed, but no difference was found except above 70% of 137 Cs loss. Analyzing three native forests and an area of undisturbed pasture in the Londrina region, it was verified that the 137 Cs spatial variability at the local scale was 15%. Comparing the 137 Cs inventory values determined in the three native forests with the 137 Cs inventory value determined in the area of undisturbed pasture in the
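
The simplest empirical model of the kind this record discusses is the proportional model for cultivated soils (the formulation below follows the commonly cited IAEA form; plough depth, bulk density and time span are illustrative assumptions, not values from the thesis):

```python
def percent_reduction(sample_inv, ref_inv):
    """Percentage reduction in the 137Cs inventory of a sampling point
    relative to the undisturbed reference site (both in Bq/m^2)."""
    return 100.0 * (ref_inv - sample_inv) / ref_inv

def proportional_model(sample_inv, ref_inv, d=0.25, rho=1300.0, years=40):
    """Soil loss (t ha^-1 yr^-1) from the proportional 137Cs model,
    Y = 10 * d * rho * X / (100 * T), where
      d     plough depth (m),
      rho   soil bulk density (kg m^-3),
      X     percentage 137Cs inventory reduction,
      T     years since the onset of 137Cs fallout accumulation.
    The factor 10 converts kg m^-2 to t ha^-1.
    """
    X = percent_reduction(sample_inv, ref_inv)
    return 10.0 * d * rho * X / (100.0 * years)
```

The sensitivity noted in the record is visible directly: the estimated erosion rate is linear in the inventory reduction X, so any deviation in the reference or sample inventory propagates proportionally into the result.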

  2. A semi-empirical treatment planning model for optimization of multiprobe cryosurgery

    International Nuclear Information System (INIS)

    Baissalov, R.; Rewcastle, J.C.; Sandison, G.A.; Donnelly, B.J.; Saliken, J.C.; McKinnon, J.G.; Muldrew, K.

    2000-01-01

    A model is presented for treatment planning of multiprobe cryosurgery. In this model a thermal simulation algorithm is used to generate the temperature distribution produced by the cryoprobes, visualize isotherms in the anatomical region of interest (ROI) and provide tools to assist in estimating the amount of freezing damage to the target and surrounding normal structures. Calculations may be performed for any given freezing time for the selected set of operation parameters. The thermal simulation is based on solving the transient heat conduction equation using finite element methods for a multiprobe geometry. As an example, a semi-empirical optimization of the 2D placement of six cryoprobes and their thermal protocol for the first freeze cycle is presented. The effectiveness of the optimized treatment protocol was estimated by generating temperature-volume histograms and calculating the objective function for the anatomy of interest. Two phantom experiments were performed to verify the isotherm locations predicted by the calculations. A comparison of the predicted 0 deg. C isotherm with the actual iceball boundary imaged by x-ray CT demonstrated spatial agreement within ±2 mm. (author)
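
The thermal simulation step can be sketched numerically. The paper solves the transient heat conduction equation with finite elements; the sketch below instead uses a simple explicit finite-difference scheme with a single probe, constant tissue properties, and no latent heat or blood perfusion, so only the qualitative behaviour (an outward-growing 0 °C isotherm) is illustrative:

```python
import numpy as np

def iceball_radius(t_end=120.0, n=41, h=1e-3, alpha=1.1e-7,
                   t_probe=-140.0, t_body=37.0):
    """Explicit finite-difference solution of dT/dt = alpha * laplacian(T)
    on an n x n grid (spacing h, metres) with one cryoprobe pinned at
    t_probe (deg C) in the centre of tissue at t_body. Returns the 0 deg C
    isotherm radius (m) along the +x axis after t_end seconds."""
    T = np.full((n, n), t_body, dtype=float)
    c = n // 2
    dt = h ** 2 / (8.0 * alpha)              # half the explicit stability limit
    for _ in range(int(t_end / dt)):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / h ** 2
        T += alpha * dt * lap
        T[c, c] = t_probe                    # probe held at constant temperature
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = t_body   # far-field boundary
    # walk outward from the probe and linearly interpolate the 0 C crossing
    for i in range(c, n - 1):
        if T[c, i] <= 0.0 < T[c, i + 1]:
            frac = -T[c, i] / (T[c, i + 1] - T[c, i])
            return (i - c + frac) * h
    return 0.0
```

A multiprobe planner repeats this kind of solve for each candidate probe placement and scores the resulting isotherms against the target contour.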

  3. An Evaluation Model for Sustainable Development of China’s Textile Industry: An Empirical Study

    Science.gov (United States)

    Zhao, Hong; Lu, Xiaodong; Yu, Ting; Yin, Yanbin

    2018-04-01

    With the economy’s continued rapid growth, the textile industry must search for new rules and adjust its strategies in order to optimize the industrial structure and rationalize social spending. The sustainable development of China’s textile industry is a comprehensive research subject. This study analyzed the status of China’s textile industry and constructed an evaluation model based on economic, ecological, and social benefits. The Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) were used for an empirical study of the textile industry. The result of the evaluation model suggests that the industry’s current status is the major obstacle to the sustainable development of China’s textile industry; integration into the global economy will be nearly impossible if no measures are taken. Enterprises in the textile industry should be reformed in terms of product design, raw material selection, technological reform, technological progress, and management, in accordance with the ideas and requirements of sustainable development. The results of this study help to 1) identify the main elements restricting the industry’s sustainable development; 2) seek corresponding solutions for policy formulation and implementation in the textile industry; and 3) provide references for enterprises’ development transformation in strategic deployment, fund allocation, and personnel assignment.
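
The AHP step of such an evaluation model can be sketched as follows. The pairwise-comparison values are illustrative, not taken from the study; the principal-eigenvector weighting and Saaty's consistency ratio are the standard AHP machinery:

```python
import numpy as np

RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random index for n = 3..5

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via its
    principal eigenvector, normalised to sum to 1."""
    A = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def consistency_ratio(pairwise):
    """Saaty consistency ratio CR = CI / RI, with CI = (lambda_max - n)/(n - 1).
    CR < 0.1 is conventionally considered acceptable."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    return ((lam - n) / (n - 1)) / RANDOM_INDEX[n]
```

For a perfectly consistent matrix of ratios 4:2:1 the weights come out as (4/7, 2/7, 1/7) and CR is zero; judgement matrices elicited from experts are then checked against the CR < 0.1 threshold before their weights are used.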

  4. Going Global: A Model for Evaluating Empirically Supported Family-Based Interventions in New Contexts.

    Science.gov (United States)

    Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W

    2014-06-01

    The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions, including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it comprises both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. The first has to do with methodological differences across trials. The second deals with ambiguities in the cultural adaptation process. The third is that ESIs in failed replications were not adequately implemented. The fourth source of variation derives from unanticipated contextual influences that may alter the effects of ESIs when they are transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts. © The Author(s) 2012.

  5. Computation and empirical modeling of UV flux reaching Arabian Sea due to O3 hole

    International Nuclear Information System (INIS)

    Yousufzai, M. Ayub Khan

    2008-01-01

    Scientific organizations the world over, such as the European Space Agency, the North Atlantic Treaty Organization, the National Aeronautics and Space Administration, and the United Nations Organization, are deeply concerned about the imbalances caused, to a significant extent, by human interference in the natural make-up of the earth's ecosystem. In particular, ozone layer depletion (OLD) over the South Pole is already a serious hazard. The long-term effect of ozone layer depletion appears to be an increase in the ultraviolet radiation reaching the earth. In order to understand the effects of ozone layer depletion, investigations have been initiated by various research groups. However, to the best of our knowledge, no work appears to be available that treats the problem of computing and constructing an empirical model for the UV flux reaching the Arabian Sea surface due to the O3 hole. This communication presents the results of quantifying UV flux and modeling future estimates using time series analysis in a local context to understand the nature of the depletion. (author)

  6. Evaluation of environmental UV doses by empirical WL4UV model and multichannel radiometer

    Science.gov (United States)

    Piervitali, Emanuela; Benedetti, Elena; Damiani, Alessandro; Rafanelli, Claudio; Di Menno, Ivo; Casu, Giovanni; Malaspina, Fabio; Anav, Andrea; Di Menno, Massimo

    2005-08-01

    Solar UV radiation interacts both with atmospheric constituents, producing photochemical reactions, and with the biosphere, inducing changes or protection responses. Important for humans are the skin and eye diseases that result from UV exposure, in particular from social or recreational exposure. This leads to an evaluation of risks and to an assessment of suitable prevention strategies. Therefore, the correct evaluation of the available environmental dose is important; in fact, only a fraction of the UV radiation will be absorbed by individuals, depending on their outdoor activity. The denser the UV monitoring network, the more accurate the evaluated doses will be. This paper shows the results of research carried out in Vigna di Valle (Rome, Italy) during spring 2004. The available environmental doses are evaluated by the WL4UV empirical model, developed by the authors, utilizing the solar UV spectral irradiance measured by a GUV 511C multichannel radiometer. The spectra of a wide-range Brewer spectrophotometer (286.5 - 363.0 nm) have been assumed as reference. As an evaluation of the model in cloudy conditions, an analysis in San Diego, Calif., USA, with a SUV 100 spectroradiometer is also shown.
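
Environmental UV dose evaluation ultimately weights spectral irradiance with a biological action spectrum and integrates over wavelength. The WL4UV model itself is not specified in this record, so the sketch below uses the standard CIE (1987) erythemal weighting as a generic illustration:

```python
import numpy as np

def cie_erythemal(wl):
    """CIE (1987) erythemal action spectrum weight at wavelength wl (nm):
    1 for 250-298 nm, 10^(0.094(298-wl)) for 298-328 nm,
    10^(0.015(139-wl)) for 328-400 nm, 0 elsewhere."""
    wl = np.asarray(wl, float)
    w = np.zeros_like(wl)
    w[(wl >= 250) & (wl <= 298)] = 1.0
    m = (wl > 298) & (wl <= 328)
    w[m] = 10.0 ** (0.094 * (298.0 - wl[m]))
    m = (wl > 328) & (wl <= 400)
    w[m] = 10.0 ** (0.015 * (139.0 - wl[m]))
    return w

def erythemal_dose_rate(wl, irradiance):
    """Erythemally weighted irradiance (W m^-2): trapezoidal integration of
    spectral irradiance (W m^-2 nm^-1) times the action spectrum over nm."""
    return float(np.trapz(cie_erythemal(wl) * np.asarray(irradiance, float), wl))
```

Integrating the weighted dose rate over the exposure time then gives the erythemal dose; dividing by 25 J m^-2 would express it in standard erythema doses.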

  7. Universality in geometric properties of german road networks: Empirical analysis and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Sonic; Donner, Reik; Laemmer, Stefan [TU Dresden (Germany); Helbing, Dirk [ETH Zuerich (Switzerland)

    2009-07-01

    In order to understand the development of urban road networks, we have investigated the structural properties of a variety of German cities. A considerable degree of universality is found in simple geometric features such as the distributions of link lengths, cell areas and cell degrees. In particular, German cities are mainly characterized by perpendicular intersections and splittings of straight roads; deviations of the link angle distributions from the rectangular pattern follow, to a good approximation, stretched exponential distributions. It is shown that most empirical features of the studied road networks can be surprisingly well reproduced by a simple self-organizing evolving network model. For this purpose, we suggest a two-step procedure with a stochastic generation of new nodes in the presence of a sophisticated interaction potential, followed by the establishment of new links according to deterministic rules. In this model, rectangular patterns naturally emerge due to basic economic considerations. It is further discussed to what extent similar mechanisms contribute significantly to other technological or biological transportation networks.

  8. Multimission empirical ocean tide modeling for shallow waters and polar seas

    Science.gov (United States)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2011-11-01

    A new global ocean tide model named DTU10 (developed at the Technical University of Denmark), representing all major diurnal and semidiurnal tidal constituents, is proposed based on an empirical correction to the global tide model FES2004 (Finite Element Solutions), with residual tides determined using the response method. The improvements are achieved by introducing 4 years of the TOPEX-Jason 1 interleaved mission into the existing 18 years (1993-2010) of primary joint TOPEX, Jason 1, and Jason 2 mission time series. Hereby the spatial density of observations is doubled, and satellite altimetry should be able to recover twice the spatial variation of the tidal signal, which is particularly important in shallow waters where the spatial scale of the tidal signal is scaled down. Outside the ±66° parallel, combined Envisat, GEOSAT Follow-On, and ERS-2 data sets have been included to solve for the tides up to the ±82° parallel. A new approach of removing the annual sea level variations prior to estimating the residual tides significantly improved the determination of diurnal constituents from the Sun-synchronous satellites (e.g., ERS-2 and Envisat) in the polar seas. Extensive evaluations with six tide gauge sets show that the new tide model fits the tide gauge measurements favorably compared with other state-of-the-art global ocean tide models in both deep and shallow waters, especially in the Arctic Ocean and the Southern Ocean. One example is a comparison with 207 tide gauges in the East Asian marginal seas, where the root-mean-square agreement improved by 35.12%, 22.61%, 27.07%, and 22.65% (M2, S2, K1, and O1) for the DTU10 tide model compared with the FES2004 tide model. A similar comparison in the Arctic Ocean with 151 gauges improved by 9.93%, 0.34%, 7.46%, and 9.52% for the M2, S2, K1, and O1 constituents, respectively.
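
Empirical tidal estimation of this kind rests on least-squares harmonic analysis of a sea-level time series. The sketch below is a generic constituent fit, not the DTU10 response-method processing chain; constituent periods and the synthetic record in the usage test are illustrative:

```python
import numpy as np

def fit_constituents(t_hours, eta, periods_hours):
    """Least-squares harmonic analysis: fit
        eta(t) = mean + sum_k [A_k cos(w_k t) + B_k sin(w_k t)]
    for the given constituent periods and return the list of amplitudes
    sqrt(A_k^2 + B_k^2), one per constituent."""
    t = np.asarray(t_hours, float)
    cols = [np.ones_like(t)]
    for p in periods_hours:
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t), np.sin(w * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, np.asarray(eta, float), rcond=None)
    return [float(np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]))
            for i in range(len(periods_hours))]
```

With a record long enough to separate the constituent frequencies (here 30 days for M2 vs K1), the amplitudes are recovered essentially exactly from a noise-free series.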

  9. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  10. Transmission calculation by empirical numerical model and Monte Carlo simulation in high energy proton radiography of thick objects

    Science.gov (United States)

    Zheng, Na; Xu, Hai-Bo

    2015-10-01

    An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)
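
The nuclear-absorption component of such a transmission calculation can be sketched as follows. The interaction lengths and densities are approximate values from standard particle-physics tables, not from the paper, and multiple Coulomb scattering with angular cuts, which the full model treats as Gaussian, is omitted here:

```python
import math

# Approximate nuclear interaction lengths (g/cm^2) and densities (g/cm^3)
LAMBDA_I = {"C": 85.8, "Pb": 199.6}
DENSITY = {"C": 2.21, "Pb": 11.35}

def nuclear_transmission(material, thickness_cm):
    """Fraction of beam protons surviving nuclear absorption in a slab:
    T = exp(-rho * x / lambda_I). Energy loss, multiple Coulomb scattering
    and collimator angular cuts are ignored in this sketch."""
    areal_density = DENSITY[material] * thickness_cm  # g/cm^2
    return math.exp(-areal_density / LAMBDA_I[material])
```

Comparing this attenuation-only estimate with a full Geant4 transport result isolates how much of the measured transmission is due to scattering out of the angular acceptance rather than to absorption.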

  11. Cycling empirical antibiotic therapy in hospitals: meta-analysis and models.

    Directory of Open Access Journals (Sweden)

    Pia Abel zur Wiesch

    2014-06-01

    Full Text Available The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating the empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies have come to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and found that cycling reduced the incidence rate per 1000 patient days of total infections by 4.95 [9.43-0.48] and of resistant infections by 7.2 [14.00-0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate meta-regressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first-line treatment is ineffective, which we call "adjustable cycling/mixing". In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that "adjustable cycling" is especially useful to suppress the emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that excessively long cycling periods are detrimental. Our results suggest that
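
The pooling step underlying such a meta-analysis can be sketched with inverse-variance fixed-effect weighting of per-study effect estimates (a generic textbook form, not the multivariate method of the paper; the numbers in the test are illustrative):

```python
import math

def pool_fixed_effect(estimates, std_errs):
    """Inverse-variance fixed-effect pooling of per-study effect estimates
    (e.g. incidence-rate differences per 1000 patient days).
    Returns the pooled estimate and its 95% confidence interval."""
    w = [1.0 / se ** 2 for se in std_errs]          # precision weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

Studies with smaller standard errors dominate the pooled estimate, and the pooled standard error shrinks as studies accumulate; a random-effects variant would add a between-study variance term to each weight.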

  12. A MACROPRUDENTIAL SUPERVISION MODEL. EMPIRICAL EVIDENCE FROM THE CENTRAL AND EASTERN EUROPEAN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Trenca Ioan

    2013-07-01

    Full Text Available One of the positive effects of the financial crises is the increasing concern of supervisors regarding the financial system’s stability. There is a need to strengthen the links between the different components of the financial system and the macroeconomic environment. Banking systems that have adequate capitalization and liquidity levels may withstand economic and financial shocks more easily. The purpose of this empirical study is to identify the main determinants of the banking system’s stability and soundness in the Central and Eastern European countries. We assess the impact of different macroeconomic variables on the quality of capital and liquidity conditions and examine the behaviour of these financial stability indicators by analyzing a sample of 10 banking systems during 2000-2011. The availability of banking capital signals the banking system’s resilience to shocks. The capital adequacy ratio is the main indicator used to assess banking fragility. One of the causes of the 2008-2009 financial crisis was the lack of liquidity in the banking system, which led to the collapse of several banking institutions and to macroeconomic imbalances. Given the importance of liquidity for the banking system, we propose several models in order to determine the macroeconomic variables that have a significant influence on the ratio of liquid reserves to total assets. We found evidence that GDP growth, inflation, domestic credit to the private sector, as well as the money and quasi-money aggregate indicator have a significant impact on banking stability. The empirical regressions confirm the high level of interdependence of the real sector with the financial-banking sector. They also demonstrate the necessity of effective macroprudential supervision at the country level, which enables the supervisory authorities to maintain adequate control over the macroprudential indicators and to take appropriate decisions at the right time.

  13. Saturated-liquid heat capacity of organic compounds: new empirical correlation model

    Directory of Open Access Journals (Sweden)

    DUSAN K. GROZDANIC

    2004-03-01

    Full Text Available A new saturated-liquid heat capacity model is recommended. This model is tested and compared with the known polynomial and quasi-polynomial models on 39 data sets comprising 1453 experimental heat capacity points. The obtained results indicate that the new model is better than the existing models, especially near the critical point.
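
The polynomial correlations such a model is compared against can be fitted by ordinary least squares; a generic sketch (not the recommended model itself, whose functional form is not given in this record):

```python
import numpy as np

def fit_cp_poly(T, cp, deg=3):
    """Fit a polynomial saturated-liquid heat-capacity correlation
    cp(T) = sum_i a_i * T^i by least squares. Returns the coefficients
    (lowest order first) and the RMS deviation of the fit."""
    T = np.asarray(T, float)
    cp = np.asarray(cp, float)
    coef = np.polynomial.polynomial.polyfit(T, cp, deg)
    resid = np.polynomial.polynomial.polyval(T, coef) - cp
    return coef, float(np.sqrt(np.mean(resid ** 2)))
```

A plain polynomial cannot reproduce the divergence of the heat capacity as the critical temperature is approached, which is exactly the region where the record reports the new model performing better.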

  14. EVOLUTION OF THEORIES AND EMPIRICAL MODELS OF A RELATIONSHIP BETWEEN ECONOMIC GROWTH, SCIENCE AND INNOVATIONS (PART I

    Directory of Open Access Journals (Sweden)

    Kaneva M. A.

    2017-12-01

    Full Text Available This article is the first chapter of an analytical review of existing theoretical models of the relationship between economic growth / GRP and indicators of scientific development and innovation activity, as well as empirical approaches to testing this relationship. The aim of the paper is to systematize existing approaches to modeling economic growth driven by science and innovation. The novelty of the current review lies in the authors’ criteria for the interconnectedness of theoretical and empirical studies in the systematization of a wide range of publications, presented in a final summary table. In the first part of the article the authors discuss the evolution of theoretical approaches, while the second chapter presents the time gap between theories and their empirical verification, caused by the level of development of quantitative instruments such as econometric models. The results of this study can be used by researchers and graduate students to become familiar with current scientific approaches that trace the progress from theory to empirical verification of the «economic growth-innovations» relationship, and to improve different types of models in spatial econometrics. To apply these models to management practice, the presented review could be supplemented with new criteria for the classification of knowledge production functions and other theories about the effect of science on economic growth.

  15. The development of empirical models to evaluate energy use and energy cost in wastewater collection

    Science.gov (United States)

    Young, David Morgan

    This research introduces a unique data analysis method and develops empirical models to evaluate energy use and energy cost in wastewater collection systems using operational variables. From these models, several Best Management Processes (BMPs) are identified that should benefit utilities and positively impact the operation of existing infrastructure as well as the design of new infrastructure. Further, the conclusions generated herein display high transferability to certain manufacturing processes, so it is anticipated that these findings will also benefit pumping applications outside the water sector. Wastewater treatment is often the single largest expense at the local government level. Not surprisingly, significant research effort has been expended on examining the energy used in wastewater treatment. However, the energy used in wastewater collection systems remains underexplored despite significant potential for energy savings. Estimates place potential energy savings as high as 60% within wastewater collection, which, if realized across the United States, equates to the energy used by nearly 125,000 American homes. Employing three years of data from Renewable Water Resources (ReWa), the largest wastewater utility in the Upstate of South Carolina, this study aims to develop useful empirical equations that will allow utilities to efficiently evaluate the energy use and energy cost of their wastewater collection systems. ReWa's participation was motivated, in part, by their recent adoption of the United States Environmental Protection Agency "Effective Utility Strategies", within which exists a focus on energy management. The study presented herein identifies two primary variables related to the energy use and cost associated with wastewater collection: Specific Energy (Es) and Specific Cost (Cs). These two variables were found to depend primarily on the volume pumped by the individual pump stations and exhibited similar power functions for the three year
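
A power-function relationship between specific energy and volume pumped, of the kind reported per pump station, can be fitted by linear regression in log-log space. This is a generic sketch; the study's actual coefficients and units are not reproduced here:

```python
import numpy as np

def fit_power_law(volume, specific_energy):
    """Fit a power function Es = a * V^b by linear regression of
    log(Es) on log(V). Returns the pair (a, b)."""
    logV = np.log(np.asarray(volume, float))
    logE = np.log(np.asarray(specific_energy, float))
    b, log_a = np.polyfit(logV, logE, 1)   # slope = exponent, intercept = log(a)
    return float(np.exp(log_a)), float(b)
```

A negative exponent b would express the economy of scale suggested by the study: stations pumping larger volumes use less energy per unit volume.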

  16. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$-sources), together with some remarks on beam transport.

  17. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    Science.gov (United States)

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2017-11-08

    The relative impact of non-exhaust sources (i.e. road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension generally has the highest impact on PM concentrations, but its spatio-temporal variability has rarely been studied and modeled. Some recent studies attempted to observe and describe the time-variability but, as it is driven by traffic and meteorology, uncertainty remains on the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin) to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zones). Moreover, we provide the first estimates of emission factors in the Po Valley in both summer and winter conditions. Our results showed a good inverse relationship between MF10 and macrotexture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect on road dust emissions, with higher emissions in summer, likely due to the lower pavement moisture. These results allowed us to build a simple empirical model predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effects, using methods accounting for weather and pavement moisture. This can significantly improve bottom-up emission inventories for the spatial allocation of emissions and air quality management, helping to select those roads with higher emissions

  18. Empirical Modeling of the Viscosity of Supercritical Carbon Dioxide Foam Fracturing Fluid under Different Downhole Conditions

    Directory of Open Access Journals (Sweden)

    Shehzad Ahmed

    2018-03-01

    Full Text Available High-quality supercritical CO2 (sCO2) foam as a fracturing fluid is considered ideal for fracturing shale gas reservoirs. The apparent viscosity of the fracturing fluid plays an important role and governs the efficiency of the fracturing process. In this study, the viscosity of sCO2 foam and its empirical correlations are presented as functions of temperature, pressure, and shear rate. A series of experiments was performed to investigate the effect of temperature, pressure, and shear rate on the apparent viscosity of sCO2 foam generated by a widely used mixed-surfactant system. An advanced high-pressure, high-temperature (HPHT) foam rheometer was used to measure the apparent viscosity of the foam over a wide range of reservoir temperatures (40–120 °C), pressures (1000–2500 psi), and shear rates (10–500 s−1). A well-known power-law model was modified to accommodate the individual and combined effects of temperature, pressure, and shear rate on the apparent viscosity of the foam. The flow indices of the power law were found to be functions of temperature, pressure, and shear rate. Nonlinear regression was also performed on the foam apparent-viscosity data to develop these correlations. The newly developed correlations provide an accurate prediction of the foam's apparent viscosity under different fracturing conditions. These correlations can be helpful for evaluating foam-fracturing efficiency by incorporating them into a fracturing simulator.
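    The abstract does not give the published correlations, so the sketch below only illustrates the general approach: a modified power-law viscosity model whose consistency index depends on temperature and pressure, fitted by nonlinear regression. The functional form, parameter values, and data are all hypothetical, not the paper's results.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def apparent_viscosity(X, k0, c1, c2, n):
        """Illustrative modified power law: mu = K(T, P) * shear**(n - 1),
        with a consistency index K that decays with temperature and grows
        with pressure. This functional form is an assumption, not the
        paper's published correlation."""
        T, P, shear = X
        K = k0 * np.exp(-c1 * T) * (1.0 + c2 * P)
        return K * shear ** (n - 1.0)

    rng = np.random.default_rng(0)
    T = rng.uniform(40, 120, 200)        # temperature, deg C
    P = rng.uniform(1000, 2500, 200)     # pressure, psi
    shear = rng.uniform(10, 500, 200)    # shear rate, 1/s

    # Synthetic "measurements" generated from assumed true parameters
    true = (5000.0, 0.01, 2e-4, 0.6)
    mu = apparent_viscosity((T, P, shear), *true)
    mu_noisy = mu * (1 + 0.02 * rng.standard_normal(200))

    popt, _ = curve_fit(apparent_viscosity, (T, P, shear), mu_noisy,
                        p0=(1000.0, 0.005, 1e-4, 0.8))
    print("fitted flow behaviour index n:", popt[-1])
    ```

    The same pattern (choose a physically motivated form, then fit it by nonlinear least squares) carries over to whatever correlation form the measured rheometer data supports.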

  19. Metabolic cost of neuronal information in an empirical stimulus-response model.

    Science.gov (United States)

    Kostal, Lubomir; Lansky, Petr; McDonnell, Mark D

    2013-06-01

    The limits on maximum information that can be transferred by single neurons may help us to understand how sensory and other information is being processed in the brain. According to the efficient-coding hypothesis (Barlow, Sensory Communication, MIT Press, Cambridge, 1961), neurons are adapted to the statistical properties of the signals to which they are exposed. In this paper we employ methods of information theory to calculate, both exactly (numerically) and approximately, the ultimate limits on reliable information transmission for an empirical neuronal model. We couple information transfer with the metabolic cost of neuronal activity and determine the optimal information-to-metabolic cost ratios. We find that the optimal input distribution is discrete with only six points of support, both with and without a metabolic constraint. However, we also find that many different input distributions achieve mutual information close to capacity, which implies that the precise structure of the capacity-achieving input is of lesser importance than the value of capacity.
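    The paper computes capacity limits for a specific neuronal stimulus-response model. As a generic illustration of how such capacity calculations are done numerically, the sketch below applies the standard Blahut-Arimoto iteration to a simple binary symmetric channel; the channel matrix is a textbook placeholder, not the paper's model.

    ```python
    import numpy as np

    def blahut_arimoto(W, iters=500):
        """Capacity (in bits) of a discrete memoryless channel with
        transition matrix W[x, y] = P(y | x), via the Blahut-Arimoto
        iteration. Generic textbook algorithm, not the paper's
        neuronal model."""
        n_in = W.shape[0]
        p = np.full(n_in, 1.0 / n_in)          # input distribution
        for _ in range(iters):
            q = p @ W                           # induced output distribution
            # relative entropy D(W(.|x) || q) for each input x, in nats
            d = np.sum(W * np.log(np.where(W > 0, W / q, 1.0)), axis=1)
            p = p * np.exp(d)                   # multiplicative update
            p /= p.sum()
        q = p @ W
        d = np.sum(W * np.log(np.where(W > 0, W / q, 1.0)), axis=1)
        return float(p @ d / np.log(2)), p

    # Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1)
    W = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    cap, p_opt = blahut_arimoto(W)
    print(round(cap, 4))   # ~0.531 bits per channel use
    ```

    Adding a metabolic cost, as the paper does, amounts to a constrained version of this maximization (capacity per unit cost), which can be handled with a Lagrange-multiplier term in the same iteration.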

  20. Matrimonios mixtos intraeuropeos: un modelo empírico (Intra-European intermarriage: an empirical model)

    Directory of Open Access Journals (Sweden)

    Alaminos Chica, Antonio Francisco

    2008-06-01

    Full Text Available The heterogeneity encountered when studying intercultural or mixed couples goes beyond differences in sociocultural origin; factors such as the role each individual adopts within the couple (for example, who contributes more economically), status, educational level, and the type of family (modern, traditional, etc.) also intervene. This article proposes an empirical model that shows the effect of a set of variables, expressing social circumstances, on the decision to form an interculturally mixed marriage, as well as the consequences for the individual's social life.

  1. New insight into motor adaptation to pain revealed by a combination of modelling and empirical approaches.

    Science.gov (United States)

    Hodges, P W; Coppieters, M W; MacDonald, D; Cholewicki, J

    2013-09-01

    Movement changes in pain. Unlike the somewhat stereotypical response of limb muscles to pain, trunk muscle responses are highly variable when challenged by pain in that region. This has led many to question the existence of a common underlying theory to explain the adaptation. Here, we tested the hypotheses that (1) adaptation in muscle activation in acute pain leads to enhanced spine stability, despite variation in the pattern of muscle activation changes; and (2) individuals would use a similar 'signature' pattern for tasks with different mechanical demands. In 17 healthy individuals, electromyography recordings were made from a broad array of anterior and posterior trunk muscles while participants moved slowly between trunk flexion and extension with and without experimentally induced back pain. Hypotheses were tested by estimating spine stability (Stability Index) with an electromyography-driven spine model and analysis of individual and overall (net) adaptations in muscle activation. The Stability Index increased during pain, although no two individuals used the same pattern of adaptation in muscle activity. For most, the adaptation was similar between movement directions despite opposite movement demands. These data provide the first empirical confirmation that, in most individuals, acute back pain leads to increased spinal stability and that the pattern of muscle activity is not stereotypical, but instead involves an individual-specific response to pain. This adaptation is likely to provide short-term benefit to enhance spinal protection, but could have long-term consequences for spinal health. © 2013 European Federation of International Association for the Study of Pain Chapters.

  2. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

    Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.

  3. Empirical tight-binding modeling of ordered and disordered semiconductor structures

    International Nuclear Information System (INIS)

    Mourad, Daniel

    2010-01-01

    In this thesis, we investigate the electronic and optical properties of pure as well as of substitutionally alloyed II-VI and III-V bulk semiconductors and corresponding semiconductor quantum dots by means of an empirical tight-binding (TB) model. In the case of alloyed systems of the type A$_x$B$_{1-x}$, where A and B are the pure compound semiconductor materials, we study the influence of the disorder by means of several extensions of the TB model with different levels of sophistication. Our methods range from rather simple mean-field approaches (virtual crystal approximation, VCA) over a dynamical mean-field approach (coherent potential approximation, CPA) up to calculations where substitutional disorder is incorporated on a finite ensemble of microscopically distinct configurations. In the first part of this thesis, we cover the necessary fundamentals in order to properly introduce the TB model of our choice, the effective bond-orbital model (EBOM). In this model, one s- and three p-orbitals per spin direction are localized on the sites of the underlying Bravais lattice. The matrix elements between these orbitals are treated as free parameters in order to reproduce the properties of one conduction and three valence bands per spin direction and can then be used in supercell calculations in order to model mixed bulk materials or pure as well as mixed quantum dots. Part II of this thesis deals with unalloyed systems. Here, we use the EBOM in combination with configuration interaction calculations for the investigation of the electronic and optical properties of truncated pyramidal GaN quantum dots embedded in AlN with an underlying zincblende structure. Furthermore, we develop a parametrization of the EBOM for materials with a wurtzite structure, which allows for a fit of one conduction and three valence bands per spin direction throughout the whole Brillouin zone of the hexagonal system. In Part III, we focus on the influence of alloying on the electronic and

  4. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. To discuss the applicability of existing validation techniques and to present a new method for quantifying the degrees of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit to empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
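    The abstract's core idea, a posterior distribution for the probability that model outcomes fall within a pre-established accuracy interval, can be sketched from probabilistic sensitivity analysis (PSA) output. The Beta-Binomial form, uniform prior, and all numbers below are illustrative assumptions, not the authors' exact method or data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical PSA: 1000 model runs of an outcome such as
    # "number of patients on dialysis or with end-stage renal disease".
    psa_outcomes = rng.normal(loc=105.0, scale=15.0, size=1000)

    observed = 100.0       # empirical value the model is validated against
    deviation = 0.25       # pre-established accuracy interval: +/- 25%
    lo, hi = observed * (1 - deviation), observed * (1 + deviation)

    # Each PSA run either falls inside the interval or not; with a
    # uniform Beta(1, 1) prior, the posterior for the probability that
    # a model outcome is "valid" is Beta(1 + k, 1 + n - k).
    k = int(np.sum((psa_outcomes >= lo) & (psa_outcomes <= hi)))
    n = psa_outcomes.size
    posterior = stats.beta(1 + k, 1 + n - k)

    print(f"expected validity: {posterior.mean():.2f}")
    ```

    This mirrors the reported behaviour that wider accuracy intervals yield higher expected validity, and it penalizes models whose PSA distributions are overly wide even when centred on the observed value.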

  5. An empirical model for trip distribution of commuters in the Netherlands: Transferability in time and space reconsidered.

    NARCIS (Netherlands)

    Thomas, Tom; Tutert, Bas

    2013-01-01

    In this paper, we evaluate the distribution of commute trips in The Netherlands, to assess its transferability in space and time. We used Dutch Travel Surveys from 1995 and 2004–2008 to estimate the empirical distribution from a spatial interaction model as a function of travel time and distance. We

  6. Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members

    NARCIS (Netherlands)

    Rusman, Ellen; Van Bruggen, Jan; Valcke, Martin

    2009-01-01

    Rusman, E., Van Bruggen, J., & Valcke, M. (2009). Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members. Paper presented at the Trust Workshop at the Eighth International Conference on Autonomous Agents and Multiagent Systems

  7. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS, following the approach by Chudinov et al (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules...

  8. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    Science.gov (United States)

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

    This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the proposition of Technology Acceptance…

  9. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

    In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...

  10. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

    Full Text Available This paper presents some experiences with modeling financial data by three classes of models as alternatives to Gaussian linear models: dynamic volatility, stable Lévy, and diffusion-with-jumps models. The techniques are illustrated with some examples of financial series on currencies, futures and indexes.

  11. A Hybrid Model Based on Ensemble Empirical Mode Decomposition and Fruit Fly Optimization Algorithm for Wind Speed Forecasting

    Directory of Open Access Journals (Sweden)

    Zongxi Qu

    2016-01-01

    Full Text Available As a type of clean and renewable energy, the superiority of wind power has increasingly captured the world's attention. Reliable and precise wind speed prediction is vital for wind power generation systems. Thus, a more effective and precise prediction model is essentially needed in the field of wind speed forecasting. Most previous forecasting models could adapt to various wind speed series data; however, these models ignored the importance of data preprocessing and model parameter optimization. In view of their importance, a novel hybrid ensemble learning paradigm is proposed. In this model, the original wind speed data is first divided into a finite set of signal components by ensemble empirical mode decomposition; then each signal is predicted by several artificial intelligence models whose parameters are optimized using the fruit fly optimization algorithm, and the final prediction values are obtained by reconstructing the refined series. To estimate the forecasting ability of the proposed model, 15-min wind speed data from wind farms in the coastal areas of China were used as a case study. The empirical results show that the proposed hybrid model is superior to some existing traditional forecasting models in terms of forecast performance.
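    The fruit fly optimization algorithm (FOA) used above for parameter tuning is simple enough to sketch. The version below is a simplified stand-in (it replaces the canonical smell-concentration step with direct fitness evaluation), and the sphere objective is a placeholder for the actual forecasting-error function.

    ```python
    import numpy as np

    def foa_minimize(f, dim=2, flies=30, iters=200, step=1.0, seed=0):
        """Minimal fruit-fly-style search: the swarm takes random steps
        around the current best location and relocates to whichever fly
        scores best. A simplified sketch of FOA, not the paper's exact
        implementation."""
        rng = np.random.default_rng(seed)
        best_x = rng.uniform(-5, 5, dim)       # initial swarm location
        best_f = f(best_x)
        for _ in range(iters):
            # each fly searches randomly around the swarm location
            candidates = best_x + step * rng.uniform(-1, 1, (flies, dim))
            values = np.apply_along_axis(f, 1, candidates)
            i = np.argmin(values)
            if values[i] < best_f:             # swarm flies to the best smell
                best_x, best_f = candidates[i], values[i]
        return best_x, best_f

    # Tune two hypothetical model parameters by minimizing a sphere
    # function standing in for the wind speed forecasting error.
    x, fx = foa_minimize(lambda p: float(np.sum(p ** 2)))
    print(fx)
    ```

    In the hybrid scheme described in the abstract, `f` would wrap the training and validation error of each component predictor on one EEMD signal component.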

  12. A global weighted mean temperature model based on empirical orthogonal function analysis

    Science.gov (United States)

    Li, Qinzheng; Chen, Peng; Sun, Langlang; Ma, Xiaping

    2018-03-01

    A global empirical orthogonal function (EOF) model of the tropospheric weighted mean temperature, called GEOFM_Tm, was developed using high-precision Global Geodetic Observing System (GGOS) Atmosphere Tm data for the years 2008-2014. Due to the quick convergence of the EOF decomposition, it is possible to use the first four EOF series, consisting of base functions Uk and associated coefficients Pk, to represent 99.99% of the overall variance of the original data sets and their spatial-temporal variations. Results show that U1 displays a prominent latitudinal profile with positive peaks located in the low-latitude region. U2 manifests an asymmetric pattern, with positive values above 30° in the Northern Hemisphere and negative values elsewhere. U3 and U4 display significant anomalies in Tibet and North America, respectively. Annual variation is the major component of the first and second associated coefficients P1 and P2, whereas P3 and P4 mainly reflect both annual and semi-annual variation components. Furthermore, the performance of the constructed GEOFM_Tm was validated by comparison with GTm_III and GTm_N using different kinds of data, including GGOS Atmosphere Tm data from 2015 and radiosonde data from the Integrated Global Radiosonde Archive (IGRA) from 2014. Generally speaking, GEOFM_Tm achieves the same accuracy and reliability as the GTm_III and GTm_N models on a global scale, and even improves on them in the Antarctic and Greenland regions. The MAE and RMS of GEOFM_Tm are 2.49 K and 3.14 K with respect to GGOS Tm data, respectively, and 3.38 K and 4.23 K with respect to IGRA sounding data, respectively. In addition, all three models have higher precision at low latitudes than at middle and high latitudes. The magnitude of Tm remains in the range of 220-300 K and is highly correlated with geographic latitude. In the Northern Hemisphere, there was a significant enhancement at high-latitude regions, reaching 270 K during summer
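    EOF analysis of the kind described reduces to a singular value decomposition of the anomaly field. The sketch below uses a synthetic field in place of the GGOS Atmosphere Tm data; the SVD's spatial modes play the role of the base functions Uk and the scaled time series that of the coefficients Pk.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical gridded Tm field: 84 monthly epochs x 500 grid points,
    # a stand-in for the 2008-2014 GGOS Atmosphere data (values invented).
    t = np.arange(84)
    annual = np.sin(2 * np.pi * t / 12)            # dominant annual cycle
    grid_pattern = rng.standard_normal(500)        # one spatial pattern
    field = np.outer(annual, grid_pattern) + 0.1 * rng.standard_normal((84, 500))

    # EOF analysis = SVD of the anomaly matrix: rows of Vt are the spatial
    # base functions (the Uk of the abstract), and the columns of U scaled
    # by S give the associated time coefficients (the Pk).
    anom = field - field.mean(axis=0)
    U, S, Vt = np.linalg.svd(anom, full_matrices=False)
    explained = S**2 / np.sum(S**2)

    print("variance captured by first 4 modes:", explained[:4].sum())
    ```

    With a strongly seasonal field like Tm, a handful of modes captures nearly all of the variance, which is why the model can truncate the expansion after four EOF series.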

  13. SWIFT: Semi-empirical and numerically efficient stratospheric ozone chemistry for global climate models

    OpenAIRE

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2015-01-01

    The SWIFT model is a fast yet accurate chemistry scheme for calculating the chemistry of stratospheric ozone. It is mainly intended for use in Global Climate Models (GCMs), Chemistry Climate Models (CCMs) and Earth System Models (ESMs). For computing time reasons these models often do not employ full stratospheric chemistry modules, but use prescribed ozone instead. This can lead to an insufficient representation of the interaction between stratosphere and troposphere. The SWIFT stratospheric ozone chem...

  14. Semi-empirical model for the assessment of railway ballast using GPR

    Science.gov (United States)

    Giulia Brancadoro, Maria; Benedetto, Andrea

    2017-04-01

    Over time, railways have become a very competitive means of transportation, especially for long distances. In order to ensure a high level of safety, comfort and regularity of transportation, efficient maintenance of the railway track-bed is crucial. In fact, the cyclic loads passing on the rails produce a progressive deterioration of the railway ballast beneath the sleepers, and a breakdown of its particles that causes a general decrease in railway performance. This work aims at proposing a semi-empirical model for the characterisation of railway ballast grading, through the spectral analysis of the Ground-Penetrating Radar (GPR) signal. To this effect, a theoretical study was first conducted to investigate the propagation and scattering phenomena of the electromagnetic waves within a ballast layer. To confirm the theoretical assumptions, high-frequency GPR signals have been collected both in the laboratory and in a virtual environment. Concerning the latter, progressively more complex numerical domains have been designed and subjected to synthetic GPR tests, by a Finite-Difference Time-Domain (FDTD) procedure run in the gprMax 2D simulator. In the first simulation steps, ballast aggregates simplified as circles were accounted for, with increasing values of radius. Subsequently, real-scale scenarios characterized by multi-size ballast particles, consistent with three different grain size distributions from railway network standards, were reproduced by employing the Random Sequential Adsorption (RSA) algorithm. As for the laboratory procedures, real GPR tests have been carried out on an experimental framework purposely set up, composed of a methacrylate tank filled with limestone-derived railway ballast. The ballast aggregate grading has been retrieved by means of an automatic image analysis algorithm run on the lateral side of the transparent tank. Through their spectral analysis, an empirical relationship between the position of the amplitude

  15. Comparison of empirical models with intensively observed data for prediction of salt intrusion in the Sumjin River estuary, Korea

    Directory of Open Access Journals (Sweden)

    D. C. Shaha

    2009-06-01

    Full Text Available Performance of empirical models has been compared with extensively observed data to determine the most suitable model for prediction of salt intrusion in the Sumjin River estuary, Korea. Intensive measurements of salt intrusion were taken at high and low waters during both spring and neap tides in each season from August 2004 to April 2007. The stratification parameter varied with the distance along the estuary, tidal period and freshwater discharge, indicating that the Sumjin River estuary experiences a transition from partially- or well-mixed during spring tide to stratified during neap tide. The salt intrusion length at high water varied from 13.4 km in summer 2005 to 25.6 km in autumn 2006. The salt intrusion depends mostly on the freshwater discharge rather than the spring-neap tidal oscillation. Analysis of three years of observed salinity data indicates that the salt intrusion length in the Sumjin River estuary is proportional to the river discharge to the −1/5 power. Four empirical models have been applied to the Sumjin River estuary to explore the most suitable model for prediction of the salt intrusion length. Comparative results show that the Nguyen and Savenije (2006) model, developed for both partially- and well-mixed estuaries, performs best of all models studied (relative error of 4.6%). The model was also applied under stratified neap tide conditions, with a relative error of 5.2%, implying applicability of this model under stratified conditions as well.
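    The reported scaling of salt intrusion length with river discharge to the −1/5 power can be recovered from data by linear regression in log-log space; the discharge values and noise below are illustrative stand-ins, not the Sumjin River observations.

    ```python
    import numpy as np

    # Synthetic observations following L = a * Q**(-1/5), with 3%
    # multiplicative noise (all values invented for illustration).
    rng = np.random.default_rng(3)
    Q = np.array([20.0, 50.0, 120.0, 300.0, 700.0])               # discharge, m^3/s
    L = 40.0 * Q ** (-0.2) * (1 + 0.03 * rng.standard_normal(5))  # intrusion length, km

    # A power law is linear in log-log space: log L = log a + b * log Q,
    # so the exponent b is the slope of an ordinary least-squares fit.
    slope, intercept = np.polyfit(np.log(Q), np.log(L), 1)
    print(f"fitted exponent: {slope:.2f}")
    ```

    The same log-log regression applied to measured intrusion lengths and discharges is how an empirical exponent like −1/5 would be estimated and checked.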

  16. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

    Empirical melt (EM) models are often preferred to surface energy balance (SEB) models to calculate melt amounts of snow and ice in hydrological modelling of high-elevation catchments. The most common reasons to support this decision are that, in comparison to SEB models, EM models require lower levels of meteorological data, complexity and computational cost. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot be easily reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model at dry high-elevation sites where sublimation amounts - which are not explicitly accounted for by the EM model - represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWS), collected during the ablation season 2013-14 at elevations between 3466 and 4775 m asl on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied to each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we

  17. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2018-03-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability and a new promising formulation is proposed for scaled heights of burst ranging from 24.6-172.9 cm/kg^{1/3}.

  19. Empirical modeling of single-wake advection and expansion using full-scale pulsed lidar-based measurements

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels

    2015-01-01

    In the present paper, single-wake dynamics have been studied both experimentally and numerically. The use of pulsed lidar measurements allows for validation of basic dynamic wake meandering modeling assumptions. Wake center tracking is used to estimate the wake advection velocity experimentally and to obtain an estimate of the wake expansion in a fixed frame of reference. A comparison shows good agreement between the measured average expansion and the Computational Fluid Dynamics (CFD) large eddy simulation–actuator line computations. Frandsen’s expansion model seems to predict the wake expansion fairly well in the far wake but lacks accuracy in the outer region of the near wake. An empirical relationship, relating maximum wake induction and wake advection velocity, is derived and linked to the characteristics of a spherical vortex structure. Furthermore, a new empirical model for single...

  20. Circumplex models for the similarity relationships between higher-order factors of personality and personality disorders: an empirical analysis.

    Science.gov (United States)

    Pukrop, R; Sass, H; Steinmeyer, E M

    2000-01-01

    Similarity relationships between personality factors and personality disorders (PDs) are usually described within the conceptual framework of the "big five" model. Recently, two-dimensional circumplex models have been suggested as alternatives, such as the interpersonal circle, the multifacet circumplex, and the circumplex of premorbid personality types. The present study is an empirical investigation of the similarity relationships between the big five, the 11 DSM-III-R personality disorders and four subaffective disorders. This was performed in a sample of 165 psychiatric inpatients. We tested the extent to which the relationships could be adequately represented in two dimensions and which circumplex model can be supported by the empirical configuration. Results obtained by principal-components analysis (PCA) strongly confirm the circumplex of premorbid personality, and to some extent the multifacet circumplex. However, the interpersonal circle cannot be confirmed.

  1. An empirically grounded agent based model for modeling directs, conflict detection and resolution operations in air traffic management.

    Science.gov (United States)

    Bongiorno, Christian; Miccichè, Salvatore; Mantegna, Rosario N

    2017-01-01

    We present an agent based model of the Air Traffic Management socio-technical complex system aiming at modeling the interactions between aircraft and air traffic controllers at a tactical level. The core of the model is given by the conflict detection and resolution module and by the directs module. Directs are flight shortcuts that are given by air controllers to speed up the passage of an aircraft within a certain airspace and therefore to facilitate airline operations. Conflicts between flight trajectories can occur for two main reasons: either the planning of the flight trajectory was not sufficiently detailed to rule out all potential conflicts or unforeseen events during the flight require modifications of the flight plan that can conflict with other flight trajectories. Our model performs a local conflict detection and resolution procedure. Once a flight trajectory has been made conflict-free, the model searches for possible improvements of the system efficiency by issuing directs. We give an example of model calibration based on real data. We then provide an illustration of the capability of our model in generating scenario simulations able to give insights about the air traffic management system. We show that the calibrated model is able to reproduce the existence of a geographical localization of air traffic controllers' operations. Finally, we use the model to investigate the relationship between directs and conflict resolutions (i) in the presence of perfect forecast ability of controllers, and (ii) in the presence of some degree of uncertainty in flight trajectory forecast.

  2. An empirically grounded agent based model for modeling directs, conflict detection and resolution operations in air traffic management.

    Directory of Open Access Journals (Sweden)

    Christian Bongiorno

    Full Text Available We present an agent based model of the Air Traffic Management socio-technical complex system aiming at modeling the interactions between aircraft and air traffic controllers at a tactical level. The core of the model is given by the conflict detection and resolution module and by the directs module. Directs are flight shortcuts that are given by air controllers to speed up the passage of an aircraft within a certain airspace and therefore to facilitate airline operations. Conflicts between flight trajectories can occur for two main reasons: either the planning of the flight trajectory was not sufficiently detailed to rule out all potential conflicts or unforeseen events during the flight require modifications of the flight plan that can conflict with other flight trajectories. Our model performs a local conflict detection and resolution procedure. Once a flight trajectory has been made conflict-free, the model searches for possible improvements of the system efficiency by issuing directs. We give an example of model calibration based on real data. We then provide an illustration of the capability of our model in generating scenario simulations able to give insights about the air traffic management system. We show that the calibrated model is able to reproduce the existence of a geographical localization of air traffic controllers' operations. Finally, we use the model to investigate the relationship between directs and conflict resolutions (i) in the presence of perfect forecast ability of controllers, and (ii) in the presence of some degree of uncertainty in flight trajectory forecast.

  3. The Relative Effectiveness of Empirical and Physical Models for Simulating the Dense Undercurrent of Pyroclastic Flows under Different Emplacement Conditions

    Directory of Open Access Journals (Sweden)

    Sarah E. Ogburn

    2017-11-01

    Full Text Available High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but

  4. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  5. An empirical model for predicting urban roadside nitrogen dioxide concentrations in the UK

    International Nuclear Information System (INIS)

    Stedman, J.R.; Goodwin, J.W.L.; King, K.; Murrells, T.P.; Bush, T.J.

    2001-01-01

    An annual mean concentration of 40 μg m⁻³ has been proposed as a limit value within the European Union Air Quality Directives and as a provisional objective within the UK National Air Quality Strategy, for 2010 and 2005, respectively. Emissions reduction measures resulting from current national and international policies are likely to deliver significant reductions in emissions of oxides of nitrogen from road traffic in the near future. It is likely that there will still be exceedances of this target value in 2005 and in 2009 if national measures are considered in isolation, particularly at the roadside. It is envisaged that this 'policy gap' will be addressed by implementing local air quality management to reduce concentrations in locations that are at risk of exceeding the objective. Maps of estimated annual mean NO2 concentrations in both urban background and roadside locations are a valuable resource for the development of UK air quality policy and for the identification of locations at which local air quality management measures may be required. Maps of annual mean NO2 concentrations at both background and roadside locations for 1998 have been calculated using modelling methods, which make use of four mathematically straightforward, empirically derived linear relationships. Maps of projected concentrations in 2005 and 2009 have also been calculated using an illustrative emissions scenario. For this emissions scenario, annual mean urban background NO2 concentrations in 2005 are likely to be below 40 μg m⁻³ in all areas except inner London, where current national and international policies are expected to lead to concentrations in the range 40-41 μg m⁻³. Reductions in NOx emissions between 2005 and 2009 are expected to reduce background concentrations to the extent that our modelling results indicate that 40 μg m⁻³ is unlikely to be exceeded in background locations by 2009. Roadside NO2 concentrations in urban areas in 2005 and 2009 are expected to be
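
A minimal sketch of the kind of empirically derived linear relationship the mapping method above relies on. The roadside-increment form and the slope value here are illustrative assumptions, not the study's fitted relationships:

```python
def roadside_no2(background_no2, road_nox_increment, slope=0.4):
    """Hypothetical linear roadside relationship: annual mean roadside NO2
    (ug/m3) modelled as the urban background concentration plus a fixed
    fraction of the local road-traffic NOx increment.

    The slope is an assumed value for illustration only."""
    return background_no2 + slope * road_nox_increment

# A 40 ug/m3 NOx road increment on a 30 ug/m3 background:
print(roadside_no2(30.0, 40.0))  # 46.0
```

A relationship of this shape makes it easy to screen mapped locations against the 40 μg m⁻³ objective once background and traffic-increment maps are available.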

  6. Empirical probability model of cold plasma environment in the Jovian magnetosphere

    Science.gov (United States)

    Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete

    2015-04-01

    We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere, 2011 (BD11). These models are scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high-latitude orbit of the JUICE spacecraft.

  7. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    Science.gov (United States)

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
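
The hurdle structure named above can be illustrated with a short sketch: a separate probability mass at zero plus a zero-truncated Poisson for the positive counts. Parameter values are illustrative, not fitted to the suspension data:

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events under a Poisson(lam) model."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def hurdle_pmf(k, pi0, lam):
    """Poisson hurdle model: zeros occur with probability pi0, and positive
    counts follow a zero-truncated Poisson(lam)."""
    if k == 0:
        return pi0
    return (1.0 - pi0) * poisson_pmf(k, lam) / (1.0 - math.exp(-lam))

# With excess zeros (pi0 = 0.6) the hurdle model puts far more mass at zero
# than a plain Poisson with the same rate:
print(round(hurdle_pmf(0, 0.6, 2.0), 3))   # 0.6
print(round(poisson_pmf(0, 2.0), 3))       # 0.135
```

Comparing fits like these (e.g. by AIC) is one standard way to decide whether zero-inflation or overdispersion terms are warranted.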

  8. Empirical validation of landscape resistance models: insights from the Greater Sage-Grouse (Centrocercus urophasianus)

    Science.gov (United States)

    Andrew J. Shirk; Michael A. Schroeder; Leslie A. Robb; Samuel A. Cushman

    2015-01-01

    The ability of landscapes to impede species’ movement or gene flow may be quantified by resistance models. Few studies have assessed the performance of resistance models parameterized by expert opinion. In addition, resistance models differ in terms of spatial and thematic resolution as well as their focus on the ecology of a particular species or more generally on the...

  9. The EZ diffusion model provides a powerful test of simple empirical effects

    NARCIS (Netherlands)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59–108, 1978), which has been shown to account for data

  10. Prediction of Human Glomerular Filtration Rate from Preterm Neonates to Adults: Evaluation of Predictive Performance of Several Empirical Models.

    Science.gov (United States)

    Mahmood, Iftekhar; Staschen, Carl-Michael

    2016-03-01

    The objective of this study was to evaluate the predictive performance of several allometric empirical models (body weight dependent, age dependent, fixed exponent 0.75, a data-dependent single exponent, and maturation models) to predict glomerular filtration rate (GFR) in preterm and term neonates, infants, children, and adults without any renal disease. In this analysis, the models were developed from GFR data obtained from inulin clearance (preterm neonates to adults; n = 93) and the predictive performance of these models was evaluated in 335 subjects (preterm neonates to adults). The primary end point was the prediction of GFR from the empirical allometric models and the comparison of the predicted GFR with the measured GFR. A prediction error within ±30% was considered acceptable. Overall, the predictive performance of the four models (the body weight dependent [BDE] and age dependent [ADE] exponent models and the two maturation models) for the prediction of mean GFR was good across all age groups, but the prediction of GFR in individual healthy subjects, especially in neonates and infants, was erratic and may be clinically unacceptable.
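
As an illustration of the fixed-exponent 0.75 allometric approach evaluated above, a minimal sketch. The reference adult GFR and weight are assumed textbook-style values, not the study's fitted parameters:

```python
def allometric_gfr(weight_kg, adult_gfr=120.0, adult_weight=70.0, exponent=0.75):
    """Fixed-exponent allometric scaling of GFR (mL/min) from a reference
    adult value: GFR = GFR_adult * (W / W_adult) ** 0.75.

    Reference values are illustrative assumptions."""
    return adult_gfr * (weight_kg / adult_weight) ** exponent

def within_30_percent(predicted, observed):
    """The acceptability criterion used above: prediction error within +/-30%."""
    return abs(predicted - observed) / observed <= 0.30

print(round(allometric_gfr(70.0), 1))  # 120.0
print(round(allometric_gfr(10.0), 1))  # 27.9
```

The per-subject `within_30_percent` check mirrors the end point of the evaluation: a model can predict mean GFR well while failing this criterion for many individual neonates.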

  11. An Empirical Study of Propagation Models for Wireless Communications in Open-pit Mines

    DEFF Research Database (Denmark)

    Portela Lopes de Almeida, Erika; Caldwell, George; Rodriguez Larrad, Ignacio

    2018-01-01

    In this paper, we investigate the suitability of the ITU-R 526, Okumura Hata, and COST Hata propagation models and the Standard Propagation Model (SPM) to predict the path loss in open-pit mines. The models are evaluated by comparing the predicted data with measurements obtained in two operational iron-ore mining complexes in Brazil. Additionally, a simple deterministic model, based on the inclusion of an effective antenna height term in the ITU-R 526 model, is proposed and compared to the other methods. The results show that the proposed model yields root-mean-square error values between 5.5 dB and 9.2 dB, and that it is capable of providing a close approximation of the best predictions (i.e. those with the lowest root-mean-squared error) as provided by the SPM. The proposed model, however, reduces the calibration complexity considerably.

  12. Efficiency test of modeled empirical equations in predicting soil loss from ephemeral gully erosion around Mubi, Northeast Nigeria

    Directory of Open Access Journals (Sweden)

    Ijasini John Tekwa

    2016-03-01

    Full Text Available A field study was carried out to assess soil loss from ephemeral gully (EG) erosion at 6 different locations (Digil, Vimtim, Muvur, Gella, Lamorde and Madanya) around the Mubi area between April, 2008 and October, 2009. Each location consisted of 3 watershed sites from where data were collected. EG shape, land use, and conservation practices were noted, while EG length, width, and depth were measured. Physico-chemical properties of the soils were studied in the field and laboratory. Soil loss was both measured and predicted using modeled empirical equations. Results showed that the soils are heterogeneous and lie on flat to hilly topographies with few grasses, shrubs and tree vegetations. The soils comprised sand fractions that predominated the texture, with considerable silt and clay contents. The empirical soil loss was generally related to the measured soil loss and the predictions were widely reliable at all sites, regardless of season. The measured and empirical aggregate soil losses were more closely related in terms of volume of soil loss (VSL) (r2 = 0.93) and mass of soil loss (MSL) (r2 = 0.92) than area of soil loss (ASL) (r2 = 0.27). The empirical estimates of VSL and MSL were consistently higher at Muvur (less vegetation) and lower at Madanya and Gella (denser vegetation) in both years. The maximum efficiency (Mse) of the empirical equation in predicting ASL was between 1.41 (Digil) and 89.07 (Lamorde), while the Mse was higher at Madanya (2.56) and lowest at Vimtim (15.66) in terms of VSL prediction efficiencies. The Mse also ranged from 1.84 (Madanya) to 15.74 (Vimtim) in respect of MSL predictions. These results led to the recommendation that soil conservationists, farmers, and private and/or government agencies should implement the empirical model in erosion studies around the Mubi area.
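
The agreement between measured and empirically predicted soil loss reported above (the r2 values) is a coefficient of determination; a stdlib-only sketch with made-up example numbers:

```python
def r_squared(measured, predicted):
    """Coefficient of determination between measured values and model
    predictions: 1 - SS_residual / SS_total."""
    n = len(measured)
    mean_m = sum(measured) / n
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return 1.0 - ss_res / ss_tot

# Illustrative measured vs. predicted soil-loss values (not the study's data):
measured = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.9]
print(round(r_squared(measured, predicted), 3))  # 0.986
```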

  13. Optimizing irrigation and nitrogen for wheat through empirical modeling under semi-arid environment.

    Science.gov (United States)

    Saeed, Umer; Wajid, Syed Aftab; Khaliq, Tasneem; Zahir, Zahir Ahmad

    2017-04-01

    Nitrogen fertilizer availability to plants is strongly linked with water availability. Excessive or insufficient use of nitrogen can cause reduction in grain yield of wheat and environmental issues. The per capita per annum water availability in Pakistan has reduced to less than 1000 m³ and is expected to reach 800 m³ by 2025. Irrigating crops with 3 or more inches of depth without measuring the volume of water is not a feasible option anymore. Water productivity and economic return of grain yield can be improved by efficient management of water and nitrogen fertilizer. A study was conducted at the post-graduate agricultural research station, University of Agriculture Faisalabad, during 2012-2013 and 2013-2014 to optimize the volume of water per irrigation and nitrogen application. A split-plot design with three replications was used; four irrigation levels (I300 = 300 mm, I240 = 240 mm, I180 = 180 mm, and I120 = 120 mm for the whole growing season, applied at critical growth stages) and four nitrogen levels (N60 = 60 kg ha⁻¹, N120 = 120 kg ha⁻¹, N180 = 180 kg ha⁻¹, and N240 = 240 kg ha⁻¹) were randomized as main- and sub-plot factors, respectively. The recorded grain yield data were used to develop empirical regression models. The results based on quadratic equations and economic analysis showed 164, 162, 158, and 107 kg ha⁻¹ nitrogen as the economic optimum with I300, I240, I180, and I120, respectively, during 2012-2013. During 2013-2014, quadratic equations and economic analysis showed 165, 162, 161, and 117 kg ha⁻¹ nitrogen as the economic optimum with I300, I240, I180, and I120, respectively. The optimum irrigation level was obtained by fitting the economic optimum nitrogen as a function of total water. The equations predicted 253 mm as the optimum irrigation water for the whole growing season during 2012-2013 and 256 mm as the optimum for 2013-2014. The results also revealed that
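
The economic-optimum calculation described above follows from a quadratic yield response, Y = a + b·N + c·N² with c < 0: setting the marginal yield dY/dN equal to the nitrogen-to-grain price ratio gives a closed form for the optimum rate. A sketch with illustrative coefficients (not the study's fitted values):

```python
def economic_optimum_n(b, c, price_n, price_grain):
    """Economic optimum N rate for a quadratic yield response
    Y = a + b*N + c*N**2 (c < 0): solve b + 2*c*N = price_n / price_grain.

    b, c and the prices are illustrative assumptions."""
    price_ratio = price_n / price_grain
    return (price_ratio - b) / (2.0 * c)

n_opt = economic_optimum_n(b=25.0, c=-0.07, price_n=2.0, price_grain=0.25)
print(round(n_opt, 1))  # 121.4 kg/ha
```

Repeating this calculation for each irrigation level, then fitting the resulting optima against total water, is the two-step structure the abstract describes.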

  14. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data

    Directory of Open Access Journals (Sweden)

    Graeme Shannon

    2014-08-01

    Full Text Available Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data, explicitly recognizing that, given a species occupies an area, the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km² of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of the sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10-120 cameras) and occasions (20-120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases the error associated with the occupancy estimate, but changing the number of sites or the sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common
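
The simulation approach described above can be sketched by generating detection histories under known occupancy (ψ) and detection (p) parameters; the naive estimate (fraction of sites with at least one detection) then illustrates the detectability issue the abstract refers to. All parameter values here are illustrative:

```python
import random

def simulate_detection_histories(n_sites, n_occasions, psi, p, seed=1):
    """Simulate camera-trap detection/non-detection data: each site is
    occupied with probability psi; an occupied site yields a detection on
    each occasion with probability p."""
    rng = random.Random(seed)
    histories = []
    for _ in range(n_sites):
        occupied = rng.random() < psi
        histories.append([1 if (occupied and rng.random() < p) else 0
                          for _ in range(n_occasions)])
    return histories

def naive_occupancy(histories):
    """Fraction of sites with >= 1 detection (biased low whenever p < 1)."""
    return sum(1 for h in histories if any(h)) / len(histories)

# 120 cameras, 20 survey days, a rare and hard-to-detect species:
hist = simulate_detection_histories(n_sites=120, n_occasions=20, psi=0.4, p=0.1)
# Cumulative detectability over 20 occasions is 1 - (1 - 0.1)**20, about 0.88,
# so the naive estimate recovers most, but not all, of psi = 0.4.
print(round(naive_occupancy(hist), 2))
```

Repeating such simulations over a grid of site counts and occasion counts is one way to reproduce the survey-design comparison the study performs.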

  15. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Theoretical Insight Into the Empirical Tortuosity-Connectivity Factor in the Burdine-Brooks-Corey Water Relative Permeability Model

    Science.gov (United States)

    Ghanbarian, Behzad; Ioannidis, Marios A.; Hunt, Allen G.

    2017-12-01

    A model commonly applied to the estimation of water relative permeability krw in porous media is the Burdine-Brooks-Corey model, which relies on a simplified picture of pores as a bundle of noninterconnected capillary tubes. In this model, the empirical tortuosity-connectivity factor is assumed to be a power law function of effective saturation with an exponent (μ) commonly set equal to 2 in the literature. Invoking critical path analysis and using percolation theory, we relate the tortuosity-connectivity exponent μ to the critical scaling exponent t of percolation that characterizes the power law behavior of the saturation-dependent electrical conductivity of porous media. We also discuss the cause of the nonuniversality of μ in terms of the nonuniversality of t and compare model estimations with water relative permeability from experiments. The comparison supports determining μ from the electrical conductivity scaling exponent t, but also highlights limitations of the model.
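
The Burdine-Brooks-Corey relation discussed above, with the tortuosity-connectivity factor written as a power law Se^μ of effective saturation, can be sketched as follows (the pore-size index λ is an illustrative value):

```python
def krw_burdine_brooks_corey(se, lam, mu=2.0):
    """Burdine-Brooks-Corey water relative permeability:
    krw = Se**mu * Se**((2 + lam) / lam),
    where Se is effective saturation, mu is the tortuosity-connectivity
    exponent (classically set to 2), and lam is the Brooks-Corey
    pore-size distribution index."""
    return se ** mu * se ** ((2.0 + lam) / lam)

# krw = 1 at full saturation and drops steeply as Se decreases:
print(krw_burdine_brooks_corey(1.0, lam=2.0))            # 1.0
print(round(krw_burdine_brooks_corey(0.5, lam=2.0), 4))  # 0.0625
```

Replacing the fixed μ = 2 with a value derived from the electrical conductivity scaling exponent t, as the paper proposes, only changes the `mu` argument in this form.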

  17. An empirical model of L-band scintillation S4 index constructed by using FORMOSAT-3/COSMIC data

    Science.gov (United States)

    Chen, Shih-Ping; Bilitza, Dieter; Liu, Jann-Yenq; Caton, Ronald; Chang, Loren C.; Yeh, Wen-Hao

    2017-09-01

    Modern society relies heavily on the Global Navigation Satellite System (GNSS) technology for applications such as satellite communication, navigation, and positioning on the ground and/or aviation in the troposphere/stratosphere. However, ionospheric scintillations can severely impact GNSS systems and their related applications. In this study, a global empirical ionospheric scintillation model is constructed with S4-index data obtained by the FORMOSAT-3/COSMIC (F3/C) satellites during 2007-2014 (hereafter referred to as the F3CGS4 model). This model describes the S4-index as a function of local time, day of year, dip-latitude, and solar activity using the index PF10.7. The model reproduces the F3/C S4-index observations well, and yields good agreement with ground-based reception of satellite signals. This confirms that the constructed model can be used to forecast global L-band scintillations on the ground and in the near surface atmosphere.

  18. Combining empirical and theory-based land-use modelling approaches to assess economic potential of biofuel production avoiding iLUC: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, F.; van Eijck, J.; Verstegen, J.A.; Hilbert, J.; Carballo, S.; Volante, J.; Faaij, A.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  19. A Web Application For Visualizing Empirical Models of the Space-Atmosphere Interface Region: AtModWeb

    Science.gov (United States)

    Knipp, D.; Kilcommons, L. M.; Damas, M. C.

    2015-12-01

    We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual species and total densities of neutral and ionized upper-atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3) How does the composition of the SAIR vary between day and night at a fixed altitude?

  20. Testing seasonal and long-term controls of streamwater DOC using empirical and process-based models.

    Science.gov (United States)

    Futter, Martyn N; de Wit, Heleen A

    2008-12-15

    Concentrations of dissolved organic carbon (DOC) in surface waters are increasing across Europe and parts of North America. Several mechanisms have been proposed to explain these increases including reductions in acid deposition, change in frequency of winter storms and changes in temperature and precipitation patterns. We used two modelling approaches to identify the mechanisms responsible for changing surface water DOC concentrations. Empirical regression analysis and INCA-C, a process-based model of stream water DOC, were used to simulate long-term (1986-2003) patterns in stream water DOC concentrations in a small boreal stream. Both modelling approaches successfully simulated seasonal and inter-annual patterns in DOC concentration. In both models, seasonal patterns of DOC concentration were controlled by hydrology and inter-annual patterns were explained by climatic variation. There was a non-linear relationship between warmer summer temperatures and INCA-C predicted DOC. Only the empirical model was able to satisfactorily simulate the observed long-term increase in DOC. The observed long-term trends in DOC are likely to be driven by in-soil processes controlled by SO₄²⁻ and Cl⁻ deposition, and to a lesser extent by temperature-controlled processes. Given the projected changes in climate and deposition, future modelling and experimental research should focus on the possible effects of soil temperature and moisture on organic carbon production, sorption and desorption rates, and chemical controls on organic matter solubility.

  1. An empirical model of optical and radiative characteristics of the tropospheric aerosol over West Siberia in summer

    Directory of Open Access Journals (Sweden)

    M. V. Panchenko

    2012-07-01

    Full Text Available An empirical model of the vertical profiles of aerosol optical characteristics is described. This model was developed based on data acquired from multi-year airborne sensing of optical and microphysical characteristics of the tropospheric aerosol over West Siberia. The main initial characteristics for the creation of the model were measurement data of the vertical profiles of the aerosol angular scattering coefficients in the visible wavelength range, particle size distribution functions and mass concentrations of black carbon (BC. The proposed model allows us to retrieve the aerosol optical and radiative characteristics in the visible and near-IR wavelength range, using the season, air mass type and time of day as input parameters. The columnar single scattering albedo and asymmetry factor of the aerosol scattering phase function, calculated using the average vertical profiles, are in good agreement with data from the AERONET station located in Tomsk.

    For solar radiative flux calculations, this empirical model has been tested for typical summer conditions. The available experimental database obtained for the regional features of West Siberia and the model developed on this basis are shown to be sufficient for performing these calculations.

  2. An empirical model for the statistics of sea surface diurnal warming

    Directory of Open Access Journals (Sweden)

    M. J. Filipiak

    2012-03-01

Full Text Available A statistical model is derived relating the diurnal variation of sea surface temperature (SST) to the net surface heat flux and surface wind speed from a numerical weather prediction (NWP) model. The model is derived using fluxes and winds from the European Centre for Medium-Range Weather Forecasts (ECMWF) NWP model and SSTs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). In the model, diurnal warming has a linear dependence on the net surface heat flux integrated since (approximately) dawn and an inverse quadratic dependence on the maximum of the surface wind speed in the same period. The model coefficients are found by matching, for a given integrated heat flux, the frequency distributions of the maximum wind speed and the observed warming. Diurnal cooling, where it occurs, is modelled as proportional to the integrated heat flux divided by the heat capacity of the seasonal mixed layer. The model reproduces the statistics (mean, standard deviation, and 95th percentile) of the diurnal variation of SST seen by SEVIRI and reproduces the geographical pattern of mean warming seen by the Advanced Microwave Scanning Radiometer (AMSR-E). We use the functional dependencies in the statistical model to test the behaviour of two physical models of diurnal warming that display contrasting systematic errors.
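The functional form described in the abstract can be sketched as follows. The coefficient `a`, the wind-speed floor, and the mixed-layer depth below are illustrative placeholders, not the fitted ECMWF/SEVIRI values from the paper:

```python
def diurnal_sst_change(q_int, u_max, mixed_layer_depth=20.0,
                       a=1.0e-7, u_floor=1.0):
    """Estimate the diurnal SST change (K).

    q_int : net surface heat flux integrated since ~dawn (J m^-2)
    u_max : maximum surface wind speed over the same period (m s^-1)
    mixed_layer_depth : seasonal mixed-layer depth (m), used for cooling
    """
    rho, cp = 1025.0, 3990.0  # seawater density (kg m^-3) and heat capacity (J kg^-1 K^-1)
    if q_int >= 0.0:
        # Warming: linear in the integrated flux, inverse quadratic in wind.
        return a * q_int / max(u_max, u_floor) ** 2
    # Cooling: integrated flux divided by the mixed-layer heat capacity.
    return q_int / (rho * cp * mixed_layer_depth)
```

With these placeholder values, stronger winds suppress warming while net heat loss always yields (small) cooling, matching the qualitative behaviour the abstract describes.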

  3. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership

    Directory of Open Access Journals (Sweden)

    Kerry Liu

    2017-06-01

Full Text Available Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis uses hand-collected data from a wide range of sources. OLS and 2SLS regression analyses are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of the large shareholder's ownership. The empirical evidence from government ownership is consistent with the theoretical analysis. Research limitations/implications: The empirical evidence is based on a small hand-collected data set on government ownership. Further studies could extend the analysis to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits. In general, it contributes significantly to the understanding of the effects of large shareholders and corporate governance.

  4. THE "MAN INCULTS" AND PACIFICATION DURING BRAZILIAN EMPIRE: A MODEL OF HISTORICAL INTERPRETATION BUILT FROM THE APPROACH TO HUMAN RIGHTS

    Directory of Open Access Journals (Sweden)

    José Ernesto Pimentel Filho

    2011-06-01

Full Text Available The construction of peace in the Empire of Brazil was one of the forms by which the dominant sectors of imperial society monopolized public space. On the one hand, the Empire built an urban sociability based on patriarchal relations; on the other hand, as in a diptych image, it struggled against all forms of disorder and social deviance. The centre of that peace was the capitals of the provinces. We discuss here how to construct a model for approaching the mentality of combating crime in rural areas according to the patriarchal mindset of nineteenth-century Brazil. For this, the case of Ceara has been chosen. A historical hermeneutic is applied to understand the role of poor white men in the social life of the Empire of Brazil. We observe that education, when associated with morals, was seen as able to modify violent behaviour and to shape individual attitudes towards justice and punishment policy. Discrimination and stereotypes are part of our interpretation, as a contribution to the debate on Human Rights in the history of Brazil.

  5. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS, following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin, a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  6. Empirically Grounded Agent-Based Models of Innovation Diffusion: A Critical Review

    OpenAIRE

    Zhang, Haifeng; Vorobeychik, Yevgeniy

    2016-01-01

    Innovation diffusion has been studied extensively in a variety of disciplines, including sociology, economics, marketing, ecology, and computer science. Traditional literature on innovation diffusion has been dominated by models of aggregate behavior and trends. However, the agent-based modeling (ABM) paradigm is gaining popularity as it captures agent heterogeneity and enables fine-grained modeling of interactions mediated by social and geographic networks. While most ABM work on innovation ...

  7. Semi-empirical model for fluorescence lines evaluation in diagnostic x-ray beams.

    Science.gov (United States)

    Bontempi, Marco; Andreani, Lucia; Labanti, Claudio; Costa, Paulo Roberto; Rossi, Pier Luca; Baldazzi, Giuseppe

    2016-01-01

    Diagnostic x-ray beams are composed of bremsstrahlung and discrete fluorescence lines. The aim of this study is the development of an efficient model for the evaluation of the fluorescence lines. The most important electron ionization models are analyzed and implemented. The model results were compared with experimental data and with other independent spectra presented in the literature. The implemented peak models allow the discrimination between direct and indirect radiation emitted from tungsten anodes. The comparison with the independent literature spectra indicated a good agreement. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Quantitative prediction of intestinal metabolism in humans from a simplified intestinal availability model and empirical scaling factor.

    Science.gov (United States)

    Kadono, Keitaro; Akabane, Takafumi; Tabata, Kenji; Gato, Katsuhiko; Terashita, Shigeyuki; Teramura, Toshio

    2010-07-01

    This study aimed to establish a practical and convenient method of predicting intestinal availability (F(g)) in humans for highly permeable compounds at the drug discovery stage, with a focus on CYP3A4-mediated metabolism. We constructed a "simplified F(g) model," described using only metabolic parameters, assuming that passive diffusion is dominant when permeability is high and that the effect of transporters in epithelial cells is negligible. Five substrates for CYP3A4 (alprazolam, amlodipine, clonazepam, midazolam, and nifedipine) and four for both CYP3A4 and P-glycoprotein (P-gp) (nicardipine, quinidine, tacrolimus, and verapamil) were used as model compounds. Observed fraction of drug absorbed (F(a)F(g)) values for these compounds were calculated from in vivo pharmacokinetic (PK) parameters, whereas in vitro intestinal intrinsic clearance (CL(int,intestine)) was determined using human intestinal microsomes. The CL(int,intestine) for the model compounds corrected with that of midazolam was defined as CL(m,index) and incorporated into a simplified F(g) model with empirical scaling factor. Regardless of whether the compound was a P-gp substrate, the F(a)F(g) could be reasonably fitted by the simplified F(g) model, and the value of the empirical scaling factor was well estimated. These results suggest that the effects of P-gp on F(a) and F(g) are substantially minor, at least in the case of highly permeable compounds. Furthermore, liver intrinsic clearance (CL(int,liver)) can be used as a surrogate index of intestinal metabolism based on the relationship between CL(int,liver) and CL(m,index). F(g) can be easily predicted using a simplified F(g) model with the empirical scaling factor, enabling more confident selection of drug candidates with desirable PK profiles in humans.
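The abstract does not reproduce the exact equation of the "simplified F(g) model," but one plausible reading is a saturable dependence on the midazolam-normalized intrinsic clearance (CL(m,index)) with the empirical scaling factor. The hyperbolic form and the parameter values below are assumptions for illustration only:

```python
def simplified_fg(cl_int_compound, cl_int_midazolam, scaling_factor=1.0):
    """Sketch of a simplified intestinal-availability (Fg) prediction.

    CL(m,index) is the compound's microsomal intrinsic clearance
    normalized to that of midazolam; the hyperbolic form
    Fg = 1 / (1 + SF * CLm,index) is an illustrative assumption,
    as the abstract does not state the exact equation.
    """
    cl_m_index = cl_int_compound / cl_int_midazolam
    return 1.0 / (1.0 + scaling_factor * cl_m_index)
```

Under this form, a compound with no intestinal metabolism gets Fg = 1, and Fg decreases monotonically as the normalized clearance grows, which is the qualitative behaviour the study exploits.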

  9. Radiosensitivity of grapevines. Empirical modelling of the radiosensitivity of some clones to x-ray irradiation. Pt. 1

    International Nuclear Information System (INIS)

    Koeroesi, F.; Jezierska-Szabo, E.

    1999-01-01

Empirical and formal (Poisson) models were applied to experimental growth data to characterize the radiosensitivity of six grapevine clones to X-ray irradiation. According to the radiosensitivity constants (k), target numbers (n) and volumes, GR37 doses and energy deposition, the following radiosensitivity order was found for the vine varieties studied: Chardonnay clone type < Harslevelue K. 9 < Koevidinka K. 8 < Muscat Ottonel clone type < Irsai Oliver K. 11 < Cabernet Sauvignon E. 153. The model can be extended to describe the radiosensitivity of other plant species and varieties, as well as the efficiency of various radioprotective agents and conditions. (author)
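The Poisson (multi-target single-hit) component of such target-theory models has a standard closed form, sketched below; the parameter values in the usage are illustrative, not the fitted grapevine constants:

```python
import math

def surviving_fraction(dose, k, n):
    """Multi-target single-hit survival, S(D) = 1 - (1 - exp(-k*D))**n,
    with radiosensitivity constant k (per dose unit) and target number n.
    For n = 1 this reduces to simple exponential survival, and the dose
    giving 37% survival (cf. the GR37 dose) is approximately 1/k.
    """
    return 1.0 - (1.0 - math.exp(-k * dose)) ** n
```

Larger target numbers n produce the characteristic "shoulder": at the same dose and k, survival is higher than for the single-target case.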

  10. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

LMFBR fuel elements consist of wire-wrapped rod bundles in a triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed in order to obtain the average bundle friction factor, as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the range P/D = 1.063-1.417, H/D = 4-50, and are considered satisfactory. (Author) [pt

  11. Testing of empirical grounds for theoretical models of real exchange rate: research of real exchange rate between RSD and Euro

    Directory of Open Access Journals (Sweden)

    Predrag Petrović

    2013-04-01

Full Text Available The focus of this research is on the most important determinants of the real exchange rate covered by various theoretical models. The empirical testing was carried out on the real exchange rate between RSD and Euro for the period from January 2007 to December 2010, a span largely dictated by the availability of consistent time series. The research covers five basic model specifications and is based on testing the cointegration of time series by applying the Johansen and Engle-Granger tests. The obtained results show that the observed models are not supported by the empirical data: the time series figuring in the models are not cointegrated and, in addition, the estimated cointegration coefficients have signs opposite to the expected ones in a large number of cases. In our opinion, the reasons for such findings lie in the fact that the time series used are quite short, i.e. they cover a period of only four years, and that the prices of some significant services are still under administrative control. Despite these limitations, we think our findings can be accepted as preliminary evidence on the ability of the observed models to explain the dynamics of the real exchange rate between RSD and Euro.

  12. A semi-empirical model for magnetic braking of solar-type stars

    Science.gov (United States)

    Sadeghi Ardestani, Leila; Guillot, Tristan; Morel, Pierre

    2017-12-01

We develop new angular momentum evolution models for stars with masses of 0.5 to 1.6 M⊙, from the pre-main-sequence (PMS) through the end of their main-sequence (MS) lifetime. The parametric models include magnetic braking based on numerical simulations of magnetized stellar winds, a mass-loss-rate prescription, core-envelope decoupling, and disc locking phenomena. We have also accounted for recent developments in modelling the dramatically weakened magnetic braking in stars more evolved than the Sun. We fit the free parameters in our model by comparing model predictions to the rotational distributions of a number of stellar clusters as well as individual field stars. Our model successfully reproduces the rotational behaviour of stars: the spin-up during the PMS phase towards the zero-age main sequence (ZAMS), the sudden spin-down after the ZAMS, and the subsequent convergence of the rotation rates. We find that including core-envelope decoupling improves our models, especially for low-mass stars at younger ages. In addition, by accounting for the almost complete suppression of magnetic braking at slow spin periods, we provide better fits to observations of stellar rotation than previous models.

  13. The Psychosocial Adjustment of the Southeast Asian Refugee: An Overview of Empirical Findings and Theoretical Models.

    Science.gov (United States)

    Nicassio, Perry M.

    1985-01-01

    Summarizes clinical and research literature on Southeast Asian refugees' adjustment in the United States and proposes the adoption of theoretical models that may help explain individual differences. Reports that acculturation, learned helplessness, and stress management models appear to aid the conceptualizing of refugee problems and provide a…

  14. The social networking application success model : An empirical study of Facebook and Twitter

    NARCIS (Netherlands)

    Ou, Carol; Davison, R.M.; Huang, Q.

    2016-01-01

    Social networking applications (SNAs) are among the fastest growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded on DeLone and McLean’s updated information systems (IS) success model. In addition to their original three dimensions

  15. Asset Pricing Model and the Liquidity Effect: Empirical Evidence in the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    Otávio Ribeiro de Medeiros

    2011-09-01

Full Text Available This paper aims to analyze whether a liquidity premium exists in the Brazilian stock market. As a second goal, we include liquidity as an extra risk factor in asset pricing models and test whether this factor is priced and whether stock returns are explained not only by systematic risk, as proposed by the CAPM, by Fama and French's (1993) three-factor model, and by Carhart's (1997) momentum-factor model, but also by liquidity, as suggested by Amihud and Mendelson (1986). To achieve this, we used stock portfolios and five measures of liquidity. Among the asset pricing models tested, the CAPM was the least capable of explaining returns. We found that the inclusion of size and book-to-market factors in the CAPM, a momentum factor in the three-factor model, and a liquidity factor in the four-factor model improves their explanatory power for portfolio returns. In addition, we found that the five-factor model is marginally superior to the other asset pricing models tested.
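The time-series regressions behind such factor-model tests share one shape regardless of the factor set: portfolio excess returns regressed on the factors, with the intercept (alpha) measuring unexplained return. A minimal sketch on synthetic data (the factor names and numbers are hypothetical, not the paper's Brazilian data):

```python
import numpy as np

def factor_loadings(excess_returns, factors):
    """OLS time-series regression r_t = alpha + beta' f_t + e_t,
    where the columns of `factors` play the role of e.g. MKT, SMB,
    HML, a momentum factor and a liquidity factor.
    Returns (alpha, beta vector)."""
    t = factors.shape[0]
    design = np.column_stack([np.ones(t), factors])  # prepend intercept
    coef, *_ = np.linalg.lstsq(design, excess_returns, rcond=None)
    return coef[0], coef[1:]

# Synthetic check: returns built from known loadings are recovered.
rng = np.random.default_rng(0)
f = rng.normal(size=(500, 5))                       # five factor series
true_beta = np.array([1.0, 0.4, -0.2, 0.1, 0.3])
r = 0.01 + f @ true_beta + rng.normal(scale=0.01, size=500)
alpha, beta = factor_loadings(r, f)
```

A "priced" liquidity factor would show up as a significant loading on the liquidity column together with a shrinking alpha relative to the model without it.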

  16. Toward an Integrative Model of Creativity and Personality: Theoretical Suggestions and Preliminary Empirical Testing

    Science.gov (United States)

Fürst, Guillaume; Ghisletta, Paolo; Lubart, Todd

    2016-01-01

    The present work proposes an integrative model of creativity that includes personality traits and cognitive processes. This model hypothesizes that three high-order personality factors predict two main process factors, which in turn predict intensity and achievement of creative activities. The personality factors are: "Plasticity" (high…

  17. Is there a global model of learning organizations? An empirical, cross-nation study

    NARCIS (Netherlands)

    Shipton, H.; Zhou, Q.; Mooi, E.A.

    2013-01-01

    This paper develops and tests a learning organization model derived from HRM and dynamic capability literatures in order to ascertain the model's applicability across divergent global contexts. We define a learning organization as one capable of achieving on-going strategic renewal, arguing based on

  18. Job Search Models, the Duration of Unemployment, and the Asking Wage: Some Empirical Evidence

    Science.gov (United States)

    Barnes, William F.

    1975-01-01

    Three recent untested theoretical models of the wage setting behavior of the unemployed jobseeker by Gronau, Mortensen, and McCall are compared. The investigation supports McCall's model which indicates downward flexibility in the minimum asking wage resulting from learning during search and unemployment. (Author/MW)

  19. An empirical validation of a unified model of electronic government adoption (UMEGA)

    NARCIS (Netherlands)

    Dwivedi, Yogesh K.; Rana, Nripendra P.; Janssen, M.F.W.H.A.; Lal, Banita; Williams, Michael D.; Clement, Marc

    In electronic government (hereafter e-government), a large variety of technology adoption models are employed, which make researchers and policymakers puzzled about which one to use. In this research, nine well-known theoretical models of information technology adoption are evaluated and 29

  20. A New Empirical Sewer Water Quality Model for the Prediction of WWTP Influent Quality

    NARCIS (Netherlands)

    Langeveld, J.G.; Schilperoort, R.P.S.; Rombouts, P.M.M.; Benedetti, L.; Amerlinck, Y.; de Jonge, J.; Flameling, T.; Nopens, I.; Weijers, S.

    2014-01-01

    Modelling of the integrated urban water system is a powerful tool to optimise wastewater system performance or to find cost-effective solutions for receiving water problems. One of the challenges of integrated modelling is the prediction of water quality at the inlet of a WWTP. Recent applications

  1. A semi-empirical model to assess uncertainty of spatial patterns of erosion

    NARCIS (Netherlands)

    Sterk, G.; Vigiak, O.; Romanowicz, R.J.; Beven, K.J.

    2006-01-01

    Distributed erosion models are potentially good tools for locating soil sediment sources and guiding efficient Soil and Water Conservation (SWC) planning, but the uncertainty of model predictions may be high. In this study, the distribution of erosion within a catchment was predicted with a

  2. Safewards: the empirical basis of the model and a critical appraisal

    NARCIS (Netherlands)

    Bowers, L.; Alexander, J.; Bilgin, H. dr.; Botha, M.; Dack, C.; James, K.; Jarrett, M.; Jeffery, D.; Nijman, H.L.I.; Owiti, J.A.; Papadopoulos, C.; Ross, J.; Wright, S.; Stewart, D.

    2014-01-01

    In a previous paper, we described a proposed model explaining differences in rates of conflict (aggression, absconding, self-harm, etc.) and containment (seclusion, special observation, manual restraint, etc.). The Safewards Model identified six originating domains as sources of conflict and

  3. On the Adequacy of Current Empirical Evaluations of Formal Models of Categorization

    Science.gov (United States)

    Wills, Andy J.; Pothos, Emmanuel M.

    2012-01-01

    Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts.…

  4. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Oil production responses to price changes. An empirical application of the competitive model to OPEC and non-OPEC countries

    International Nuclear Information System (INIS)

    Ramcharran, Harri

    2002-01-01

    Falling oil prices over the last decade, accompanied by over-production by some OPEC members and the growth of non-OPEC supply, warrant further empirical investigation of the competitive model to ascertain production behavior. A supply function, based on a modification of Griffin's model, is estimated using data from 1973-1997. The sample period, unlike Griffin's, however, includes phases of price increase (1970s) and price decrease (1980s-1990s), thus providing a better framework for examining production behavior using the competitive model. The OPEC results do not support the competitive hypothesis; instead, a negative and significant price elasticity of supply is obtained. This result offers partial support for the target revenue theory. For most of the non-OPEC members, the estimates support the competitive model. OPEC's loss of market share and the drop in the share of oil-based energy should signal adjustments in price and quantity based on a competitive world market for crude oil

  6. A control-oriented real-time semi-empirical model for the prediction of NOx emissions in diesel engines

    International Nuclear Information System (INIS)

    D’Ambrosio, Stefano; Finesso, Roberto; Fu, Lezhong; Mittica, Antonio; Spessa, Ezio

    2014-01-01

    Highlights: • New semi-empirical correlation to predict NOx emissions in diesel engines. • Based on a real-time three-zone diagnostic combustion model. • The model is of fast application, and is therefore suitable for control-oriented applications. - Abstract: The present work describes the development of a fast control-oriented semi-empirical model that is capable of predicting NOx emissions in diesel engines under steady state and transient conditions. The model takes into account the maximum in-cylinder burned gas temperature of the main injection, the ambient gas-to-fuel ratio, the mass of injected fuel, the engine speed and the injection pressure. The evaluation of the temperature of the burned gas is based on a three-zone real-time diagnostic thermodynamic model that has recently been developed by the authors. Two correlations have also been developed in the present study, in order to evaluate the maximum burned gas temperature during the main combustion phase (derived from the three-zone diagnostic model) on the basis of significant engine parameters. The model has been tuned and applied to two diesel engines that feature different injection systems of the indirect acting piezoelectric, direct acting piezoelectric and solenoid type, respectively, over a wide range of steady-state operating conditions. The model has also been validated in transient operation conditions, over the urban and extra-urban phases of an NEDC. It has been shown that the proposed approach is capable of improving the predictive capability of NOx emissions, compared to previous approaches, and is characterized by a very low computational effort, as it is based on a single-equation correlation. It is therefore suitable for real-time applications, and could also be integrated in the engine control unit for closed-loop or feed-forward control tasks
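The single-equation character of such a correlation can be illustrated with a generic Arrhenius-times-power-law form in the stated inputs (peak burned-gas temperature, relative air-fuel ratio, injected fuel mass, engine speed, injection pressure). The structure and every coefficient below are assumptions for illustration, not the authors' fitted correlation:

```python
import math

def nox_index(t_burn_max, lam, m_fuel, speed, p_rail,
              c0=1.0, t_act=3000.0, a=1.0, b=-1.5, c=0.5, d=0.2):
    """Illustrative single-equation NOx correlation: exponential
    sensitivity to the maximum burned-gas temperature of the main
    injection (K), times power-law terms in the relative air-fuel
    ratio `lam`, injected fuel mass, engine speed and rail pressure.
    All exponents and coefficients are placeholders.
    """
    return (c0 * math.exp(-t_act / t_burn_max)
            * (m_fuel ** a) * (lam ** b) * (speed ** c) * (p_rail ** d))
```

Because it is a single closed-form expression, evaluating it per engine cycle is essentially free, which is what makes this class of model attractive for real-time, control-oriented use.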

  7. Semi-empirical approach to modeling of soil flushing: Model development, application to soil polluted by zinc and copper

    Czech Academy of Sciences Publication Activity Database

    Šváb, M.; Žilka, M.; Müllerová, M.; Kočí, V.; Müller, Vladimír

    2008-01-01

    Roč. 392, 2-3 (2008), s. 187-197 ISSN 0048-9697 Institutional research plan: CEZ:AV0Z10190503 Keywords : copper * flushing * modeling * remediation * soil * zinc Subject RIV: EH - Ecology, Behaviour Impact factor: 2.579, year: 2008

  8. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Science.gov (United States)

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
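Mean calibration and a binned view of moderate calibration can be computed directly from predicted risks and outcomes; a minimal sketch on simulated predictions (the binning choice is an illustrative convention, not the paper's method):

```python
import numpy as np

def mean_calibration(y, p):
    """Calibration-in-the-large: observed event rate minus the average
    predicted risk; zero for a mean-calibrated model."""
    return float(np.mean(y) - np.mean(p))

def reliability_table(y, p, n_bins=10):
    """Binned look at moderate calibration: for each predicted-risk bin,
    compare the mean predicted risk with the observed event rate."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((float(p[mask].mean()),   # mean predicted risk
                         float(y[mask].mean()),   # observed event rate
                         int(mask.sum())))        # patients in bin
    return rows
```

For a moderately calibrated model the two columns of the reliability table track each other within sampling noise; a systematic offset indicates miscalibration in the large, and a systematic tilt indicates miscalibrated prediction effects.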

  9. PREDICTING THE EFFECTIVENESS OF WEB INFORMATION SYSTEMS USING NEURAL NETWORKS MODELING: FRAMEWORK & EMPIRICAL TESTING

    Directory of Open Access Journals (Sweden)

    Dr. Kamal Mohammed Alhendawi

    2018-02-01

Full Text Available Information systems (IS) assessment studies have typically relied on traditional tools such as questionnaires to evaluate dependent variables, in particular system effectiveness. Artificial neural networks (ANNs) have recently been accepted as an effective alternative for modeling complicated systems and are widely used for forecasting. Very little is known, however, about the use of ANNs to predict IS effectiveness. This study is therefore one of the few to investigate the efficiency and capability of ANNs for forecasting user perceptions of IS effectiveness, with MATLAB used to build and train the neural network model. A dataset of 175 subjects collected from an international organization was used for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output). 75% of the subjects were used in the training phase. The results provide evidence that ANN models can achieve reasonable accuracy in forecasting IS effectiveness. For prediction, ANNs with PURELIN (ANNP) and TANSIG (ANNTS) transfer functions were used. Both models gave reasonable predictions; however, the accuracy of the ANNTS model was better than that of the ANNP model (88.6% and 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerable cost that might otherwise be spent on sample data collection in quantitative studies in science, management, education, the arts and other fields.
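The study's setup (five quality-factor inputs, one Boolean effectiveness output, a tanh hidden layer corresponding to MATLAB's TANSIG) can be sketched outside MATLAB with a small NumPy network. The data below are a synthetic stand-in with an arbitrary labeling rule, not the study's 175-subject dataset, and the architecture details are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 175 subjects, 5 quality-factor inputs in [0, 1],
# one Boolean output from an arbitrary illustrative rule.
X = rng.uniform(size=(175, 5))
y = (X.sum(axis=1) > 2.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden tanh layer (the TANSIG choice), sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(5, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()       # predicted probability
    g = ((p - y) / len(y))[:, None]        # d(cross-entropy)/d(logit)
    gH = (g @ W2.T) * (1.0 - H ** 2)       # backprop through tanh
    W2 -= lr * (H.T @ g)
    b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gH)
    b1 -= lr * gH.sum(axis=0)

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = float((p.round() == y).mean())
```

Swapping the hidden tanh for an identity activation gives the linear (PURELIN-like) variant, which is the comparison the study draws between its ANNTS and ANNP models.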

  10. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    Science.gov (United States)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. The new approach suggested for evaluating model performance against reality may be used by model users while planning their investigations, as well as by model developers and those interested in quantitatively

  11. Development of nonlinear empirical models to forecast daily PM2.5 and ozone levels in three large Chinese cities

    Science.gov (United States)

    Lv, Baolei; Cobourn, W. Geoffrey; Bai, Yuqi

    2016-12-01

    Empirical regression models for next-day forecasting of PM2.5 and O3 air pollution concentrations have been developed and evaluated for three large Chinese cities: Beijing, Nanjing and Guangzhou. The forecast models are empirical nonlinear regression models designed for use in an automated data retrieval and forecasting platform. The PM2.5 model includes an upwind air quality variable, PM24, to account for regional transport of PM2.5, and a persistence variable (previous-day PM2.5 concentration). The models were evaluated in the hindcast mode with a two-year air quality and meteorological data set using a leave-one-month-out cross-validation method, and in the forecast mode with a one-year air quality and forecasted weather dataset that included forecasted air trajectories. The PM2.5 models performed well in the hindcast mode, with coefficient of determination (R2) values of 0.54, 0.65 and 0.64, and normalized mean error (NME) values of 0.40, 0.26 and 0.23, respectively, for the three cities. The O3 models also performed well in the hindcast mode, with R2 values of 0.75, 0.55 and 0.73, and NME values of 0.29, 0.26 and 0.24 in the three cities. The O3 models performed better in summertime than in winter in Beijing and Guangzhou, and captured the O3 variations well all year round in Nanjing. The overall forecast performance of the PM2.5 and O3 models during the test year varied from fair to good, depending on location. The forecasts were somewhat degraded compared with hindcasts from the same year, depending on the accuracy of the forecasted meteorological input data. For the O3 models, forecast accuracy was strongly dependent on the maximum temperature forecasts. For the critical forecasts, involving air quality standard exceedances, the PM2.5 model forecasts were fair to good, and the O3 model forecasts were poor to fair.
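The leave-one-month-out cross-validation used for the hindcast evaluation can be sketched as follows; the record layout and names here are illustrative, not taken from the paper:

```python
from collections import defaultdict

def leave_one_month_out_splits(records):
    """Yield (month, train_idx, test_idx) splits, holding out one month
    at a time. `records` is a list of (month, features, target) tuples;
    all rows sharing a month form the test fold together."""
    by_month = defaultdict(list)
    for i, (month, _x, _y) in enumerate(records):
        by_month[month].append(i)
    for month, test_idx in sorted(by_month.items()):
        held_out = set(test_idx)
        train_idx = [i for i in range(len(records)) if i not in held_out]
        yield month, train_idx, test_idx
```

Grouping by month rather than splitting rows at random keeps each held-out fold temporally coherent, which better mimics forecasting an unseen month.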

  12. Extracting Knowledge From Time Series An Introduction to Nonlinear Empirical Modeling

    CERN Document Server

    Bezruchko, Boris P

    2010-01-01

    This book addresses the fundamental question of how to construct mathematical models for the evolution of dynamical systems from experimentally-obtained time series. It places emphasis on chaotic signals and nonlinear modeling and discusses different approaches to the forecast of future system evolution. In particular, it teaches readers how to construct difference and differential model equations depending on the amount of a priori information that is available on the system in addition to the experimental data sets. This book will benefit graduate students and researchers from all natural sciences who seek a self-contained and thorough introduction to this subject.

  13. An Empirical Investigation of the Black-Scholes Model: Evidence from the Australian Stock Exchange

    Directory of Open Access Journals (Sweden)

    Zaffar Subedar

    2007-12-01

    Full Text Available This paper evaluates the probability of an exchange-traded European call option being exercised on the ASX200 Options Index. Using single-parameter estimates of factors within the Black-Scholes model, this paper utilises qualitative regression and a maximum likelihood approach. Results indicate that the Black-Scholes model is statistically significant at the 1% level. The results also provide evidence that the use of implied volatility and a jump-diffusion approach, which increases the tail properties of the underlying lognormal distribution, improves the statistical significance of the Black-Scholes model.
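The quantity under evaluation, the probability that a European call finishes in the money, is N(d2) under the Black-Scholes model. A minimal sketch (the inputs below are illustrative, not ASX200 data):

```python
import math

def prob_exercise(S, K, r, sigma, T):
    """Risk-neutral probability N(d2) that a European call with spot S,
    strike K, rate r, volatility sigma and maturity T (years) finishes
    in the money under Black-Scholes."""
    d2 = (math.log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return 0.5 * (1.0 + math.erf(d2 / math.sqrt(2.0)))
```

For an at-the-money call (S = K = 100, r = 5%, sigma = 20%, T = 1 year), N(d2) comes out near 0.56.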

  14. An Empirical Rate Constant Based Model to Study Capacity Fading in Lithium Ion Batteries

    Directory of Open Access Journals (Sweden)

    Srivatsan Ramesh

    2015-01-01

    Full Text Available A one-dimensional model based on solvent diffusion and kinetics to study the formation of the SEI (solid electrolyte interphase) layer and its impact on the capacity of a lithium ion battery is developed. The model uses the earlier work on silicon oxidation but studies the kinetic limitations of the SEI growth process. The rate constant of the SEI formation reaction at the anode is seen to play a major role in film formation. The kinetics of the reactions for capacity fading for various battery systems are studied and the rate constants are evaluated. The model is used to fit the capacity fade in different battery systems.
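The coupling of interfacial kinetics and solvent diffusion described above is commonly written as a mixed-control growth law, dL/dt = 1/(1/k + L/D): reaction-limited while the film is thin, diffusion-limited once it thickens. The sketch below integrates this generic form numerically; it illustrates the technique, not the paper's exact equations or parameter values:

```python
def sei_thickness(t_end, k, D, n_steps=10000):
    """Forward-Euler integration of dL/dt = 1/(1/k + L/D): growth is
    reaction-limited (rate k) while the film is thin, and tends to the
    diffusion-limited ~sqrt(2*D*t) regime as the film thickens."""
    dt = t_end / n_steps
    L = 0.0
    for _ in range(n_steps):
        L += dt / (1.0 / k + L / D)
    return L
```

Since capacity fade scales with the lithium consumed in the film, a diffusion-limited film reproduces the familiar square-root-of-time fade curve.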

  15. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    ...... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome...... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local...... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While......

  16. Ploidy frequencies in plants with ploidy heterogeneity: fitting a general gametic model to empirical population data

    Czech Academy of Sciences Publication Activity Database

    Suda, Jan; Herben, Tomáš

    2013-01-01

    Vol. 280, No. 1751 (2013), article no. 20122387. ISSN 0962-8452. Institutional support: RVO:67985939. Keywords: cytometry * statistical modelling * polyploidy. Subject RIV: EF - Botanics. Impact factor: 5.292, year: 2013

  17. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office location, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.

  18. An empirical assessment of exposure measurement errors and effect attenuation in bi-pollutant epidemiologic models

    Science.gov (United States)

    Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...

  19. An empirical assessment of exposure measurement error and effect attenuation in bi-pollutant epidemiologic models

    Science.gov (United States)

    Background: Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates f...

  20. Empirical potential and elasticity theory modelling of interstitial dislocation loops in UO2 for cluster dynamics application

    International Nuclear Information System (INIS)

    Le-Prioux, Arno

    2017-01-01

    During irradiation in reactor, the microstructure of UO2 changes and deteriorates, causing modifications of its physical and mechanical properties. The kinetic models used to describe these changes, such as cluster dynamics (CRESCENDO calculation code), consider the main microstructural elements, namely cavities and interstitial dislocation loops, and provide a rather rough description of the loop thermodynamics. In order to tackle this issue, this work has led to the development of a thermodynamic model of interstitial dislocation loops based on empirical potential calculations. The model considers two types of interstitial dislocation loops on two different size domains: Type 1: dislocation loops similar to Frank partials in F.C.C. materials, which are stable in the smaller size domain; Type 2: perfect dislocation loops of Burgers vector a/2⟨110⟩, stable in the larger size domain. The analytical formula used to compute the interstitial dislocation loop formation energies is the one for circular loops, modified in order to take into account the effects of the dislocation core, which are significant at smaller sizes. The parameters have been determined by empirical potential calculations of the formation energies of prismatic pure edge dislocation loops. The effect of the habit plane reorientation on the formation energies of perfect dislocation loops has been taken into account by a simple interpolation method. All the different types of loops seen during TEM observations are thus accounted for by the model. (author) [fr]

  1. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing’s flow-weighted transit route network (TRN); we found that Beijing’s TRNs in both 2011 and 2015 exhibit scale-free properties. As such, we propose an evolution model driven by flow to simulate the development of TRNs with consideration of the passengers’ dynamical behaviors triggered by topological change. The model treats the evolution of the TRN as an iterative process. At each time step, a certain number of new routes are generated, driven by travel demands, which leads to dynamical evolution of the new routes’ flow and triggers perturbation in nearby routes that will further impact the next round of opening new routes. We present a theoretical analysis based on mean-field theory, as well as a numerical simulation of this model. The results obtained agree well with our empirical analysis, which indicates that our model can simulate TRN evolution with scale-free properties for the distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.

  2. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and associated sea level rise are believed to be strongly linked to anthropogenic causes. Statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on the empirical dynamic control system by taking into account the climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historic data from 1880 to 2001, we yielded higher correlation results compared to those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust as it notably diminished the unstable problem associated with varying initial values. Such results suggest that the model not only enhances significantly the global mean reconstructions of temperature and sea level but also may have a potential to improve future projections.
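The Monte Carlo cross-validation used above to derive model parameters draws independent random train/test splits rather than fixed folds. A minimal sketch (names and the test fraction are illustrative):

```python
import random

def monte_carlo_cv(n, test_fraction=0.3, n_trials=100, seed=0):
    """Yield (train_idx, test_idx) pairs from independent random splits.
    Unlike k-fold CV, the same sample may be held out in many trials."""
    rng = random.Random(seed)
    idx = list(range(n))
    n_test = max(1, round(n * test_fraction))
    for _ in range(n_trials):
        rng.shuffle(idx)
        yield idx[n_test:], idx[:n_test]  # slices are fresh lists
```

Averaging the fitted parameters or errors over many such random splits is what stabilizes the reconstruction against the choice of initial values.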

  3. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2017-03-01

    Having great impacts on human lives, global warming and associated sea level rise are believed to be strongly linked to anthropogenic causes. Statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on the empirical dynamic control system by taking into account the climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historic data from 1880 to 2001, we yielded higher correlation results compared to those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust as it notably diminished the unstable problem associated with varying initial values. Such results suggest that the model not only enhances significantly the global mean reconstructions of temperature and sea level but also may have a potential to improve future projections.

  4. Empirical Modelation of Runoff in Small Watersheds Using LIDAR Data

    Science.gov (United States)

    Lopatin, J.; Hernández, J.; Galleguillos, M.; Mancilla, G.

    2013-12-01

    Hydrological models allow the simulation of natural water processes and also the quantification and prediction of the effects of human impacts on runoff behavior. However, obtaining the information needed to apply these models can be costly in both time and resources, especially in large and difficult-to-access areas. The objective of this research was to integrate LiDAR data into the hydrological modeling of runoff in small watersheds, using derived hydrologic, vegetation and topography variables. The study area includes ten small headwater catchments covered by forest, between 2 and 16 ha, located in the south-central coastal range of Chile. In each of these, instantaneous rainfall and runoff flow were measured for a total of 15 rainfall events between August 2012 and July 2013, yielding 79 observations. In March 2011 a Harrier 54/G4 Dual System was used to obtain a discrete-pulse LiDAR point cloud with an average of 4.64 points per square meter. A Digital Terrain Model (DTM) of 1 meter resolution was obtained from the point cloud, and subsequently 55 topographic variables were derived, such as physical watershed parameters and morphometric features. At the same time, 30 vegetation descriptive variables were obtained directly from the point cloud and from a Digital Canopy Model (DCM). The "Random Forest" (RF) classification and regression algorithm was used to select the most important variables for predicting water height (liters), and the "Partial Least Squares Path Modeling" (PLS-PM) algorithm was used to fit a model using the selected set of variables. Four latent variables were selected (outer model), related to climate, topography, vegetation and runoff, and to each one a group of the predictor variables selected by RF was assigned (inner model). The coefficient of determination (R2) and Goodness-of-Fit (GoF) of the final model were obtained. The best results were found when modeling using only the upper 50th percentile of

  5. Safewards: the empirical basis of the model and a critical appraisal.

    Science.gov (United States)

    Bowers, L; Alexander, J; Bilgin, H; Botha, M; Dack, C; James, K; Jarrett, M; Jeffery, D; Nijman, H; Owiti, J A; Papadopoulos, C; Ross, J; Wright, S; Stewart, D

    2014-05-01

    In the previous paper we described a model explaining differences in rates of conflict and containment between wards, grouping causal factors into six domains: the staff team, the physical environment, outside hospital, the patient community, patient characteristics and the regulatory framework. This paper reviews and evaluates the evidence for the model from previously published research. The model is supported, but the evidence is not very strong. More research using more rigorous methods is required in order to confirm or improve this model. In a previous paper, we described a proposed model explaining differences in rates of conflict (aggression, absconding, self-harm, etc.) and containment (seclusion, special observation, manual restraint, etc.). The Safewards Model identified six originating domains as sources of conflict and containment: the patient community, patient characteristics, the regulatory framework, the staff team, the physical environment, and outside hospital. In this paper, we assemble the evidence underpinning the inclusion of these six domains, drawing upon a wide ranging review of the literature across all conflict and containment items; our own programme of research; and reasoned thinking. There is good evidence that the six domains are important in conflict and containment generation. Specific claims about single items within those domains are more difficult to support with convincing evidence, although the weight of evidence does vary between items and between different types of conflict behaviour or containment method. The Safewards Model is supported by the evidence, but that evidence is not particularly strong. There is a dearth of rigorous outcome studies and trials in this area, and an excess of descriptive studies. The model allows the generation of a number of different interventions in order to reduce rates of conflict and containment, and properly conducted trials are now needed to test its validity. 
© 2014 John Wiley & Sons Ltd.

  6. Customer orientation on online newspaper business models with paid content strategies: An empirical study

    OpenAIRE

    Goyanes, Manuel; Sylvie, George

    2014-01-01

    This study examines the transformations that trigger business models with paid content strategies on news organizations under the theoretical framework of market orientation. The results show three main factors: those related to competence, to the organization culture and to understanding of needs and wants of the audience. The findings also suggest that online newspapers business models with paid content strategies are more like experiments or forays rather than definitive methods that monet...

  7. An Empirical Model for Probability of Packet Reception in Vehicular Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Today's advanced simulators facilitate thorough studies on VANETs but are hampered by the computational effort required to consider all of the important influencing factors. In particular, large-scale simulations involving thousands of communicating vehicles cannot be served in reasonable simulation times with typical network simulation frameworks. A solution to this challenge might be found in hybrid simulations that encapsulate parts of a discrete-event simulation in an analytical model while maintaining the simulation's credibility. In this paper, we introduce a hybrid simulation model that analytically represents the probability of packet reception in an IEEE 802.11p network based on four inputs: the distance between sender and receiver, transmission power, transmission rate, and vehicular traffic density. We also describe the process of building our model which utilizes a large set of simulation traces and is based on general linear least squares approximation techniques. The model is then validated via the comparison of simulation results with the model output. In addition, we present a transmission power control problem in order to show the model's suitability for solving parameter optimization problems, which are of fundamental importance to VANETs.
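The hybrid model above is fit to simulation traces by general linear least squares. A one-predictor closed-form version (reception probability vs. distance, with made-up data) illustrates the idea, though the paper's model uses four inputs:

```python
def fit_linear(xs, ys):
    """Closed-form least-squares fit y ~ a + b*x, a one-predictor
    stand-in for the paper's multi-input general linear fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx  # slope
    return my - b * mx, b  # intercept, slope
```

With more predictors (transmission power, rate, traffic density) the same normal-equations machinery applies; a library least-squares solver would typically be used instead of the closed form.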

  8. A new empirical model to estimate hourly diffuse photosynthetic photon flux density

    Science.gov (United States)

    Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.

    2018-05-01

    Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true regarding its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Therefore, diffuse photosynthetic photon flux density is a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and then dividing it into its components. We have used measurements of global solar radiation (Rs) in urban Granada (southern Spain) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption and sky conditions. The model performance has been validated with experimental measurements from sites with varied climatic conditions. The model provides acceptable results, with the mean bias error and root mean square error varying between −0.3% and −8.8% and between 9.6% and 20.4%, respectively. Direct measurements of this flux are very scarce, so modelling is needed; this is particularly true regarding the diffuse component. We propose a new parameterization to estimate this component using only measured data of global solar irradiance, which facilitates the construction of long-term data series of PAR in regions where continuous measurements of PAR are not yet performed.
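The validation statistics quoted above, mean bias error and root mean square error expressed as percentages of the mean observation, can be computed as follows (function name is ours):

```python
import math

def mbe_rmse_percent(observed, modelled):
    """Mean bias error and RMSE, expressed as percentages of the mean
    observation (the convention used for the statistics quoted above)."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mbe = sum(m - o for o, m in zip(observed, modelled)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(observed, modelled)) / n)
    return 100.0 * mbe / mean_obs, 100.0 * rmse / mean_obs
```

MBE captures systematic over- or under-estimation (and can be negative, as in the −0.3% to −8.8% range above), while RMSE captures overall scatter.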

  9. The use of spatial empirical models to estimate soil erosion in arid ecosystems.

    Science.gov (United States)

    Abdullah, Meshal; Feagin, Rusty; Musawi, Layla

    2017-02-01

    The central objective of this project was to utilize geographical information systems and remote sensing to compare soil erosion models, including the Modified Pacific Southwest Inter-Agency Committee (MPSIAC) model, the Erosion Potential Method (EPM), and the Revised Universal Soil Loss Equation (RUSLE), and to determine their applicability for arid regions such as Kuwait. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the demilitarized zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. Results showed that the MPSIAC and EPM models were similar in the spatial distribution of erosion, though the MPSIAC had a more realistic spatial distribution and presented finer-level detail. The RUSLE produced unrealistic results. We then predicted the amount of soil loss between coastal and desert areas and between fenced and unfenced sites for each model. In the MPSIAC and EPM models, soil loss differed between fenced and unfenced sites in the desert areas, with higher loss at the unfenced sites due to low vegetation cover. The overall results implied that vegetation cover plays an important role in reducing soil erosion and that fencing is much more important in desert ecosystems to protect against human activities such as overgrazing. We conclude that the MPSIAC model is best for predicting soil erosion in arid regions such as Kuwait. We also recommend the integration of field-based experiments with lab-based spatial analysis and modeling in future research.

  10. Empirically based models of oceanographic and biological influences on Pacific Herring recruitment in Prince William Sound

    Science.gov (United States)

    Sewall, Fletcher; Norcross, Brenda; Mueter, Franz; Heintz, Ron

    2018-01-01

    Abundances of small pelagic fish can change dramatically over time and are difficult to forecast, partially due to variable numbers of fish that annually mature and recruit to the spawning population. Recruitment strength of age-3 Pacific Herring (Clupea pallasii) in Prince William Sound, Alaska, is estimated in an age-structured model framework as a function of spawning stock biomass via a Ricker stock-recruitment model, and forecasted using the 10-year median recruitment estimates. However, stock size has little influence on subsequent numbers of recruits. This study evaluated the usefulness of herring recruitment models that incorporate oceanographic and biological variables. Results indicated herring recruitment estimates were significantly improved by modifying the standard Ricker model to include an index of young-of-the-year (YOY) Walleye Pollock (Gadus chalcogrammus) abundance. The positive relationship between herring recruits-per-spawner and YOY pollock abundance has persisted through three decades, including the herring stock crash of the early 1990s. Including sea surface temperature, primary productivity, and additional predator or competitor abundances singly or in combination did not improve model performance. We suggest that synchrony of juvenile herring and pollock survival may be caused by increased abundance of their zooplankton prey, or high juvenile pollock abundance may promote prey switching and satiation of predators. Regardless of the mechanism, the relationship has practical application to herring recruitment forecasting, and serves as an example of incorporating ecosystem components into a stock assessment model.
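The Ricker stock-recruitment model, extended with an environmental covariate such as the YOY pollock index suggested above, takes the form R = a·S·exp(−b·S + c·E). A sketch with illustrative (not fitted) parameters:

```python
import math

def ricker_recruits(S, a, b, c=0.0, E=0.0):
    """Ricker stock-recruitment R = a*S*exp(-b*S + c*E); S is spawning
    stock biomass, E an optional environmental covariate (e.g. a YOY
    pollock index). a, b, c here are illustrative, not fitted values."""
    return a * S * math.exp(-b * S + c * E)
```

With c = 0 this reduces to the standard Ricker curve, which peaks at S = 1/b; a positive c raises recruits-per-spawner in years of high covariate values, the pattern reported for YOY pollock abundance.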

  11. Sustainable fisheries in shallow lakes: an independent empirical test of the Chinese mitten crab yield model

    Science.gov (United States)

    Wang, Haijun; Liang, Xiaomin; Wang, Hongzhu

    2017-07-01

    Next to excessive nutrient loading, intensive aquaculture is one of the major anthropogenic impacts threatening lake ecosystems. In China, particularly in the shallow lakes of the mid-lower Changjiang (Yangtze) River, continuous overstocking of the Chinese mitten crab (Eriocheir sinensis) could deteriorate water quality and exhaust natural resources. A series of crab yield models and a general optimum-stocking rate model have been established, which seek to benefit both crab culture and the environment. In this research, independent investigations were carried out to evaluate the crab yield models and modify the optimum-stocking model. Low percentage errors (average 47%, median 36%) between observed and calculated crab yields were obtained. Specific values were defined for adult crab body mass (135 g/ind.) and recapture rate (18% and 30% in lakes with submerged macrophyte biomass above and below 1 000 g/m², respectively) to modify the optimum-stocking model. Analysis based on the modified optimum-stocking model indicated that the actual stocking rates in most lakes were much higher than the calculated optimum-stocking rates. This implies that, for most lakes, the current stocking rates should be greatly reduced to maintain healthy lake ecosystems.

  12. Evaluation of Empirical Tropospheric Models Using Satellite-Tracking Tropospheric Wet Delays with Water Vapor Radiometer at Tongji, China

    Directory of Open Access Journals (Sweden)

    Miaomiao Wang

    2016-02-01

    Full Text Available An empirical tropospheric delay model, together with a mapping function, is commonly used to correct the tropospheric errors in global navigation satellite system (GNSS) processing. As is well-known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models, together with five mapping functions, in wet delay calculation. The evaluations are conducted by comparing their slant wet delays with those measured by a water vapor radiometer based on its satellite-tracking function (collected data with large liquid water path are removed). For all 15 combinations of three tropospheric models and five mapping functions, their accuracies as a function of elevation are statistically analyzed by using nine-day data in two scenarios, with and without meteorological data. The results show that (1) with or without meteorological data, there is no practical difference between mapping functions, i.e., Chao, Ifadis, Vienna Mapping Function 1 (VMF1), Niell Mapping Function (NMF), and MTT Mapping Function (MTT); (2) without meteorological data, the UNB3 model is much better than the Saastamoinen and Hopfield models, while the Saastamoinen model performs slightly better than the Hopfield model; (3) with meteorological data, the accuracies of all three tropospheric delay models are improved to be comparable, especially at lower elevations. In addition, kinematic precise point positioning, in which no parameter is set up for tropospheric delay modification, is conducted to further evaluate the performance of the tropospheric delay models in positioning accuracy. It is shown that the UNB3 model is best and can achieve about 10 cm accuracy for the N and E coordinate components and 20 cm accuracy for the U coordinate component whether or not meteorological data are available. This accuracy can be obtained by the Saastamoinen model only when
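Of the models compared, the Saastamoinen model has a simple closed form for the zenith hydrostatic delay; the sketch below implements that standard formula (hydrostatic part only, not the wet delay evaluated above):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (metres) from the Saastamoinen model:
    ZHD = 0.0022768*P / (1 - 0.00266*cos(2*lat) - 2.8e-7*h),
    with surface pressure P in hPa and station height h in metres."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 2.8e-7 * height_m
    return 0.0022768 * pressure_hpa / f
```

At sea level on the equator with P = 1013.25 hPa this gives roughly 2.31 m, which a mapping function then scales to the slant delay at a given elevation angle.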

  13. Rockfall travel distance analysis by using empirical models (Solà d'Andorra la Vella, Central Pyrenees

    Directory of Open Access Journals (Sweden)

    R. Copons

    2009-12-01

    Full Text Available The prediction of rockfall travel distance below a rock cliff is an indispensable activity in rockfall susceptibility, hazard and risk assessment. Although the size of the detached rock mass may differ considerably at each specific rock cliff, small rockfalls (<100 m3) are the most frequent process. Empirical models may provide us with suitable information for predicting the travel distance of small rockfalls over an extensive area at a medium scale (1:100 000–1:25 000). "Solà d'Andorra la Vella" is a rocky slope located close to the town of Andorra la Vella, where the government has been documenting rockfalls since 1999. This documentation consists in mapping the release point and the individual fallen blocks immediately after the event. The documentation of historical rockfalls by morphological analysis, eye-witness accounts and historical images serves to increase the available information. In total, data from twenty small rockfalls have been gathered, comprising about a hundred individual fallen rock blocks. The data acquired have been used to check the reliability of the main empirical models widely adopted (reach and shadow angle models) and to analyse the influence of parameters affecting the travel distance (rockfall size, height of fall along the rock cliff and volume of the individual fallen rock block). For predicting travel distances in maps at medium scales, a method has been proposed based on the "reach probability" concept. The accuracy of results has been tested against the line entailing the farthest fallen boulders, which represents the maximum travel distance of past rockfalls. The paper concludes with a discussion of the application of both empirical models to other study areas.
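Both empirical models named above reduce to an angle between a reference point and a block's stopping position: the reach angle is measured from the release point, the shadow angle from the base of the cliff. A sketch of the shared geometry:

```python
import math

def reach_angle_deg(fall_height_m, travel_distance_m):
    """Angle of the line from the reference point (release point for the
    reach angle, cliff base for the shadow angle) down to the block's
    stopping point; smaller angles correspond to longer runout."""
    return math.degrees(math.atan2(fall_height_m, travel_distance_m))
```

In practice the minimum angle observed over many documented blocks bounds the travel distance: any point on the slope lying above that angle's line from the source is considered reachable.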

  14. Empirical models of monthly and annual surface albedo in managed boreal forests of Norway

    Science.gov (United States)

    Bright, Ryan M.; Astrup, Rasmus; Strømman, Anders H.

    2013-04-01

    As forest management activities play an increasingly important role in the climate change mitigation strategies of Nordic regions such as Norway, Sweden, and Finland, a more comprehensive understanding of the types and magnitude of biogeophysical climate effects and their various tradeoffs with the global carbon cycle becomes essential to avoid implementation of sub-optimal policy. Forest harvest in these regions reduces the albedo "masking effect" and impacts Earth's radiation budget in ways opposing the concomitant carbon cycle perturbations; thus, policies based solely on biogeochemical considerations in these regions risk being counterproductive. There is therefore a need to better understand how human disturbances (i.e., forest management activities) affect important biophysical factors like surface albedo. An 11-year remotely sensed surface albedo dataset coupled with stand-level forest management data for a variety of stands in Norway's most productive logging region is used to develop regression models describing temporal changes in monthly and annual forest albedo following clear-cut harvest disturbance events. Datasets are grouped by dominant tree species and site index (productivity), and two alternate multiple regression models are developed and tested following a potential plus modifier approach. This resulted in an annual albedo model with statistically significant parameters that explains a large proportion of the observed variation, requiring as few as two predictor variables: (i) average stand age, a canopy modifier predictor of albedo, and (ii) stand elevation, a local climate predictor of a forest's potential albedo. The same model structure is used to derive monthly albedo models, with models for winter months generally found superior to summer models, and conifer models generally outperforming deciduous ones. We demonstrate how these statistical models can be applied to routine forest inventory data to predict the albedo

  15. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    Science.gov (United States)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma. Here we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state, empirical model is its ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate from the single-point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
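    For comparison, the Haser model mentioned above gives the coma number density of a parent species in closed form: spherically symmetric outflow at constant speed with exponential photodissociation decay. A short sketch with illustrative (not mission-calibrated) numbers:

    ```python
    import math

    def haser_density(r_m, q_s, v_ms, tau_s):
        """Haser number density (m^-3) of a parent species at cometocentric
        distance r, for production rate Q (molecules/s), outflow speed v (m/s)
        and photodissociation lifetime tau (s)."""
        return q_s / (4.0 * math.pi * r_m**2 * v_ms) * math.exp(-r_m / (v_ms * tau_s))

    # Illustrative 67P-like magnitudes: Q = 1e26 molecules/s, v = 700 m/s, tau = 1e5 s
    n_100km = haser_density(1.0e5, 1.0e26, 700.0, 1.0e5)
    print(f"{n_100km:.3e}")
    ```

    The 1/r² factor expresses flux conservation through spherical shells; the exponential accounts for molecules destroyed en route, which matters mostly at the large distances (10^5-10^6 km) targeted by the extended model.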

  16. Empirical versus mechanistic modelling: comparison of an artificial neural network to a mechanistically based model for quantitative structure pharmacokinetic relationships of a homologous series of barbiturates.

    Science.gov (United States)

    Nestorov, I S; Hadjitodorov, S T; Petrov, I; Rowland, M

    1999-01-01

    The aim of the current study was to compare the predictive performance of a mechanistically based model and an empirical artificial neural network (ANN) model in describing the relationship between the tissue-to-unbound plasma concentration ratios (Kpu's) of 14 rat tissues and the lipophilicity (LogP) of a series of nine 5-n-alkyl-5-ethyl barbituric acids. The mechanistic model comprised the water content, binding capacity, number of binding sites, and binding association constant of each tissue. A backpropagation ANN with two hidden layers (33 neurons in the first layer, 9 neurons in the second) was used for the comparison. The network was trained by an algorithm with adaptive momentum and learning rate, programmed using the ANN Toolbox of MATLAB. The predictive performance of both models was evaluated using a leave-one-out procedure and computation of both the mean prediction error (ME, showing the prediction bias) and the mean squared prediction error (MSE, showing the prediction accuracy). The ME of the mechanistic model was 18% (range, 20 to 57%), indicating a tendency for overprediction; the MSE was 32% (range, 6 to 104%). The ANN had almost no bias (ME 2%; range, 36 to 64%) and greater precision than the mechanistic model (MSE 18%; range, 4 to 70%). Overall, neither model appeared to be a significantly better predictor of the Kpu's in the rat.
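    The leave-one-out ME/MSE evaluation can be sketched as follows. The data and the log-linear predictor here are hypothetical stand-ins (neither the paper's barbiturate measurements nor its two models), used only to show how the bias and accuracy statistics are computed:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical (LogP, Kpu) pairs for one tissue across a homologous series
    logp = np.linspace(0.5, 4.5, 9)
    kpu = 0.8 * np.exp(0.45 * logp) * rng.lognormal(0.0, 0.05, logp.size)

    pct_errors = []
    for i in range(logp.size):
        mask = np.arange(logp.size) != i
        # refit a log-linear model without observation i, then predict it
        slope, intercept = np.polyfit(logp[mask], np.log(kpu[mask]), 1)
        pred = np.exp(intercept + slope * logp[i])
        pct_errors.append(100.0 * (pred - kpu[i]) / kpu[i])

    pe = np.array(pct_errors)
    me = pe.mean()           # mean prediction error (%): bias
    mse = (pe ** 2).mean()   # mean squared prediction error: accuracy
    print(round(me, 1), round(mse, 1))
    ```

    A model with little systematic over- or underprediction has ME near zero even when individual errors (and hence MSE) are large, which is exactly the distinction drawn between the ANN and the mechanistic model above.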

  17. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including the coupled-channel mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of these notes concentrates on the use of the code in practical calculations, with emphasis on aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  18. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    Science.gov (United States)

    Pizzocri, D.; Cappia, F.; Luzzi, L.; Pastore, G.; Rondinella, V. V.; Van Uffelen, P.

    2017-04-01

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. For this purpose, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Based on these new experimental data, we infer an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
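    The inferred exponential reduction of average grain size with local effective burnup can be sketched as follows. The functional form (exponential decay from the as-fabricated size toward a fully restructured HBS size) follows the abstract, but all parameter values below are illustrative, not those of the paper:

    ```python
    import numpy as np

    def grain_size(burnup_gwd_t, d0_um=10.0, d_hbs_um=0.25, b0_gwd_t=20.0):
        """Hypothetical exponential decay of the average UO2 grain size (um)
        with local effective burnup (GWd/t), from an initial size d0 to a
        fully restructured HBS size d_hbs; b0 sets the decay rate."""
        return d_hbs_um + (d0_um - d_hbs_um) * np.exp(-burnup_gwd_t / b0_gwd_t)

    bu = np.linspace(0.0, 120.0, 7)
    d = grain_size(bu)
    print(np.round(d, 2))
    ```

    In a fuel performance code, the shrinking grain size would be evaluated alongside intra-granular gas diffusion, since the abstract treats restructuring and gas depletion as inherently related.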

  19. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    Science.gov (United States)

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-10-01

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and the parameters that define its behavior and attributes are drawn from survey data. Community-level parameters (including social groups, relationships, and communication variables, also from survey data) are encoded to simulate the social processes that influence community opinion. The model demonstrates its capability to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.
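    The mechanics of such an agent-based opinion model can be sketched in a few lines. This is a deliberately toy version, not the authors' survey-calibrated risk-publics model: the population size, update rule, and "amplification" factors are all invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents, n_steps = 200, 50
    # Opinion of water reuse in [0, 1]; initial spread stands in for survey data
    opinion = rng.uniform(0.2, 0.8, n_agents)
    amplification = rng.uniform(0.8, 1.2, n_agents)  # risk amplification/attenuation

    for _ in range(n_steps):
        partners = rng.integers(0, n_agents, n_agents)  # random discussion partner
        # each agent moves toward its partner's opinion, scaled by amplification
        opinion += 0.1 * amplification * (opinion[partners] - opinion)
        opinion = np.clip(opinion, 0.0, 1.0)

    adopters = float((opinion > 0.5).mean())
    print(round(adopters, 2))
    ```

    Replacing the random-partner rule with an empirical social network, and the uniform initial opinions with belief clusters, is what lets a model of this kind reproduce the cluster-dependent dynamics reported above.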

  20. Modelling short and long-term risks in power markets. Empirical evidence from Nord Pool

    International Nuclear Information System (INIS)

    Nomikos, Nikos K.; Soldatos, Orestes A.

    2010-01-01

    In this paper we propose a three-factor spike model that accounts for different speeds of mean reversion between normal and spiky shocks in the Scandinavian power market. In this model both short and long-run factors are unobservable and are hence estimated as latent variables using the Kalman filter. The proposed model has several advantages. First, it seems to capture in a parsimonious way the most important risks that practitioners face in the market, such as spike risk, short-term risk and long-term risk. Second, it explains the seasonal risk premium observed in the market and improves the fit between theoretical and observed forward prices, particularly for long-dated forward contracts. Finally, closed-form solutions for forward contracts, derived from the model, are consistent with the fact that the correlation between contracts of different maturities is imperfect. The resulting model is very promising, providing a very useful policy analysis and financial engineering tool to market participants for risk management and derivative pricing particularly for long-dated contracts. (author)
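    The key idea, different mean-reversion speeds for normal and spiky shocks, can be illustrated with a discretised simulation of two latent factors plus a constant long-run level. This is a toy sketch, not the authors' Kalman-filter-calibrated model; all rates and volatilities are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    dt = 1.0 / 365.0
    x_short = np.zeros(n)   # slowly mean-reverting normal shocks
    x_spike = np.zeros(n)   # very fast mean-reverting spiky shocks
    kappa_s, kappa_j = 5.0, 100.0   # spike factor reverts ~20x faster

    for t in range(1, n):
        x_short[t] = (x_short[t-1] - kappa_s * x_short[t-1] * dt
                      + 0.3 * np.sqrt(dt) * rng.normal())
        jump = rng.poisson(4.0 * dt) * rng.exponential(1.0)  # rare positive spikes
        x_spike[t] = x_spike[t-1] - kappa_j * x_spike[t-1] * dt + jump

    long_term = 3.5  # log-level of the long-run factor, held constant here
    log_price = long_term + x_short + x_spike
    print(round(float(np.exp(log_price).mean()), 1))
    ```

    Because the spike factor decays much faster than the short-term factor, spikes barely affect long-dated forward prices, which is the separation the three-factor model exploits.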

  1. An empirical model to describe performance degradation for warranty abuse detection in portable electronics

    International Nuclear Information System (INIS)

    Oh, Hyunseok; Choi, Seunghyuk; Kim, Keunsu; Youn, Byeng D.; Pecht, Michael

    2015-01-01

    Portable electronics makers have introduced liquid damage indicators (LDIs) into their products to detect warranty abuse caused by water damage. However, under certain conditions, these indicators can exhibit inconsistencies in detecting liquid damage. This study is motivated by the suspect reliability of LDIs in portable electronics. In this paper, first, a scheme of life tests for LDIs is devised in conjunction with a robust color classification rule. Second, a degradation model is proposed that considers the two physical mechanisms for LDIs: (1) phase change from vapor to water and (2) water transport in the porous paper. Finally, the degradation model is validated by additional tests using actual smartphone sets subjected to thermal cycling between −15 °C and 25 °C at a relative humidity of 95%. By employing the innovative life testing scheme and the novel performance degradation model, the performance of LDIs for a particular application can be assessed quickly and accurately. - Highlights: • Devise an efficient scheme of life testing for a warranty abuse detector in portable electronics. • Develop a performance degradation model for the warranty abuse detector used in portable electronics. • Validate the performance degradation model with life tests of actual smartphone sets. • Help portable electronics manufacturers make decisions on warranty service

  2. Parametric and Nonparametric Empirical Regression Models: Case Study of Copper Bromide Laser Generation

    Directory of Open Access Journals (Sweden)

    S. G. Gocheva-Ilieva

    2010-01-01

    Full Text Available In order to model the output laser power of a copper bromide laser with wavelengths of 510.6 and 578.2 nm, we have applied two regression techniques: multiple linear regression and multivariate adaptive regression splines. The models have been constructed on the basis of PCA factors for historical data. The influence of first- and second-order interactions between predictors has been taken into account. The models are easily interpreted and have good predictive power, as established by the results of their validation. The comparison of the derived models shows that those based on multivariate adaptive regression splines have an advantage over the others. The obtained results allow for the clarification of relationships between laser generation and the observed laser input variables, and for better determining their influence on laser generation, in order to improve the experimental setup and laser production technology. They can be useful for the evaluation of known experiments as well as for the prediction of future ones. The developed modeling methodology is also applicable to a wide range of similar laser devices: metal vapor lasers and gas lasers.

  3. Estimation Risk Modeling in Optimal Portfolio Selection: An Empirical Study from Emerging Markets

    Directory of Open Access Journals (Sweden)

    Sarayut Nathaphan

    2010-01-01

    Full Text Available An efficient portfolio is a portfolio that yields maximum expected return given a level of risk, or minimum risk given a level of expected return. However, optimal portfolios do not always seem to be as efficient as intended. Especially during financial crisis periods, an optimal portfolio is not an optimal investment, as it does not yield maximum return given a specific level of risk, and vice versa. One possible explanation for the unimpressive performance of seemingly efficient portfolios is incorrectness in parameter estimates, called "estimation risk in parameter estimates". Six different estimating strategies are employed to explore ex-post portfolio performance when estimation risk is incorporated. These strategies are traditional Mean-Variance (EV), the Adjusted Beta (AB) approach, the Resampled Efficient Frontier (REF), the Capital Asset Pricing Model (CAPM), the Single Index Model (SIM), and the Single Index Model incorporating a shrinkage Bayesian factor, namely the Bayesian Single Index Model (BSIM). Among the six alternative strategies, shrinkage estimators incorporating the single index model outperform the other traditional portfolio selection strategies. Allowing for asset mispricing and applying a Bayesian shrinkage adjustment to each asset's alpha, a single factor, namely excess market return, is adequate in alleviating estimation uncertainty.
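    To illustrate the estimation-risk idea, here is a minimal numpy sketch of unconstrained mean-variance weights, w proportional to the inverse covariance times the mean vector, and of shrinking the mean estimates toward their cross-sectional average. The shrinkage step is a crude stand-in for the Bayesian adjustment in BSIM, and all numbers are hypothetical:

    ```python
    import numpy as np

    mu = np.array([0.08, 0.10, 0.12])           # expected returns (hypothetical)
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])        # covariance matrix (hypothetical)

    # Unconstrained tangency-style weights: w ~ inv(Sigma) @ mu, summing to 1.
    raw = np.linalg.solve(cov, mu)
    w = raw / raw.sum()

    # Shrinking the noisy mean estimates toward their average damps extreme
    # positions, the basic mechanism behind shrinkage-based strategies.
    mu_shrunk = 0.5 * mu + 0.5 * mu.mean()
    w_shrunk = np.linalg.solve(cov, mu_shrunk)
    w_shrunk /= w_shrunk.sum()
    print(np.round(w, 3), np.round(w_shrunk, 3))
    ```

    Because inverting the covariance matrix amplifies errors in the mean estimates, small changes in mu produce large changes in w, which is why unshrunk "optimal" portfolios often disappoint out of sample.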

  4. Carbon emissions, logistics volume and GDP in China: empirical analysis based on panel data model.

    Science.gov (United States)

    Guo, Xiaopeng; Ren, Dongfang; Shi, Jiaxing

    2016-12-01

    This paper studies the relationship among carbon emissions, GDP, and logistics by using a panel data model and a combination of statistical and econometric theory. The model is based on historical data from 10 typical provinces and cities in China during 2005-2014. The model adds a logistics variable to previous studies, proxied by the freight turnover of the provinces. Carbon emissions are calculated from the annual consumption of coal, oil, and natural gas; GDP is the gross domestic product. The results show that logistics volume and GDP both contribute to carbon emissions and that the long-term relationships differ between cities in China, mainly owing to differences in development mode, economic structure, and level of logistics development. After testing the panel model specification, this paper establishes a variable-coefficient panel model. The influence of GDP and logistics on carbon emissions is obtained from the estimated relationships among the variables. The paper concludes with its main findings and provides recommendations toward rational planning of urban sustainable development and environmental protection for China.

  5. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    Directory of Open Access Journals (Sweden)

    Jerker eRönnberg

    2013-07-01

    Full Text Available Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC, albeit in different ways. A revised ELU model is proposed based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.

  6. Dynamic Model of Islamic Hybrid Securities: Empirical Evidence From Malaysia Islamic Capital Market

    Directory of Open Access Journals (Sweden)

    Jaafar Pyeman

    2016-12-01

    Full Text Available Capital structure selection is fundamentally important in corporate financial management, as it influences both return and risk to stakeholders. Despite Malaysia's position as one of the major players in the Islamic financial market, there is still a lack of studies on the capital structure of shariah-compliant firms, especially in relation to hybrid securities. The objective of this study is to determine the hybrid securities issuance model among shariah-compliant firms in Malaysia. As such, this study expands the literature by providing a comprehensive analysis of the hybrid capital structure and by developing a dynamic Islamic hybrid securities model for shariah-compliant firms. We use panel data on 50 companies that issued hybrid securities from 2004 to 2012. The outcomes of the study are based on dynamic GMM estimation of the determinants of hybrid securities. Based on our model, risk and growth are the most important determinant factors for issuing convertible bonds and loan stock. These results suggest that firms that have high risk but good growth prospects will choose the hybrid security of a convertible bond. The model also supports the backdoor equity listing hypothesis of Stein (1992), whereby hybrid securities enable profitable firms to venture into positive-NPV projects by issuing convertible bonds, as they offer a lower coupon rate compared to the normal debt rate.

  7. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Directory of Open Access Journals (Sweden)

    Andréa Nunes Carvalho

    2017-09-01

    Full Text Available Abstract The academic literature presents a research-practice gap in the application of decision support tools to tactical planning problems in real-world organisations. This paper addresses this gap and extends a previous action research study of an optimisation model applied to tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein offer new insights to better understand the practical results that can be achieved through the proposed model. The topics presented include the modelling of objectives, the representation of the production process and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire ideas for academics and practitioners when developing tools for capacity planning problems in similar contexts.

  8. Psychological first aid: a consensus-derived, empirically supported, competency-based training model.

    Science.gov (United States)

    McCabe, O Lee; Everly, George S; Brown, Lisa M; Wendelboe, Aaron M; Abd Hamid, Nor Hashidah; Tallchief, Vicki L; Links, Jonathan M

    2014-04-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities.

  9. The Development and Empirical Validation of an E-based Supply Chain Strategy Optimization Model

    DEFF Research Database (Denmark)

    Kotzab, Herbert; Skjoldager, Niels; Vinum, Thorkil

    2003-01-01

    Examines the formulation of supply chain strategies in complex environments. Argues that current state‐of‐the‐art e‐business and supply chain management, combined into the concept of e‐SCM, as well as the use of transaction cost theory, network theory and resource‐based theory, altogether can be used to form a model for analyzing supply chains with the purpose of reducing the uncertainty of formulating supply chain strategies. Presents the e‐supply chain strategy optimization model (e‐SOM) as a way to analyze supply chains in a structured manner as regards strategic preferences for supply chain design, relations and resources in the chains, with the ultimate purpose of enabling the formulation of optimal, executable strategies for specific supply chains. Uses research results for a specific supply chain to validate the usefulness of the model.

  10. Empirical models for predicting wind potential for wind energy applications in rural locations of Nigeria

    Energy Technology Data Exchange (ETDEWEB)

    Odo, F.C. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria); Department of Physics and Astronomy, University of Nigeria, Nsukka (Nigeria); Akubue, G.U.; Offiah, S.U.; Ugwuoke, P.E. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria)

    2013-07-01

    In this paper, we use the correlation between average wind speed and ambient temperature to develop models for predicting wind potential at two Nigerian locations. Assuming that the troposphere is a typical heterogeneous mixture of ideal gases, we find that for the studied locations, wind speed clearly correlates with ambient temperature through a simple polynomial of 3rd degree. The coefficient of determination and root-mean-square error of the models are 0.81 and 0.0024 for Enugu (6.4°N, 7.5°E), and 0.56 and 0.0041 for Owerri (5.5°N, 7.0°E), respectively. These results suggest that the temperature-based model can be used, with acceptable accuracy, to predict the wind potential needed for preliminary design assessment of wind energy conversion devices at these locations and others with similar meteorological conditions.
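    Fitting a 3rd-degree polynomial of wind speed on temperature, and evaluating it with the goodness-of-fit statistics reported above, can be sketched as follows; the data are synthetic, not the Enugu/Owerri measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    temp_c = np.linspace(18.0, 34.0, 60)                # ambient temperature (degC)
    true = 5.0 + 0.01 * (temp_c - 25.0) ** 3            # hypothetical cubic relation
    wind = true + rng.normal(0.0, 0.2, temp_c.size)     # wind speed (m/s) with noise

    coeffs = np.polyfit(temp_c, wind, 3)                # 3rd-degree polynomial fit
    pred = np.polyval(coeffs, temp_c)

    ss_res = np.sum((wind - pred) ** 2)
    ss_tot = np.sum((wind - wind.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
    rmse = np.sqrt(np.mean((wind - pred) ** 2))         # root-mean-square error
    print(round(r2, 3), round(rmse, 4))
    ```

    With real data, the fitted coefficients would differ by site, which is consistent with the distinct R2/RMSE pairs reported for the two locations.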

  11. Sea ice thermohaline dynamics and biogeochemistry in the Arctic Ocean: Empirical and model results

    Science.gov (United States)

    Duarte, Pedro; Meyer, Amelie; Olsen, Lasse M.; Kauko, Hanna M.; Assmy, Philipp; Rösel, Anja; Itkin, Polona; Hudson, Stephen R.; Granskog, Mats A.; Gerland, Sebastian; Sundfjord, Arild; Steen, Harald; Hop, Haakon; Cohen, Lana; Peterson, Algot K.; Jeffery, Nicole; Elliott, Scott M.; Hunke, Elizabeth C.; Turner, Adrian K.

    2017-07-01

    Large changes in the sea ice regime of the Arctic Ocean have occurred over the last decades justifying the development of models to forecast sea ice physics and biogeochemistry. The main goal of this study is to evaluate the performance of the Los Alamos Sea Ice Model (CICE) to simulate physical and biogeochemical properties at time scales of a few weeks and to use the model to analyze ice algal bloom dynamics in different types of ice. Ocean and atmospheric forcing data and observations of the evolution of the sea ice properties collected from 18 April to 4 June 2015, during the Norwegian young sea ICE expedition, were used to test the CICE model. Our results show the following: (i) model performance is reasonable for sea ice thickness and bulk salinity; good for vertically resolved temperature, vertically averaged Chl a concentrations, and standing stocks; and poor for vertically resolved Chl a concentrations. (ii) Improving current knowledge about nutrient exchanges, ice algal recruitment, and motion is critical to improve sea ice biogeochemical modeling. (iii) Ice algae may bloom despite some degree of basal melting. (iv) Ice algal motility driven by gradients in limiting factors is a plausible mechanism to explain their vertical distribution. (v) Different ice algal bloom and net primary production (NPP) patterns were identified in the ice types studied, suggesting that ice algal maximal growth rates will increase, while sea ice vertically integrated NPP and biomass will decrease as a result of the predictable increase in the area covered by refrozen leads in the Arctic Ocean.

  12. Empirical model for estimating daily erythemal UV radiation in the Central European region

    Energy Technology Data Exchange (ETDEWEB)

    Hlavinka, P.; Trnka, M.; Semeradova, D.; Zalud, Z.; Eitzinger, J. [Mendel Univ., Brno (Czech Republic). Inst. of Agrosystems and Bioclimatology; Dubrovsky, M. [Czech Academy of Sciences, Prague (Czech Republic). Inst. of Atmospheric Physics; Weihs, P.; Simic, S. [Department of Water, Atmosphere and Environment, Vienna (Austria). Inst. of Meteorology; Blumthaler, M. [Medical Univ. of Innsbruck (Austria). Inst. of Medical Physics; Schreder, J. [CMS - Ing. Dr. Schreder GmbH, Kirchbichl (Austria)

    2007-04-15

    Because of its biological effects, erythemal ultraviolet (UV-ERY) radiation (280-400 nm) is a significant part of the solar radiation spectrum. In this study a statistical model for estimating daily UV-ERY radiation values was developed, based on ground measurements conducted at eight Austrian stations (from 2000 to 2002). As inputs the model requires daily global radiation, daily extraterrestrial radiation, information about the total ozone column in the atmosphere, and the altitude of the selected station. The performance of the model was subsequently verified against an independent data set originating from measurements at stations in Austria and the Czech Republic. The verification showed satisfactory performance of the model: the coefficient of determination (R2) varied from 0.97 to 0.99, the root mean square error (RMSE) varied from 9.3% to 17.7%, and the mean bias error (MBE) varied from -2.5% to 2.0%. In addition, the results of the model at the Hradec Kralove station were compared with equivalent UV-ERY data from the Solar Radiation Database (SoDa), which are available on the Internet. After successful verification, the model was implemented within the ArcInfo GIS framework in order to carry out a spatial assessment of a stratospheric ozone reduction episode. The event of July 2005 in the Czech Republic was used as a case study. On 30 July 2005 the total ozone amount dropped 12.5% below the long-term mean, which led to UV-ERY radiation increments ranging from 214 to 391 J.m{sup -2}.day{sup -1}. (orig.)
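    The verification statistics used above (R2, relative RMSE, and relative MBE) can be computed as follows; the daily dose values here are invented for illustration, not the Austrian/Czech measurements:

    ```python
    import numpy as np

    def verification_stats(observed, modelled):
        """R^2, relative RMSE (%) and relative MBE (%) of modelled vs
        observed daily erythemal UV doses (J.m^-2)."""
        obs = np.asarray(observed, float)
        mod = np.asarray(modelled, float)
        r2 = np.corrcoef(obs, mod)[0, 1] ** 2
        rmse_pct = 100.0 * np.sqrt(np.mean((mod - obs) ** 2)) / obs.mean()
        mbe_pct = 100.0 * np.mean(mod - obs) / obs.mean()
        return r2, rmse_pct, mbe_pct

    # Hypothetical daily UV-ERY doses: the model slightly underestimates on average
    obs = np.array([1200.0, 2500.0, 3100.0, 1800.0, 2900.0, 900.0])
    mod = obs * 0.98 + np.array([30.0, -60.0, 50.0, -40.0, 10.0, 20.0])
    r2, rmse_pct, mbe_pct = verification_stats(obs, mod)
    print(round(r2, 3), round(rmse_pct, 1), round(mbe_pct, 1))
    ```

    MBE isolates systematic over- or underestimation, while RMSE mixes bias and scatter, which is why both are reported alongside R2.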

  13. Synthetic Empirical Chorus Wave Model From Combined Van Allen Probes and Cluster Statistics

    Science.gov (United States)

    Agapitov, O. V.; Mourenas, D.; Artemyev, A. V.; Mozer, F. S.; Hospodarsky, G.; Bonnell, J.; Krasnoselskikh, V.

    2018-01-01

    Chorus waves are among the most important natural electromagnetic emissions in the magnetosphere with regard to their potential effects on electron dynamics. They can efficiently accelerate or precipitate electrons trapped in the outer radiation belt, producing either fast increases of relativistic particle fluxes or auroras at high latitudes. Accurately modeling their effects, however, requires detailed models of their wave power and obliquity distribution as a function of geomagnetic activity over a particularly wide spatial domain, rarely available from the statistics of a single satellite mission. Here we seize the opportunity of synthesizing data from the Van Allen Probes and Cluster spacecraft to provide a new comprehensive chorus wave model in the outer radiation belt. The respective spatial coverages of these two missions are shown to be especially complementary and further allow a good cross calibration in the overlap domain. We used 4 years (2012-2016) of Van Allen Probes VLF data in the chorus frequency range up to 12 kHz at latitudes lower than 20°, combined with 10 years of Cluster VLF measurements up to 4 kHz, in order to provide full coverage of geomagnetic latitudes up to 45° in the chorus frequency range 0.1fce-0.8fce. The resulting synthetic statistical model of chorus wave amplitude, obliquity, and frequency is presented in the form of analytical functions of latitude and Kp in three different magnetic local time sectors and for two ranges of L shells outside the plasmasphere. Such a synthetic and reliable chorus model is crucially important for accurately modeling the global acceleration and loss of electrons over the long run in the outer radiation belt, allowing a comprehensive description of electron flux variations over a very wide energy range.

  14. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelling of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a clear improvement of the reliability, skill and sharpness of ensemble forecasts. 
The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological

  15. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  16. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
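The null/non-null decomposition described above implies a simple closed form for the local false discovery rate. The sketch below is an illustrative implementation under that scale mixture of two zero-mean normals; the mixing proportion and the two variances are placeholder values, not the fitted estimates from the paper.

```python
import math

def local_fdr(z, pi0=0.95, sigma0=1.0, sigma1=3.0):
    """Local false discovery rate fdr(z) = pi0*f0(z) / f(z) under a
    scale mixture of two zero-mean normals: null N(0, sigma0^2) with
    weight pi0, non-null N(0, sigma1^2) with weight 1 - pi0."""
    def norm_pdf(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

    f0 = pi0 * norm_pdf(z, sigma0)          # null component density
    f1 = (1.0 - pi0) * norm_pdf(z, sigma1)  # non-null component density
    return f0 / (f0 + f1)
```

Because the non-null component has the larger variance, large |z| values are increasingly likely to be non-null, so fdr(z) decays toward zero in the tails.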

  17. Patients' Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test.

    Science.gov (United States)

    Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-12-06

    Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients' actual use of a smartphone health app. The partial least square method was used to test the theoretical model. The model accounted for .412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. 
The study also confirmed the positive relationship between intention to use

  18. An Empirically-Calibrated Model For Interpreting the Evolution of Galaxies During the Reionization Era

    OpenAIRE

    Stark, Daniel P.; Loeb, Abraham; Ellis, Richard S.

    2007-01-01

    [Abridged] We develop a simple star formation model whose goal is to interpret the emerging body of observational data on star-forming galaxies at z>~6. The efficiency and duty cycle of the star formation activity within dark matter halos are determined by fitting the luminosity functions of Lya emitter and Lyman-break galaxies at redshifts z~5-6. Using our model parameters we predict the likely abundance of star forming galaxies at earlier epochs and compare these to the emerging data in the...

  19. Hypothesis Testing of Edge Organizations: Empirically Calibrating an Organizational Model for Experimentation

    Science.gov (United States)

    2007-06-01


  20. An empirical exploration of the world oil price under the target zone model

    International Nuclear Information System (INIS)

    Tang, Linghui; Hammoudeh, Shawkat

    2002-01-01

    This paper investigates the behavior of the world oil price based on the first-generation target zone model. Using anecdotal data during the period of 1988-1999, we found that OPEC has tried to maintain a weak target zone regime for the oil price. Our econometric tests suggest that the movement of the oil price is not only manipulated by actual and substantial interventions by OPEC but also tempered by market participants' expectations of interventions. As a consequence, the non-linear model based on the target zone theory has very good forecasting ability when the oil price approaches the upper or lower limit of the band

  1. An empirical exploration of the world oil price under the target zone model

    International Nuclear Information System (INIS)

    Linghui Tang; Shawkat Hammoudeh

    2002-01-01

    This paper investigates the behavior of the world oil price based on the first-generation target zone model. Using anecdotal data during the period of 1988-1999, we found that OPEC has tried to maintain a weak target zone regime for the oil price. Our econometric tests suggest that the movement of the oil price is not only manipulated by actual and substantial interventions by OPEC but also tempered by market participants' expectations of interventions. As a consequence, the non-linear model based on the target zone theory has very good forecasting ability when the oil price approaches the upper or lower limit of the band. (author)

  2. Probing the (empirical) quantum structure embedded in the periodic table with an effective Bohr model

    Directory of Open Access Journals (Sweden)

    Wellington Nardin Favaro

    2013-01-01

    Full Text Available The atomic shell structure can be observed by inspecting the experimental periodic properties of the Periodic Table. The (quantum) shell structure emerges from these properties, and in this way quantum mechanics can be explicitly shown by considering the (semi-quantitative) periodic properties. These periodic properties can be obtained with a simple effective Bohr model. An effective Bohr model with an effective quantum defect (u) was considered as a probe in order to show the quantum structure embedded in the Periodic Table. u(Z) shows a quasi-smooth dependence on Z, i.e., u(Z) ≈ Z^(2/5) − 1.
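As a numerical illustration, the quoted fit u(Z) ≈ Z^(2/5) − 1 can be combined with the textbook Rydberg–Ritz quantum-defect energy −R_H/(n − u)². The energy expression here is an assumption made for illustration and may differ from the paper's exact effective-charge treatment.

```python
RYDBERG_EV = 13.605693  # hydrogen Rydberg energy in eV

def quantum_defect(Z):
    """Quasi-smooth fit u(Z) ~ Z**(2/5) - 1 quoted in the abstract."""
    return Z ** 0.4 - 1.0

def level_energy_ev(n, Z):
    """Rydberg-Ritz-type level energy -R_H/(n - u)**2 in eV
    (illustrative textbook form, not necessarily the paper's)."""
    return -RYDBERG_EV / (n - quantum_defect(Z)) ** 2
```

For hydrogen (Z = 1) the defect vanishes and the ordinary Bohr levels −13.6/n² eV are recovered.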

  3. Empirical analyses of a choice model that captures ordering among attribute values

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  4. The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Fernández-Villaverde, Jesús; Rubio-Ramírez, Juan F.

    and impulse response functions. Thus, our analysis introduces GMM estimation for DSGE models approximated up to third-order and provides the foundation for indirect inference and SMM when simulation is required. We illustrate the usefulness of our approach by estimating a New Keynesian model with habits...... and Epstein-Zin preferences by GMM when using first and second unconditional moments of macroeconomic and financial data and by SMM when using additional third and fourth unconditional moments and non-Gaussian innovations....

  5. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and Integral Equation Model (IEM) simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).
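The bias and RMSE figures quoted above (+0.4 dB, 3.5 dB, 2.2 dB) are plain comparison statistics between measured and simulated backscatter. A minimal sketch of their computation; the sample values are hypothetical sigma-0 readings in dB, not data from the study.

```python
import math

def bias_and_rmse(measured_db, simulated_db):
    """Mean difference (bias) and root-mean-square error, in dB,
    between measured backscatter coefficients and model simulations."""
    diffs = [m - s for m, s in zip(measured_db, simulated_db)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Hypothetical sigma-0 values in dB for illustration only.
bias, rmse = bias_and_rmse([-8.0, -10.5, -12.0], [-8.5, -10.0, -13.0])
```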

  6. An empirical model of the Baltic Sea reveals the importance of social dynamics for ecological regime shifts.

    Science.gov (United States)

    Lade, Steven J; Niiranen, Susa; Hentati-Sundberg, Jonas; Blenckner, Thorsten; Boonstra, Wiebren J; Orach, Kirill; Quaas, Martin F; Österblom, Henrik; Schlüter, Maja

    2015-09-01

    Regime shifts triggered by human activities and environmental changes have led to significant ecological and socioeconomic consequences in marine and terrestrial ecosystems worldwide. Ecological processes and feedbacks associated with regime shifts have received considerable attention, but human individual and collective behavior is rarely treated as an integrated component of such shifts. Here, we used generalized modeling to develop a coupled social-ecological model that integrated rich social and ecological data to investigate the role of social dynamics in the 1980s Baltic Sea cod boom and collapse. We showed that psychological, economic, and regulatory aspects of fisher decision making, in addition to ecological interactions, contributed both to the temporary persistence of the cod boom and to its subsequent collapse. These features of the social-ecological system also would have limited the effectiveness of stronger fishery regulations. Our results provide quantitative, empirical evidence that incorporating social dynamics into models of natural resources is critical for understanding how resources can be managed sustainably. We also show that generalized modeling, which is well-suited to collaborative model development and does not require detailed specification of causal relationships between system variables, can help tackle the complexities involved in creating and analyzing social-ecological models.

  7. Lung cancer mortality (1950-1999) among Eldorado uranium workers: a comparison of models of carcinogenesis and empirical excess risk models.

    Directory of Open Access Journals (Sweden)

    Markus Eidemüller

    Full Text Available Lung cancer mortality after exposure to radon decay products (RDP) among 16,236 male Eldorado uranium workers was analyzed. Male workers from the Beaverlodge and Port Radium uranium mines and the Port Hope radium and uranium refinery and processing facility who were first employed between 1932 and 1980 were followed up from 1950 to 1999. A total of 618 lung cancer deaths were observed. The analysis compared the results of the biologically-based two-stage clonal expansion (TSCE) model to the empirical excess risk model. The spontaneous clonal expansion rate of pre-malignant cells was reduced at older ages under the assumptions of the TSCE model. Exposure to RDP was associated with an increase in the clonal expansion rate during exposure but not afterwards. The increase was stronger for lower exposure rates. A radiation-induced bystander effect could be a possible explanation for such an exposure response. Results on excess risks were compared to a linear dose-response parametric excess risk model with attained age, time since exposure and dose rate as effect modifiers. In all models the excess relative risk decreased with increasing attained age, increasing time since exposure and increasing exposure rate. Large model uncertainties were found, in particular for small exposure rates.
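The empirical alternative to the TSCE model mentioned above is a linear dose-response ERR model with effect modifiers. The sketch below uses a common log-linear parameterization of that form; the coefficient values and centering ages are placeholders, not the fitted Eldorado estimates.

```python
import math

def excess_relative_risk(dose, attained_age, time_since_exposure, exposure_rate,
                         beta=1.0, a_age=-0.03, a_tse=-0.05, a_rate=-0.1):
    """Illustrative linear ERR model with log-linear effect modifiers:
    ERR = beta * D * exp(a_age*(age-60) + a_tse*(tse-20) + a_rate*ln(rate)).
    All coefficients are placeholder values for demonstration."""
    modifier = math.exp(a_age * (attained_age - 60.0)
                        + a_tse * (time_since_exposure - 20.0)
                        + a_rate * math.log(exposure_rate))
    return beta * dose * modifier
```

With negative modifier coefficients, the ERR declines with attained age, time since exposure and exposure rate, matching the qualitative behavior reported in the abstract.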

  8. Applications of ecological niche modeling for species delimitation: a review and empirical evaluation using day geckos (Phelsuma) from Madagascar.

    Science.gov (United States)

    Raxworthy, Christopher J; Ingram, Colleen M; Rabibisoa, Nirhy; Pearson, Richard G

    2007-12-01

    Although the systematic utility of ecological niche modeling is generally well known (e.g., concerning the recognition and discovery of areas of endemism for biogeographic analyses), there has been little discussion of applications concerning species delimitation, and to date, no empirical evaluation has been conducted. However, ecological niche modeling can provide compelling evidence for allopatry between populations, and can also detect divergent ecological niches between candidate species. Here we present results for two taxonomically problematic groups of Phelsuma day geckos from Madagascar, where we integrate ecological niche modeling with mitochondrial DNA and morphological data to evaluate species limits. Despite relatively modest levels of genetic and morphological divergence, for both species groups we find divergent ecological niches between closely related species and parapatric ecological niche models. Niche models based on the new species limits provide a better fit to the known distribution than models based upon the combined (lumped) species limits. Based on these results, we elevate three subspecies of Phelsuma madagascariensis to species rank and describe a new species of Phelsuma from the P. dubia species group. Our phylogeny continues to support a major endemic radiation of Phelsuma in Madagascar, with dispersals to Pemba Island and the Mascarene Islands. We conclude that ecological niche modeling offers great potential for species delimitation, especially for taxonomic groups exhibiting low vagility and localized endemism and for groups with more poorly known distributions. In particular, niche modeling should be especially sensitive for detecting recent parapatric speciation driven by ecological divergence, when the environmental gradients driving speciation are represented within the ecological niche models.

  9. Anxiety Psychopathology in African American Adults: Literature Review and Development of an Empirically Informed Sociocultural Model

    Science.gov (United States)

    Hunter, Lora Rose; Schmidt, Norman B.

    2010-01-01

    In this review, the extant literature concerning anxiety psychopathology in African American adults is summarized to develop a testable, explanatory framework with implications for future research. The model was designed to account for purported lower rates of anxiety disorders in African Americans compared to European Americans, along with other…

  10. An Empirical Study of Kirkpatrick's Evaluation Model in the Hospitality Industry

    Science.gov (United States)

    Chang, Ya-Hui Elegance

    2010-01-01

    This study examined Kirkpatrick's training evaluation model (Kirkpatrick & Kirkpatrick, 2006) by assessing a sales training program conducted at an organization in the hospitality industry. The study assessed the employees' training outcomes of knowledge and skills, job performance, and the impact of the training upon the organization. By…

  11. Medical students' attitudes towards breaking bad news: an empirical test of the World Health Organization model.

    NARCIS (Netherlands)

    Valck, C. de; Bensing, J.; Bruynooghe, R.

    2001-01-01

    The literature regarding breaking bad news distinguishes three disclosure models: non-disclosure, full-disclosure and individualized disclosure. In this study, we investigated the relations between attitudes regarding disclosure of bad news and global professional attitudes regarding medical care in

  12. Application of a Model for the Integration of Technology in Kindergarten: An Empirical Investigation in Taiwan

    Science.gov (United States)

    Lin, Chien-Heng

    2012-01-01

    The present models for the integration of computer technology not only cannot satisfy teachers' actual needs but also are difficult to follow and perform by teachers in their classroom teaching. Consequently, the integration cannot be implemented properly and effectively in the real classroom teaching. Therefore, a practical integration model…

  13. An Empirical Study on the Preference of Supermarkets with Analytic Hierarchy Process Model

    Science.gov (United States)

    Weng Siew, Lam; Singh, Ranjeet; Singh, Bishan; Weng Hoe, Lam; Kah Fai, Liew

    2018-04-01

    Large-scale retailers are very important to consumers in this fast-paced world. The selection of a desirable market in which to purchase products and services becomes a major concern among consumers in their daily life due to the vast choices available. Therefore, the objective of this paper is to determine the most preferred supermarket among AEON, Jaya Grocer, Tesco, Giant and Econsave by undergraduate students in Malaysia with the Analytic Hierarchy Process (AHP) model. Besides that, this study also aims to determine the priority of decision criteria in the selection of supermarkets among the undergraduate students with the AHP model. The decision criteria employed in this study are product quality, competitive price, cleanliness, product variety, location, good price labelling, fast checkout and employee courtesy. The results of this study show that AEON is the most preferred supermarket, followed by Jaya Grocer, Tesco, Econsave and Giant among the students based on the AHP model. Product quality, cleanliness and competitive price are ranked as the top three influential factors in this study. This study is significant because it helps to determine the most preferred supermarket as well as the most influential decision criteria in the preference of supermarkets among undergraduate students with the AHP model.
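AHP derives criterion priorities as the principal eigenvector of a pairwise-comparison matrix. A minimal power-iteration sketch; the 3-criterion matrix below is illustrative, not the study's survey data.

```python
def ahp_weights(pairwise, iters=100):
    """Priority weights of an AHP pairwise-comparison matrix,
    computed as the principal eigenvector via power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # normalize so weights sum to 1
    return w

# Illustrative consistent comparison of three hypothetical criteria
# (e.g. product quality vs. competitive price vs. cleanliness).
matrix = [[1.0, 5.0 / 3.0, 2.5],
          [0.6, 1.0, 1.5],
          [0.4, 2.0 / 3.0, 1.0]]
weights = ahp_weights(matrix)  # -> approximately [0.5, 0.3, 0.2]
```

For a perfectly consistent matrix like this one, power iteration recovers the generating weight vector exactly; for real survey matrices it converges to the principal eigenvector.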

  14. Analyzing the business model for mobile payment from banks' perspective : An empirical study

    NARCIS (Netherlands)

    Guo, J.; Nikou, S.; Bouwman, W.A.G.A.

    2013-01-01

    The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all

  15. THE INFLUENCE OF A MATHEMATICAL MODEL IN PRODUCTION STRATEGY: CONCEPTUAL DEVELOPMENT AND EMPIRICAL TEST

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Chagas Rodrigues

    2012-07-01

    Full Text Available Acquiring and producing only what is strictly necessary is the goal of organizations, since they aim to make companies more competitive and thereby reduce production costs. The research method is applied in nature, with a qualitative and quantitative approach; the objectives of the research are exploratory and descriptive, with technical procedures divided into bibliographic research, documentary research and a survey, concluding with a case study. On this assumption, the main objective of this research is to develop and analyze a mathematical model that minimizes costs and maximizes the postponement of stocks in a company in the pulp, paper and paper products sector. Only four works that deal with the issue of demand management, supply chain and inventory postponement were found: two articles and two theses. These studies address the issue by modeling the productive time of the supply chain. For production segments, this research may enable the development of demand management practices and production strategy, allowing cost reductions and possible productivity gains. With the mathematical model developed here, it was possible to analyze the behavior of demand and its influence on the production strategy and on strategy formulation regarding the purchase of raw materials and finished-product storage, applying the company's results from the last four years to the proposed model.

  16. Developing a Model for Agile Supply: an Empirical Study from Iranian Pharmaceutical Supply Chain

    Science.gov (United States)

    Rajabzadeh Ghatari, Ali; Mehralian, Gholamhossein; Zarenezhad, Forouzandeh; Rasekh, Hamid Reza

    2013-01-01

    Agility is the fundamental characteristic of a supply chain needed for survival in turbulent markets, where environmental forces create additional uncertainty, resulting in higher risk in supply chain management. In addition, agility helps provide the right product at the right time to the consumer. The main goal of this research is therefore to promote supplier selection in the pharmaceutical industry according to the formative basic factors. Moreover, this paper can configure its supply network to achieve an agile supply chain. The present article analyzes the supply part of the supply chain based on the SCOR model, used to assess agile supply chains by highlighting their specific characteristics and applicability in providing the active pharmaceutical ingredient (API). This methodology provides an analytical model that enables potential suppliers to be assessed against multiple criteria using both quantitative and qualitative measures. In addition, for prioritizing the critical factors, the TOPSIS algorithm has been used, a common technique among MADM models. Finally, several factors such as delivery speed, planning and reorder segmentation, trust development and material quantity adjustment are identified and prioritized as critical factors for being agile in the supply of API. PMID:24250689
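TOPSIS ranks alternatives by their closeness to an ideal solution. A minimal sketch of the standard algorithm; the criteria values, weights and benefit/cost labels below are illustrative, not the paper's API-supplier data.

```python
import math

def topsis_scores(matrix, weights, benefit):
    """TOPSIS closeness scores: vector-normalize each criterion column,
    apply weights, then compare each alternative's distance to the
    ideal and anti-ideal solutions. benefit[j] is True when larger
    values of criterion j are better (e.g. quality), False for costs."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [(max if benefit[j] else min)(v[i][j] for i in range(m)) for j in range(n)]
    worst = [(min if benefit[j] else max)(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Alternatives with scores closer to 1 sit nearer the ideal solution; sorting the scores yields the priority ordering of critical factors the abstract refers to.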

  17. Toward a Multifaceted Model of Internet Access for Understanding Digital Divides: An Empirical Investigation

    NARCIS (Netherlands)

    van Deursen, Alexander Johannes Aloysius Maria; van Dijk, Johannes A.G.M.

    2015-01-01

    In this investigation, a multifaceted model of Internet appropriation that encompasses four types of access—motivational, material, skills, and usage—is tested with a representative sample of the Dutch population. The analysis indicates that while the digital divide policies' focus has moved to

  18. A framework for evaluating forest landscape model predictions using empirical data and knowledge

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson; William D. Dijak; Qia. Wang

    2014-01-01

    Evaluation of forest landscape model (FLM) predictions is indispensable to establish the credibility of predictions. We present a framework that evaluates short- and long-term FLM predictions at site and landscape scales. Site-scale evaluation is conducted through comparing raster cell-level predictions with inventory plot data whereas landscape-scale evaluation is...

  19. Taboo trade-off aversion : A discrete choice model and empirical analysis

    NARCIS (Netherlands)

    Chorus, C.G.; Pudane, B.; Mouter, N.; Campbell, Danny

    2017-01-01

    An influential body of literature in moral psychology suggests that decision makers consider trade-offs morally problematic, or taboo, when the attributes traded off against each other belong to different 'spheres', such as friendship versus market transactions. This study is the first to model

  20. Circumplex Model of Marital Systems: An Empirical Study of Clinic and Nonclinic Couples

    Science.gov (United States)

    Sprenkle, Douglas H.; Olson, David H. L.

    1978-01-01

    The interaction processes of 25 couples receiving marriage counseling were compared with a control group of 25 couples not receiving counseling. The study was a partial test of a circumplex model of marital and family systems. The major variable was adaptability. Creativity and support were also examined. (Author)

  1. The influence of online store beliefs on consumer online impulse buying: A model and empirical application

    NARCIS (Netherlands)

    Verhagen, T.; van Dolen, W.

    2011-01-01

    Our study provides insight into the relationships between online store beliefs and consumer online impulse buying behavior. Drawing upon cognitive emotion theory, we developed a model and showed how beliefs about functional convenience (online store merchandise attractiveness and ease of use) and

  2. Collaborative Models of Instruction: The Empirical Foundations of Inclusion and Co-Teaching

    Science.gov (United States)

    Solis, Michael; Vaughn, Sharon; Swanson, Elizabeth; Mcculley, Lisa

    2012-01-01

    A summary of inclusion and co-teaching syntheses was conducted to better understand the evidence base associated with collaborative models of instruction. Six syntheses were identified: four investigated inclusion, and two investigated co-teaching. Collectively, the syntheses represented 146 studies. The syntheses investigated research on…

  3. Preliminary Empirical Model of Crucial Determinants of Best Practice for Peer Tutoring on Academic Achievement

    Science.gov (United States)

    Leung, Kim Chau

    2015-01-01

    Previous meta-analyses of the effects of peer tutoring on academic achievement have been plagued with theoretical and methodological flaws. Specifically, these studies have not adopted both fixed and mixed effects models for analyzing the effect size; they have not evaluated the moderating effect of some commonly used parameters, such as comparing…

  4. Empirical Test of the Know, See, Plan, Do Model for Curriculum Design in Leadership Education

    Science.gov (United States)

    Martin, Beth Ann; Allen, Scott J.

    2016-01-01

    This research assesses the Know, See, Plan, portions of the Know, See, Plan, Do (KSPD) model for curriculum design in leadership education. There were 3 graduate student groups, each taught using 1 of 3 different curriculum designs (KSPD and 2 control groups). Based on a pretest, post-test design, students' performance was measured to assess their…

  5. Developing a model for agile supply: an empirical study from Iranian pharmaceutical supply chain.

    Science.gov (United States)

    Rajabzadeh Ghatari, Ali; Mehralian, Gholamhossein; Zarenezhad, Forouzandeh; Rasekh, Hamid Reza

    2013-01-01

    Agility is the fundamental characteristic a supply chain needs to survive in turbulent markets, where environmental forces create additional uncertainty and higher risk in supply chain management. In addition, agility helps provide the right product, at the right time, to the consumer. The main goal of this research is therefore to improve supplier selection in the pharmaceutical industry according to the basic formative factors of agility, and to show how a supply network can be configured to achieve an agile supply chain. The present article analyzes the supply part of the supply chain based on the SCOR model, which is used to assess agile supply chains by highlighting their specific characteristics and applicability in providing the active pharmaceutical ingredient (API). The methodology provides an analytical model that enables potential suppliers to be assessed against multiple criteria using both quantitative and qualitative measures. To prioritize the critical factors, the TOPSIS algorithm, a common multi-attribute decision-making (MADM) technique, was applied. Finally, several factors, such as delivery speed, planning and reorder segmentation, trust development and material quantity adjustment, are identified and prioritized as critical for agile supply of the API.
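
    The TOPSIS ranking step the abstract mentions can be sketched as follows; the supplier scores, criteria and weights below are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (m alternatives x n criteria) raw scores
    weights : criterion weights summing to 1
    benefit : True for benefit criteria, False for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * weights
    # Ideal and anti-ideal points depend on criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness coefficient, higher is better

# Hypothetical scores of 3 suppliers on delivery speed, trust, quality
# (all treated as benefit criteria here).
scores = [[7, 8, 6], [9, 6, 7], [6, 9, 8]]
cc = topsis(scores, np.array([0.5, 0.3, 0.2]), np.array([True, True, True]))
print(np.argsort(cc)[::-1])  # supplier indices ranked best to worst
```

    Higher closeness coefficients mark suppliers nearer the ideal point; in the study, the weights would come from the prioritized agility factors.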

  6. Combining DSMC Simulations and ROSINA/COPS Data of Comet 67P/Churyumov-Gerasimenko to Develop a Realistic Empirical Coma Model and to Determine Accurate Production Rates

    Science.gov (United States)

    Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.

    2015-12-01

    We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma. The empirical model characterizes the neutral coma in a comet-centered, Sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is its ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location and calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period August 2014 - August 2015 (perihelion).
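
    For contrast with the empirical model, the simpler Haser model mentioned above has a closed form for a parent species: n(r) = Q / (4 pi v r^2) * exp(-r / (v tau)). The sketch below shows how an in-situ density measurement maps to a production rate under that assumption; the numbers are illustrative, not fitted 67P values:

```python
import math

def haser_density(r, Q, v, tau):
    """Parent-species number density from the classic Haser model.

    r   : cometocentric distance [m]
    Q   : production rate [molecules/s]
    v   : radial outflow speed [m/s]
    tau : photodissociation lifetime [s]
    """
    return Q / (4.0 * math.pi * v * r**2) * math.exp(-r / (v * tau))

def production_rate(n_meas, r, v, tau):
    """Invert the Haser model: production rate from a measured density."""
    return n_meas * 4.0 * math.pi * v * r**2 * math.exp(r / (v * tau))

# Round-trip check with illustrative numbers: density at 30 km for Q = 1e26,
# then invert back to the production rate (recovers ~1e26).
n = haser_density(r=30e3, Q=1e26, v=700.0, tau=5e4)
print(production_rate(n, r=30e3, v=700.0, tau=5e4))
```

    The empirical model described in the abstract replaces this spherically symmetric picture with a density field that also depends on local time and declination.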

  7. EVALUATION OF PERCEIVED QUALITY OF THE WEBSITE OF AN ONLINE BOOKSTORE: AN EMPIRICAL APPLICATION OF THE BARNES AND VIDGEN MODEL

    Directory of Open Access Journals (Sweden)

    Ueliton da Costa Leonidio

    2011-05-01

    This paper’s objective is to evaluate the perceived quality of the website of an online bookstore using the Barnes and Vidgen model. Conducted over the Internet, this empirical research collected data on the perceived quality of the website, which is used to sell products and services online. The questionnaire used to gather the data was answered by a convenience sample of 213 respondents. The importance of the quality attributes and the dimensions of perceived quality were investigated. The results indicate that the three dimensions named Reliability, Usability and Information were the most noticeable.

  8. A new global empirical model of the electron temperature with the inclusion of the solar activity variations for IRI

    Czech Academy of Sciences Publication Activity Database

    Truhlík, Vladimír; Bilitza, D.; Třísková, Ludmila

    2012-01-01

    Roč. 64, č. 6 (2012), s. 531-543 ISSN 1343-8832 R&D Projects: GA AV ČR IAA300420603; GA ČR GAP209/10/2086 Grant - others: NASA (US) NNH06CD17C. Institutional support: RVO:68378289 Keywords: Electron temperature * ionosphere * plasmasphere * empirical models * International Reference Ionosphere Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.921, year: 2012 http://www.terrapub.co.jp/journals/EPS/abstract/6406/64060531.html

  9. Empirical assessment of state-and-transition models with a long-term vegetation record from the Sonoran Desert.

    Science.gov (United States)

    Bagchi, Sumanta; Briske, David D; Wu, X B; McClaran, Mitchel P; Bestelmeyer, Brandon T; Fernández-Giménez, Maria E

    2012-03-01

    Resilience-based frameworks, including state-and-transition models (STM), are being increasingly called upon to inform policy and guide ecosystem management, particularly in rangelands. Yet, multiple challenges impede their effective implementation: (1) paucity of empirical tests of resilience concepts, such as alternative states and thresholds, and (2) heavy reliance on expert models, which are seldom tested against empirical data. We developed an analytical protocol to identify unique plant communities and their transitions, and applied it to a long-term vegetation record from the Sonoran Desert (1953-2009). We assessed whether empirical trends were consistent with resilience concepts, and evaluated how they may inform the construction and interpretation of expert STMs. Seven statistically distinct plant communities were identified based on the cover of 22 plant species in 68 permanent transects. We recorded 253 instances of community transitions, associated with changes in species composition between successive samplings. Expectedly, transitions were more frequent among proximate communities with similar species pools than among distant communities. But unexpectedly, communities and transitions were not strongly constrained by soil type and topography. Only 18 transitions featured disproportionately large compositional turnover (species dissimilarity ranged between 0.54 and 0.68), and these were closely associated with communities that were dominated by the common shrub (burroweed, Haplopappus tenuisecta), indicating that only some, and not all, communities may be prone to large compositional change. Temporal dynamics in individual transects illustrated four general trajectories: stability, nondirectional drift, reversibility, and directional shifts that were not reversed even after 2-3 decades. The frequency of transitions and the accompanying species dissimilarity were both positively correlated with fluctuation in precipitation, indicating that climatic
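
    The abstract does not name the dissimilarity index behind the 0.54-0.68 figures; Bray-Curtis is a common choice for compositional turnover, so the sketch below uses it as an assumption, with hypothetical cover values:

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two community composition vectors
    (0 = identical composition, 1 = no species shared)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

# Hypothetical cover values for 4 species before and after a transition.
before = [30.0, 10.0, 5.0, 0.0]
after = [5.0, 12.0, 0.0, 20.0]
print(round(bray_curtis(before, after), 3))  # → 0.634
```

    A transition with dissimilarity in the 0.54-0.68 range, as above, would count among the large-turnover events the study highlights.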

  10. The Influence of Quality on E-Commerce Success: An Empirical Application of the Delone and Mclean IS Success Model

    OpenAIRE

    Ultan Sharkey; Murray Scott; Thomas Acton

    2010-01-01

    This research addresses difficulties in measuring e-commerce success by implementing the DeLone and McLean (D&M) model of IS success (1992, 2003) in an e-commerce environment. This research considers the influence of quality on e-commerce success by measuring the information quality and system quality attributes of an e-commerce system and the intention to use, user satisfaction and intention to transact from a sample of respondents. This research provides an empirical e-commerce application ...

  11. A one-dimensional semi-empirical model considering transition boiling effect for dispersed flow film boiling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yu-Jou [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Pan, Chin, E-mail: cpan@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Low Carbon Energy Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China)

    2017-05-15

    Highlights: • Seven heat transfer mechanisms are studied numerically by the model. • A semi-empirical method is proposed to account for the transition boiling effect. • The parametric effects on the heat transfer mechanisms are investigated. • The thermal non-equilibrium phenomenon between vapor and droplets is investigated. - Abstract: The objective of this paper is to develop a one-dimensional semi-empirical model for dispersed flow film boiling that considers transition boiling effects. The proposed model consists of conservation equations, i.e., vapor mass, vapor energy, droplet mass and droplet momentum conservation, and a set of closure relations to address the interactions among wall, vapor and droplets. The results show that the transition boiling effect is of vital importance in the dispersed flow film boiling regime, since the flow conditions downstream are influenced by the conditions upstream. In addition, by evaluating the vapor temperature and the amount of heat transferred to droplets, the present paper investigates the thermal non-equilibrium phenomenon under different flow conditions. Comparison of the wall temperature predictions with 1394 experimental data points from the literature, spanning system pressures of 30–140 bar, heat fluxes of 204–1837 kW/m² and mass fluxes of 380–5180 kg/m²·s, shows very good agreement, with an RMS error of 8.80% and a standard deviation of 8.81%. Moreover, the model well depicts the thermal non-equilibrium phenomenon in dispersed flow film boiling.
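
    The quoted RMS figure is presumably the root-mean-square of the relative wall-temperature errors; a minimal sketch of that metric, using hypothetical temperatures rather than the 1394-point dataset:

```python
import math

def rms_percent_error(predicted, measured):
    """Root-mean-square of the relative errors, expressed in percent."""
    rel = [(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * math.sqrt(sum(e * e for e in rel) / len(rel))

# Hypothetical wall temperatures [K] for four axial locations.
measured = [650.0, 700.0, 760.0, 820.0]
predicted = [660.0, 690.0, 775.0, 805.0]
print(round(rms_percent_error(predicted, measured), 2))  # → 1.71
```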

  12. Empirical model of impingement impact. Environmental Sciences Division publication No. 1289

    International Nuclear Information System (INIS)

    Barnthouse, L.W.; DeAngelis, D.L.; Christensen, S.W.

    1979-01-01

    A simple model, derived from Ricker's (1975) theory of fisheries dynamics, that can be used to estimate the impact of impingement of juvenile fish by power plants on year-class abundance in vulnerable species is described. The only data required are estimates of the initial number of impingeable juveniles, the number impinged, and the rate of total mortality during the period of vulnerability. The impact of impingement is expressed in the model as the conditional mortality rate, rather than the more commonly used exploitation rate. The conditional mortality rate is superior as a measure of impact for two reasons: it accounts for the differential impact of impinging fish of different ages, and it is numerically equivalent to the fractional reduction in year-class abundance due to impingement. We present an application of the model using the 1974 year-class of the Hudson River striped bass population as an example. We then show how the model can be modified to account for seasonal fluctuations in the rate of impingement, discuss the effect of these fluctuations on the calculated impact, and discuss the influence on model output of errors in the measurement of abundance, impingement, and total mortality. It is evident from this analysis that estimates of impingement impact are as sensitive to errors in estimates of population size and mortality as to estimates of the number of fish impinged. Thus, it is not possible to reliably estimate the impact of impingement on a vulnerable fish species unless a substantial effort is devoted to population studies
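
    A sketch of the Ricker-style conversion from an exploitation rate to a conditional mortality rate, the measure of impact the abstract advocates; it assumes impingement and natural mortality act as concurrent competing risks, and the numbers are illustrative:

```python
import math

def conditional_mortality(u, Z):
    """Conditional impingement mortality from Ricker-type competing risks.

    u : exploitation rate = fish impinged / initial impingeable population
    Z : instantaneous total mortality over the vulnerability period
    """
    # Recover the instantaneous impingement rate F from u = (F/Z)(1 - e^-Z),
    # then convert to the conditional rate m = 1 - e^-F, i.e. the fractional
    # reduction in year-class abundance attributable to impingement alone.
    F = u * Z / (1.0 - math.exp(-Z))
    return 1.0 - math.exp(-F)

# Illustrative: 5% of juveniles impinged while total mortality is Z = 2.0.
print(round(conditional_mortality(0.05, 2.0), 4))  # → 0.1092
```

    Note that the conditional rate exceeds the exploitation rate here: fish removed by impingement include some that natural mortality would otherwise have taken, which is exactly the distinction the abstract draws.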

  13. Empirical assessment of the validity limits of the surface wave full ray theory using realistic 3-D Earth models

    KAUST Repository

    Parisi, Laura

    2016-02-10

    The surface wave full ray theory (FRT) is an efficient tool to calculate synthetic waveforms of surface waves. It combines the concept of local modes with exact ray tracing as a function of frequency, providing a more complete description of surface wave propagation than the widely used great circle approximation (GCA). The purpose of this study is to evaluate the ability of the FRT approach to model teleseismic long-period surface waveforms (T ∼ 45–150 s) in the context of current 3-D Earth models to empirically assess its validity domain and its scope for future studies in seismic tomography. To achieve this goal, we compute vertical and horizontal component fundamental mode synthetic Rayleigh waveforms using the FRT, which are compared with calculations using the highly accurate spectral element method. We use 13 global earth models including 3-D crustal and mantle structure, which are derived by successively varying the strength and lengthscale of heterogeneity in current tomographic models. For completeness, GCA waveforms are also compared with the spectral element method. We find that the FRT accurately predicts the phase and amplitude of long-period Rayleigh waves (T ∼ 45–150 s) for almost all the models considered, with errors in the modelling of the phase (amplitude) of Rayleigh waves being smaller than 5 per cent (10 per cent) in most cases. The largest errors in phase and amplitude are observed for T ∼ 45 s and for the three roughest earth models considered that exhibit shear wave anomalies of up to ∼20 per cent, which is much larger than in current global tomographic models. In addition, we find that overall the GCA does not predict Rayleigh wave amplitudes well, except for the longest wave periods (T ∼ 150 s) and the smoothest models considered. Although the GCA accurately predicts Rayleigh wave phase for current earth models such as S20RTS and S40RTS, FRT's phase errors are smaller, notably for the shortest wave periods considered (T

  14. Microwave Remote Sensing Modeling of Ocean Surface Salinity and Winds Using an Empirical Sea Surface Spectrum

    Science.gov (United States)

    Yueh, Simon H.

    2004-01-01

    Active and passive microwave remote sensing techniques have been investigated for the remote sensing of ocean surface wind and salinity. We revised an ocean surface spectrum using the CMOD-5 geophysical model function (GMF) for the European Remote Sensing (ERS) C-band scatterometer and the Ku-band GMF for the NASA SeaWinds scatterometer. The predictions of microwave brightness temperatures from this model agree well with satellite, aircraft and tower-based microwave radiometer data. This suggests that the impact of surface roughness on microwave brightness temperatures and radar scattering coefficients of sea surfaces can be consistently characterized by a roughness spectrum, providing physical basis for using combined active and passive remote sensing techniques for ocean surface wind and salinity remote sensing.

  15. Circumplex model of marital and family systems: III. Empirical evaluation with families.

    Science.gov (United States)

    Russell, C S

    1979-03-01

    This study was designed to test the circumplex model of family systems that hypothesizes moderate family cohesion and moderate adaptability to be more functional than either extreme. Thirty-one Catholic family triads with daughters ranging in age from 14 to 17 years participated in a structured family interaction game (SIMFAM) and filled out questionnaires that measured the variables of cohesion and adaptability and the facilitative variables of support and creativity. All families were considered normal but were subdivided into those that had more and less difficulty with this adolescent. Analysis of the data yielded considerable support for the circumplex model. High family functioning was associated with moderate family cohesion and adaptability, and low family functioning had extreme scores on these dimensions. As predicted, high family support and creativity were also related to high family functioning. Implications of these findings for family therapy are discussed.

  16. Lead, copper and zinc biosorption from bicomponent systems modelled by empirical Freundlich isotherm

    Energy Technology Data Exchange (ETDEWEB)

    Sag, Y.; Kaya, A.; Kutsal, T. [Dept. of Chemical Engineering, Hacettepe Univ., Beytepe, Ankara (Turkey)

    2000-07-01

    The biosorption of lead, copper and zinc ions on Rhizopus arrhizus has been studied for three single-component and two binary systems. The equilibrium data have been analysed using the Freundlich adsorption model. The characteristic parameters for the Freundlich adsorption model have been determined, and the competition coefficients for the competitive biosorption of Pb(II)-Cu(II) at pH 4.0 and 5.0, and Pb(II)-Zn(II) at pH 5.0, have been calculated. For the individual single-component isotherms, lead has the highest biosorption capacity followed by copper, then zinc. The capacity of lead in the two binary systems is always significantly greater than those of the other metal ions, in agreement with the single-component data. Only a partial selectivity for copper ions has been obtained at pH 4.0. (orig.)
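
    The Freundlich isotherm q = K_F · C^(1/n) is commonly fitted by log-log linearization; a minimal sketch on synthetic data (the parameter values are illustrative, not the study's fitted constants):

```python
import numpy as np

def fit_freundlich(C_eq, q_eq):
    """Fit q = K_F * C**(1/n) via the linear form
    log q = log K_F + (1/n) log C.  Returns (K_F, n)."""
    slope, intercept = np.polyfit(np.log(C_eq), np.log(q_eq), 1)
    return np.exp(intercept), 1.0 / slope

# Synthetic equilibrium data generated from K_F = 2.0, n = 2.5.
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # equilibrium concentration
q = 2.0 * C ** (1 / 2.5)                        # uptake per unit biomass
K_F, n = fit_freundlich(C, q)
print(round(K_F, 3), round(n, 3))  # → 2.0 2.5
```

    A larger K_F indicates a higher sorption capacity, consistent with the ordering Pb > Cu > Zn reported above.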

  17. An Empirical Assessment of a Technology Acceptance Model for Apps in Medical Education.

    Science.gov (United States)

    Briz-Ponce, Laura; García-Peñalvo, Francisco José

    2015-11-01

    The evolution and growth of mobile applications ("apps") in our society is a reality. This general trend is still upward, and app use has also penetrated the medical education community. However, little is known about students' and professionals' views on introducing "apps" within the Medical School curriculum. The aim of this research is to design, implement and verify that the Technology Acceptance Model (TAM) can be employed to measure and explain the acceptance of mobile technology and "apps" within medical education. The methodology was based on a survey distributed to students and medical professionals from the University of Salamanca. The model explains 46.7% of the behavioral intention to use mobile devices or "apps" for learning, and will help us justify and understand the current situation of introducing "apps" into the Medical School curriculum.

  18. Empirical models in the description of prickly pear shoot (Nopal) drying kinetics

    Directory of Open Access Journals (Sweden)

    Emmanuel M. Pereira

    The objective of this study was to describe the technological process involved in the drying kinetics of fresh-cut prickly pear shoots through numerical and analytical solutions. Shoots of two different prickly pear species were used, ‘Gigante’ and ‘Miúda’. Drying was performed at different temperatures (50, 60, 70 and 80 °C) and samples were weighed continuously. The experimental data were expressed as moisture ratio. The Page model showed the best fit to the drying kinetics of minimally processed ‘Gigante’ and ‘Miúda’ prickly pear shoots, with the best coefficients of determination and Chi-square values. The Peleg and Wang & Singh models cannot be used to simulate the drying of ‘Gigante’ and ‘Miúda’ prickly pear shoots within the evaluated range of temperatures, as they produce an inconsistent graphical pattern.
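
    The Page model expresses the moisture ratio as MR = exp(-k t^n); a minimal sketch of fitting it by linearization, ln(-ln MR) = ln k + n ln t, on synthetic data (the parameter values are illustrative, not the coefficients reported for ‘Gigante’ or ‘Miúda’ shoots):

```python
import numpy as np

def page_mr(t, k, n):
    """Moisture ratio predicted by the Page model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

def fit_page(t, mr):
    """Linearised Page fit: ln(-ln MR) = ln k + n ln t.  Returns (k, n)."""
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
    return np.exp(intercept), slope

# Synthetic drying curve with k = 0.01, n = 1.3 (illustrative values).
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0])  # drying time [min]
mr = page_mr(t, 0.01, 1.3)
k, n = fit_page(t, mr)
print(round(k, 4), round(n, 3))  # → 0.01 1.3
```

    In practice the fit quality would be judged by the coefficient of determination and Chi-square, as the study does.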

  19. Acculturation and mental health – empirical verification of J.W. Berry’s model of acculturative stress

    OpenAIRE

    Koch, M. W.; Bjerregaard, P.; Curtis, C.

    2004-01-01

    Objectives. Many studies concerning mental health among ethnic minorities have used the concept of acculturation as an explanatory model, in particular J.W. Berry’s model of acculturative stress. But Berry’s theory has been empirically verified only a few times. The aims of the study were to examine whether Berry’s hypothesis about the connection between acculturation and mental health can be empirically verified for Greenlanders living in Denmark and to analyse whether acculturation plays a ...

  20. Projected Climate Impacts to South African Maize and Wheat Production in 2055: A Comparison of Empirical and Mechanistic Modeling Approaches

    Science.gov (United States)

    Estes, Lyndon D.; Beukes, Hein; Bradley, Bethany A.; Debats, Stephanie R.; Oppenheimer, Michael; Ruane, Alex C.; Schulze, Roland; Tadross, Mark

    2013-01-01

    Crop model-specific biases are a key uncertainty affecting our understanding of climate change impacts to agriculture. There is increasing research focus on intermodel variation, but comparisons between mechanistic (MMs) and empirical models (EMs) are rare despite both being used widely in this field. We combined MMs and EMs to project future (2055) changes in the potential distribution (suitability) and productivity of maize and spring wheat in South Africa under 18 downscaled climate scenarios (9 models run under 2 emissions scenarios). EMs projected larger yield losses or smaller gains than MMs. The EMs' median-projected maize and wheat yield changes were -3.6% and 6.2%, respectively, compared to 6.5% and 15.2% for the MM. The EM projected a 10% reduction in the potential maize growing area, where the MM projected a 9% gain. Both models showed increases in the potential spring wheat production region (EM = 48%, MM = 20%), but these results were more equivocal because both models (particularly the EM) substantially overestimated the extent of current suitability. The substantial water-use efficiency gains simulated by the MMs under elevated CO2 accounted for much of the EM-MM difference, but EMs may have more accurately represented crop temperature sensitivities. Our results align with earlier studies showing that EMs may show larger climate change losses than MMs. Crop forecasting efforts should expand to include EM-MM comparisons to provide a fuller picture of crop-climate response uncertainties.