WorldWideScience

Sample records for models empirical models

  1. Empirical Vector Autoregressive Modeling

    NARCIS (Netherlands)

    M. Ooms (Marius)

    1993-01-01

    Chapter 2 introduces the baseline version of the VAR model, with its basic statistical assumptions that we examine in the sequel. We first check whether the variables in the VAR can be transformed to meet these assumptions. We analyze the univariate characteristics of the series.
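
    As a rough illustration of such a baseline workflow, the Python sketch below differences two synthetic series, checks stationarity with an ADF test, and fits a VAR. The data and the statsmodels dependency are assumptions for illustration, not part of the thesis.

        # A hedged sketch of the baseline VAR workflow, assuming statsmodels is
        # installed; `gdp` and `inflation` are synthetic stand-in series.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        data = pd.DataFrame({
            "gdp": rng.normal(0.5, 1.0, 200).cumsum(),
            "inflation": rng.normal(0.2, 0.5, 200).cumsum(),
        })

        # Difference the series so they better meet the VAR's stationarity
        # assumptions, then verify with an ADF unit-root test per variable.
        diffed = data.diff().dropna()
        for col in diffed:
            adf_stat, pvalue, *_ = adfuller(diffed[col])
            print(f"{col}: ADF p-value = {pvalue:.3f}")

        result = VAR(diffed).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
        print(result.summary())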

  2. Empirical Model Building: Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques. Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these models ...

  3. Model uncertainty in growth empirics

    NARCIS (Netherlands)

    Prüfer, P.

    2008-01-01

    This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high ...

  4. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to ...

  5. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Scott, Paul [TransPower, Poway, CA (United States)]

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  6. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
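
    The two-stage idea can be sketched with a plain (uncontaminated) gamma prior, which keeps the Poisson update conjugate; the counts and exposures below are synthetic, and the simplification relative to the paper's contaminated gamma class is deliberate.

        import numpy as np

        counts = np.array([2, 0, 3, 1, 4, 2, 1])                  # events per unit
        exposure = np.array([1.5, 1.0, 2.0, 1.0, 2.5, 1.5, 1.0])  # observation time

        # Stage 1: method-of-moments fit of gamma hyperparameters (alpha, beta)
        rates = counts / exposure
        mean, var = rates.mean(), rates.var(ddof=1)
        beta = mean / var        # gamma rate parameter
        alpha = mean * beta      # gamma shape parameter

        # Stage 2: conjugate Bayes update of each unit's Poisson intensity
        post_alpha = alpha + counts
        post_beta = beta + exposure
        print("posterior mean intensities:", (post_alpha / post_beta).round(3))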

  7. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5- to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
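
    The consistency metric described, model uncertainty ranges overlapping study confidence intervals, reduces to an interval-overlap test. A minimal Python sketch using the 30-year figures quoted above:

        def ranges_overlap(model_range, study_ci):
            (m_lo, m_hi), (s_lo, s_hi) = model_range, study_ci
            return m_lo <= s_hi and s_lo <= m_hi

        # 30-year cumulative invasive cancer risk (%), model range vs study CI
        comparisons = {
            "inadequately treated": ((30.9, 49.7), (28.4, 48.3)),
            "appropriately treated": ((0.7, 1.3), (0.4, 3.3)),
        }
        for group, (model_range, study_ci) in comparisons.items():
            print(group, "consistent:", ranges_overlap(model_range, study_ci))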

  8. Empirical high-latitude electric field models

    International Nuclear Information System (INIS)

    Heppner, J.P.; Maynard, N.C.

    1987-01-01

    Electric field measurements from the Dynamics Explorer 2 satellite have been analyzed to extend the empirical models previously developed from dawn-dusk OGO 6 measurements (J.P. Heppner, 1977). The analysis embraces large quantities of data from polar crossings entering and exiting the high latitudes in all magnetic local time zones. Paralleling the previous analysis, the modeling is based on the distinctly different polar cap and dayside convective patterns that occur as a function of the sign of the Y component of the interplanetary magnetic field. The objective, which is to represent the typical distributions of convective electric fields with a minimum number of characteristic patterns, is met by deriving one pattern (model BC) for the northern hemisphere with a +Y interplanetary magnetic field (IMF) and southern hemisphere with a -Y IMF and two patterns (models A and DE) for the northern hemisphere with a -Y IMF and southern hemisphere with a +Y IMF. The most significant large-scale revisions of the OGO 6 models are (1) on the dayside where the latitudinal overlap of morning and evening convection cells reverses with the sign of the IMF Y component, (2) on the nightside where a westward flow region poleward from the Harang discontinuity appears under model BC conditions, and (3) magnetic local time shifts in the positions of the convection cell foci. The modeling above was followed by a detailed examination of cases where the IMF Z component was clearly positive (northward). Neglecting the seasonally dependent cases where irregularities obscure pattern recognition, the observations range from reasonable agreement with the new BC and DE models, to cases where different characteristics appeared primarily at dayside high latitudes.

  9. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atom radii of component elements will change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B, so that the atom with larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B respectively are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for signs of deviations according to the electron density at the Wigner-Seitz cell from the Thomas-Fermi-Dirac-Cheng model.
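
    For reference, Vegard's law for a binary solution predicts the ideal lattice parameter as the composition-weighted average of the pure-component values, and the sign of a measurement's deviation from it is the quantity the model explains. A small Python sketch with hypothetical numbers:

        def vegard(x, a_A, a_B):
            """Ideal lattice parameter at mole fraction x of component A."""
            return x * a_A + (1.0 - x) * a_B

        a_A, a_B = 4.05, 3.61      # lattice parameters of pure A and B (angstrom)
        x, a_measured = 0.5, 3.78  # composition and a hypothetical measurement

        a_ideal = vegard(x, a_A, a_B)
        print(f"Vegard prediction: {a_ideal:.3f} angstrom")
        print("deviation sign:", "negative" if a_measured < a_ideal else "positive")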

  10. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5)χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e·χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles.

  11. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR). [fr]

  12. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect on the performance prediction curves at high mass flow-rates. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  13. Forecasting Inflation through Econometrics Models: An Empirical ...

    African Journals Online (AJOL)

    This article aims at modeling and forecasting inflation in Pakistan. For this purpose a number of econometric approaches are implemented and their results are compared. In ARIMA models, adding additional lags for p and/or q necessarily reduced the sum of squares of the estimated residuals. When a model is estimated ...
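
    The lag-selection point can be made concrete: the residual sum of squares falls mechanically as AR/MA lags are added, so penalized criteria such as AIC are compared instead. A Python sketch on a synthetic series (assuming statsmodels; not the article's data):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        inflation = 5 + np.cumsum(rng.normal(0, 0.3, 240))  # synthetic monthly series

        for order in [(1, 1, 0), (1, 1, 1), (2, 1, 1), (2, 1, 2)]:
            fit = ARIMA(inflation, order=order).fit()
            rss = np.sum(fit.resid ** 2)          # falls as lags are added
            print(order, f"RSS={rss:.2f}", f"AIC={fit.aic:.1f}")

        best = ARIMA(inflation, order=(1, 1, 1)).fit()
        print("next 3 forecasts:", best.forecast(steps=3).round(2))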

  14. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  15. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    2015-02-04

    The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, ...

  16. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, the performance of the Baranyi model was compared with those of the empirical modified Gompertz, logistic, and Huang models. Higher values of R2 and modeling efficiency, and lower absolute values of mean bias error, root mean square error, mean percentage error and chi-square, were obtained with ...

  17. Empirical soot formation and oxidation model

    Directory of Open Access Journals (Sweden)

    Boussouara Karima

    2009-01-01

    Modelling internal combustion engines can be done following different approaches, depending on the type of problem to be simulated. A diesel combustion model has been developed and implemented in a full-cycle engine simulation; the model accounts for transient fuel spray evolution, fuel-air mixing, ignition, combustion, and soot pollutant formation. Models of turbulent diffusion-flame combustion apply to the diffusion flames encountered in industry, typically in diesel engines, where particulate emission represents one of the most deleterious pollutants generated during combustion. Stringent standards on particulate emission, along with specific emphasis on the size of emitted particulates, have resulted in increased interest in a fundamental understanding of the mechanisms of soot particulate formation and oxidation in internal combustion engines. A phenomenological numerical model which can predict the particle size distribution of the emitted soot will be very useful in explaining the observed results and will also be of use in developing better particulate control techniques. The diesel engine chosen for simulation is a version of the Caterpillar 3406. We employ a standard finite-volume computational fluid dynamics code, KIVA3V-RELEASE2.

  18. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    ...been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicates that the model of Van der Burgh (1972) is the most suitable empirical model...

  19. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measures in mechanical products. ... various cutting speeds have been developed using regression analysis software.

  20. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measures in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, mean absolute percentage deviation ... The factors considered were work piece properties.

  1. Empirical Comparison of Criterion Referenced Measurement Models

    Science.gov (United States)

    1976-10-01

    ...rument consisting of a large number of items. The models would then be used to estimate the true scores using a smaller and more realistic number of items. This approach is empirical and more directly oriented to practical applications where testing time and the ...

  2. On the empirical relevance of the transient in opinion models

    Energy Technology Data Exchange (ETDEWEB)

    Banisch, Sven, E-mail: sven.banisch@universecity.d [Mathematical Physics, Physics Department, Bielefeld University, 33501 Bielefeld (Germany); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal); Araujo, Tanya, E-mail: tanya@iseg.utl.p [Research Unit on Complexity in Economics (UECE), ISEG, TULisbon, 1249-078 Lisbon (Portugal); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal)

    2010-07-12

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows to relate transient opinion configurations to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.
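
    A minimal Python sketch of the bit-vector dynamics described (similarity raises the interaction probability, and an interaction copies one differing bit, producing still more similarity); the population size, bit count and update count are illustrative, not the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(42)
        n_agents, k = 100, 8
        opinions = rng.integers(0, 2, size=(n_agents, k))

        for _ in range(50_000):
            i, j = rng.choice(n_agents, size=2, replace=False)
            similarity = np.mean(opinions[i] == opinions[j])
            if similarity < 1.0 and rng.random() < similarity:
                # i adopts one randomly chosen bit on which the pair differs
                bit = rng.choice(np.flatnonzero(opinions[i] != opinions[j]))
                opinions[i, bit] = opinions[j, bit]

        # Transient opinion profiles, the configurations compared to election data
        profiles, counts = np.unique(opinions, axis=0, return_counts=True)
        print("distinct transient profiles:", len(profiles))
        print("largest profile share:", counts.max() / n_agents)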

  3. NOx Prediction for FBC Boilers Using Empirical Models

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  4. Empirical agent-based modelling: challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications. It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM. In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes' ready to be implemented. Agent-based modeling (AB...

  5. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models, ...

  6. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  7. Empirical modeling of oxygen uptake of flow ...

    African Journals Online (AJOL)

    ...structure. Keywords: stepped chute, skimming flow, aeration ... [3] Toombes, L. and Chanson, H., "Air-water flow and gas transfer at aeration ..." ... numerical model of the flow behaviour through smooth and stepped ...

  8. Bankruptcy risk model and empirical tests

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Urošević, Branko; Stanley, H. Eugene

    2010-01-01

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor—the debt-to-asset ratio R—in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes’s theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees—although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers. PMID:20937903
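
    The Zipf-scaling step can be sketched as a rank-size regression on the debt-to-asset ratio R; the synthetic Pareto sample below stands in for the firm data used in the paper. A Python sketch:

        import numpy as np

        rng = np.random.default_rng(7)
        R = rng.pareto(a=1.5, size=2000) + 1.0   # synthetic debt-to-asset ratios

        R_sorted = np.sort(R)[::-1]              # descending
        ranks = np.arange(1, len(R_sorted) + 1)

        # Fit log(rank) = c - alpha*log(R) on the tail (largest 10% of ratios)
        tail = slice(0, len(R) // 10)
        slope, _ = np.polyfit(np.log(R_sorted[tail]), np.log(ranks[tail]), 1)
        print(f"estimated Zipf exponent: {-slope:.2f}")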

  9. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized ... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice, such a model is meant to support the development of vocational teachers' professionalism at courses and in organizational contexts ...

  10. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of the Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypothesis in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  11. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rate of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). Then the companies play games in order to obtain more possession rate in the market under certain rules. Numerical simulations based on the model display a competition developing process, which qualitatively and quantitatively agrees with our empirical investigation results.

  12. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of exhaust flue gas in boilers is one of the most effective ways to further improve the thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature is below the acid dew point, fouling and corrosion will occur on the heating surfaces in the second pass of boilers. Knowledge of how to accurately predict the acid dew point is therefore essential. By investigating the previous models of acid dew point prediction, an improved thermodynamic correlation formula between the acid dew point and its influencing factors is derived first. A semi-empirical prediction model is then proposed, which is validated against both field-test and experimental data and compared with the previous models.
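
    A Python sketch of the generic semi-empirical form used in this literature (the classic Verhoff-Banchero correlation has this shape): 1000/T_dew is linear in ln p_H2O, ln p_SO3 and their product, with partial pressures in mmHg. The coefficients below are illustrative placeholders of that order of magnitude, not the improved correlation derived in the paper.

        import math

        def acid_dew_point(p_h2o_mmhg, p_so3_mmhg,
                           a=2.276e-3, b=-2.943e-5, c=-8.58e-5, d=6.2e-6):
            """Return T_dew in kelvin; coefficients are placeholders."""
            lw, ls = math.log(p_h2o_mmhg), math.log(p_so3_mmhg)
            inv_T = a + b * lw + c * ls + d * lw * ls   # 1/T_dew in 1/K
            return 1.0 / inv_T

        T = acid_dew_point(p_h2o_mmhg=76.0, p_so3_mmhg=1.5e-2)
        print(f"estimated acid dew point: {T - 273.15:.0f} degC")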

  13. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as a tool for hydrological analysis by probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models had to be found in which it would be possible to create our own data and use them to estimate the flow. The estimated design floods are a good illustration of the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is rare. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed and also other hydrological elements (runoff, water volumes, ...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
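
    The frequency-analysis step can be sketched by fitting a Gumbel (EV1) distribution to annual maxima and reading off design floods at chosen return periods; the data below are synthetic stand-ins (Python, assuming scipy is available):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        annual_max = 120.0 + 40.0 * rng.gumbel(size=35)   # synthetic annual maxima

        loc, scale = stats.gumbel_r.fit(annual_max)
        for T in (10, 50, 100):
            # exceedance probability 1/T -> quantile at 1 - 1/T
            q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
            print(f"{T}-year design flood: {q:.0f} m^3/s")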

  14. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging, the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature ... of normal pressure and tool/work piece interface temperature. The model is verified by process testing, measuring friction at varying reductions in cold forward rod extrusion. Keywords: empirical friction model, cold forging, simulative friction tests.

  15. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for an interval of three years (2010-2012), measured using various instruments for Yola, with recorded data collected from the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  16. Empirical Model for Predicting Rate of Biogas Production ...

    African Journals Online (AJOL)

    Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...

  17. A semi-empirical two-phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented. A comparison is made between the computed peak stress and velocity versus scaled range and those measured on several recent tuff events.

  18. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

    Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility by the distance to it and the attractiveness of it. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
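
    A minimal Python sketch of the gravity p-median objective: demand gravitates to facilities in proportion to attractiveness and a distance decay (a Huff-style allocation), and each candidate set of p facilities is scored by expected weighted distance. All coordinates, weights and the decay parameter are illustrative, not the paper's data:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(5)
        customers = rng.uniform(0, 100, size=(60, 2))
        demand = rng.uniform(1, 10, size=60)
        sites = rng.uniform(0, 100, size=(8, 2))
        attract = rng.uniform(0.5, 2.0, size=8)
        beta, p = 0.1, 3

        def expected_distance(chosen):
            idx = list(chosen)
            d = np.linalg.norm(customers[:, None, :] - sites[idx], axis=2)
            w = attract[idx] * np.exp(-beta * d)      # gravity weights
            prob = w / w.sum(axis=1, keepdims=True)   # Huff-style allocation
            return np.sum(demand[:, None] * prob * d)

        best = min(combinations(range(len(sites)), p), key=expected_distance)
        print("best subset:", best, "objective:", round(expected_distance(best), 1))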

  19. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements of N uptake in spring barley and ryegrass catch crops, supplemented with data from the literature on the temporal release of organic residues in soil. The model estimates a faster mineralisation rate for organic N in pig slurry compared with cattle slurry, and the description includes an initial N ...

  1. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  2. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    Science.gov (United States)

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled, as well as for humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling built on observations of real behaviors. Differences in the dynamic behaviors of individual PAM actuators are due not only to the structures of the PAM actuators themselves, but also to the variations of their material properties introduced in manufacturing processes. To overcome these difficulties, the proposed empirical models are experimentally derived from the real physical behaviors of the PAM actuators as implemented. In case studies, the simulated results show good agreement with experimental results, indicating that the proposed methodology can describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

    The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature, and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and inter-metallic particle sizes have been found to contribute significantly to the magnitude of the oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion as a function of acceleration factors for tin content and accumulative annealing. This model has been incorporated into a fuel performance computer code. The cladding oxide thickness data obtained from the semi-empirical corrosion model have been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Keywords: corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)
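
    A Python sketch of the modified-Arrhenius idea described above: a base oxide growth rate with activation energy Q is scaled by acceleration factors for tin content and accumulated annealing. All coefficients are hypothetical placeholders, not the authors' fitted values:

        import numpy as np

        R_GAS = 8.314  # J/(mol K)

        def oxide_growth_rate(T_interface_K, tin_wt_pct, anneal_param,
                              A=1.0e9, Q=1.2e5, c_sn=0.3, c_a=0.05):
            """Oxide growth rate (micrometres/day); coefficients hypothetical."""
            base = A * np.exp(-Q / (R_GAS * T_interface_K))  # Arrhenius kinetics
            f_tin = 1.0 + c_sn * (tin_wt_pct - 1.3)          # tin acceleration
            f_anneal = 1.0 + c_a * np.log1p(anneal_param)    # annealing factor
            return base * f_tin * f_anneal

        rate = oxide_growth_rate(T_interface_K=600.0, tin_wt_pct=1.5, anneal_param=2.0)
        print(f"oxide thickness after 400 days: {400.0 * rate:.1f} um")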

  4. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of methods to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. I have focused on the latter method (sometimes referred to as the empirical model based on statistical learning) because of some practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can accept not only statistical correlation but also some extent of physical knowledge of a system. Detailed examples of such problems are as follows: (1) omission of important signals from a group caused by the lack of observations, (2) problems with time-delayed signals, and (3) the problem of choosing an optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy is presented, together with case studies illustrating the performance of the method.

  5. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.

  6. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of this paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small-scale neighbourhood conditions are hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model; we estimate the model and interpret the estimates for the summary measures of impacts. By this analysis we show that the model structure makes it possible to capture small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
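
    For reference, the spatial Durbin model and its reduced form can be written and simulated directly; the Python sketch below uses a row-standardized nearest-neighbour weight matrix and illustrative parameter values, not estimates from the Helsinki data:

        import numpy as np

        rng = np.random.default_rng(11)
        n = 200
        coords = rng.uniform(0, 10, size=(n, 2))

        # Row-standardized 5-nearest-neighbour spatial weight matrix W
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        W = np.zeros((n, n))
        for i, idx in enumerate(np.argsort(d, axis=1)[:, :5]):
            W[i, idx] = 1.0 / 5.0

        X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + regressor
        beta, theta, rho = np.array([1.0, 0.8]), np.array([0.0, 0.4]), 0.5
        eps = rng.normal(scale=0.2, size=n)

        # SDM: y = rho*W y + X beta + W X theta + eps, i.e.
        # y = (I - rho W)^(-1) (X beta + W X theta + eps)
        y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + W @ (X @ theta) + eps)
        print("simulated prices:", y[:5].round(2))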

  7. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to rub out the delay between regime switching and its detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable distributions, power laws and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.

  8. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
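
    The POD step can be sketched in a few lines of Python: snapshots are stacked as columns, the SVD gives an orthonormal modal basis, and projection yields reduced coordinates for a Galerkin model (the ERA and balanced-truncation stages are not shown). Synthetic snapshots stand in for simulation or experimental flow data:

        import numpy as np

        rng = np.random.default_rng(6)
        n_points, n_snapshots = 2000, 120
        snapshots = rng.normal(size=(n_points, n_snapshots)).cumsum(axis=1)

        fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)

        # Thin SVD: columns of U are POD modes, s**2 their energies
        U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1   # modes for 99% energy
        print(f"{r} POD modes capture 99% of fluctuation energy")

        a = U[:, :r].T @ fluctuations                # reduced coordinates
        print("reduced coordinates shape:", a.shape)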

  9. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    ...and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises, and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us to explore the process model in its entirety, including different drivers of subsidiary absorptive capacity (organizational mechanisms and contextual drivers), the three original dimensions of absorptive capacity (recognition, assimilation, application), and related outcomes (implementation and internalization of the best practice). The study's findings reveal that managers have discretion in promoting absorptive capacity through the application of specific organizational mechanisms and that the impact of contextual drivers on subsidiary absorptive capacity is not direct, but mediated ...

  10. An empirical model for the melt viscosity of polymer blends

    International Nuclear Information System (INIS)

    Dobrescu, V.

    1981-01-01

    On the basis of experimental data for blends of polyethylene with different polymers an empirical equation is proposed to describe the dependence of melt viscosity of blends on component viscosities and composition. The model ensures the continuity of viscosity vs. composition curves throughout the whole composition range, the possibility of obtaining extremum values higher or lower than the viscosities of components, allows the calculation of flow curves of blends from the flow curves of components and their volume fractions. (orig.)

  11. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  12. Business models of micro businesses: Empirical evidence from creative industries

    Directory of Open Access Journals (Sweden)

    Pfeifer Sanja

    2017-01-01

    A business model describes how a business identifies and creates value for customers and how it organizes itself to capture some of this value in a profitable manner. Previous studies of business models in creative industries have only recently identified the unresolved issues in this field of research. The main objective of this article is to analyse the structure and diversity of business models and to deduce how these components interact or change in the context of micro and small businesses in creative services such as advertising, architecture and design. The article uses a qualitative approach. Case studies and semi-structured, in-depth interviews with six owners/managers of micro businesses in Croatia provide rich data. Structural coding in data analysis has been performed manually. The qualitative analysis has indicative relevance for the assessment and comparison of business models; however, it provides insights into which components of business models seem to be consolidated and which seem to contribute to the diversity of business models in creative industries. The article contributes to the advancement of empirical evidence and conceptual constructs that might lead to more advanced methodological approaches and the proposition of core typologies or classifications of business models in creative industries. In addition, a more detailed mapping of the different choices available in managing value creation, value capturing or value networking might be a valuable help for owners/managers who want to change or cross-fertilize their business models.

  13. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the Information value itself. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated value of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
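
    A Python sketch of the proposed estimate: bins are closed only once they contain at least k goods and k bads, and the usual formula IV = sum_i (g_i - b_i) * ln(g_i / b_i) is applied over the bins' shares of goods and bads. The data below are synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        good = rng.random(n) < 0.9                           # 1 = good client
        score = rng.normal(loc=np.where(good, 0.6, 0.4), scale=0.15)

        def information_value(score, good, k=30):
            order = np.argsort(score)
            g_tot, b_tot = good.sum(), (~good).sum()
            iv, g_bin, b_bin = 0.0, 0, 0
            for idx in order:                    # walk bins along the score axis
                g_bin += int(good[idx])
                b_bin += int(~good[idx])
                if g_bin >= k and b_bin >= k:    # bin holds k goods and k bads
                    g, b = g_bin / g_tot, b_bin / b_tot
                    iv += (g - b) * np.log(g / b)
                    g_bin, b_bin = 0, 0
            return iv                            # an underfilled tail bin is dropped

        print(f"IV = {information_value(score, good):.3f}")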

  14. Empirical models of wind conditions on Upper Klamath Lake, Oregon

    Science.gov (United States)

    Buccola, Norman L.; Wood, Tamara M.

    2010-01-01

    Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate-Adaptive Regressive Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded as compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds. The simulated tracer concentration was similar between model results when ...

  15. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

    The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids in a firm’s effort to create a superior ‘innovation capability’. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts as well as poses significant challenges for management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations for the composition and the structures of the resource compositions have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose in assisting the decision-making that must consider the resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorized list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to gain a model to assess the performance of different business model configurations. Furthermore, different applications for the developed resource configuration may be envisioned.

  16. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

    Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and the total economic and insured catastrophe losses in seven regions of the world over the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
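
    As a rough sketch of the credibility machinery behind such estimates, the snippet below computes plain Bühlmann credibility premiums from a small invented table of regional losses; the Bühlmann-Straub variant additionally weights by exposure, which is omitted here.

```python
# Plain Bühlmann credibility premiums; rows are regions, columns are years.
# The loss figures are invented for illustration.
import numpy as np

losses = np.array([
    [1.2, 0.8, 1.5, 1.1],
    [3.4, 2.9, 4.1, 3.7],
    [0.5, 0.7, 0.4, 0.6],
])
r, n = losses.shape
xbar_i = losses.mean(axis=1)             # per-region mean losses
xbar = losses.mean()                     # collective mean
s2 = losses.var(axis=1, ddof=1).mean()   # expected process variance
a = xbar_i.var(ddof=1) - s2 / n          # variance of hypothetical means (truncate at 0 in practice)
Z = n / (n + s2 / a)                     # credibility factor
premium = Z * xbar_i + (1 - Z) * xbar    # credible premium per region
print(Z, premium)
```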

  17. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence, such as the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network, in which not only the contacts but also their times are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past; that is, individuals are affected only by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
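
    A minimal sketch of such a temporal-network threshold rule: a node activates once the fraction of its distinct contacts within a trailing time window that are already active reaches the threshold. The contact list and parameter values below are illustrative, not the paper's datasets.

```python
# Watts-style threshold dynamics on a temporal contact list.
from collections import defaultdict

contacts = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'c'), (4, 'c', 'd'), (5, 'b', 'd')]
active = {'a'}            # initial seed
threshold, window = 0.5, 3

for t, i, j in sorted(contacts):
    # distinct partners each node met within the trailing window (t - window, t]
    recent = defaultdict(set)
    for s, u, v in contacts:
        if t - window < s <= t:
            recent[u].add(v)
            recent[v].add(u)
    for node in (i, j):
        if node not in active and recent[node]:
            if len(recent[node] & active) / len(recent[node]) >= threshold:
                active.add(node)
print(active)
```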

  18. Empirical membrane lifetime model for heavy duty fuel cell systems

    Science.gov (United States)

    Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik

    2016-12-01

    Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
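
    The abstract's wording suggests a life-stress form along the following lines: a Weibull time-to-failure whose scale parameter is log-linear in transformed stresses, with inverse-power and exponential (Arrhenius-like) terms. The regressors and symbols below are assumptions for illustration, not the fitted model.

```latex
% Weibull reliability with a log-linear life-stress relationship; V is cell
% voltage, T temperature, RH a humidity measure, and the \beta_k are fitted
% coefficients (all placeholder notation).
\eta(V, T, \mathrm{RH}) = \exp\!\bigl(\beta_0 + \beta_1 \ln V + \beta_2 / T + \beta_3\,\mathrm{RH}\bigr),
\qquad
R(t) = \exp\!\bigl[-\,(t/\eta)^{\gamma}\bigr]
```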

  19. Empirical STORM-E Model: I. Theoretical and Observational Basis

    Science.gov (United States)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  20. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
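
    For reference, a two-level nested CES function of the (KL)E type, the nesting the authors find fits the data best, can be written as below; the notation is generic rather than the paper's, and σ = 1/(1 − ρ) is the elasticity of substitution at each nest.

```latex
% Inner nest combines capital K and labour L; the outer nest adds energy E.
KL = \bigl[a\,K^{\rho_1} + (1-a)\,L^{\rho_1}\bigr]^{1/\rho_1},
\qquad
Y = \gamma\,\bigl[\alpha\,(KL)^{\rho_2} + (1-\alpha)\,E^{\rho_2}\bigr]^{1/\rho_2}
```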

  1. Empirical atom model of Vegard's law

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lei, E-mail: zhleile2002@163.com [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China); School of Electromechanical Automobile Engineering, Yantai University, Yantai 264005 (China)]; Li, Shichun [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China)]

    2014-02-01

    Vegard's law seldom holds exactly for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B, so that the atom with the larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B, respectively, are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of the two component atoms are not equal in most situations. The magnitude of the variation depends on the cohesive energy of the corresponding element crystals. An empirical atom model of Vegard's law is proposed to account for the signs of the deviations, based on the electron density at the Wigner–Seitz cell obtained from the Thomas–Fermi–Dirac–Cheng model.
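
    In the usual notation, Vegard's law for a binary solid solution A(1-x)B(x) and the deviation from it that the proposed atom model seeks to explain read:

```latex
% Linear interpolation of the lattice parameter (Vegard) and the observed
% deviation \delta(x) attributed to unequal expansion/contraction of A and B.
a_{\mathrm{Vegard}}(x) = (1-x)\,a_A + x\,a_B,
\qquad
\delta(x) = a_{\mathrm{obs}}(x) - a_{\mathrm{Vegard}}(x)
```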

  2. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    A simulation approach is discussed for maneuverable aircraft motion treated as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and its environment. The suggested approach merges theoretical knowledge of the plant with training tools from the field of artificial neural networks. The efficiency of this approach is demonstrated using the example of motion modeling and identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent-neural-network-based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  3. Empirical flow parameters: a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  4. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.

  5. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)]

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or heavy ion. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for heavy-ion-induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. Heavy-ion fusion cross sections can be calculated within the simplified coupled-channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data; relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

  6. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  7. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  8. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  9. Hybrid empirical–theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples, and the Freundlich K_f parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
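
    A Freundlich isotherm of the kind fitted here, q = K_f·C^n, is straightforward to estimate by linear regression in log space; the sketch below uses invented concentration data, not the INEEL measurements.

```python
# Log-linear fit of the Freundlich isotherm q = K_f * C**n.
import numpy as np

C = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # equilibrium solution concentration
q = np.array([0.8, 2.9, 5.1, 18.0, 31.0])  # sorbed concentration (synthetic)

slope, intercept = np.polyfit(np.log(C), np.log(q), 1)
n_f, K_f = slope, np.exp(intercept)        # log q = log K_f + n log C
print(f"K_f = {K_f:.2f}, n = {n_f:.2f}")
```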

  10. Empirical Modeling of ICMEs Using ACE/SWICS Ionic Distributions

    Science.gov (United States)

    Rivera, Y.; Landi, E.; Lepri, S. T.; Gilbert, J. A.

    2017-12-01

    Coronal Mass Ejections (CMEs) are some of the largest, most energetic events in the solar system releasing an immense amount of plasma and magnetic field into the Heliosphere. The Earth-bound plasma plays a large role in space weather, causing geomagnetic storms that can damage space and ground based instrumentation. As a CME is released, the plasma experiences heating, expansion and acceleration; however, the physical mechanism supplying the heating as it lifts out of the corona still remains uncertain. From previous work we know the ionic composition of solar ejecta undergoes a gradual transition to a state where ionization and recombination processes become ineffective rendering the ionic composition static along its trajectory. This property makes them a good indicator of thermal conditions in the corona, where the CME plasma likely receives most of its heating. We model this so-called `freeze-in' process in Earth-directed CMEs using an ionization code to empirically determine the electron temperature, density and bulk velocity. `Frozen-in' ions from an ensemble of independently modeled plasmas within the CME are added together to fit the full range of observational ionic abundances collected by ACE/SWICS during ICME events. The models derived using this method are used to estimate the CME energy budget to determine a heating rate used to compare with a variety of heating mechanisms that can sustain the required heating with a compatible timescale.

  11. Semi-Empirical Models for Buoyancy-Driven Ventilation

    DEFF Research Database (Denmark)

    Terpager Andersen, Karl

    2015-01-01

    A literature study is presented on the theories and models dealing with buoyancy-driven ventilation in rooms. The models are categorised into four types according to how the physical process is conceived: column model, fan model, neutral plane model and pressure model. These models are analysed and compared with a reference model. Discrepancies and differences are shown, and the deviations are discussed. It is concluded that a reliable buoyancy model based solely on the fundamental flow equations is desirable.

  12. Flexible Modeling of Epidemics with an Empirical Bayes Framework

    Science.gov (United States)

    Brooks, Logan C.; Farrow, David C.; Hyun, Sangwon; Tibshirani, Ryan J.; Rosenfeld, Roni

    2015-01-01

    Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic's behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the “Predict the Influenza Season Challenge”, with the task of predicting key epidemiological measures for the 2013–2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctor visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013–2014 U.S. influenza season, and compare the framework's cross-validated prediction error on historical data to

  13. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.

  14. Empirical modelling to predict the refractive index of human blood

    International Nuclear Information System (INIS)

    Yahya, M; Saghir, M Z

    2016-01-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy. (paper)
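
    The Barer relation referred to in both records above, and the kind of temperature and wavelength extension the authors suggest, can be sketched as follows; the coefficient names are placeholders, not the fitted values from the paper.

```latex
% Barer: refractive index linear in hemoglobin concentration C with specific
% refraction increment \alpha; the suggested extension adds wavelength
% dependence n_0(\lambda) and a temperature term (placeholder notation).
n_{\mathrm{Barer}} = n_0 + \alpha\,C,
\qquad
n(C, T, \lambda) = n_0(\lambda) + \alpha\,C + \beta\,(T - T_0)
```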

  15. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    It is known that under certain solar wind (SW) and interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF conditions to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to the existing upstream wave theory: 1. pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; 2. weakening of the bow shock, which implies less efficient reflection; 3. the SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream at the Alfvén speed; and 4. the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through

  16. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  17. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets, but the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  18. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model)

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    This paper considers tests of parameter instability and structural change with known, unknown, or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation using strong rules for detecting the number of breaks in a time series. For that purpose, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (an ARMA model and a simple linear regression model).
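
    Of the tests listed, the Chow test with a known breakpoint is the simplest to illustrate: fit the regression on the pooled sample and on each subsample, then compare residual sums of squares with an F statistic. The data below are synthetic.

```python
# Chow test for a structural break at a known point (here, observation 50).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = np.where(np.arange(100) < 50, 1.0 + 0.5 * x, 2.0 + 1.5 * x)
y = y + 0.3 * rng.normal(size=100)

def ssr(x, y):
    """Residual sum of squares of an OLS fit of y on [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

k = 2  # parameters per regression (intercept and slope)
ssr_p, ssr_1, ssr_2 = ssr(x, y), ssr(x[:50], y[:50]), ssr(x[50:], y[50:])
F = ((ssr_p - ssr_1 - ssr_2) / k) / ((ssr_1 + ssr_2) / (len(x) - 2 * k))
print(F, stats.f.sf(F, k, len(x) - 2 * k))  # statistic and p-value
```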

  19. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point, so bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise, developed for one bias point, and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing the bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.

  20. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  1. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa prediction methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann
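
    For context, the Born expression underlying such a desolvation term is (in Gaussian units):

```latex
% Solvation free energy of a charge q of effective radius r transferred from
% vacuum into a medium of dielectric constant \epsilon; burying the charge in
% the low-dielectric protein interior forfeits part of this, which appears as
% the desolvation penalty.
\Delta G_{\mathrm{Born}} = -\,\frac{q^{2}}{2r}\Bigl(1 - \frac{1}{\epsilon}\Bigr)
```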

  2. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of the dynamic performance of turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because no suitable form is available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions containing the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to represent the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c² ≥ 0.96 and a mean absolute percentage deviation of 1.19% for the three turbines. Highlights: • Performed a critical review of empirical models of turbine efficiency. • Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. • Verified the method for developing the empirical model. • Verified the model.

  3. Process health management using success tree and empirical model

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of)]; Kim, Suyoung [BNF Technology, Daejeon (Korea, Republic of)]; Sung, Wounkyoung [Korea South-East Power Co. Ltd., Seoul (Korea, Republic of)]

    2012-03-15

    Interest in predictive or condition-based maintenance is heightening in the power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels, such as a component, a system, or a plant. For this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter (the result of kernel regression using accumulated operation data and the current plant boundary conditions, referred to as an empirical model) is statistically meaningful. This deviation is converted into an index that takes into account the margin to safety-related set points; this index is referred to as a PHI, and PHIs can be monitored for an individual parameter as well as at the component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree, the PHI represents the health status of the whole plant, while the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.

  4. Process health management using success tree and empirical model

    International Nuclear Information System (INIS)

    Heo, Gyunyoung; Kim, Suyoung; Sung, Wounkyoung

    2012-01-01

    Interest in predictive or condition-based maintenance is heightening in the power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels, such as a component, a system, or a plant. For this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter (the result of kernel regression using accumulated operation data and the current plant boundary conditions, referred to as an empirical model) is statistically meaningful. This deviation is converted into an index that takes into account the margin to safety-related set points; this index is referred to as a PHI, and PHIs can be monitored for an individual parameter as well as at the component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree, the PHI represents the health status of the whole plant, while the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.

  5. Modeling gallic acid production rate by empirical and statistical analysis

    Directory of Open Access Journals (Sweden)

    Bratati Kar

    2000-01-01

    For predicting the rate of the enzymatic reaction, empirical correlations based on experimental results obtained under various operating conditions have been developed. The models represent both the activation and the deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). The tannase activity was found to be maximal at an incubation time of 5 min, reaction temperature of 40ºC, pH 4.0, initial enzyme concentration of 0.12 v/v, initial substrate concentration of 0.42 mg/ml, and ionic strength of 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 μmoles/ml/min.

  6. Empirically derived neighbourhood rules for urban land-use modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2012-01-01

    Land-use modelling and spatial scenarios have gained attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision making. Many of the recent modelling efforts incorporate cellular automata to accomplish spatially explicit land-use-change modelling. Spatial...

  7. Poisson-generalized gamma empirical Bayes model for disease ...

    African Journals Online (AJOL)

    In spatial disease mapping, the use of Bayesian estimation techniques is becoming popular for smoothing relative risk estimates in disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma model (PG). To explore further the activity of smoothing of relative risk ...

  8. Business Processes Modeling Recommender Systems: User Expectations and Empirical Evidence

    Directory of Open Access Journals (Sweden)

    Michael Fellmann

    2018-04-01

    Recommender systems are in widespread use in many areas, especially in electronic commerce solutions. In this contribution, we apply recommender functionalities to business process modeling and investigate their potential for supporting process modeling. To do so, we implemented two prototypes, demonstrated them at a major fair, and collected user feedback. After analysing the feedback, we compared the findings with the results of an experiment. Our results indicate that fairgoers expect increased modeling speed as the key advantage and completeness of models as the most unlikely advantage. This stands in contrast to an initial experiment revealing that modelers, in fact, increase the completeness of their models when adequate knowledge is presented, while time consumption is not necessarily reduced. We explain possible causes of this mismatch and finally hypothesize on two “sweet spots” of process modeling recommender systems.

  9. Political economy models and agricultural policy formation: empirical applicability and relevance for the CAP

    OpenAIRE

    van der Zee, F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economies. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy development...

  10. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data from the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference, implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also assess the accuracy of the volatility estimates by using the realized volatility and find that good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and can be used as an alternative to GARCH models with other fat-tailed distributions.
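
    The GARCH(1,1) recursion at the core of such models is sketched below. Since the record does not give the rational error density itself, the innovations are drawn from a Student's t as a stand-in fat-tailed distribution, and the parameter values are illustrative.

```python
# Simulate a GARCH(1,1) process: sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1].
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.10, 0.85   # illustrative parameters, alpha + beta < 1
T = 1000
sigma2 = np.empty(T)
eps = np.empty(T)
sigma2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance

for t in range(T):
    if t > 0:
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_t(df=5)  # fat-tailed stand-in errors

print(eps.std(), np.sqrt(sigma2.mean()))
```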

  11. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  12. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  13. An Empirical Comparison of Default Swap Pricing Models

    NARCIS (Netherlands)

    P. Houweling (Patrick); A.C.F. Vorst (Ton)

    2002-01-01

    In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced form model with a constant recovery rate outperforms the market practice of directly comparing bonds' credit spreads to default swap premiums. We find that the

  14. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether a farm's financial position is fully described by the structure model, (2) what the determinants of farm capital structure are under the structure model, (3)…

  15. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on a connection mechanism in which nodes automatically select upstream and downstream agents, a simulation model of the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating agent-based modeling (ABM) and discrete modeling on a GIS-based map. First, the model's soundness is verified by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions; this verifies that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.

  16. An Empirical Test of a Model of Resistance to Persuasion.

    Science.gov (United States)

    Burgoon, Michael; And Others

    1978-01-01

    Tests a model of resistance to persuasion based upon variables not considered by earlier congruity and inoculation models. Supports the prediction that the kind of critical response set induced and the target of the criticism are mediators of resistance to persuasion. (JMF)

  17. Travel Time Reliability for Urban Networks: Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  18. An empirical model of global spread-F occurrence

    International Nuclear Information System (INIS)

    Singleton, D.G.

    1974-09-01

    A method of combining models of ionospheric F-layer peak electron density and irregularity incremental electron density into a model of the occurrence probability of the frequency-spreading component of spread-F is presented. The predictions of the model are compared with spread-F occurrence data obtained under sunspot maximum conditions. Good agreement is obtained for latitudes less than 70° geomagnetic. At higher latitudes, the inclusion of a 'blackout factor' in the model allows it to accurately represent the data and, in so doing, resolves an apparent discrepancy in the occurrence statistics at high latitudes. The blackout factor is ascribed to the effect of polar blackout on the spread-F statistics and/or the lack of a definitive incremental electron density model for irregularities at polar latitudes. Ways of isolating these effects and assessing their relative importance in the blackout factor are discussed. The model, besides providing estimates of spread-F occurrence on a worldwide basis, which will be of value in the engineering of HF and VHF communications, also furnishes a means of further checking the irregularity incremental electron density model on which it is based. (author)

  19. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    The diagnostics of the operation of energy conversion systems is realised by collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state, which requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The developed mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  20. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
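
    One plausible way to cast "probability of a short as a function of voltage" is a logistic dose-response curve fitted to pass/fail trial frequencies, as sketched below; both the functional form and the data are assumptions for illustration, not the model developed in the paper.

```python
# Fit a logistic curve p(V) = 1 / (1 + exp(-(V - V50)/s)) to synthetic
# short-circuit frequencies observed at several test voltages.
import numpy as np
from scipy.optimize import curve_fit

V = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
p_short = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.75, 0.90, 0.96])

def logistic(V, V50, s):
    return 1.0 / (1.0 + np.exp(-(V - V50) / s))

(V50, s), _ = curve_fit(logistic, V, p_short, p0=(20.0, 5.0))
print(f"V50 = {V50:.1f} V, slope scale = {s:.1f} V")
```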

  1. Empirical evaluation of a forecasting model for successful facilitation ...

    African Journals Online (AJOL)

    During 2000 the annual Facilitator Customer Satisfaction Survey was ... the forecasting model is successful concerning the CSI value and a high positive linear ... namely that of human behaviour to incorporate other influences than just the ...

  2. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Over the years, several theoretical graph generation models have been proposed. Among the most prominent are the Erdős–Rényi random graph model, the Watts–Strogatz small-world model, the Barabási–Albert preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of the model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
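
    A minimal version of the entropy comparison can be written with networkx: compute the Shannon entropy of the degree distribution for the empirical graph and for one realization of each generator, then rank the models by the entropy gap. The paper also uses betweenness and closeness centralities; only degree is shown here, and the karate club graph stands in for an empirical dataset.

```python
# Rank theoretical graph models by how closely their degree-distribution
# entropy matches that of an "empirical" graph.
import math
import networkx as nx

def degree_entropy(G):
    counts = {}
    for _, d in G.degree():
        counts[d] = counts.get(d, 0) + 1
    n = G.number_of_nodes()
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

empirical = nx.karate_club_graph()   # stand-in for a real empirical graph
n, m = empirical.number_of_nodes(), empirical.number_of_edges()
models = {
    "Erdos-Renyi": nx.gnm_random_graph(n, m, seed=0),
    "Barabasi-Albert": nx.barabasi_albert_graph(n, max(1, m // n), seed=0),
    "Watts-Strogatz": nx.watts_strogatz_graph(n, max(2, 2 * m // n), 0.1, seed=0),
}
h_emp = degree_entropy(empirical)
for name, G in sorted(models.items(), key=lambda kv: abs(degree_entropy(kv[1]) - h_emp)):
    print(name, abs(degree_entropy(G) - h_emp))
```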

  3. Empirical assessment of a threshold model for sylvatic plague

    DEFF Research Database (Denmark)

    Davis, Stephen; Leirs, Herwig; Viljugrein, H.

    2007-01-01

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...... examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate...

  4. Empirical justification of the elementary model of money circulation

    Science.gov (United States)

    Schinckus, Christophe; Altukhov, Yurii A.; Pokrovskii, Vladimir N.

    2018-03-01

    This paper proposes an elementary model describing the money circulation for a system composed of a production system, the government, a central bank, commercial banks and their customers. A set of equations for the system determines the main features of interaction between the production and the money circulation. It is shown that the money system can evolve independently of the evolution of production. The model can be applied to any national economy, but we illustrate our claim in the context of the Russian monetary system.

  5. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  6. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  7. An auto-calibration procedure for empirical solar radiation models

    NARCIS (Netherlands)

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  8. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  9. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  10. Modeling social networks in geographic space: approach and empirical application

    NARCIS (Netherlands)

    Arentze, T.A.; Berg, van den P.E.W.; Timmermans, H.J.P.

    2012-01-01

    Social activities are responsible for a large proportion of travel demands of individuals. Modeling of the social network of a studied population offers a basis to predict social travel in a more comprehensive way than currently is possible. In this paper we develop a method to generate a whole

  11. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility mediates the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning and exploitative learning plays a significant role in both radical and incremental business model innovation.

  12. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  13. Assessing and improving the quality of modeling : a series of empirical studies about the UML

    NARCIS (Netherlands)

    Lange, C.F.J.

    2007-01-01

    This thesis addresses the assessment and improvement of the quality of modeling in software engineering. In particular, we focus on the Unified Modeling Language (UML), which is the de facto standard in

  14. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

    The present investigation evaluates the influence of three different step chute geometries when skimming flow was allowed over them, with the aim of determining the aerated flow length, which is a significant factor when developing empirical equations for estimating the aeration efficiency of flow. Overall, forty experiments were ...

  15. On the Complete Instability of Empirically Implemented Dynamic Leontief Models

    NARCIS (Netherlands)

    Steenge, A.E.

    1990-01-01

    On theoretical grounds, real world implementations of forward-looking dynamic Leontief systems were expected to be stable. Empirical work, however, showed the opposite to be true: all investigated systems proved to be unstable. In fact, an extreme form of instability ('complete instability')

  16. Empirical study on entropy models of cellular manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    Zhifeng Zhang; Renbin Xiao

    2009-01-01

    From the theoretical point of view, the states of manufacturing resources can be monitored and assessed through the amount of information needed to describe their technological structure and operational state. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: the structural entropy and the operational entropy. Based on the Shannon entropy, the models of the structural entropy and the operational entropy of cellular manufacturing systems are developed, and the cognizance of the states of manufacturing resources is also illustrated. Scheduling is introduced to measure the entropy models of cellular manufacturing systems, and the feasible concepts of maximum schedule horizon and schedule adherence are advanced to quantitatively evaluate the effectiveness of schedules. Finally, an example is used to demonstrate the validity of the proposed methodology.
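
    A minimal sketch of the Shannon-entropy calculation underlying both measures; the operational-state probabilities below are hypothetical.

        import numpy as np

        def shannon_entropy(p):
            """H = -sum(p_i * log2(p_i)), skipping zero-probability states."""
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        # Hypothetical distribution of one machine cell over operational
        # states (processing, idle, setup, breakdown), summing to 1.
        operational_p = [0.55, 0.25, 0.15, 0.05]
        print(f"operational entropy: {shannon_entropy(operational_p):.3f} bits")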

  17. An Empirical Model of Wage Dispersion with Sorting

    DEFF Research Database (Denmark)

    Bagger, Jesper; Lentz, Rasmus

    This paper studies wage dispersion in an equilibrium on-the-job-search model with endogenous search intensity. Workers differ in their permanent skill level and firms differ with respect to productivity. Positive (negative) sorting results if the match production function is supermodular (submodular). The model is estimated on Danish matched employer-employee data. We find evidence of positive assortative matching. In the estimated equilibrium match distribution, the correlation between worker skill and firm productivity is 0.12. The assortative matching has a substantial impact on wage... ...to mismatch by asking how much greater output would be if the estimated population of matches were perfectly positively assorted. In this case, output would increase by 7.7%.

  18. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    2014-08-01

    Solar radio F10.7 proxy and magnetic activity measurements are used to calculate the baseline orbit. This approach is applied to compare the daily... approach is to calculate along-track errors for these models and compare them against the baseline error based on the "ground truth" neutral density data...

  19. Towards an Empirical-Relational Model of Supply Chain Flexibility

    OpenAIRE

    Santanu Mandal

    2015-01-01

    Supply chains are prone to disruptions and associated risks. To develop capabilities for risk mitigation, supply chains need to be flexible. A flexible supply chain can respond better to environmental contingencies. Based on the theoretical tenets of resource-based view, relational view and dynamic capabilities theory, the current study develops a relational model of supply chain flexibility comprising trust, commitment, communication, co-operation, adaptation and interdependence. Subsequentl...

  20. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through an explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012, Molkov and Bragin, 2015) and European (EN 14994) and USA (NFPA 68) standards are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  1. An empirical firn-densification model comprising ice-lenses

    DEFF Research Database (Denmark)

    Reeh, Niels; Fisher, D.A.; Koerner, R.M.

    2005-01-01

    a suitable value of the surface snow density. In the present study, a simple densification model is developed that specifically accounts for the content of ice lenses in the snowpack. An annual layer is considered to be composed of an ice fraction and a firn fraction. It is assumed that all meltwater formed...... changes reflect a volume change of the ice sheet with no corresponding change of mass, i.e. a volume change that does not influence global sea level....

  2. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  3. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, with a common aim of estimating the Equilibrium Line Altitudes (ELAs) and, from these, inferring palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. More recently, ice-surface modelling has enabled equilibrium-profile reconstructions tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2) was conducted to examine the extent of Younger Dryas (YD; 12.9–11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution, requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to

  4. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    Full Text Available This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Secondly, analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.

  5. Empirical model of subdaily variations in the Earth rotation from GPS and its stability

    Science.gov (United States)

    Panafidina, N.; Kurdubov, S.; Rothacher, M.

    2012-12-01

    The model recommended by the IERS for variations in the Earth rotation at diurnal and semidiurnal periods has been computed from an ocean tide model and comprises 71 terms in polar motion and Universal Time. In the present study we compute an empirical model of variations in the Earth rotation at tidal frequencies from homogeneously re-processed GPS observations over 1994-2007, available as free daily normal equations. We discuss the reliability of the obtained amplitudes of the ERP variations and compare results from GPS and VLBI data to identify technique-specific problems and instabilities of the empirical tidal models.

  6. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
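
    As an illustrative sketch only (the record does not give the estimator's exact form): under the Homogeneous Poisson Process assumption, elicited homogenisation factors rescale pooled event frequencies before a weighted average with the event's own data. All counts, exposures, factors and the simple weighting scheme below are hypothetical.

        import numpy as np

        # Event of interest: x events observed over t years of exposure.
        x_event, t_event = 1, 12.0

        # Pool of similar events, with elicited homogenisation factors h_i
        # that rescale each pooled frequency onto the event's own scale.
        x_pool = np.array([4, 2, 7])
        t_pool = np.array([30.0, 25.0, 60.0])
        h = np.array([0.8, 1.2, 0.5])

        # Homogenised pooled rate under the Poisson assumption.
        pool_rate = np.sum(h * x_pool) / np.sum(t_pool)

        # Simple exposure-based shrinkage between own data and the pool
        # (a stand-in for the paper's Empirical Bayes weights).
        w = t_event / (t_event + t_pool.sum())
        eb_rate = w * (x_event / t_event) + (1.0 - w) * pool_rate
        print(f"EB-style frequency estimate: {eb_rate:.4f} events/year")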

  7. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as computational terms in an embodied conversational agent system by employing empirical and theoretical approaches. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures... in different social relationships. First, we introduce the theories of social and cultural characteristics. Then, we perform a corpus analysis of human interaction in two cultures in two different social situations and extract empirical data. Finally, by integrating socio-cultural characteristics... with empirical data, we establish a parameterized network model that generates culture-specific non-verbal expressions in different social relationships...

  8. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted

  9. Stochastic Modeling of Empirical Storm Loss in Germany

    Science.gov (United States)

    Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.

    2012-04-01

    Based on German insurance loss data for residential property, we derive storm damage functions that relate daily loss to maximum gust wind speed. Over a wide range of loss, steep power-law relationships are found, with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to 3. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, at daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitude.
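
    A minimal sketch of the kind of power-law damage function and Monte Carlo loss aggregation described above; the scale, exponent, reference speed and gust distribution are purely illustrative, not the fitted German values.

        import numpy as np

        rng = np.random.default_rng(42)

        def daily_loss(v_gust, scale=1e-4, exponent=10.0, v_ref=20.0):
            """Power-law damage function; all coefficients illustrative."""
            return scale * (v_gust / v_ref) ** exponent

        # Monte Carlo over synthetic 90-day storm seasons of daily max gusts (m/s).
        gusts = rng.weibull(2.0, size=(10_000, 90)) * 12.0
        seasonal = daily_loss(gusts).sum(axis=1)
        lo, hi = np.percentile(seasonal, [5, 95])
        print(f"median seasonal loss {np.median(seasonal):.3f}, 90% CI [{lo:.3f}, {hi:.3f}]")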

  10. Modeling Active Aging and Explicit Memory: An Empirical Study.

    Science.gov (United States)

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  11. Toward an Empirically-based Parametric Explosion Spectral Model

    Science.gov (United States)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.
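
    A minimal sketch of a three-parameter source spectrum of the kind described (long-period level, corner frequency, high-frequency slope); the functional form and parameter values are illustrative assumptions, not the study's fitted parameterization.

        import numpy as np

        def source_spectrum(f, omega0, f_c, p):
            """Generalized Brune-type spectrum: flat level omega0 at long
            periods, rolling off as f**-p above corner frequency f_c."""
            return omega0 / (1.0 + (f / f_c) ** p)

        freqs = np.logspace(-1, 1.5, 200)              # 0.1 to ~30 Hz
        s = source_spectrum(freqs, omega0=1.0, f_c=2.0, p=2.5)
        # Sanity check of the two asymptotes.
        print(f"long-period level ~ {s[0]:.2f}, high-frequency slope ~ "
              f"{np.polyfit(np.log(freqs[-50:]), np.log(s[-50:]), 1)[0]:.2f}")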

  12. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem
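
    A toy sketch of how the reduction to linear programming might look, assuming scipy is available: cycle length is taken as a linear function of zone enrichments, maximised subject to a linearised constraint and enrichment bounds. All coefficients are hypothetical, not from the study.

        import numpy as np
        from scipy.optimize import linprog

        c = np.array([0.9, 1.1, 1.3])          # cycle days per w/o in each zone
        A_ub = np.array([[0.4, 0.7, 1.0]])     # linearised peaking constraint
        b_ub = np.array([6.0])
        bounds = [(1.8, 4.5)] * 3              # w/o U-235 limits per zone

        # linprog minimises, so negate c to maximise cycle length.
        res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print("optimal zone enrichments:", res.x)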

  13. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
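
    A minimal sketch of the recognition task, assuming nothing beyond the abstract: one small PFA per strategy, with the observed action trace assigned to the strategy maximising its log-likelihood. The strategies, states and probabilities are hypothetical.

        import math

        # A PFA here is {state: {symbol: (next_state, probability)}}.
        # Two hypothetical one-state strategies over actions 'm' (move)
        # and 't' (turn); per-state probabilities sum to 1.
        strategies = {
            "wall_follow": {0: {"m": (0, 0.8), "t": (0, 0.2)}},
            "spiral":      {0: {"m": (0, 0.5), "t": (0, 0.5)}},
        }

        def log_likelihood(pfa, trace, state=0):
            """Log-probability of an observed action trace under one PFA."""
            ll = 0.0
            for symbol in trace:
                if symbol not in pfa[state]:
                    return float("-inf")       # trace impossible under this PFA
                state, p = pfa[state][symbol]
                ll += math.log(p)
            return ll

        trace = "mmmtmmmt"
        best = max(strategies, key=lambda s: log_likelihood(strategies[s], trace))
        print("recognised behaviour:", best)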

  14. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  15. Temporal structure of neuronal population oscillations with empirical mode decomposition

    International Nuclear Information System (INIS)

    Li Xiaoli

    2006-01-01

    Frequency analysis of neuronal oscillation is very important for understanding neural information processing and the mechanisms of disorder in the brain. This Letter presents a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) are obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo; the results show that the neuronal oscillations have different descriptions during the pre-ictal, seizure onset and ictal periods of the epileptic EEG at different frequency bands. This new method is very helpful in providing a view of the temporal structure of neural oscillation.
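
    A minimal sketch of the EMD-plus-Hilbert pipeline on a synthetic signal, assuming the PyEMD package (pip install EMD-signal) and scipy are available; the sampling rate and signal components are illustrative.

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD          # pip install EMD-signal (assumed)

        fs = 500.0                     # sampling rate, Hz
        t = np.arange(0, 4.0, 1.0 / fs)
        # Synthetic "population oscillation": theta plus gamma components.
        signal = np.sin(2 * np.pi * 6 * t) + 0.4 * np.sin(2 * np.pi * 40 * t)

        imfs = EMD().emd(signal)       # intrinsic mode functions
        for i, imf in enumerate(imfs):
            phase = np.unwrap(np.angle(hilbert(imf)))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)
            print(f"IMF {i}: median instantaneous frequency {np.median(inst_freq):.1f} Hz")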

  16. Empirical investigation on modeling solar radiation series with ARMA–GARCH models

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Yan, Dong; Zhao, Na; Zhou, Jianzhong

    2015-01-01

    Highlights: • Apply 6 ARMA–GARCH(-M) models to model and forecast solar radiation. • The ARMA–GARCH(-M) models produce more accurate radiation forecasting than conventional methods. • Show that ARMA–GARCH-M models are more effective for forecasting solar radiation mean and volatility. • The ARMA–EGARCH-M is robust and the ARMA–sGARCH-M is very competitive. - Abstract: Simulation of radiation is one of the most important issues in solar utilization. Time series models are useful tools for the estimation and forecasting of solar radiation series and their changes. In this paper, the effectiveness of autoregressive moving average (ARMA) models with various generalized autoregressive conditional heteroskedasticity (GARCH) processes, namely ARMA–GARCH models, is evaluated for radiation series. Six different GARCH approaches, comprising three different ARMA–GARCH models and the corresponding GARCH-in-mean (ARMA–GARCH-M) models, are applied to radiation data sets from two representative climate stations in China. Multiple evaluation metrics of modeling sufficiency are used to evaluate the performance of the models. The results show that the ARMA–GARCH(-M) models are effective in radiation series estimation. In both fitting and prediction of radiation series, the ARMA–GARCH(-M) models show better modeling sufficiency than traditional models; the ARMA–EGARCH-M model is robust at both sites and the ARMA–sGARCH-M model appears very competitive. Comparisons of statistical diagnostics and model performance clearly show that the ARMA–GARCH-M models make the mean radiation equations more sufficient. The ARMA–GARCH(-M) models are recommended as the preferred method for modeling solar radiation series
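
    A minimal sketch of fitting GARCH-family models to a radiation-like series with the arch package (assumed available). arch supports an AR mean rather than a full ARMA mean and does not expose the in-mean (-M) term directly, so this is a simplified stand-in for the paper's specifications; the synthetic series is illustrative.

        import numpy as np
        from arch import arch_model    # pip install arch (assumed)

        rng = np.random.default_rng(0)
        # Stand-in for a deseasonalised daily solar radiation anomaly series.
        y = rng.normal(0.0, 1.0, 2000)

        # AR(1) mean with GARCH(1,1) and EGARCH(1,1) conditional variance.
        garch = arch_model(y, mean="ARX", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
        egarch = arch_model(y, mean="ARX", lags=1, vol="EGARCH", p=1, q=1).fit(disp="off")
        print("AIC (GARCH, EGARCH):", garch.aic, egarch.aic)

        # Five-step-ahead conditional variance path from the GARCH fit.
        print(garch.forecast(horizon=5).variance.iloc[-1])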

  17. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  18. An improved empirical model for diversity gain on Earth-space propagation paths

    Science.gov (United States)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
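
    The record does not reproduce the model's formula; as a sketch, a saturating empirical form is fitted to hypothetical gain-versus-separation measurements with scipy. The functional form and the data are illustrative assumptions, not the published model.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical measurements: site separation (km) vs diversity gain (dB)
        # at a fixed frequency, elevation angle and baseline orientation.
        d_km = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
        g_db = np.array([1.8, 3.4, 4.6, 5.1, 5.4, 5.6])

        def sat_gain(d, a, b):
            """Assumed saturating form G(D) = a * (1 - exp(-b * D))."""
            return a * (1.0 - np.exp(-b * d))

        (a, b), _ = curve_fit(sat_gain, d_km, g_db, p0=[6.0, 0.1])
        print(f"predicted diversity gain at 12 km: {sat_gain(12.0, a, b):.2f} dB")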

  19. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  20. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  1. Financial power laws: Empirical evidence, models, and mechanisms

    International Nuclear Information System (INIS)

    Lux, Thomas; Alfarano, Simone

    2016-01-01

    Financial markets (share markets, foreign exchange markets and others) are all characterized by a number of universal power laws. The most prominent example is the ubiquitous finding of a robust, approximately cubic power law characterizing the distribution of large returns. A similarly robust feature is long-range dependence in volatility (i.e., hyperbolic decline of its autocorrelation function). The recent literature adds temporal scaling of trading volume and multi-scaling of higher moments of returns. Increasing awareness of these properties has recently spurred attempts at theoretical explanations of the emergence of these key characteristics from the market process. In principle, different types of dynamic processes could be responsible for these power laws. Examples to be found in the economics literature include multiplicative stochastic processes as well as dynamic processes with multiple equilibria. Though both types of dynamics are characterized by intermittent behavior which occasionally generates large bursts of activity, they can be based on fundamentally different perceptions of the trading process. The present paper reviews both the analytical background of the power laws emerging from the above data-generating mechanisms as well as pertinent models proposed in the economics literature.

  2. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  3. Screening enterprising personality in youth: an empirical model.

    Science.gov (United States)

    Suárez-Álvarez, Javier; Pedrosa, Ignacio; García-Cueto, Eduardo; Muñiz, José

    2014-02-20

    Entrepreneurial attitudes of individuals are determined by different variables, some of them related to the cognitive and personality characteristics of the person, and others focused on contextual aspects. The aim of this study is to review the essential dimensions of enterprising personality and develop a test that will permit their thorough assessment. Nine dimensions were identified: achievement motivation, risk taking, innovativeness, autonomy, internal locus of control, external locus of control, stress tolerance, self-efficacy and optimism. For the assessment of these dimensions, 161 items were developed which were applied to a sample of 416 students, 54% male and 46% female (M = 17.89 years old, SD = 3.26). After conducting several qualitative and quantitative analyses, the final test was composed of 127 items with acceptable psychometric properties. Alpha coefficients for the subscales ranged from .81 to .98. The validity evidence relative to the content was provided by experts (V = .71, 95% CI = .56 - .85). Construct validity was assessed using different factorial analyses, obtaining a dimensional structure in accordance with the proposed model of nine interdependent dimensions as well as a global factor that groups these nine dimensions (explained variance = 49.07%; χ2/df = 1.78; GFI= .97; SRMR = .07). Nine out of the 127 items showed Differential Item Functioning as a function of gender (p .035). The results obtained are discussed and future lines of research analyzed.

  4. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks' theorem which can be used to construct the block empirical likelihood confidence region, with asymptotically correct coverage probability, for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  5. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing their static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of payoffs or cash flows discounted for risk and time lags. The difficulty in the discounting process is that the relevant factors that affect the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  6. Empirical wind retrieval model based on SAR spectrum measurements

    Science.gov (United States)

    Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad

    The present paper considers polarimetric SAR wind vector applications. Remote-sensing measurements of the near-surface wind over the ocean are of great importance for the understanding of atmosphere-ocean interaction. In recent years investigations for wind vector retrieval using Synthetic Aperture Radar (SAR) data have been performed. In contrast with scatterometers, a SAR has a finer spatial resolution that makes it a more suitable microwave instrument to explore wind conditions in the marginal ice zones, coastal regions and lakes. The wind speed retrieval procedure from scatterometer data matches the measured radar backscattering signal with the geophysical model function (GMF). The GMF determines the radar cross section dependence on the wind speed and direction with respect to the azimuthal angle of the radar beam. Scatterometers provide information on wind speed and direction simultaneously due to the fact that each wind vector cell (WVC) is observed at several azimuth angles. However, SAR is not designed to be used as a high resolution scatterometer. In this case, each WVC is observed at only one single azimuth angle. That is why for wind vector determination additional information such as wind streak orientation over the sea surface is required. It is shown that the wind vector can be obtained using polarimetric SAR without additional information. The main idea is to analyze the spectrum of a homogeneous SAR image area instead of the backscattering normalized radar cross section. Preliminary numerical simulations revealed that SAR image spectral maxima positions depend on the wind vector. Thus the following method for wind speed retrieval is proposed. In the first stage of the algorithm, the SAR spectrum maxima are determined. This procedure is carried out to estimate the wind speed and direction with ambiguities separated by 180 degrees due to the SAR spectrum symmetry. The second stage of the algorithm allows us to select the correct wind direction

  7. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  8. An empirical probability model of detecting species at low densities.

    Science.gov (United States)

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
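
    A minimal sketch of building such a detection curve with logistic regression, assuming statsmodels is available; the synthetic effort/density data and coefficients are illustrative, not the field-experiment values.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        effort = rng.uniform(1, 60, n)          # search effort, minutes
        density = rng.uniform(0.1, 5.0, n)      # targets per square metre
        logit_p = -3.0 + 0.05 * effort + 0.8 * density
        detected = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

        X = sm.add_constant(np.column_stack([effort, density]))
        fit = sm.Logit(detected, X).fit(disp=0)

        # Detection curve: P(detect) vs effort at a fixed low density.
        grid = sm.add_constant(np.column_stack([np.linspace(1, 60, 5),
                                                np.full(5, 0.5)]))
        print(np.round(fit.predict(grid), 3))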

  9. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    Using semi-empirical equations for modeling a proton exchange membrane fuel cell is proposed for providing a tool for the design and analysis of fuel cell total systems. The focus of this study is to derive an empirical model, including process variations, to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations, such as the gas pressure, temperature, humidity, and utilization, to cover operating processes, which are important factors in determining the real performance of a fuel cell. The modelling results compare well with known experimental results. The comparison shows good agreement between the modeling results and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)
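
    A minimal sketch of a commonly used semi-empirical polarisation-curve form (open-circuit voltage minus logarithmic activation, linear ohmic and exponential concentration losses); the form and coefficients are illustrative assumptions, not the paper's fitted equations, and the process-variation terms are omitted.

        import numpy as np

        def cell_voltage(i, e_oc=1.0, b=0.05, r=2.45e-4, m=2.11e-5, n=8e-3):
            """Semi-empirical polarisation curve: open-circuit voltage minus
            activation (log), ohmic (linear) and concentration (exponential)
            losses; i is current density in mA/cm^2, coefficients illustrative."""
            i = np.asarray(i, dtype=float)
            return e_oc - b * np.log(i) - r * i - m * np.exp(n * i)

        current = np.linspace(1.0, 1200.0, 6)   # mA/cm^2
        print(np.round(cell_voltage(current), 3))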

  10. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  11. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  12. Graph and model transformation tools for model migration : empirical results from the transformation tool contest

    NARCIS (Netherlands)

    Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.

    2014-01-01

    We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically

  13. Model and Empirical Study on Several Urban Public Transport Networks in China

    Science.gov (United States)

    Ding, Yimin; Ding, Zhuo

    2012-07-01

    In this paper, we present empirical investigation results on urban public transport networks (PTNs) and propose a model to understand the results obtained. We investigate several urban public traffic networks in China, including those of Beijing, Guangzhou and Wuhan. The empirical results for the big cities show that the accumulative act-degree distributions of PTNs take neither power-function nor exponential-function forms, but are described by a shifted power function; the accumulative act-degree distributions of PTNs in medium-sized and small cities follow the same law. Finally, we propose a model to show a possible evolutionary mechanism for the emergence of such networks. The analytic results obtained from this model are in good agreement with the empirical results.
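
    A minimal sketch of fitting the shifted power function P(k) = a * (k + c)**(-gamma) to an accumulative act-degree distribution with scipy; the data below are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def shifted_power(k, a, c, gamma):
            """Accumulative act-degree distribution P(>=k) = a*(k + c)**-gamma."""
            return a * (k + c) ** (-gamma)

        # Synthetic stand-in for a measured accumulative act-degree distribution.
        k = np.arange(1, 40, dtype=float)
        rng = np.random.default_rng(3)
        p = shifted_power(k, 1.0, 5.0, 1.8) * (1.0 + 0.05 * rng.normal(size=k.size))

        params, _ = curve_fit(shifted_power, k, p, p0=[1.0, 1.0, 1.0])
        print("fitted (a, c, gamma):", np.round(params, 2))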

  14. Empirical Modeling of Lithium-ion Batteries Based on Electrochemical Impedance Spectroscopy Tests

    International Nuclear Information System (INIS)

    Samadani, Ehsan; Farhad, Siamak; Scott, William; Mastali, Mehrdad; Gimenez, Leonardo E.; Fowler, Michael; Fraser, Roydon A.

    2015-01-01

    Highlights: • Two commercial lithium-ion batteries are studied through HPPC and EIS tests. • An equivalent circuit model is developed for a range of operating conditions. • This model improves on current battery empirical models for vehicle applications. • This model is proved to be efficient in predicting HPPC test resistances. - ABSTRACT: An empirical model for commercial lithium-ion batteries is developed based on electrochemical impedance spectroscopy (EIS) tests. An equivalent circuit is established according to EIS test observations at various battery states of charge and temperatures. A Laplace transfer time-based model is developed from the circuit, which can predict the battery operating output potential difference in battery electric and plug-in hybrid vehicles at various operating conditions. This model demonstrates up to 6% improvement compared to simple resistance and Thevenin models and is suitable for modeling and on-board controller purposes. Results also show that this model can be used to predict the battery internal resistance obtained from hybrid pulse power characterization (HPPC) tests to within 20 percent, making it suitable for low- to medium-fidelity powertrain design purposes. In total, this simple battery model can be employed as a real-time model in electrified vehicle battery management systems
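
    A minimal sketch of the kind of equivalent-circuit model an EIS fit might yield: a series resistance plus one RC branch, simulated in discrete time for a current pulse. Parameter values are illustrative, not the paper's fitted values.

        import numpy as np

        # One-RC Thevenin circuit as an EIS fit might yield at a given state
        # of charge and temperature (values illustrative).
        ocv, r0, r1, c1 = 3.7, 0.015, 0.010, 2000.0   # V, ohm, ohm, F
        dt = 1.0
        tau = r1 * c1

        current = np.concatenate([np.full(300, 20.0), np.zeros(300)])  # 20 A pulse
        v_rc, v_out = 0.0, []
        for i_k in current:
            # Exact discretisation of the RC branch over one time step.
            v_rc = v_rc * np.exp(-dt / tau) + r1 * (1.0 - np.exp(-dt / tau)) * i_k
            v_out.append(ocv - r0 * i_k - v_rc)
        print(f"voltage sag at end of pulse: {ocv - v_out[299]:.3f} V")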

  15. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2013-01-01

    The stochastic volatility model is one of the volatility models which infer the latent volatility of asset returns. The Bayesian inference of the stochastic volatility (SV) model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy of the true volatility and compare the SV model with the GARCH model, another common volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
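
    For reference, a minimal simulation of the SV model itself (AR(1) log-volatility driving returns); the HMC estimation step is beyond a short sketch, and the parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        mu, phi, sigma_eta, n = -9.0, 0.97, 0.15, 2000

        # Latent log-variance: h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t
        h = np.empty(n)
        h[0] = mu
        for t in range(1, n):
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()

        # Returns: r_t = exp(h_t / 2) * eps_t
        r = np.exp(h / 2.0) * rng.normal(size=n)
        print(f"daily volatility range: {np.exp(h.min()/2):.2%} to {np.exp(h.max()/2):.2%}")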

  16. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that are validated in numerous studies. Never before has a backscattering model been built and validated on such an extensive dataset as the one proposed in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well for different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.

  17. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

  18. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  19. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  20. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

While investigations into both theories and models have remained a major strand of engineering design research, the current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: • significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; • current models of design, from a function-behaviour-structure model to an integrated model; • important empirical research findings from studies into design; and • philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  1. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    Science.gov (United States)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.
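One of the physically based isotherm equations routinely included in such comparisons is the Guggenheim-Anderson-de Boer (GAB) model (whether GAB was among the nine models evaluated here is not stated in the record). The sketch below fits GAB to made-up adsorption data with SciPy; the water-activity and water-content arrays are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: equilibrium water content as a function of water activity aw.
    wm = monolayer water content; C and K are energy constants."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Hypothetical adsorption data: water activity vs. gravimetric water content (kg/kg)
aw = np.array([0.03, 0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.93])
w  = np.array([0.004, 0.010, 0.016, 0.022, 0.028, 0.036, 0.050, 0.080])

popt, _ = curve_fit(gab, aw, w, p0=[0.01, 10.0, 0.8], maxfev=10000)
rmse = np.sqrt(np.mean((gab(aw, *popt) - w) ** 2))
print("wm, C, K =", popt, " RMSE =", rmse)
```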

  2. Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter

    Science.gov (United States)

    Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.

    1993-01-01

    A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
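To make the coupling described above concrete, here is a minimal pitch-plunge (two degrees-of-freedom) aeroelastic integration, with simple quasi-steady aerodynamics standing in for the paper's second-order empirical force model. All section properties are hypothetical and sign conventions are deliberately simplified.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical typical-section properties (per unit span)
m, Ia, Sa = 10.0, 0.5, 0.25        # mass, pitch inertia, static imbalance
kh, ka = 2.0e4, 1.0e3              # plunge and pitch stiffness
rho, b, V = 1.225, 0.5, 40.0       # air density, semichord, airspeed
cla, e = 2 * np.pi, 0.1            # lift-curve slope, aero-center offset from elastic axis

def rhs(t, y):
    h, a, hd, ad = y                       # plunge, pitch, and their rates
    alpha_eff = a + hd / V                 # quasi-steady effective angle of attack
    L = 0.5 * rho * V**2 * (2 * b) * cla * alpha_eff   # lift
    M = e * L                              # pitching moment about the elastic axis
    # Coupled structural equations: [m Sa; Sa Ia] [hdd; add] = [-kh*h - L; -ka*a + M]
    A = np.array([[m, Sa], [Sa, Ia]])
    hdd, add = np.linalg.solve(A, np.array([-kh * h - L, -ka * a + M]))
    return [hd, ad, hdd, add]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.02, 0.0, 0.0], max_step=1e-3)
print("max |pitch| over 2 s:", np.abs(sol.y[1]).max())  # growth hints at flutter onset
```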

  3. Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples

    Directory of Open Access Journals (Sweden)

    Tanongkankit Yardfon

    2016-01-01

This research aimed to study the drying kinetics and determine an empirical drying model for fresh pineapple and pineapple pre-treated with sucrose solutions of different concentrations. Samples 3 mm thick were immersed in 30, 40 and 50 °Brix sucrose solutions before hot air drying at temperatures of 60, 70 and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time, while an increase in sucrose concentration led to a longer drying time. According to the statistical criteria of the highest coefficient of determination (R²), the lowest chi-square (χ²) and the lowest root mean square error (RMSE), the Logarithmic model was the best model for describing the drying behavior of samples soaked in 30, 40 and 50 °Brix sucrose solution.
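For reference, the Logarithmic thin-layer drying model selected above is conventionally written in terms of the moisture ratio MR:

\[
\mathrm{MR}(t) = \frac{M_t - M_e}{M_0 - M_e} = a\,e^{-kt} + c,
\]

where M_t, M_0 and M_e are the moisture contents at time t, initially, and at equilibrium, and a, k, c are fitted constants (k carries units of inverse time).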

  4. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian economies: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as a financial crisis.
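MMA (in the sense of Hansen, 2007) selects the averaging weights by minimizing a Mallows-type criterion. With candidate-model fits ŷ_m, residual variance estimate σ̂², and k_m parameters in model m, the weight vector w solves:

\[
\min_{w \in \Delta} \; \Big\| y - \sum_{m=1}^{M} w_m \hat{y}_m \Big\|^2 + 2\hat{\sigma}^2 \sum_{m=1}^{M} w_m k_m ,
\]

subject to the weights lying on the unit simplex Δ (non-negative, summing to one); this is a quadratic program in w.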

  5. Development and empirical exploration of an extended model of intragroup conflict

    OpenAIRE

    Hjertø, Kjell B.; Kuvaas, Bård

    2009-01-01

[This is the post-print of the article published in the International Journal of Conflict Management.] Purpose - The purpose of this study was to develop and empirically explore a model of four intragroup conflict types (the 4IC model), consisting of an emotional person conflict, a cognitive task conflict, an emotional task conflict, and a cognitive person conflict. The first two conflict types are similar to existing conceptualizations, whereas the latter two represent new dimensions of group conflict. Design/m...

  6. Autonomous e-coaching in the wild: Empirical validation of a model-based reasoning system

    OpenAIRE

    Kamphorst, B.A.; Klein, M.C.A.; van Wissen, A.

    2014-01-01

    Autonomous e-coaching systems have the potential to improve people's health behaviors on a large scale. The intelligent behavior change support system eMate exploits a model of the human agent to support individuals in adopting a healthy lifestyle. The system attempts to identify the causes of a person's non-adherence by reasoning over a computational model (COMBI) that is based on established psychological theories of behavior change. The present work presents an extensive, monthlong empiric...

7. A stochastic empirical model for heavy-metal balances in Agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,

  8. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...

  9. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  10. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  11. Understanding users’ motivations to engage in virtual worlds: A multipurpose model and empirical testing

    NARCIS (Netherlands)

    Verhagen, T.; Feldberg, J.F.M.; van den Hooff, B.J.; Meents, S.; Merikivi, J.

    2012-01-01

Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds

  12. MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes

    Science.gov (United States)

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...

  13. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ cubic meters to 2.0 × 10⁶ cubic meters) along the coasts...
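Longshore-energy-flux formulations of this kind generally follow the CERC relation, in which the volumetric transport rate Q is proportional to the breaking-wave energy flux per unit beach length:

\[
Q = \frac{K\, P_l}{(\rho_s - \rho)\, g\, (1 - n)}, \qquad
P_l = (E\, C_g)_b \sin\alpha_b \cos\alpha_b ,
\]

with E the wave energy density, C_g the group velocity at breaking, α_b the breaker angle, ρ_s and ρ the sediment and water densities, n the sediment porosity, and K an empirical coefficient (the coefficient used for the Indian coast study is not given in this record).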

  14. An empirical test of stage models of e-government development: evidence from Dutch municipalities

    NARCIS (Netherlands)

    Rooks, G.; Matzat, U.; Sadowski, B.M.

    2017-01-01

    In this article we empirically test stage models of e-government development. We use Lee's classification to make a distinction between four stages of e-government: informational, requests, personal, and e-democracy. We draw on a comprehensive data set on the adoption and development of e-government

15. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  16. Integrating social science into empirical models of coupled human and natural systems

    Science.gov (United States)

    Jeffrey D. Kline; Eric M. White; A Paige Fischer; Michelle M. Steen-Adams; Susan Charnley; Christine S. Olsen; Thomas A. Spies; John D. Bailey

    2017-01-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social...

  17. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models that should be scrutinized and implemented to obtain optimum performance in hard turning. Various models of hard turning with cubic boron nitride (CBN) tools have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are covered in this review paper. An effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that the appropriate model can be selected according to user requirements in hard turning.
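Usui's characteristic wear equation, one of the models named above, is commonly written as an Arrhenius-type rate law relating wear to contact stress, sliding velocity, and interface temperature (the constants A and B are specific to the tool-workpiece pair):

\[
\frac{dW}{dt} = A\,\sigma_t\, v_s\, \exp\!\left(-\frac{B}{\theta}\right),
\]

where σ_t is the normal stress on the tool face, v_s the sliding velocity, and θ the absolute interface temperature.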

  18. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

Empirical modeling techniques that use model structures motivated by neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure, along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative for training an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to retain past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via a case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model.
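The record does not include the network equations, but the general recipe, a recurrent network trained to map plant inputs to measured outputs, can be sketched with a modern toolkit. Below is a minimal stand-in using PyTorch (not the authors' RMLP or their dynamic gradient algorithm); the synthetic "plant" is a made-up first-order lag.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic plant data: first-order lag y[k+1] = 0.9*y[k] + 0.1*u[k] (stand-in for UTSG I/O)
T = 200
u = torch.randn(1, T, 1)
y = torch.zeros(1, T, 1)
for k in range(T - 1):
    y[0, k + 1] = 0.9 * y[0, k] + 0.1 * u[0, k]

class RecurrentModel(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, u):
        h, _ = self.rnn(u)          # hidden-state sequence
        return self.out(h)          # predicted output sequence

model = RecurrentModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(300):            # plain backpropagation-through-time training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u), y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```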

  19. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

The present paper aims to investigate the relationship between bond strength and reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement - a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, later modified by Coronelli [Coronelli, D., 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model is proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models were developed from a wide range of published experimental investigations of bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models provide estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other reported analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  20. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    Science.gov (United States)

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

Evidence indicates traditional medicine is no longer used only for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices, articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting the development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate its real-world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  1. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (K_d) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
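The record does not reproduce the fitted functions, but the kind of empirical partitioning relation described, the distribution coefficient K_d expressed through salinity and suspended particulate matter, can be sketched as below. The functional form and all data are hypothetical placeholders, fitted here by ordinary least squares.

```python
import numpy as np

# Hypothetical empirical partitioning model:
#   log10(Kd) = b0 + b1 * salinity + b2 * log10(SPM)
# Kd in L/kg, salinity in PSU, SPM in mg/L. The data below are made up.
salinity = np.array([0.5, 2.0, 5.0, 10.0, 18.0, 25.0, 30.0])
spm      = np.array([250., 180., 120., 80., 50., 30., 20.])
log_kd   = np.array([5.1, 4.9, 4.8, 4.6, 4.5, 4.4, 4.3])   # fictitious Cd-like values

X = np.column_stack([np.ones_like(salinity), salinity, np.log10(spm)])
beta, *_ = np.linalg.lstsq(X, log_kd, rcond=None)

def x_row(sal, spm_mgL):
    return np.array([1.0, sal, np.log10(spm_mgL)])

def dissolved_fraction(total, sal, spm_mgL):
    """Dissolved metal concentration from the total, via the fitted Kd."""
    kd = 10 ** (x_row(sal, spm_mgL) @ beta)
    return total / (1.0 + kd * spm_mgL * 1e-6)   # SPM converted from mg/L to kg/L

print("fitted coefficients b0, b1, b2 =", beta)
print("dissolved fraction at S=10, SPM=80 mg/L:", dissolved_fraction(1.0, 10.0, 80.0))
```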

2. Ensemble empirical mode decomposition and neuro-fuzzy conjunction model for middle and long-term runoff forecast

    Science.gov (United States)

    Tan, Q.

    2017-12-01

Forecasting runoff over longer periods, such as months and years, is one of the important tasks for hydrologists and water resource managers seeking to maximize the potential of limited water. However, due to the nonlinear and nonstationary characteristics of natural runoff, it is hard to forecast middle- and long-term runoff with satisfactory accuracy. It has been proven that forecast performance can be improved by using signal decomposition techniques to produce cleaner signals as model inputs. In this study, a new conjunction model (EEMD-neuro-fuzzy) with adaptive ability is proposed. Ensemble empirical mode decomposition (EEMD) is used to decompose the runoff time series into several components, which have different frequencies and are cleaner than the original time series. A neuro-fuzzy model is then developed for each component, and the final forecast is obtained by summing the outputs of all neuro-fuzzy models. Unlike a conventional forecast model, the decomposition and forecast models in this study are adjusted adaptively whenever new runoff information is added. The proposed models are applied to forecast the monthly runoff of Yichang station, located on the Yangtze River of China. The results show that the proposed adaptive forecast model outperforms the conventional forecast model; the Nash-Sutcliffe efficiency coefficient reaches 0.9392. Due to its ability to process nonstationary data, the forecast accuracy, especially in flood season, is improved significantly.
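A minimal version of the decompose-then-forecast recipe can be sketched with the PyEMD package (assuming the EMD-signal distribution, whose EEMD class performs ensemble empirical mode decomposition) and, in place of the paper's neuro-fuzzy component models, a simple autoregressive fit per component; the runoff series here is synthetic.

```python
import numpy as np
from PyEMD import EEMD               # pip install EMD-signal (assumed available)

rng = np.random.default_rng(0)
t = np.arange(360)                    # 30 years of synthetic monthly runoff
runoff = 100 + 40 * np.sin(2 * np.pi * t / 12) + 10 * rng.standard_normal(t.size)

imfs = EEMD(trials=50).eemd(runoff)   # decompose into intrinsic mode functions

def ar_forecast(x, p=6):
    """One-step-ahead forecast from an order-p autoregression fitted by least squares."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-p:] @ coef

# Forecast each component separately, then sum (the conjunction step)
prediction = sum(ar_forecast(imf) for imf in imfs)
print("number of IMFs:", len(imfs), " next-month forecast:", round(prediction, 1))
```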

  3. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone

    International Nuclear Information System (INIS)

    Lee, Kang ll; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

The Biot and the modified Biot-Attenborough (MBA) models have been found useful for understanding ultrasonic wave propagation in cancellous bone. However, neither of the models, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and with in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of the phase velocity of the fast wave. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s⁻¹ (angle-dependent Biot model) and 36.1 m s⁻¹ (angle-dependent MBA model). They also predicted that the slow wave is nearly independent of propagation angle for angles up to about 50°, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s⁻¹ (angle-dependent Biot model) and 240.8 m s⁻¹ (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone

  4. Prediction of early summer rainfall over South China by a physical-empirical model

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2014-10-01

In early summer (May-June, MJ), the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. We therefore establish an SC rainfall index, defined as the MJ mean precipitation averaged over 72 stations in SC (south of 28°N and east of 110°E), which closely represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34 years of observations (1979-2012) reveals three physically consequential predictors. Plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical high and Okhotsk high in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than that of a four-dynamical-model ensemble prediction for the 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.

  5. Application of GIS to Empirical Windthrow Risk Model in Mountain Forested Landscapes

    Directory of Open Access Journals (Sweden)

    Lukas Krejci

    2018-02-01

Norway spruce dominates mountain forests in Europe. Natural variations in mountainous coniferous forests are strongly influenced by all the main components of forest and landscape dynamics: species diversity, forest stand structure, nutrient cycling, carbon storage, and other ecosystem services. This paper deals with an empirical windthrow risk model based on the integration of logistic regression into GIS to assess forest vulnerability to wind disturbance in the mountain spruce forests of Šumava National Park (Czech Republic). This is an area where forest management has been the focus of international discussion among conservationists, forest managers, and stakeholders. The authors developed the empirical windthrow risk model, which involves designing an optimized data structure containing the dependent and independent variables entering the logistic regression. The results from the model, visualized in the form of map outputs, outline the probability of wind damage to forest stands in the examined territory of the national park. Such an application of the empirical windthrow risk model could be used as a decision support tool for the mountain spruce forests in the study area. Future development of these models could be useful for other protected European mountain forests dominated by Norway spruce.
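A minimal version of the underlying statistical step, a logistic regression mapping stand and terrain predictors to a windthrow probability surface, might look as follows with scikit-learn; the predictor names and the synthetic data are illustrative, not the study's actual GIS layers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Hypothetical per-cell predictors: stand height (m), slope (deg), soil wetness index
X = np.column_stack([
    rng.uniform(10, 40, n),     # stand height
    rng.uniform(0, 35, n),      # slope
    rng.uniform(0, 1, n),       # wetness
])
# Synthetic truth: taller stands on wetter soils blow down more often
logit = -6 + 0.15 * X[:, 0] + 2.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]    # windthrow probability per raster cell
print("coefficients:", model.coef_.round(3), " mean predicted risk:", risk.mean().round(3))
```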

  6. Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure

    International Nuclear Information System (INIS)

    Sorin Zaharia; Cheng, C.Z.

    2002-01-01

In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-squares fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models

  7. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., transport model based on empirical confinement scaling, dimensionless scaling technique, and theory-based transport models are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  8. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  9. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
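For orientation, the classical Bartlett correction rescales the likelihood ratio statistic so that its mean matches the reference chi-square degrees of freedom; schematically (the paper's empirical version estimates the correction factor from data rather than from asymptotic theory):

\[
T_{B} = \frac{T_{ML}}{1 + \hat{b}/N}, \qquad \text{with } \hat{b} \text{ chosen so that } \mathbb{E}[T_{B}] \approx df .
\]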

  10. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

This paper presents an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity of the system dynamics, the model parameters are first obtained from experimental data. In the study, the dynamics of indoor temperature and superheat are each assumed to follow a first-order model with time delay. As the compressor frequency and the opening angle of the electronic expansion valve vary, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model is proposed to eliminate this interference. Finally, the experimental and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS than other currently available models
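The building block assumed above is the standard first-order-plus-time-delay (FOPTD) transfer function; for each input-output pair (e.g., compressor frequency to indoor temperature) the model takes the form:

\[
G(s) = \frac{K\, e^{-L s}}{\tau s + 1},
\]

with gain K, time constant τ, and dead time L identified from step-response data. Decoupling then amounts to compensating the resulting 2×2 transfer matrix so that each controlled variable responds to only one actuator.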

  11. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    Science.gov (United States)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
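A toy simulation conveys the mechanism: each firm consists of units whose sizes receive multiplicative shocks, and new units accrete preferentially to larger firms. This is a deliberately simplified caricature of the generalized-preferential-attachment model in the paper (all parameters invented), but it produces a heavy-tailed, tent-shaped growth-rate distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
firms = [[1.0] for _ in range(500)]          # each firm starts as one unit of size 1

for step in range(200):
    # Proportional growth: every unit gets a multiplicative lognormal shock
    for f in firms:
        for i in range(len(f)):
            f[i] *= rng.lognormal(0.0, 0.1)
    # Preferential attachment: a new unit joins a firm picked proportionally to size
    sizes = np.array([sum(f) for f in firms])
    winner = rng.choice(len(firms), p=sizes / sizes.sum())
    firms[winner].append(1.0)

size_t = np.array([sum(f) for f in firms])
# One more growth round to measure log growth rates g = log(S_{t+1} / S_t)
for f in firms:
    for i in range(len(f)):
        f[i] *= rng.lognormal(0.0, 0.1)
size_t1 = np.array([sum(f) for f in firms])
g = np.log(size_t1 / size_t)
print("std of g:", g.std().round(4), " excess kurtosis:",
      (((g - g.mean()) ** 4).mean() / g.var() ** 2 - 3).round(2))
```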

  12. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...... and hence to reduce the Sharpe Ratio, a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of the government bond returns without compromising the ability of the price-dividend ratio to predict excess returns....

  13. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong nonlinear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed to compare the forecasting performance of different models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasts with good accuracy and interpretability.

  14. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) to explain individuals' continued use of mobile data services. After a review of the ECM and technology readiness, an integrated model is demonstrated with empirical data. Compared with the original ECM, the findings show that the integrated model may offer a better account of which factors influence continuance intention toward mobile services, and how they do so. Finally, the major findings are summarized, and future research directions are suggested.

  15. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data established at the Laboratory of Structural Engineering at Aalborg University, the AUC data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by a discrete-space Markov theory, is applicable to constant as well as random loading. It is shown...

  16. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

An LTE smartphone power model is presented to enable academia and industry to evaluate users' battery life at the system level. The model is based on empirical measurements on a smartphone using a second-generation LTE chipset, and it includes functions of receive and transmit data rates and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported, together with cell bandwidth, screen, and CPU power consumption. The transmit power level and, to some extent, the receive data rate largely determine the overall power consumption, while DRX proves...
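Such models are typically additive in the active components; a schematic form (hypothetical, not the paper's fitted equation) is:

\[
P_{\mathrm{total}} = P_{\mathrm{base}}(\mathrm{DRX}) + P_{\mathrm{Rx}}(R_{\mathrm{Rx}}) + P_{\mathrm{Tx}}(S_{\mathrm{Tx}}) + P_{\mathrm{screen}} + P_{\mathrm{CPU}},
\]

where R_Rx is the downlink data rate, S_Tx the uplink transmit power level, and the baseline term shrinks as the DRX sleep duty cycle grows.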

  17. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

The aim of a price index for road freight transportation (RFT) is to reflect price changes in the road transport market. First, a price index model for RFT based on sample data from the Alibaba logistics platform is built. The model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
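The Laspeyres method referenced above weights current prices by base-period quantities:

\[
L_t = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}} \times 100,
\]

where p_{i,t} is the price of item i (here, a freight lane or shipment class) in period t and q_{i,0} its base-period quantity; the classification and total indices are then weighted aggregates of the individual indices.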

  18. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both agency cost and road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate; construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model is described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, in which a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance, which makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index

  19. Empirical Results of Modeling EUR/RON Exchange Rate using ARCH, GARCH, EGARCH, TARCH and PARCH models

    Directory of Open Access Journals (Sweden)

    Andreea – Cristina PETRICĂ

    2017-03-01

The aim of this study is to examine changes in the volatility of daily returns of the EUR/RON exchange rate using, on the one hand, symmetric GARCH models (ARCH and GARCH) and, on the other hand, asymmetric GARCH models (EGARCH, TARCH and PARCH), since the conditional variance is time-varying. The analysis uses daily quotations of the EUR/RON exchange rate over the period 4 January 1999 to 13 June 2016. We model heteroscedasticity by applying different specifications of GARCH models, then look for significant parameters and low information criteria (minimum Akaike Information Criterion). All models are estimated by maximum likelihood under several assumed distributions of the innovation terms: Normal (Gaussian), Student's t, Generalized Error Distribution (GED), Student's t with fixed degrees of freedom, and GED with fixed parameter. The predominant models turned out to be the EGARCH and PARCH models, and the empirical results indicate that the best model for estimating daily returns of the EUR/RON exchange rate is EGARCH(2,1) with asymmetric order 2 under the assumption of Student's t distributed innovation terms. This can be explained by the fact that, in the case of the EGARCH model, the restriction regarding the positivity of the conditional variance is automatically satisfied.
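The model-selection loop described above can be reproduced with the Python arch package (assumed available; its arch_model constructor supports the EGARCH volatility process and Student's t innovations). A minimal sketch with synthetic returns standing in for the EUR/RON series:

```python
import numpy as np
import pandas as pd
from arch import arch_model   # pip install arch (assumed available)

rng = np.random.default_rng(3)
# Synthetic daily returns (%) standing in for EUR/RON log returns
returns = pd.Series(0.5 * rng.standard_t(df=6, size=4000))

best = None
for vol, o in [("GARCH", 0), ("EGARCH", 1)]:           # symmetric vs. asymmetric
    for dist in ("normal", "t", "ged"):
        am = arch_model(returns, vol=vol, p=1, o=o, q=1, dist=dist)
        res = am.fit(disp="off")
        if best is None or res.aic < best[0]:
            best = (res.aic, vol, dist, res)

aic, vol, dist, res = best
print(f"lowest AIC: {aic:.1f} for {vol} with {dist} innovations")
print(res.params.round(4))
```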

  20. A Comprehensive Comparison Study of Empirical Cutting Transport Models in Inclined and Horizontal Wells

    Directory of Open Access Journals (Sweden)

    Asep Mohamad Ishaq Shiddiq

    2017-07-01

In deviated and horizontal drilling, hole cleaning is a common and complex problem. This study explored how various parameters in drilling operations affect the flow rate required for effective cutting transport. Three models developed following an empirical approach were employed: Rudi-Shindu's, Hopkins', and Tobenna's. Rudi-Shindu's model requires iteration in the calculation. First, the three models were compared using a sensitivity analysis of the drilling parameters affecting cutting transport. The results show that the models have similar trends but give different values for the minimum flow velocity. An analysis was then conducted to examine the feasibility of using Rudi-Shindu's, Hopkins', and Tobenna's models. The results showed that Hopkins' model is limited by cutting size and revolutions per minute (RPM). The minimum flow rate from Tobenna's model is affected only by well inclination, drilling fluid weight and drilling fluid rheology, while Rudi-Shindu's model is limited to inclinations above 45°. The study showed that the investigated models are not suitable for horizontal wells because they do not include the effect of the lateral section.

  1. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process

2. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others

    1994-12-31

In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to the observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  3. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. The empirical validation methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt®, coupled to the building simulation code, enabled determination of an adequate parameter set. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
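The apparent (or effective) heat capacity method mentioned above folds the latent heat of the PCM into a temperature-dependent capacity, so the energy equation keeps its single-phase form. A common sketch is to spread the latent heat over the melting range with a Gaussian centered on the melt temperature; the property values below are placeholders.

```python
import numpy as np

# Placeholder PCM properties
CP_SENSIBLE = 2000.0    # J/(kg K), sensible specific heat
LATENT = 180e3          # J/kg, latent heat of fusion
T_MELT = 27.0           # degC, nominal melting temperature
DT = 2.0                # degC, half-width of the melting range

def c_apparent(T):
    """Apparent heat capacity: sensible part plus latent heat spread as a Gaussian."""
    gauss = np.exp(-((T - T_MELT) / DT) ** 2) / (DT * np.sqrt(np.pi))
    return CP_SENSIBLE + LATENT * gauss   # the latent term integrates to LATENT over T

# Sanity check: integrating (c_apparent - CP_SENSIBLE) over temperature recovers ~LATENT
T = np.linspace(T_MELT - 15, T_MELT + 15, 2001)
recovered = np.trapz(c_apparent(T) - CP_SENSIBLE, T)
print(f"recovered latent heat: {recovered:.0f} J/kg (target {LATENT:.0f})")
```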

  4. An empirically based model for knowledge management in health care organizations.

    Science.gov (United States)

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of

  5. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries, and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, little research effort has been devoted to studying what drives employee energy saving behavior in organizations. To fill this gap, drawing on the norm activation model (NAM), we built a research model to study antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior, which is understudied in the current literature. Based on the empirical results, implications for how to promote employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on the norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employee's electricity saving behavior. • Electricity saving climate negatively moderates personal norm's effect. • This research enhances our understanding of employee electricity saving behavior

  6. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies using Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology, considering two steps – time-series and cross-section regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly owing to the originality of the methodology in the local market and the fact that this subject is still incipient and polemic in the Brazilian academic environment.
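
    As a sketch of the two-step procedure cited above, the Fama and MacBeth (1973) estimator can be written in a few lines of NumPy; the array shapes are assumptions for illustration, not the article's code:

    ```python
    import numpy as np

    def fama_macbeth(excess_returns, factors):
        """Two-step Fama-MacBeth (1973) procedure.

        excess_returns : (T, N) array of asset excess returns
        factors        : (T, K) array of factor realizations (e.g. MKT, SMB, HML)
        Returns time-averaged risk premia and their Fama-MacBeth t-statistics.
        """
        T, N = excess_returns.shape
        K = factors.shape[1]

        # Step 1: time-series regressions, one per asset, to estimate betas.
        X = np.column_stack([np.ones(T), factors])           # (T, K+1)
        coefs, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
        betas = coefs[1:].T                                   # (N, K)

        # Step 2: cross-sectional regression of returns on betas, period by period.
        Z = np.column_stack([np.ones(N), betas])              # (N, K+1)
        gammas = np.empty((T, K + 1))
        for t in range(T):
            gammas[t], *_ = np.linalg.lstsq(Z, excess_returns[t], rcond=None)

        # Fama-MacBeth estimates: mean of the period premia; standard errors
        # from the time-series variation of the period-by-period estimates.
        premia = gammas.mean(axis=0)
        se = gammas.std(axis=0, ddof=1) / np.sqrt(T)
        return premia, premia / se
    ```

    A model "passes" such a test when the estimated intercept premium is statistically indistinguishable from zero while the factor premia are significant.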

  7. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    Science.gov (United States)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  8. EMPIRICAL RECONSTRUCTION AND NUMERICAL MODELING OF THE FIRST GEOEFFECTIVE CORONAL MASS EJECTION OF SOLAR CYCLE 24

    International Nuclear Information System (INIS)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-01-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80 deg. from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10 deg. (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  9. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we choose a suitable indoor-outdoor propagation model from among the existing models by considering path loss and distance as parameters. Path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC) that receives beacon messages from the indoor nodes. As per our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
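
    The selection criterion described above can be sketched as ranking candidate models by the standard deviation of their prediction error against the measured path loss; the candidate formulas and coefficients below are illustrative placeholders, not the paper's fitted models:

    ```python
    import numpy as np

    # Hypothetical candidates: each maps distance (m) to predicted path loss (dB).
    # Coefficients are placeholders, not the values evaluated in the paper.
    candidates = {
        # Free-space path loss at 2.4 GHz: 20log10(d) + 20log10(f) - 147.55
        "free_space":   lambda d: 20 * np.log10(d) + 20 * np.log10(2.4e9) - 147.55,
        # Log-distance with PL(d0) = 40 dB at d0 = 1 m and exponent n = 3.0
        "log_distance": lambda d: 40.0 + 10 * 3.0 * np.log10(d),
    }

    def rank_models(distances, measured_pl, models):
        """Return models sorted by std. deviation of (measured - predicted) in dB."""
        scores = {}
        for name, model in models.items():
            residuals = measured_pl - model(np.asarray(distances, dtype=float))
            scores[name] = residuals.std(ddof=1)
        return sorted(scores.items(), key=lambda kv: kv[1])
    ```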

  10. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who were continuously enrolled in compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.

  12. Space evolution model and empirical analysis of an urban public transport network

    Science.gov (United States)

    Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing

    2012-07-01

    This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on those data. Public transport patterns primarily depend on the traffic spatial distribution, the demands of passengers and the expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors based on a given traffic spatial distribution. The temporal change of the urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses of topological and spatial attributes suggest that an evolved network whose traffic demands follow power-law values distributed in a pattern of concentric circles tallies well with these nine cities.

  13. Analysis of model implied volatility for jump diffusion models: Empirical evidence from the Nordpool market

    International Nuclear Information System (INIS)

    Nomikos, Nikos K.; Soldatos, Orestes A.

    2010-01-01

    In this paper we examine the importance of mean reversion and spikes in the stochastic behaviour of the underlying asset when pricing options on power. We propose a model that is flexible in its formulation and captures the stylized features of power prices in a parsimonious way. The main feature of the model is that it incorporates two different speeds of mean reversion to capture the differences in price behaviour between normal and spiky periods. We derive semi-closed form solutions for European option prices using transform analysis and then examine the properties of the implied volatilities that the model generates. We find that the presence of jumps generates prominent volatility skews which depend on the sign of the mean jump size. We also show that mean reversion reduces the volatility smile as time to maturity increases. In addition, mean reversion induces volatility skews particularly for ITM options, even in the absence of jumps. Finally, jump size volatility and jump intensity mainly affect the kurtosis and thus the curvature of the smile with the former having a more important role in making the volatility smile more pronounced and thus increasing the kurtosis of the underlying price distribution.
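
    The model's two speeds of mean reversion can be illustrated with a small Euler-type simulation; the dynamics and parameter values below are a hedged sketch of this class of jump-diffusion models, not the authors' exact specification:

    ```python
    import numpy as np

    def simulate_log_spot(days=365, dt=1/365, mu=3.5, sigma=0.5,
                          alpha_normal=5.0, alpha_spike=80.0,
                          spike_threshold=0.5, jump_intensity=8.0,
                          jump_mean=1.0, jump_std=0.3, seed=0):
        """Euler simulation of a mean-reverting log-price with jumps and two
        reversion speeds: slow in normal periods, fast after spikes."""
        rng = np.random.default_rng(seed)
        x = np.empty(days + 1)
        x[0] = mu
        for t in range(days):
            # Fast mean reversion while the price sits in the spike regime.
            alpha = alpha_spike if x[t] > mu + spike_threshold else alpha_normal
            # Poisson jump arrival with normally distributed (signed) size.
            jump = rng.normal(jump_mean, jump_std) if rng.random() < jump_intensity * dt else 0.0
            x[t + 1] = (x[t] + alpha * (mu - x[t]) * dt
                        + sigma * np.sqrt(dt) * rng.normal() + jump)
        return np.exp(x)
    ```

    A positive (negative) mean jump size in such a simulation skews the price distribution to the right (left), which is the mechanism behind the sign-dependent volatility skews discussed in the abstract.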

  14. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Vol. 34, No. 9 (2004), pp. 1998-2003 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Institutional research plan: CEZ:AV0Z3042911 Keywords: plasma temperatures * topside ionosphere * empirical models Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.548, year: 2004

  15. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Vol. 17, No. 3 (2008), pp. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  16. Establishment of Grain Farmers' Supply Response Model and Empirical Analysis under Minimum Grain Purchase Price Policy

    OpenAIRE

    Zhang, Shuang

    2012-01-01

    Based on farmers' supply behavior theory and price expectations theory, this paper establishes a supply response model for grain farmers covering two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy has a price-guiding effect on grain production and supply in those areas. Empirical analysis shows that the minimum purchase price published annually by the government has significant positive imp...

  17. Integrating social science into empirical models of coupled human and natural systems

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Kline

    2017-09-01

    Full Text Available Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social scientists: (1) how to represent human behavior as influenced by biophysical factors and integrate this into CHANS empirical models; (2) how to organize and function as a multidisciplinary social science team to accomplish that task. We reflect on these challenges regarding our CHANS research that investigated human adaptation to fire-prone landscapes. Our project sought to characterize the forest management activities of land managers and landowners (or "actors") and their influence on wildfire behavior and landscape outcomes by focusing on biophysical and socioeconomic feedbacks in central Oregon (USA). We used an agent-based model (ABM) to compile biophysical and social information pertaining to actor behavior, and to project future landscape conditions under alternative management scenarios. Project social scientists were tasked with identifying actors' forest management activities and biophysical and socioeconomic factors that influence them, and with developing decision rules for incorporation into the ABM to represent actor behavior. We (1) briefly summarize what we learned about actor behavior on this fire-prone landscape and how we represented it in an ABM, and (2) more significantly, report our observations about how we organized and functioned as a diverse team of social scientists to fulfill these CHANS research tasks. We highlight several challenges we experienced, involving quantitative versus qualitative data and methods, distilling complex behavior into empirical models, varying sensitivity of biophysical models to social factors, synchronization of research tasks, and the need to

  18. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    Science.gov (United States)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as driven by the US forward interest rates plus a spread (having its own term structure) above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  19. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  20. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    Science.gov (United States)

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycle ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.05). A model describing hindered settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effects on sludge thickening; these findings can be useful for WWTP operation, e.g., for improving wastewater treatment or the proper use of the polymer.

  1. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton densities, temperatures & bulk velocities back to the corona. Using simple mass-flux conservation we show very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations, which arise from these empirical observational models.
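
    The mass-flux conservation argument (n v r² constant for a spherically expanding wind) is easy to state in code; the 1 AU values below are round illustrative numbers, not the HELIOS/Ulysses measurements:

    ```python
    # Extrapolate proton density back toward the corona assuming
    # mass-flux conservation in a spherically expanding solar wind:
    #   n(r) * v(r) * r**2 = n(r0) * v(r0) * r0**2

    def density_at(r_rs, n0_cm3, v0_kms, v_kms, r0_rs=215.0):
        """Density at heliocentric distance r_rs (solar radii), given density n0
        and speed v0 at r0 (default 1 AU ~ 215 Rs) and the speed v at r_rs."""
        return n0_cm3 * (v0_kms / v_kms) * (r0_rs / r_rs) ** 2

    # Example: 5 cm^-3 and 400 km/s at 1 AU mapped to 10 Rs, where the wind
    # is assumed to move at 100 km/s (illustrative value only).
    print(density_at(10.0, n0_cm3=5.0, v0_kms=400.0, v_kms=100.0))
    # -> ~9.2e3 cm^-3
    ```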

  2. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified prior distributions.

  3. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electron-hole pairs along the trajectory, and in a space charge zone the collected charge is considerably increased. The chronology of this charge funneling is described by a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to directly evaluate the transient current, the collected charge and the funneling length, with good agreement. The model can be extrapolated to more complex structures.

  4. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 to 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ˜1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  5. EMERGE - an empirical model for the formation of galaxies since z ˜ 10

    Science.gov (United States)

    Moster, Benjamin P.; Naab, Thorsten; White, Simon D. M.

    2018-06-01

    We present EMERGE, an Empirical ModEl for the foRmation of GalaxiEs, describing the evolution of individual galaxies in large volumes from z ˜ 10 to the present day. We assign a star formation rate to each dark matter halo based on its growth rate, which specifies how much baryonic material becomes available, and the instantaneous baryon conversion efficiency, which determines how efficiently this material is converted to stars, thereby capturing the baryonic physics. Satellites are quenched following the delayed-then-rapid model, and they are tidally disrupted once their subhalo has lost a significant fraction of its mass. The model is constrained with observed data extending out to high redshift. The empirical relations are very flexible, and the model complexity is increased only if required by the data, assessed by several model selection statistics. We find that for the same final halo mass galaxies can have very different star formation histories. Galaxies that are quenched at z = 0 typically have a higher peak star formation rate compared to their star-forming counterparts. EMERGE predicts stellar-to-halo mass ratios for individual galaxies and introduces scatter self-consistently. We find that at fixed halo mass, passive galaxies have a higher stellar mass on average. The intracluster mass in massive haloes can be up to eight times larger than the mass of the central galaxy. Clustering for star-forming and quenched galaxies is in good agreement with observational constraints, indicating a realistic assignment of galaxies to haloes.

  6. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
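
    A minimal sketch of the two ingredients named in the abstract, adaptive random-walk Metropolis sampling and the Gelman-Rubin potential scale reduction factor, assuming a generic log-posterior function (the study's actual model is a kinetic ODE system):

    ```python
    import numpy as np

    def adaptive_metropolis(log_post, x0, n_steps=5000, seed=0):
        """Random-walk Metropolis with a simple scale adaptation that
        nudges the proposal width toward a moderate acceptance rate."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        scale, chain = 0.1, []
        for _ in range(n_steps):
            prop = x + scale * rng.normal(size=x.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
                scale *= 1.01      # accepted: widen proposals slightly
            else:
                scale *= 0.997     # rejected: narrow proposals slightly
            chain.append(x.copy())
        return np.array(chain)

    def gelman_rubin(chains):
        """Potential scale reduction factor for equal-length chains of one
        parameter; values near 1 indicate convergence."""
        m, n = len(chains), len(chains[0])
        means = np.array([c.mean() for c in chains])
        B = n * means.var(ddof=1)                          # between-chain
        W = np.mean([c.var(ddof=1) for c in chains])       # within-chain
        var_hat = (n - 1) / n * W + B / n
        return np.sqrt(var_hat / W)
    ```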

  7. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for formulation of a minimum power consumption criterion as a function of optimal machining parameters using the desirability function approach. The influence of machining parameters on the energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
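
    Response surface methodology typically fits a second-order polynomial in the machining parameters; a generic sketch under that assumption (variable names and data are placeholders, not the paper's measurements):

    ```python
    import numpy as np

    def rsm_design_matrix(speed, feed, depth):
        """Second-order response surface terms for three machining parameters."""
        s, f, d = (np.asarray(v, dtype=float) for v in (speed, feed, depth))
        return np.column_stack([np.ones_like(s), s, f, d,
                                s * f, s * d, f * d,       # interaction terms
                                s ** 2, f ** 2, d ** 2])   # quadratic terms

    def fit_power_model(speed, feed, depth, power):
        """Least-squares fit of power = X @ beta from experimental runs; the
        fitted surface can then be minimized (e.g., on a grid) to select the
        machining parameters that minimize predicted power consumption."""
        X = rsm_design_matrix(speed, feed, depth)
        beta, *_ = np.linalg.lstsq(X, np.asarray(power, dtype=float), rcond=None)
        return beta
    ```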

  8. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  9. Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data

    Science.gov (United States)

    Hedin, A. E.; Spencer, N. W.; Killeen, T. L.

    1988-01-01

    Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to above approximately 220 km where the data coverage is best and wind variations with height are reduced by viscosity. The data base is not adequate to detect solar cycle (F10.7) effects at this time but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not to full detail.

  10. An empirical model of the Earth's bow shock based on an artificial neural network

    Science.gov (United States)

    Pallocchia, Giuseppe; Ambrosino, Danila; Trenchi, Lorenzo

    2014-05-01

    All of the past empirical models of the Earth's bow shock shape were obtained by best-fitting some given surfaces to sets of observed crossings. However, the issue of bow shock modeling can be addressed by means of artificial neural networks (ANN) as well. In this regard, we present a perceptron, a simple feedforward network, which computes the bow shock distance along a given direction from the two angular coordinates of that direction, the predicted bow shock distance RF79 (provided by Formisano's model (F79)) and the upstream Alfvénic Mach number Ma. After a brief description of the ANN architecture and training method, we discuss the results of a statistical comparison, performed over a test set of 1140 IMP8 crossings, between the prediction accuracies of the ANN and F79 models.
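
    A hedged sketch of such a perceptron using scikit-learn; the input file names are hypothetical stand-ins for the IMP8 crossing data set, and the network size is an assumption:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Inputs per crossing: two angular coordinates of the look direction,
    # the F79 model's predicted distance RF79, and the Alfvenic Mach number Ma.
    # The file names below are placeholders for however the data are stored.
    X = np.loadtxt("crossings_features.txt")     # columns: theta, phi, RF79, Ma
    r_observed = np.loadtxt("crossings_distance.txt")

    net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X, r_observed)

    # The trained perceptron predicts the bow shock distance along any
    # direction, given that direction's RF79 and the upstream Ma.
    r_predicted = net.predict(X)
    print("RMS error:", np.sqrt(np.mean((r_predicted - r_observed) ** 2)))
    ```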

  11. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    Full Text Available This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed here for multiband representation of multichannel financial time series together. An autoregressive moving average (ARMA) model is used in prediction of each individual subband of any time series data. Then all the predicted subband signals are summed up to obtain the overall prediction. The ARMA model works better for stationary signals. With multiband representation, each subband becomes a band-limited (narrow band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and with a full band ARMA model in terms of signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
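
    The subband-and-sum scheme can be sketched as follows, assuming the intrinsic mode functions (subbands) have already been obtained from an EMD/MEMD routine; the ARMA order is an illustrative choice:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def predict_from_subbands(subbands, steps=5, order=(2, 0, 1)):
        """Fit an ARMA model to each (near-stationary) subband and sum the
        subband forecasts to obtain the full-band prediction.

        subbands : list of 1-D arrays, e.g. IMFs from an EMD/MEMD decomposition
        """
        total = np.zeros(steps)
        for band in subbands:
            res = ARIMA(band, order=order).fit()   # ARMA(p, q) via d = 0
            total += np.asarray(res.forecast(steps=steps))
        return total
    ```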

  12. An empirical model describing the postnatal growth of organs in ICRP reference humans: Pt. 1

    International Nuclear Information System (INIS)

    Walker, J.T.

    1991-01-01

    An empirical model is presented for describing the postnatal mass growth of lungs in ICRP reference humans. A combined exponential and logistic function containing six parameters is fitted to ICRP 23 lung data using a weighted non-linear least squares technique. The results indicate that the model delineates the data well. Further analysis shows that reference male lungs attain a higher pubertal peak velocity (PPV) and adult mass size than female lungs, although the latter reach their PPV and adult mass size first. Furthermore, the model shows that lung growth rates in infants are two to three orders of magnitude higher than those in mature adults. This finding is important because of the possible association between higher radiation risks in infants' organs that have faster cell turnover rates compared to mature adult organs. The significance of the model for ICRP dosimetric purposes will be discussed. (author)

  13. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for wireless indoor short-range office environments in the 4.3–7.3 GHz band is presented. The model is developed from experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes path loss versus distance with a Gaussian random variable X accounting for shadow fading, fitted by linear regression. The path-loss exponent n is fitted as a power function of frequency, and the standard deviation σ of X is likewise modeled as frequency dependent. The presented work should be useful for research on wireless channel characteristics in typical indoor short-distance environments in the Internet of Things (IoT).
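
    The relation described above is the classical log-distance model, PL(d) = PL(d0) + 10 n log10(d/d0) + X_σ. A sketch of estimating n and σ by linear regression (the data arrays are placeholders):

    ```python
    import numpy as np

    def fit_log_distance(d_m, pl_db, d0=1.0):
        """Estimate the path-loss exponent n and shadow-fading std sigma (dB)
        from measured path loss pl_db at distances d_m (meters)."""
        x = 10 * np.log10(np.asarray(d_m, dtype=float) / d0)
        y = np.asarray(pl_db, dtype=float)
        n, pl_d0 = np.polyfit(x, y, 1)              # slope = n, intercept = PL(d0)
        sigma = (y - (pl_d0 + n * x)).std(ddof=1)   # residual spread = shadowing
        return n, pl_d0, sigma
    ```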

  14. Empirical evaluation of the conceptual model underpinning a regional aquatic long-term monitoring program using causal modelling

    Science.gov (United States)

    Irvine, Kathryn M.; Miller, Scott; Al-Chokhachy, Robert K.; Archer, Erik; Roper, Brett B.; Kershner, Jeffrey L.

    2015-01-01

    Conceptual models are an integral facet of long-term monitoring programs. Proposed linkages between drivers, stressors, and ecological indicators are identified within the conceptual model of most mandated programs. We empirically evaluate a conceptual model developed for a regional aquatic and riparian monitoring program using causal models (i.e., Bayesian path analysis). We assess whether data gathered for regional status and trend estimation can also provide insights on why a stream may deviate from reference conditions. We target the hypothesized causal pathways for how anthropogenic drivers of road density, percent grazing, and percent forest within a catchment affect instream biological condition. We found instream temperature and fine sediments in arid sites and only fine sediments in mesic sites accounted for a significant portion of the maximum possible variation explainable in biological condition among managed sites. However, the biological significance of the direct effects of anthropogenic drivers on instream temperature and fine sediments were minimal or not detected. Consequently, there was weak to no biological support for causal pathways related to anthropogenic drivers’ impact on biological condition. With weak biological and statistical effect sizes, ignoring environmental contextual variables and covariates that explain natural heterogeneity would have resulted in no evidence of human impacts on biological integrity in some instances. For programs targeting the effects of anthropogenic activities, it is imperative to identify both land use practices and mechanisms that have led to degraded conditions (i.e., moving beyond simple status and trend estimation). Our empirical evaluation of the conceptual model underpinning the long-term monitoring program provided an opportunity for learning and, consequently, we discuss survey design elements that require modification to achieve question driven monitoring, a necessary step in the practice of

  15. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d^2 = (1/n) Σ_{i=1}^{n} (ln M_i − ln O_i)^2, where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of pairs 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this undesired effect, in some circumstances, may be corrected by means of simple formulae.
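
    The functional distance defined above is straightforward to compute; a minimal sketch:

    ```python
    import numpy as np

    def functional_distance(measured, predicted):
        """EBUA 'functional distance' between data and model output:
        d^2 = (1/n) * sum_i (ln M_i - ln O_i)^2, returned as d."""
        M = np.asarray(measured, dtype=float)
        O = np.asarray(predicted, dtype=float)
        return np.sqrt(np.mean((np.log(M) - np.log(O)) ** 2))
    ```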

  16. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    Full Text Available The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique on soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signal from SNR data and at the same time extracting frequency and phase information that is affected by soil moisture as proposed by K. M. Larson et al. This is achieved empirically through approximating the direct and reflected signal by a second-order and fourth-order polynomial, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al.; T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and can allow soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed how noise affects the receiver SNR estimation and thus the model performance through simulations under the bare soil assumption. Results showed that the reconstructed signals with a grazing angle of 5°–15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information. However, we found that the improvement on phase estimation could be neglected. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model could ensure good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer in relation to the ground truth measurements. A deeper insight into the Larson model and the proposed model was given at this stage, which formed a possible explanation of this fact. Furthermore, frequency and phase information
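
    A hedged sketch of the curve-fitting idea: approximate the direct-signal trend by a second-order polynomial in sin(elevation), as in the abstract, then estimate the dominant oscillation frequency of the residual (the reflected component) with a Lomb-Scargle periodogram; the frequency grid and the use of Lomb-Scargle are assumptions, not the paper's exact procedure:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    def snr_frequency(elev_deg, snr_linear):
        """Detrend SNR with a 2nd-order polynomial in x = sin(elevation)
        (the direct signal), then locate the residual's dominant frequency,
        which carries the soil-moisture-sensitive reflection geometry."""
        x = np.sin(np.deg2rad(np.asarray(elev_deg, dtype=float)))
        snr = np.asarray(snr_linear, dtype=float)
        direct = np.polyval(np.polyfit(x, snr, 2), x)   # direct-signal trend
        residual = snr - direct                          # reflected component
        freqs = np.linspace(1.0, 200.0, 2000)            # cycles per unit sin(e)
        power = lombscargle(x, residual, 2 * np.pi * freqs)
        return freqs[np.argmax(power)]
    ```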

  17. The gravity model specification for modeling international trade flows and free trade agreement effects: a 10-year review of empirical studies

    OpenAIRE

    Kepaptsoglou, Konstantinos; Karlaftis, Matthew G.; Tsamboulas, Dimitrios

    2010-01-01

    The gravity model has been extensively used in international trade research for the last 40 years because of its considerable empirical robustness and explanatory power. Since their introduction in the 1960s, gravity models have been used for assessing trade policy implications and, particularly recently, for analyzing the effects of Free Trade Agreements on international trade. The objective of this paper is to review the recent empirical literature on gravity models, highlight best practic...
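
    The canonical log-linear gravity specification that this literature estimates can be written as a one-line regression; a sketch with statsmodels (variable names are placeholders):

    ```python
    import numpy as np
    import statsmodels.api as sm

    def fit_gravity(trade, gdp_i, gdp_j, dist, fta_dummy):
        """ln(T_ij) = b0 + b1 ln(GDP_i) + b2 ln(GDP_j) + b3 ln(dist_ij) + b4 FTA_ij"""
        X = sm.add_constant(np.column_stack([
            np.log(gdp_i), np.log(gdp_j), np.log(dist), fta_dummy]))
        return sm.OLS(np.log(trade), X).fit()

    # res = fit_gravity(...); res.params[4] then measures the FTA effect on
    # bilateral trade (in log points), the quantity many reviewed studies estimate.
    ```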

  18. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on the Web portals where the user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's action. The simulations are performed for the case of constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, which are comparable with those in the empirical system of popular posts. In view of purely emotion-driven agents' actions, this type of comparison provides a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative

  19. Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming

    2015-06-01

    Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance for disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important in driving significant fluctuations of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at lead times of 0 (end of April), 1, and 2 months, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) the tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow enhanced Philippine Sea anticyclonic anomalies and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increased Taiwan Meiyu rainfall. Note that the identified precursors at various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability with a significant cross-validated temporal correlation coefficient skill of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead times, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.

  20. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at seed shattering and seed shattered maturity stage. Therefore, drying switchgrass in wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during seed development stage whereas, vapor pressure deficit was weakly correlated during seed shattering and seed shattered stage. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on the forecasted weather conditions so that the appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on maturity stage of switchgrass. • Designed environmental chamber can be used to evaluate the effect of other parameters that affect drying of crops

  1. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Full Text Available Modeling forest fire behavior is an important task that can be used to assist in fire prevention and suppression operations. However, according to previous studies, the common fire behavior models used worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were done, where the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured in each experiment. Dependent and independent variables were fitted through multiple regression analysis. The proposed fire rate of spread model is based on the wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r² = 0.86); the flame length model is based on the fuel bed depth, 1-h dead fuel moisture content and wind speed (r² = 0.72); the proposed fuel consumption model has the 1-h dead fuel moisture, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r² = 0.80). These models were used to develop a new fire behavior software package, the “Eucalyptus Fire Safety System”.

  2. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wider project for determining a methodology for estimating the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation, and iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings was considered. By comparing the methodologies involved by the

  3. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wider project for determining a methodology for estimating the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation, and iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings was considered. By comparing the methodologies involved

  4. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    Double-skin facade (DSF) buildings are being built as an attractive, innovative and energy efficient solution. Nowadays, several design tools are used for assessment of thermal and energy performance of DSF buildings. Existing design tools are well-suited for performance assessment of conventional buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF, performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114, TRNSYS-TUD and BSim), was carried out in the framework of IEA SHC Task 34 / ECBCS Annex 43 "Testing and Validation of Building Energy Simulation Tools". The experimental data for the validation was gathered in a full-scale outdoor test facility. The empirical data sets comprise the key-functioning modes …

  5. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear]

    2017-11-01

    Point reactor kinetics equations are the simplest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)
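
For readers unfamiliar with the equations being adjusted, the sketch below integrates one-delayed-group point kinetics with explicit finite differences and shows where a multiplicative adjustment factor on the mean generation time would enter. The kinetic constants and the factor-fitting step are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One delayed-neutron group point kinetics integrated by explicit finite
# differences. beta, lam and Lambda are illustrative thermal-reactor values.
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)

def point_kinetics(rho, f_adj=1.0, dt=1e-4, t_end=10.0):
    """Neutron density n(t) for a step reactivity rho; f_adj is a hypothetical
    empirical factor rescaling the mean generation time."""
    L = f_adj * Lambda
    n = 1.0
    c = beta * n / (lam * L)              # precursor equilibrium at t = 0
    hist = []
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / L) * n + lam * c
        dc = (beta / L) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
        hist.append(n)
    return np.array(hist)

# An adjustment factor could then be fitted so that the point-kinetics
# response matches a reference (transport-based) flux history n_ref, e.g.:
# f_best = min(np.linspace(0.5, 1.5, 101),
#              key=lambda f: np.sum((point_kinetics(rho, f) - n_ref) ** 2))
```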

  6. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the simplest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)

  7. An empirically tractable model of optimal oil spills prevention in Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France)]; Gurman, V.; Tsirlin, A. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation)]; Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems]

    2001-07-01

    Based on previous theoretical work by Gottinger (1997, 1998), we propose a simple model of optimal monitoring of oil-related activities in harbour areas that is suitable for empirical estimation within the Russian-Ukrainian context, in spite of the poor availability of data in these countries. Specifically, the model indicates how to best allocate at the steady state a given monitoring budget between different monitoring activities. An approximate analytical solution to the optimization problem is derived, and a simple procedure for estimating the model on the basis of the actually available data is suggested. An application using data obtained for several harbours of the Black and Baltic Seas is given. It suggests that the current Russian monitoring practice could be much improved by better allocating the available monitoring resources. (Author)

  8. Modeling Toothpaste Brand Choice: An Empirical Comparison of Artificial Neural Networks and Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Tolga Kaya

    2010-11-01

    The purpose of this study is to compare the performances of Artificial Neural Networks (ANN) and Multinomial Probit (MNP) approaches in modeling the choice decision within the fast-moving consumer goods sector. To do this, based on 2597 toothpaste purchases of a panel sample of 404 households, choice models are built and their performances are compared on the 861 purchases of a test sample of 135 households. Results show that ANN's predictions are better, while MNP is useful in providing marketing insight.

  9. Multiscale empirical modeling of the geomagnetic field: From storms to substorms

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Korth, H.; Gkioulidou, M.; Ukhorskiy, A. Y.; Merkin, V. G.

    2017-12-01

    An advanced version of the TS07D empirical geomagnetic field model, herein called SST17, is used to model the global picture of the geomagnetic field and its characteristic variations on both storm and substorm scales. The new SST17 model uses two regular expansions describing the equatorial currents, each having distinctly different scales: one corresponding to a thick and one to a thin current sheet relative to the thermal ion gyroradius. These expansions have an arbitrary distribution of currents in the equatorial plane that is constrained only by magnetometer data. This multi-scale description allows one to reproduce the current sheet thinning during the growth phase. Additionally, the model uses a flexible description of field-aligned currents that reproduces their spiral structure at low altitudes and provides a continuous transition from region 1 to region 2 current systems. The empirical picture of substorms is obtained by combining magnetometer data from Geotail, THEMIS, Van Allen Probes, Cluster II, Polar, IMP-8, GOES 8, 9, 10 and 12 and then binning these data based on similar values of the auroral index AL, its time derivative and the integral of the solar wind electric field parameter (from ACE, Wind, and IMP-8) in time over substorm scales. The performance of the model is demonstrated for several events, including the 3 July 2012 substorm, which had multi-probe coverage, and a series of substorms during the March 2008 storm. It is shown that the AL binning helps reproduce dipolarization signatures in the northward magnetic field Bz, while the solar wind electric field integral allows one to capture the current sheet thinning during the growth phase. The model allows one to trace the substorm dipolarization from the tail to the inner magnetosphere, where the dipolarization of strongly stretched tail field lines causes a redistribution of the tail current resulting in an enhancement of the partial ring current in the premidnight sector.

  10. Experimental validation of new empirical models of the thermal properties of food products for safe shipping

    Science.gov (United States)

    Hamid, Hanan H.; Mitchell, Mark; Jahangiri, Amirreza; Thiel, David V.

    2018-04-01

    Temperature-controlled food transport is essential for human safety and to minimise food waste. The thermal properties of food are important for determining the heat transfer during the transient stages of transportation (door opening during loading and unloading processes). For example, the temperature of most dairy products must be confined to a very narrow range (3-7 °C). If a predefined critical temperature is exceeded, the food is defined as spoiled and unfit for human consumption. An improved empirical model for the thermal conductivity and specific heat capacity of a wide range of food products was derived based on the food composition (moisture, fat, protein, carbohydrate and ash). The models, developed using linear regression analysis, were compared with published measured parameters in addition to previously published theoretical and empirical models. It was found that the maximum variation in the predicted thermal properties leads to less than 0.3 °C temperature change. The correlation coefficient for these models was 0.96. The t-Stat test (P-value >0.99) demonstrated that the model results are an improvement on previous works. The transient heat transfer based on the food composition and the temperature boundary conditions was found for a Camembert cheese (short cylindrical shape) using a multi-dimensional finite difference method code. The result was verified using the Heat Transfer Today (HTT) educational software, which is based on the finite volume method. The core temperature rise from the initial temperature (2.7 °C) to the maximum safe temperature in ambient air (20.24 °C) was predicted to take about 35.4 ± 0.5 min. The simulation results agree very well (+0.2 °C) with the measured temperature data. This improved model impacts on temperature estimation during loading and unloading of trucks and provides a clear direction for temperature control in all refrigerated transport applications.
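
A minimal sketch of the regression step: thermal conductivity is modeled as a linear combination of the five composition fractions. The numbers below are placeholders, not the paper's data or fitted coefficients.

```python
import numpy as np

# Columns: mass fractions of moisture, fat, protein, carbohydrate, ash
# (placeholder measurements; each row should sum to ~1).
X = np.array([
    [0.80, 0.03, 0.04, 0.12, 0.01],
    [0.50, 0.25, 0.20, 0.03, 0.02],
    [0.37, 0.30, 0.21, 0.10, 0.02],
    [0.88, 0.01, 0.03, 0.07, 0.01],
    [0.65, 0.10, 0.12, 0.11, 0.02],
    [0.15, 0.40, 0.25, 0.18, 0.02],
])
k = np.array([0.56, 0.35, 0.31, 0.60, 0.45, 0.22])  # thermal conductivity, W/(m K)

# Ordinary least squares: k ~ sum_i a_i * x_i (composition-weighted model).
coef, *_ = np.linalg.lstsq(X, k, rcond=None)

def conductivity(fractions):
    """Predict thermal conductivity from a composition vector."""
    return float(np.dot(coef, fractions))

print(conductivity([0.37, 0.30, 0.21, 0.10, 0.02]))
```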

  11. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
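
The core idea, model complexity traded against transferability, can be illustrated with a leave-one-region-out cross-validation over polynomial models of increasing degree. The synthetic data and five pseudo-regions below are stand-ins for the SNOTEL predictors, not the study's data.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

# Stand-ins for the predictors: mean winter temperature (degC) and
# cumulative winter precipitation (mm); response is April 1 SWE (mm).
n = 400
T = rng.uniform(-10, 4, n)
P = rng.uniform(200, 2000, n)
swe = np.maximum(0.0, 0.6 * P / (1 + np.exp(T)) + rng.normal(0, 60, n))
region = rng.integers(0, 5, n)          # pseudo spatial blocks

def design(T, P, degree):
    """Polynomial design matrix in (T, P) up to the given total degree."""
    cols = [np.ones_like(T)]
    for d in range(1, degree + 1):
        for powers in combinations_with_replacement((0, 1), d):
            cols.append(T ** powers.count(0) * P ** powers.count(1))
    return np.column_stack(cols)

# Leave-one-region-out CV approximates transferability to unsampled space:
# watch the out-of-region RMSE stop improving as complexity grows.
for degree in (1, 2, 3, 4):
    errs = []
    for r in range(5):
        tr, te = region != r, region == r
        beta, *_ = np.linalg.lstsq(design(T[tr], P[tr], degree), swe[tr], rcond=None)
        pred = design(T[te], P[te], degree) @ beta
        errs.append(np.sqrt(np.mean((pred - swe[te]) ** 2)))
    print(degree, round(float(np.mean(errs)), 1))
```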

  12. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. A new model of Social Support in Bereavement (SSB): An empirical investigation with a Chinese sample.

    Science.gov (United States)

    Li, Jie; Chen, Sheying

    2016-01-01

    Bereavement can be an extremely stressful experience, while the protective effect of social support is expected to facilitate the adjustment after loss. The ingredients or elements of social support as illustrated by a new model of Social Support in Bereavement (SSB), however, require empirical evidence. Who might be the most effective providers of social support in bereavement has also been understudied, particularly within specific cultural contexts. The present study uses both qualitative and quantitative analyses to explore these two important issues among bereaved Chinese families and individuals. The results show that three major types of social support described by the SSB model were frequently acknowledged by the participants in this study. Aside from relevant books, family and friends were the primary sources of social support, who in turn received support from their workplaces. Helping professionals turned out to be the least significant source of social support in the Chinese cultural context. Differences by gender, age, and bereavement time were also found. The findings provide empirical evidence for the conceptual model of Social Support in Bereavement and also offer culturally relevant guidance for providing effective support to the bereaved.

  14. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of carbon atoms were analyzed as a function of the three potentials mentioned above. The ReaxFF potential is capable of modeling the change of the structure of amorphous carbon, and MD results are in good agreement with experimental results and density functional theory (DFT) at low densities of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended to model amorphous carbon at a high density of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.
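
The density-based recommendations above can be summarised in a small helper. The thresholds are taken from the abstract; the tie-breaking within the overlapping 2.4-2.6 g/cm³ range is our own reading, not the authors' rule.

```python
def recommend_potential(density_g_cm3: float) -> str:
    """Suggest an empirical potential for amorphous-carbon MD, following the
    density ranges recommended in the study summarised above."""
    if density_g_cm3 <= 2.4:
        return "2nd REBO"   # adequate at very low density
    if density_g_cm3 <= 2.6:
        return "ReaxFF"     # matches experiment/DFT at <= 2.6 g/cm3
    return "Tersoff"        # computationally efficient at high density

print(recommend_potential(3.0))  # -> Tersoff
```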

  15. Multimission empirical ocean tide modeling for shallow waters and polar seas

    DEFF Research Database (Denmark)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2011-01-01

    A new global ocean tide model named DTU10 (developed at the Technical University of Denmark), representing all major diurnal and semidiurnal tidal constituents, is proposed based on an empirical correction to the global tide model FES2004 (Finite Element Solutions), with residual tides determined using […]. Tests against tide gauge sets show that the new tide model fits the tide gauge measurements favorably compared with other state-of-the-art global ocean tide models in both deep and shallow waters, especially in the Arctic Ocean and the Southern Ocean. One example is a comparison with 207 tide gauge data in the East Asian marginal seas, where the root-mean-square agreement improved by 35.12%, 22.61%, 27.07%, and 22.65% (M-2, S-2, K-1, and O-1) for the DTU10 tide model compared with the FES2004 tide model. A similar comparison in the Arctic Ocean with 151 gauge data improved by 9.93%, 0.34%, 7.46%, and 9.52% for the M-2, S-2 …

  16. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may depend on some function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
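
For reference, the pool-adjacent-violators algorithm at the heart of the estimation is short enough to show in full. This is a generic textbook version, not the authors' empirical-likelihood extension.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: least-squares fit of y subject to a
    monotone nondecreasing constraint (minimal textbook version)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    level, weight, count = [], [], []
    for yi, wi in zip(y, w):
        level.append(yi); weight.append(wi); count.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(level) > 1 and level[-2] > level[-1]:
            wsum = weight[-2] + weight[-1]
            level[-2] = (weight[-2] * level[-2] + weight[-1] * level[-1]) / wsum
            weight[-2] = wsum
            count[-2] += count[-1]
            del level[-1], weight[-1], count[-1]
    return np.repeat(level, count)

print(pava([1.0, 3.0, 2.0, 4.0]))  # -> [1.  2.5 2.5 4. ]
```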

  17. An empirical comparison of alternate regime-switching models for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Janczura, Joanna [Hugo Steinhaus Center, Institute of Mathematics and Computer Science, Wroclaw University of Technology, 50-370 Wroclaw (Poland)]; Weron, Rafal [Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw (Poland)]

    2010-09-15

    One of the most profound features of electricity spot prices is the presence of price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but it also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)
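
To make the model structure concrete, here is a toy simulator of an independent-spike 3-regime MRS process with a mean-reverting base regime and shifted spike/drop regimes. All parameter values, and the use of constant rather than time-varying transition probabilities, are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Transition matrix between base (0), spike (1) and drop (2) regimes.
P = np.array([[0.96, 0.03, 0.01],
              [0.60, 0.40, 0.00],
              [0.70, 0.00, 0.30]])

alpha, mu, sigma = 0.3, 3.8, 0.05   # base regime: mean-reverting log-price
shift, scale = 0.4, 0.3             # spike/drop regimes: shifted half-normal

n_steps, state, x = 1000, 0, mu
log_price = np.empty(n_steps)
for t in range(n_steps):
    state = rng.choice(3, p=P[state])
    # 'Independent spike' structure: the base process evolves on its own,
    # while spikes/drops are drawn from separate shifted distributions.
    x += alpha * (mu - x) + sigma * rng.normal()
    if state == 0:
        log_price[t] = x
    elif state == 1:
        log_price[t] = mu + shift + scale * abs(rng.normal())
    else:
        log_price[t] = mu - shift - scale * abs(rng.normal())

price = np.exp(log_price)
```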

  18. Construction and utilization of linear empirical core models for PWR in-core fuel management

    International Nuclear Information System (INIS)

    Okafor, K.C.

    1988-01-01

    An empirical core-model construction procedure for pressurized water reactor (PWR) in-core fuel management is developed that allows determining the optimal BOC k∞ profiles in PWRs as a single linear-programming problem and thus facilitates the overall optimization process for in-core fuel management, due to algorithmic simplification and reduction in computation time. The optimal profile is defined as one that maximizes cycle burnup. The model construction scheme treats the fuel-assembly power fractions, burnup, and leakage as state variables and BOC zone enrichments as control variables. The core model consists of linear correlations between the state and control variables that describe fuel-assembly behavior in time and space. These correlations are obtained through time-dependent two-dimensional core simulations. The core model incorporates the effects of composition changes in all the enrichment control zones on a given fuel assembly and is valid at all times during the cycle for a given range of control variables. No assumption is made on the geometry of the control zones. A scatter-composition distribution, as well as an annular one, can be considered for model construction. The application of the methodology to a typical PWR core indicates good agreement between the model and exact simulation results
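
A schematic of how a linear core model turns the design problem into a single LP: fitted linear correlations map zone enrichments to assembly burnups and power fractions, and cycle burnup is maximised subject to a peaking constraint. The matrices below are invented placeholders, not correlations from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear core model: correlations map BOC enrichments e of
# three control zones to EOC assembly burnups and power fractions.
A_burn = np.array([[12.0,  5.0,  2.0],
                   [ 4.0, 11.0,  4.0],
                   [ 2.0,  6.0, 10.0]])     # GWd/tU per w/o enrichment
b_burn = np.array([3.0, 3.5, 2.5])

A_pow = np.array([[0.036, 0.012, 0.004],
                  [0.008, 0.032, 0.008],
                  [0.004, 0.012, 0.028]])   # power fraction per w/o
b_pow = np.array([0.10, 0.12, 0.08])

# Maximise cycle burnup = sum(A_burn @ e + b_burn) subject to a peaking
# limit (each assembly power fraction <= 0.30) and enrichment bounds.
res = linprog(c=-A_burn.sum(axis=0),        # linprog minimises, so negate
              A_ub=A_pow, b_ub=0.30 - b_pow,
              bounds=[(1.8, 4.5)] * 3)
print(res.x, -res.fun + b_burn.sum())       # optimal enrichments, cycle burnup
```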

  19. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed from the fitting of monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of annual mean wind, amplitudes and phases of annual and semiannual harmonics of wind variations, are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of a comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of an annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, a good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and Atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)
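
The fitting of annual and semiannual harmonics to monthly mean winds, as described in the record above, reduces to ordinary least squares on sine/cosine regressors. A minimal sketch with placeholder monthly winds:

```python
import numpy as np

# Monthly mean zonal wind at one station/height (placeholder values, m/s).
u = np.array([10., 8., 3., -2., -6., -8., -7., -3., 2., 6., 9., 11.])
t = (np.arange(12) + 0.5) / 12.0                  # year fraction at month centres

# Least-squares fit of annual mean plus annual and semiannual harmonics.
A = np.column_stack([np.ones(12),
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

# Amplitude/phase of each harmonic from its cosine/sine coefficients.
amp1, ph1 = np.hypot(coef[1], coef[2]), np.arctan2(coef[2], coef[1])
amp2, ph2 = np.hypot(coef[3], coef[4]), np.arctan2(coef[4], coef[3])
print(f"mean {coef[0]:.1f} m/s, annual amp {amp1:.1f} m/s, semiannual amp {amp2:.1f} m/s")
```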

  20. An empirical comparison of alternate regime-switching models for electricity spot prices

    International Nuclear Information System (INIS)

    Janczura, Joanna; Weron, Rafal

    2010-01-01

    One of the most profound features of electricity spot prices is the presence of price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but it also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)

  1. Modelling of volumetric properties of binary and ternary mixtures by CEOS, CEOS/GE and empirical models

    Directory of Open Access Journals (Sweden)

    BOJAN D. DJORDJEVIC

    2007-12-01

    Although many cubic equations of state coupled with van der Waals one-fluid mixing rules, including temperature-dependent interaction parameters, are sufficient for representing phase equilibria and excess properties (excess molar enthalpy H^E, excess molar volume V^E, etc.), difficulties appear in the correlation and prediction of thermodynamic properties of complex mixtures at various temperature and pressure ranges. Great progress has been made by a new approach based on CEOS/GE models. This paper reviews the last six years of progress achieved in modelling of the volumetric properties for complex binary and ternary systems of non-electrolytes by the CEOS and CEOS/GE approaches. In addition, the vdW1 and TCBT models were used to estimate the excess molar volume V^E of the ternary systems methanol + chloroform + benzene and 1-propanol + chloroform + benzene, as well as the corresponding binaries methanol + chloroform, chloroform + benzene, 1-propanol + chloroform and 1-propanol + benzene at 288.15-313.15 K and atmospheric pressure. Also, prediction of V^E for both ternaries by empirical models (Radojković, Kohler, Jacob-Fitzner, Colinet, Tsao-Smith, Toop, Scatchard, Rastogi) was performed.
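
As an illustration of the empirical ternary predictions listed at the end, the sketch below combines Redlich-Kister fits of the three constituent binaries using Kohler's equation, V^E = Σ(xi+xj)² V^E_ij evaluated at the scaled binary composition xi/(xi+xj). The Redlich-Kister coefficients are placeholders, not fitted values from the paper.

```python
import numpy as np

def ve_binary(x1, A):
    """Redlich-Kister form: VE = x1*x2 * sum_k A_k * (x1 - x2)^k."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(A))

# Placeholder RK coefficients for the three constituent binaries (cm^3/mol).
RK = {(0, 1): [-1.20, 0.30], (0, 2): [0.45, -0.10], (1, 2): [-0.80, 0.15]}

def ve_ternary_kohler(x):
    """Kohler prediction: sum over pairs of (xi+xj)^2 * VEij at the
    scaled binary composition xi/(xi+xj)."""
    total = 0.0
    for (i, j), A in RK.items():
        s = x[i] + x[j]
        total += s ** 2 * ve_binary(x[i] / s, A)
    return total

print(ve_ternary_kohler(np.array([0.3, 0.3, 0.4])))
```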

  2. Empirical model of TEC response to geomagnetic and solar forcing over Balkan Peninsula

    Science.gov (United States)

    Mukhtarov, P.; Andonov, B.; Pancheva, D.

    2018-01-01

    An empirical model of the total electron content (TEC) response to external forcing over the Balkan Peninsula (35°N-50°N; 15°E-30°E) is built by using the Center for Orbit Determination in Europe (CODE) TEC data for 17 full years, January 1999 - December 2015. The external forcing includes geomagnetic activity described by the Kp-index and solar activity described by the solar radio flux F10.7. The model describes the most probable spatial distribution and temporal variability of the externally forced TEC anomalies, assuming that they depend mainly on latitude, Kp-index, F10.7 and LT. The anomalies are expressed by the relative deviation of the TEC from its 15-day mean, rTEC, where the mean is calculated from the 15 preceding days. The approach for building this regional model is similar to that of the global TEC model reported by Mukhtarov et al. (2013a); however, it includes two important improvements related to the short-term variability of the solar activity and an amended geomagnetic forcing using a "modified" Kp index. The quality assessment of the new model construction procedure, in terms of the modeling error calculated for the period 1999-2015, indicates a significant improvement over the global TEC model (Mukhtarov et al., 2013a). The short-term prediction capabilities of the model, based on the error calculations for 2016, are improved as well. In order to demonstrate how the model reproduces the rTEC response to external forcing, three geomagnetic storms, accompanied by short-term solar activity variations, which occurred at different seasons and solar activity conditions, are presented.
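
The paper's anomaly definition is easy to state in code: rTEC is the relative deviation of TEC from the mean of the 15 preceding days. A minimal sketch, with the 2-hourly cadence of CODE maps assumed:

```python
import numpy as np

def rtec(tec, samples_per_day=12):
    """rTEC = (TEC - <TEC>) / <TEC>, with the mean taken over the 15 days
    preceding each sample, following the anomaly definition above."""
    w = 15 * samples_per_day
    out = np.full(tec.shape, np.nan)
    for k in range(w, len(tec)):
        base = tec[k - w:k].mean()   # mean of the 15 preceding days only
        out[k] = (tec[k] - base) / base
    return out

# e.g. rtec(np.asarray(code_tec_series)) for a 2-hourly CODE TEC time series
```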

  3. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
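
Since the paper's Lambert W parameterization is not reproduced in this record, the sketch below shows the traditional semilogarithmic two-point interpolation against which the Lambert W interpolation was compared. Measurement values are placeholders; the two bracketing points are assumed to exist.

```python
import numpy as np

# Measured narrow-beam transmission vs added Al thickness (placeholder data).
thickness = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # mm Al
transmission = np.array([1.0, 0.78, 0.63, 0.44, 0.24])  # monotone decreasing

def hvl_semilog(x, T, level=0.5):
    """Thickness giving transmission `level`, interpolating ln(T) linearly
    between the two bracketing measurements (the 'semilogarithmic' method)."""
    i = np.searchsorted(-T, -level)     # first index with T <= level
    x0, x1, t0, t1 = x[i - 1], x[i], T[i - 1], T[i]
    return x0 + (x1 - x0) * (np.log(level) - np.log(t0)) / (np.log(t1) - np.log(t0))

print(hvl_semilog(thickness, transmission))        # HVL estimate, mm Al
print(hvl_semilog(thickness, transmission, 0.25))  # QVL estimate, mm Al
```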

  4. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    Science.gov (United States)

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  5. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.
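
A minimal sketch of the general approach: add depth-dependent axial noise to a clean synthetic depth map. The quadratic growth of the noise with depth follows published empirical Kinect v1 characterisations, but sigma0 and k below are assumed values, not the coefficients of the Letter's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_kinect_noise(depth_m, sigma0=1.0e-3, k=1.4e-3):
    """Add depth-dependent axial Gaussian noise to a clean depth map (metres).
    sigma grows roughly quadratically with depth; sigma0 and k are assumed."""
    sigma = sigma0 + k * depth_m ** 2
    noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma
    noisy[depth_m == 0] = 0            # keep invalid (zero) pixels invalid
    return noisy

clean = np.full((480, 640), 2.0)       # synthetic 2 m frontal plane
noisy = add_kinect_noise(clean)
```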

  6. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in and benefits for corporate performance.

  7. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    […] to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling […]

  8. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  9. Computational optogenetics: empirically-derived voltage- and light-sensitive channelrhodopsin-2 model.

    Directory of Open Access Journals (Sweden)

    John C Williams

    Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: 1) accurate inward rectification in the current-voltage response across irradiances; 2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and 3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics were adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and …

  10. Monthly and Fortnightly Tidal Variations of the Earth's Rotation Rate Predicted by a TOPEX/POSEIDON Empirical Ocean Tide Model

    Science.gov (United States)

    Desai, S.; Wahr, J.

    1998-01-01

    Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.

  11. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  12. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)]; Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria)]; Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)]

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. By screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of changes in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.
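
The descriptor-screening step can be illustrated with a plain SVD-based principal component analysis: count the components needed to retain most of the variance, then inspect which descriptors load on them. The random matrix below merely stands in for the 21 semi-empirical descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows: Ti-Al systems; columns: 21 electronic-structure descriptors
# (placeholder random data standing in for the semi-empirical outputs).
X = rng.normal(size=(40, 21))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise descriptors

# PCA via SVD; count components needed for ~95% of the variance.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
n_keep = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
print(n_keep, "components retain 95% of the variance")

# Descriptors with the largest loadings on the retained components are
# candidates for a reduced (e.g., 5-descriptor) representation.
loadings = np.abs(Vt[:n_keep]).sum(axis=0)
print(np.argsort(loadings)[::-1][:5])
```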

  13. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. By screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of changes in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  14. An Empirical Model and Ethnic Differences in Cultural Meanings Via Motives for Suicide.

    Science.gov (United States)

    Chu, Joyce; Khoury, Oula; Ma, Johnson; Bahn, Francesca; Bongar, Bruce; Goldblum, Peter

    2017-10-01

    The importance of cultural meanings via motives for suicide - what is considered acceptable to motivate suicide - has been advocated as a key step in understanding and preventing development of suicidal behaviors. There have been limited systematic empirical attempts to establish different cultural motives ascribed to suicide across ethnic groups. We used a mixed methods approach and grounded theory methodology to guide the analysis of qualitative data querying for meanings via motives for suicide among 232 Caucasians, Asian Americans, and Latino/a Americans with a history of suicide attempts, ideation, intent, or plan. We used subsequent logistic regression analyses to examine ethnic differences in suicide motive themes. This inductive approach of generating theory from data yielded an empirical model of 6 cultural meanings via motives for suicide themes: intrapersonal perceptions, intrapersonal emotions, intrapersonal behavior, interpersonal, mental health/medical, and external environment. Logistic regressions showed ethnic differences in intrapersonal perceptions (low endorsement by Latino/a Americans) and external environment (high endorsement by Latino/a Americans) categories. Results advance suicide research and practice by establishing 6 empirically based cultural motives for suicide themes that may represent a key intermediary step in the pathway toward suicidal behaviors. Clinicians can use these suicide meanings via motives to guide their assessment and determination of suicide risk. Emphasis on environmental stressors rather than negative perceptions like hopelessness should be considered with Latino/a clients. © 2017 Wiley Periodicals, Inc.

  15. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remainder produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to inaccurate conclusions. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
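
For the fixed-parameters baseline, a negative binomial count model is a few lines with statsmodels; the random-parameters variant additionally requires simulated maximum likelihood and is not sketched here. The data below are synthetic placeholders, not the study's segments.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Placeholder segment covariates: log traffic exposure, noise-barrier
# indicator, vertical curves per mile (mimicking the variables discussed).
n = 158
X = np.column_stack([rng.normal(9, 1, n),
                     rng.integers(0, 2, n),
                     rng.uniform(0, 4, n)])
mu = np.exp(-6 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2])
y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed crash counts

# Fixed-parameters negative binomial regression.
model = sm.NegativeBinomial(y, sm.add_constant(X)).fit(disp=0)
print(model.params)
```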

  16. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    Science.gov (United States)

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs, a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic …
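
Estimating an e-folding scale from collector counts amounts to fitting a one-dimensional exponential kernel. A minimal sketch with placeholder counts standing in for the magnetic-collector data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Time-integrated particle captures versus distance from the release point
# (placeholder values, not the study's collector counts).
r = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0])   # km
counts = np.array([410., 300., 170., 75., 34., 8., 2.])

def kernel(r, n0, L):
    """One-dimensional exponential kernel N(r) = N0 * exp(-r / L);
    L is the e-folding scale reported in the study."""
    return n0 * np.exp(-r / L)

(n0, L), _ = curve_fit(kernel, r, counts, p0=(400.0, 3.0))
print(f"e-folding scale ~ {L:.2f} km")
```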

  17. Empirical forecast of the quiet time Ionosphere over Europe: a comparative model investigation

    Science.gov (United States)

    Badeke, R.; Borries, C.; Hoque, M. M.; Minkwitz, D.

    2016-12-01

    The purpose of this work is to find the best empirical model for a reliable 24-hour forecast of the ionospheric total electron content (TEC) over Europe under geomagnetically quiet conditions. It will be used as an improved reference for the description of storm-induced perturbations in the ionosphere. The observational TEC data were obtained from the International GNSS Service (IGS). Four different forecast model approaches were validated with observational IGS TEC data: a 27-day median model (27d), a Fourier Analysis (FA) approach, the Neustrelitz TEC global model (NTCM-GL) and NeQuick 2. Two years were investigated depending on the solar activity: 2015 (high activity) and 2008 (low activity). The time periods of magnetic storms, which were identified with the Dst index, were excluded from the validation. For both years the two models 27d and FA show better results than NTCM-GL and NeQuick 2. For example, for the year 2015 and 15°E / 50°N, the difference between the IGS data and the 27d model prediction shows a mean value of 0.413 TEC units (TECU), a standard deviation of 3.307 TECU and a correlation coefficient of 0.921, while NTCM-GL and NeQuick 2 have mean differences of around 2-3 TECU, standard deviations of 4.5-5 TECU and correlation coefficients below 0.85. Since the 27d and FA predictions strongly depend on observational data, the results confirm that data-driven forecasts perform better than the climatological models NTCM-GL and NeQuick 2. However, the benefit of NTCM-GL and NeQuick 2 is their lower data dependency, i.e. they do not lose precision when observational IGS TEC data are unavailable. Hence a combination of the different models, reacting accordingly to the different data availabilities, is recommended.
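
The strongest baseline in this comparison, the 27-day median model, together with the three reported skill metrics, can be sketched as follows (daily sampling assumed; the variable names are our own):

```python
import numpy as np

def forecast_27d_median(tec_daily):
    """Persistence-type forecast: each day is predicted by the median of
    the 27 preceding days (the '27d' model in the comparison above)."""
    pred = np.full(tec_daily.shape, np.nan)
    for k in range(27, len(tec_daily)):
        pred[k] = np.median(tec_daily[k - 27:k])
    return pred

def skill(obs, pred):
    """Mean difference, standard deviation (TECU) and correlation,
    the three statistics quoted in the validation."""
    m = ~np.isnan(pred)
    diff = pred[m] - obs[m]
    corr = np.corrcoef(obs[m], pred[m])[0, 1]
    return diff.mean(), diff.std(), corr

# obs = daily IGS TEC series at one grid point; then:
# bias, sd, r = skill(obs, forecast_27d_median(obs))
```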

  18. Conceptual modeling in systems biology fosters empirical findings: the mRNA lifecycle.

    Directory of Open Access Journals (Sweden)

    Dov Dori

    One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to model concurrently the structure and behavior of biological systems. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems, both diagrammatically and textually, formally and intuitively, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated consequently while maintaining consistency.

  19. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models to estimate precipitable water vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models are tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution from an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (i.e., at 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C⁻¹ to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.

  20. Comparison of precipitating electron energy flux on March 22, 1979 with an empirical model: CDAW-6

    International Nuclear Information System (INIS)

    Simons, S.L. Jr.; Reiff, P.H.; Spiro, R.W.; Hardy, D.A.; Kroehl, H.W.

    1985-01-01

    Data recorded by Defense Meteorological Satellite Program, TIROS and P-78-1 satellites for the CDAW 6 event on March 22, 1979, have been compared with a statistical model of precipitating electron fluxes. Comparisons have been made on both an orbit-by-orbit basis and on a global basis by sorting and binning the data by AE index, invariant latitude and magnetic local time, in a manner similar to that in which the model was generated. We conclude that the model flux agrees with the data to within a factor of two, although small features and the exact locations of features are not consistently reproduced. In addition, the latitude of highest electron precipitation usually occurs about 3° more poleward in the model than in the data. We attribute this discrepancy to ring current inflation of the storm-time magnetosphere (as evidenced by negative Dst's). We suggest that a similar empirical model based on AL instead of AE, and including some indicator of the history of the event, would provide an even better comparison. Alternatively, in situ data such as electrojet location should be used routinely to normalize the latitude of the auroral precipitation

  1. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and an empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire on rural general practitioner competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed using the questionnaire. The common factors were extracted using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on working performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ²/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, suggesting a good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, targeted cultivation of medical students for rural service, and competency-based performance management of rural general practitioners.

  2. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of slope surfaces from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions required by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be derived directly from DN data; the model was tested and verified with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and that the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
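
    The record does not spell out the correction formula itself. For orientation, one widely used semi-empirical scheme, the C-correction (which may or may not match the model proposed here), regresses observed reflectance on the cosine of the solar incidence angle and uses the fitted intercept/slope ratio to moderate the cosine correction. A minimal sketch with synthetic inputs:

        import numpy as np

        # C-correction sketch (illustrative only; the record's own model may differ).
        rho = np.array([0.21, 0.25, 0.30, 0.34, 0.38])    # observed reflectance
        cos_i = np.array([0.35, 0.50, 0.65, 0.80, 0.95])  # cos(solar incidence angle)
        cos_sz = 0.87                                     # cos(solar zenith), flat reference

        m, b = np.polyfit(cos_i, rho, 1)   # rho ~ m*cos_i + b
        c = b / m                          # empirical C parameter

        # Correct each pixel to the illumination of a horizontal surface.
        rho_corr = rho * (cos_sz + c) / (cos_i + c)
        print(np.round(rho_corr, 3))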

  3. Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band

    Science.gov (United States)

    Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.

    2013-12-01

    In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In recent decades, several remote sensing techniques have been developed to estimate surface soil moisture. In most studies associated with radar sensing of soil moisture, the proposed algorithms focus on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5 cm. However, for densely vegetated surfaces such as forests, the problem becomes more challenging because the vegetation canopy is a complex scattering environment. For this reason, only a few studies in the literature have focused on retrieving soil moisture under a vegetation canopy. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their study area, double-bounce scattering between trunks and ground appeared to be the most important mechanism, so they implemented parametric models of radar backscatter for double-bounce using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating soil moisture under agricultural vegetation using L-band polarimetric SAR data, applying polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on a physical formulation of the dominant scattering mechanisms and three parameters that integrate the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of the forest canopy based on the Distorted Born Approximation (DBA). The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface scattering for the three polarized backscattering
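
    The abstract gives the model's structure (three equations in three unknowns for the polarimetric channels) but not its functional form. The sketch below illustrates only the inversion step, substituting a deliberately simplified, hypothetical forward model for the DBA-derived equations:

        import numpy as np
        from scipy.optimize import least_squares

        # Toy forward model: three backscatter channels as functions of three
        # unknowns (soil moisture ms, vegetation water content vw, roughness s).
        # These functional forms are placeholders, NOT the DBA equations.
        def forward(x):
            ms, vw, s = x
            hh = 0.6 * ms * s + 0.3 * vw    # surface + volume (hypothetical)
            vv = 0.8 * ms * s + 0.2 * vw
            hv = 0.5 * vw + 0.1 * ms        # volume-dominated (hypothetical)
            return np.array([hh, vv, hv])

        obs = forward(np.array([0.25, 3.0, 0.8]))   # synthetic "observations"

        # Invert: find the parameter vector minimizing the model-data residual.
        sol = least_squares(lambda x: forward(x) - obs, x0=[0.1, 1.0, 0.5],
                            bounds=([0, 0, 0], [0.5, 8.0, 3.0]))
        print(np.round(sol.x, 3))           # recovers ms, vw, s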

  4. An extended technology acceptance model for detecting influencing factors: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Mohamd Hakkak

    2013-11-01

    Full Text Available The rapid diffusion of the Internet has radically changed the delivery channels applied by the financial services industry. The aim of this study is to identify the factors that encourage customers to adopt online banking in Khorramabad. The research constructs are developed based on the technology acceptance model (TAM) and incorporate additional important control variables. The model is empirically verified to study the factors influencing the online banking adoption behavior of 210 customers of Tejarat Bank in Khorramabad. The findings of the study suggest that the quality of the internet connection, awareness of online banking and its benefits, social influence and computer self-efficacy have significant impacts on the perceived usefulness (PU) and perceived ease of use (PEOU) of online banking. Trust and resistance to change also have a significant impact on the attitude towards the likelihood of adopting online banking.

  5. An empirical model for estimating solar radiation in the Algerian Sahara

    Science.gov (United States)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work applies the empirical model R.sun to evaluate solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N, 0°11 W), and compares them with measurements at the site. The results of this comparison are important for investment studies of solar systems (e.g., concentrated solar power plants for electricity production) and for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), the root mean square error (RMSE) and the coefficient of determination. The results show that, for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44%, the daily mean bias error is -7.94% and the daily root mean square error is 12.31%.
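
    For reference, the accuracy indicators quoted in records like this one are simple functions of the paired measured/modeled series; a generic sketch (all values synthetic):

        import numpy as np

        def validation_stats(measured, modeled):
            """Relative MBE, RMSE and MAPE in percent, plus correlation R."""
            measured = np.asarray(measured, dtype=float)
            modeled = np.asarray(modeled, dtype=float)
            diff = modeled - measured
            mbe = 100 * diff.mean() / measured.mean()
            rmse = 100 * np.sqrt((diff ** 2).mean()) / measured.mean()
            mape = 100 * np.abs(diff / measured).mean()
            r = np.corrcoef(measured, modeled)[0, 1]
            return mbe, rmse, mape, r

        # Synthetic daily global radiation values (MJ m-2 day-1).
        obs = [18.2, 21.5, 24.9, 26.3, 25.1]
        mod = [17.4, 20.8, 24.1, 25.0, 24.2]
        print([round(v, 2) for v in validation_stats(obs, mod)])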

  6. A semi-empirical molecular orbital model of silica, application to radiation compaction

    International Nuclear Information System (INIS)

    Tasker, P.W.

    1978-11-01

    Semi-empirical molecular-orbital theory is used to calculate the bonding in a cluster of two SiO4 tetrahedra, with the outer bonds saturated by pseudo-hydrogen atoms. The basic properties of the cluster, the bond energies and the band gap, are calculated using a very simple parameterisation scheme. The resulting cluster is used to study the rebonding that occurs when an oxygen vacancy is created. It is suggested that a vacancy model is capable of reproducing the observed differences between quartz and vitreous silica, and the calculations show that the compaction effect observed in the glass is of a magnitude compatible with the relaxations around the vacancy. More detailed lattice models will be needed to examine this mechanism further. (author)

  7. Use of empirically based corrosion model to aid steam generator life management

    Energy Technology Data Exchange (ETDEWEB)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W

    2000-07-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both the susceptibility to and the rates of corrosion, and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combining such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  8. Use of empirically based corrosion model to aid steam generator life management

    International Nuclear Information System (INIS)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W.

    2000-01-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both the susceptibility to and the rates of corrosion, and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combining such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  9. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain results in a significant spread in the single-bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity as a function of the bunch's position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios with different peak and integrated luminosities. The most important options for the period after Long Shutdowns (LS) 1 and 2 are evaluated and compared.

  10. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enable the high-throughput survey of mRNA expression profiles at the genomic level; however, the data present a challenging statistical problem because of the large number of transcripts with small sample sizes. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of microarray data, no model can explain the data fully. It is generally difficult to scrutinize irregular patterns of expression that are not expected by the usual gene-by-gene statistical models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure, which can be regarded as an 'evidence-based' weighted (quasi-) likelihood inference. The weight of a transcript t is described as a power function of its likelihood, fβ(yt|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10−5) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize irregular patterns of expression and diagnose the model

  11. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Natalia MOROIANU-DUMITRESCU

    2015-06-01

    Full Text Available During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the new member states from Central and Eastern Europe. The most common empirical approaches use β- and σ-convergence, originally developed in a series of neo-classical models. To date, the EU integration process has proven to be accompanied by an increase in regional inequalities. In order to determine whether there has been a similar increase in the inequalities between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides an empirical modelling of economic convergence that evaluates the level and evolution of inter-regional inequalities over a period of more than a decade, from 1995 to 2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representations, including non-linear regressions, and the associated tables summarizing the numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasised convergence in the middle time subinterval can be correlated with the drastic pre-accession changes on the economic, political and social levels, and with the opening of the Schengen borders to the Romanian labor force in 2002.
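
    σ-convergence of the kind studied here is commonly diagnosed by tracking the population-weighted coefficient of variation of regional GDP per capita over time: a falling value indicates convergence. A minimal sketch with made-up county figures:

        import numpy as np

        def weighted_cv(gdp_pc, pop):
            """Population-weighted coefficient of variation of GDP per capita."""
            gdp_pc = np.asarray(gdp_pc, dtype=float)
            w = np.asarray(pop, dtype=float) / np.sum(pop)
            mean = np.sum(w * gdp_pc)
            var = np.sum(w * (gdp_pc - mean) ** 2)
            return np.sqrt(var) / mean

        pop = [650, 420, 300, 510]          # thousands of inhabitants (synthetic)
        gdp_1995 = [2.1, 1.4, 1.1, 1.8]     # GDP per capita, arbitrary units
        gdp_2011 = [6.0, 4.9, 4.1, 5.6]

        cv95, cv11 = weighted_cv(gdp_1995, pop), weighted_cv(gdp_2011, pop)
        print(f"1995: {cv95:.3f}, 2011: {cv11:.3f}")  # lower value => convergence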

  12. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    Science.gov (United States)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used to illustrate the spatial extent of flooding for first responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off between accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g., HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g., AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g., overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g., high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of the surface roughness parameter value), the influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and the effect of spatial resolution in topographic data (e.g., 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that, in flat, urbanized areas with controlled/managed river channels, semi-empirical models tend to underestimate the inundation extent by around 40% relative to the physical models, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat, non-urbanized terrain.

  14. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffers changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model, the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once its inputs are known, is an accurate moisture buffering model.
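
    The parameter-extraction step described above is a least-squares fit of an analytical response to the measured absorption curve. The record does not reproduce the analytical solution, so the sketch below fits a generic first-order (exponential) step response as a stand-in:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical stand-in for the EMPD analytical solution: a first-order
        # step response with amplitude A and time constant tau.
        def absorption(t, A, tau):
            return A * (1.0 - np.exp(-t / tau))

        t = np.linspace(0, 48, 25)                   # hours since the RH step
        true = absorption(t, 120.0, 9.0)             # grams of moisture (synthetic)
        meas = true + np.random.default_rng(0).normal(0, 3, t.size)

        (A, tau), _ = curve_fit(absorption, t, meas, p0=[100.0, 5.0])
        print(f"A = {A:.1f} g, tau = {tau:.1f} h")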

  15. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, so different models that use the climatological parameters of a location are required to estimate global solar radiation. Long-period solar radiation data are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar), which together encompass most of the different geographical features of Pakistan. For this reason, in this study the mean monthly global solar radiation has been estimated using the empirical models of Angstrom, FAO, Glover-McCulloch, and Sangeeta & Tiwari, chosen for their diversity of approach and their use of climatic and geographical parameters. Empirical constants for these models have been estimated, and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation for locations with similar conditions.
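
    The Angstrom-type models referred to here relate the clearness index to the relative sunshine duration, classically H/H0 = a + b(S/S0), with the empirical constants a and b obtained by linear regression on measured data. A sketch with synthetic monthly values (the fitted coefficients are not the paper's):

        import numpy as np

        # Angstrom-Prescott regression: H/H0 = a + b * (S/S0), where H is measured
        # global radiation, H0 extraterrestrial radiation, S measured sunshine
        # hours and S0 the maximum possible sunshine hours.
        s_ratio = np.array([0.55, 0.62, 0.68, 0.74, 0.79, 0.83])  # S/S0 (synthetic)
        h_ratio = np.array([0.48, 0.52, 0.55, 0.60, 0.63, 0.66])  # H/H0 (synthetic)

        b, a = np.polyfit(s_ratio, h_ratio, 1)   # slope first, then intercept
        print(f"a = {a:.3f}, b = {b:.3f}")

        # Estimate H for a month given H0 and the measured sunshine fraction.
        H0, frac = 35.2, 0.71                    # MJ m-2 day-1, S/S0
        print(f"H = {H0 * (a + b * frac):.1f} MJ m-2 day-1")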

  16. EMPIRICAL MODELS FOR PERFORMANCE OF DRIPPERS APPLYING CASHEW NUT PROCESSING WASTEWATER

    Directory of Open Access Journals (Sweden)

    KETSON BRUNO DA SILVA

    2016-01-01

    Full Text Available The objective of this work was to develop empirical models for the hydraulic performance of drippers operating with cashew nut processing wastewater as a function of operating time, operating pressure and effluent quality. The experiment consisted of two factors, dripper type (D1 = 1.65 L h-1, D2 = 2.00 L h-1 and D3 = 4.00 L h-1) and operating pressure (70, 140, 210 and 280 kPa), with three replications. The flow variation coefficient (FVC), the distribution uniformity coefficient (DUC) and the physicochemical and biological characteristics of the effluent were evaluated every 20 hours up to 160 hours of operation. Data were interpreted through simple and multiple linear stepwise regression models. The regression models fitted to the FVC and DUC as a function of operating time were square root, linear and quadratic, in 17%, 17% and 8%, and 17%, 17% and 0% of cases, respectively. The regression models fitted to the FVC and DUC as a function of operating pressure were square root, linear and quadratic, in 11%, 22% and 0%, and 0%, 22% and 11% of cases, respectively. Multiple linear regression showed that the dissolved solids content is the main wastewater characteristic interfering with the FVC and DUC values of the drip units D1 (1.65 L h-1) and D3 (4.00 L h-1) operating at a work pressure of 70 kPa (P1).

  17. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.; Lopez, A.; Huntingford, C.; Allen, M. R.

    2013-01-01

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as the El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short-memory processes exemplified by an AR(1) model to long-memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
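
    The two noise benchmarks named here differ in memory: an AR(1) process decorrelates geometrically fast, while a fractionally differenced process has slowly decaying autocorrelation. A sketch simulating both (parameter values are illustrative, not those of the paper):

        import numpy as np

        rng = np.random.default_rng(1)
        n, phi, d = 2000, 0.6, 0.3   # length, AR(1) coefficient, fractional order

        # Short-memory AR(1): x_t = phi * x_{t-1} + e_t
        e = rng.normal(size=n)
        ar1 = np.zeros(n)
        for t in range(1, n):
            ar1[t] = phi * ar1[t - 1] + e[t]

        # Long-memory fractional noise: x_t = (1 - B)^(-d) e_t, built from the
        # MA coefficients psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
        psi = np.ones(n)
        for k in range(1, n):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        fd = np.convolve(rng.normal(size=n), psi)[:n]

        lag = 100  # autocorrelation at a long lag: ~0 for AR(1), clearly > 0 for fd
        for name, x in [("AR(1)", ar1), ("frac. diff.", fd)]:
            print(name, np.corrcoef(x[:-lag], x[lag:])[0, 1])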

  18. Identifying mechanisms that structure ecological communities by snapping model parameters to empirically observed tradeoffs.

    Science.gov (United States)

    Thomas Clark, Adam; Lehman, Clarence; Tilman, David

    2018-04-01

    Theory predicts that interspecific tradeoffs are primary determinants of coexistence and community composition. Using information from empirically observed tradeoffs to augment the parametrisation of mechanism-based models should therefore improve model predictions, provided that the tradeoffs and mechanisms are chosen correctly. We developed and tested such a model for 35 grassland plant species using monoculture measurements of three species characteristics related to nitrogen uptake and retention, which previous experiments indicate are important at our site. Matching classical theoretical expectations, these characteristics defined a distinct tradeoff surface, and models parameterised with these characteristics closely matched observations from experimental multi-species mixtures. Importantly, predictions improved significantly when we incorporated information from the tradeoffs by 'snapping' characteristics to the nearest location on the tradeoff surface, suggesting that the tradeoffs and mechanisms we identify are important determinants of local community structure. This 'snapping' method could therefore constitute a broadly applicable test for identifying influential tradeoffs and mechanisms. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
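
    The 'snapping' step replaces each species' measured trait vector with the nearest point on the fitted tradeoff surface. If the surface is approximated as a plane in trait space (an assumption made here for illustration; the paper's surface may be curved), snapping is an orthogonal projection:

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic 3-trait data scattered about a plane z = a*x + b*y + c.
        x = rng.uniform(0, 1, 35)
        y = rng.uniform(0, 1, 35)
        z = 0.8 * x - 0.5 * y + 0.3 + rng.normal(0, 0.05, 35)
        traits = np.column_stack([x, y, z])

        # Fit the plane by least squares.
        A = np.column_stack([x, y, np.ones_like(x)])
        a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]

        # Orthogonally project each point onto the plane a*x + b*y - z + c = 0.
        n_vec = np.array([a, b, -1.0])
        dist = (traits @ n_vec + c) / (n_vec @ n_vec)
        snapped = traits - np.outer(dist, n_vec)
        print(np.round(snapped[:3], 3))   # "snapped" trait vectors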

  19. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.

    2013-04-27

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as the El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short-memory processes exemplified by an AR(1) model to long-memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.

  20. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than is implied by empirical ground motion models. One potential origin of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and to other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best-fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter, and of its scaling with magnitude, for a suite of global strike-slip earthquakes. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
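
    Studies of this kind typically fit an omega-n source spectrum, A(f) = Ω0 / (1 + (f/fc)^n), to the empirical-Green's-function spectral ratios; fixing n = 2 when the true falloff is slower pushes the fitted fc upward, as the record notes. A hedged sketch of the two fits on synthetic data:

        import numpy as np
        from scipy.optimize import curve_fit

        def spectrum(f, omega0, fc, n):
            return omega0 / (1.0 + (f / fc) ** n)

        f = np.logspace(-1, 1, 60)                   # frequency, Hz
        truth = spectrum(f, 1.0, 0.5, 1.6)           # "true" spectrum with n = 1.6
        obs = truth * np.exp(np.random.default_rng(3).normal(0, 0.05, f.size))

        # Free-n fit vs. a fit with the falloff rate fixed at 2.
        (_, fc_free, n_free), _ = curve_fit(spectrum, f, obs, p0=[1.0, 1.0, 2.0])
        (_, fc_fixed), _ = curve_fit(lambda f, w, fc: spectrum(f, w, fc, 2.0),
                                     f, obs, p0=[1.0, 1.0])
        print(f"free n: fc = {fc_free:.2f} (n = {n_free:.2f}); "
              f"n fixed at 2: fc = {fc_fixed:.2f}")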

  1. A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.

    Science.gov (United States)

    Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E

    2017-12-01

    The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.

  2. A Semi-empirical Model of the Stratosphere in the Climate System

    Science.gov (United States)

    Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.

    2014-12-01

    Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmosphere-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.

  3. A New Empirical Model for Short-Term Forecasting of the Broadband Penetration: A Short Research in Greece

    Directory of Open Access Journals (Sweden)

    Salpasaranis Konstantinos

    2011-01-01

    Full Text Available The objective of this paper is to present a short study of overall broadband penetration in Greece. A new empirical deterministic model is proposed for the short-term forecasting of cumulative broadband adoption. The fitting performance of the model is compared with that of several widely used diffusion models for the cumulative adoption of new telecommunication products, namely the Logistic, Gompertz, Flexible Logistic (FLOG), Box-Cox, Richards, and Bass models. The fitting process is carried out on official broadband penetration data for Greece. In conclusion, compared with these models, the empirical model yields good enough statistical indicators of fitting and forecasting performance. The study also stresses the need for further research and performance analysis of the model in other, more mature broadband markets.
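
    The diffusion benchmarks listed here are S-shaped curves fitted to the cumulative adoption series; the logistic model, for instance, is N(t) = K / (1 + exp(-r(t - t0))). A sketch fitting it to synthetic penetration figures (not the Greek data):

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            """Cumulative adoption: saturation K, growth rate r, midpoint t0."""
            return K / (1.0 + np.exp(-r * (t - t0)))

        t = np.arange(12)                                 # reporting periods
        pen = np.array([0.4, 0.7, 1.2, 2.0, 3.3, 5.2, 7.6,
                        10.2, 12.6, 14.4, 15.6, 16.3])    # penetration, % (synthetic)

        (K, r, t0), _ = curve_fit(logistic, t, pen, p0=[20.0, 0.5, 6.0])
        print(f"K = {K:.1f}%, r = {r:.2f}, t0 = {t0:.1f}")
        print("next-period forecast:", round(logistic(12, K, r, t0), 1), "%")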

  4. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Full Text Available Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated a positive and significant linear correspondence (R2 = 0.75) between the two sets of estimates, expressed in MJ m−2 yr−1. Modeled EEMT values were consistently greater than the empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of the energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimates of modeled EEMT relative to the empirical measures. There was a significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  5. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    Science.gov (United States)

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217

  6. Mathematical method to build an empirical model for inhaled anesthetic agent wash-in

    Directory of Open Access Journals (Sweden)

    Grouls René EJ

    2011-06-01

    Full Text Available Abstract Background The wide range of fresh gas flow - vaporizer setting (FGF - FD) combinations used by different anesthesiologists during the wash-in period of inhaled anesthetics indicates that the selection of FGF and FD is based on habit and personal experience. An empirical model could rationalize FGF - FD selection during wash-in. Methods During model derivation, 50 ASA PS I-II patients received desflurane in O2 with an ADU® anesthesia machine using a random combination of a fixed FGF - FD setting. The resulting course of the end-expired desflurane concentration (FA) was modeled with Excel Solver, with patient age, height, and weight as covariates; NONMEM was used to check for parsimony. The resulting equation was solved for FD and prospectively tested by having the formula calculate the FD to be used by the anesthesiologist after randomly selecting an FGF, a target FA (FAt), and a specified time interval (1 - 5 min) after turning on the vaporizer by which FAt had to be reached. The following targets were tested: desflurane FAt 3.5% after 3.5 min (n = 40), 5% after 5 min (n = 37), and 6% after 4.5 min (n = 37). Results Solving the equation derived during model development for FD yields a closed-form expression in FGF, FAt, patient height (Ht) and time, built from exponential terms in FGF (with fitted rate constants -0.23 and 0.24) and in time (with a fitted time constant of 4.08 min), together with the fitted constants 2.55, 40.46 and 39.29; the full expression is given in the original article. Only height (Ht) could be retained as a significant covariate. Median performance error and median absolute performance error were -2.9 and 7.0% in the 3.5% after 3.5 min group, -3.4 and 11.4% in the 5% after 5 min group, and -16.2 and 16.2% in the 6% after 4.5 min group, respectively. Conclusions An empirical model can be used to predict the FGF - FD combinations that attain a target end-expired anesthetic agent concentration with clinically acceptable accuracy within the first 5 min of the start of administration. The sequences are easily calculated in an Excel file and simple to

  7. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Directory of Open Access Journals (Sweden)

    E. A

    2011-08-01

    Full Text Available A similar-parameters interpolation method and an empirical orthogonal function (EOF) analysis are used to construct empirical models of the ionospheric foF2, using observational data from three ground-based ionosonde stations in Japan: Wakkanai (geographic 45.4° N, 141.7° E), Kokubunji (geographic 35.7° N, 140.1° E) and Yamagawa (geographic 31.2° N, 130.6° E), for the years 1971-1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that missing data in the original foF2 can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term trends. The 1st-order EOF coefficient A1 reflects the components with solar cycle variation. A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, neither of which is as pronounced as the solar cycle variation. The 2nd-order coefficient A2 contains mainly annual variation components. The 3rd-order coefficient A3 and the 4th-order coefficient A4 contain both annual and semi-annual variation components. The seasonal variation, solar rotation oscillation and small-scale irregularities are also included in the 4th-order coefficient A4. The amplitude range and the tendencies of all these coefficients depend on the levels of solar and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between the observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the
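
    EOF analysis of a space-time dataset such as hourly foF2 is usually computed with a singular value decomposition: the left singular vectors give the base functions and their projections give the coefficient time series (A1, A2, ...). A generic sketch on synthetic data:

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic data matrix: rows = 24 local-time hours, columns = days.
        # A stand-in for an foF2 dataset, not ionosonde data.
        hours, days = 24, 365
        t = np.arange(days)
        diurnal = np.sin(np.linspace(0, 2 * np.pi, hours))[:, None]
        annual = (1.0 + 0.3 * np.sin(2 * np.pi * t / 365.25))[None, :]
        X = 6.0 * diurnal * annual + rng.normal(0, 0.2, (hours, days))

        # EOF decomposition: remove the time mean, then SVD.
        Xa = X - X.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(Xa, full_matrices=False)

        base1 = U[:, 0]                  # 1st base function (diurnal shape)
        A1 = s[0] * Vt[0]                # 1st coefficient time series
        print(f"EOF-1 explains {100 * s[0]**2 / np.sum(s**2):.1f}% of the variance")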

  8. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology in identifying clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the

  9. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    NARCIS (Netherlands)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-01-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of

  10. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    Science.gov (United States)

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  11. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    Science.gov (United States)

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  12. Tracking the sleep onset process: an empirical model of behavioral and physiological dynamics.

    Directory of Open Access Journals (Sweden)

    Michael J Prerau

    2014-10-01

    Full Text Available The sleep onset process (SOP) is a dynamic process correlated with a multitude of behavioral and physiological markers. A principled analysis of the SOP can serve as a foundation for answering questions of fundamental importance in basic neuroscience and sleep medicine. Unfortunately, current methods for analyzing the SOP fail to account for the overwhelming evidence that the wake/sleep transition is governed by continuous, dynamic physiological processes. Instead, current practices coarsely discretize sleep both in terms of state, where it is viewed as a binary (wake or sleep) process, and in time, where it is viewed as a single time point derived from subjectively scored stages in 30-second epochs, effectively eliminating SOP dynamics from the analysis. These methods also fail to integrate information from both behavioral and physiological data. It is thus imperative to resolve the mismatch between the physiological evidence and the analysis methodologies. In this paper, we develop a statistically and physiologically principled dynamic framework and empirical SOP model, combining simultaneously recorded physiological measurements with behavioral data from a novel breathing task requiring no arousing external sensory stimuli. We fit the model using data from healthy subjects, and estimate the instantaneous probability that a subject is awake during the SOP. The model successfully tracked physiological and behavioral dynamics for individual nights, and significantly outperformed the instantaneous transition models implicit in clinical definitions of sleep onset. Our framework also provides a principled means for cross-subject data alignment as a function of wake probability, allowing us to characterize and compare SOP dynamics across different populations. This analysis enabled us to quantitatively compare the EEG of subjects showing reduced alpha power with the remaining subjects at identical response probabilities. Thus, by incorporating both

  13. Empirical radiation belt models: Comparison with in situ data and implications for environment definition

    Science.gov (United States)

    de Soria-Santacruz Pich, Maria; Jun, Insoo; Evans, Robin

    2017-09-01

    The empirical AP8/AE8 model has been the de facto Earth's radiation belts engineering reference for decades. The need from the community for a better model incubated the development of AP9/AE9/SPM, which addresses several shortcomings of the old model. We provide additional validation of AP9/AE9 by comparing in situ electron and proton data from Jason-2, Polar Orbiting Environmental Satellites (POES), and the Van Allen Probes spacecraft with the 5th, 50th, and 95th percentiles from AE9/AP9 and with the model outputs from AE8/AP8. The relatively short duration of Van Allen Probes and Jason-2 missions means that their measurements are most certainly the result of specific climatological conditions. In low Earth orbit (LEO), the Jason-2 proton flux is better reproduced by AP8 compared to AP9, while the POES electron data are well enveloped by AE9 5th and 95th percentiles. The shape of the South Atlantic anomaly (SAA) from Jason-2 data is better captured by AP9 compared to AP8, while the peak SAA flux is better reproduced by AP8. The <1.5 MeV inner belt electrons from Magnetic Electron Ion Spectrometer (MagEIS) are well enveloped by AE9 5th and 95th percentiles, while AE8 overpredicts the measurements. In the outer radiation belt, MagEIS and Relativistic Electron and Proton Telescope (REPT) electrons closely follow the median estimate from AE9, while AP9 5th and 95th percentiles generally envelope REPT proton measurements in the inner belt and slot regions. While AE9/AP9 offer the flexibility to specify the environment with different confidence levels, the dose and trapped proton peak flux for POES and Jason-2 trajectories from the AE9/AP9 50th percentile and above are larger than the estimates from the AE8/AP8 models.

  14. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    Full Text Available To highlight the shortcomings of conventional extraction methods, this study investigates the efficacy of Microwave Assisted Extraction (MAE) for bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW powers and drying times. Kinetics was analyzed through empirical modeling of the drying data against 10 conventional thin-layer drying equations, which were further improved through the incorporation of Arrhenius-, exponential- and linear-type expressions. In this way, 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that evaluated MAE against Soxhlet extraction. MAE of A. vasica gave similar yields with a drastic reduction in extraction time (210 s, as against the average time of 10 h in the Soxhlet apparatus). The extract yield for MAE of C. citratus was higher than that of the conventional process, with optimal parameters determined to be a 20 g sample load, a 1:20 sample/solvent ratio, an extraction time of 150 s and an output power of 300 W. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.

  15. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower-level genetic programs are used to optimize coevolving populations in parallel, while the higher-level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower-level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation, or combination of mutations, that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics

  16. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigates the influence of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets, covering the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results show a substantial reduction in the fractional differencing parameters after the inclusion of structural changes during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden changes in volatility performed better in the estimation and specification evaluations.

  17. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and on estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the F3x4 approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the F3x4-style estimators.
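
    The correction starts from position-specific nucleotide frequencies and removes the probability mass that a naive product estimator assigns to stop codons. The sketch below shows that renormalization in a simplified form (the paper's corrected estimator additionally adjusts the position-specific frequencies themselves):

        from itertools import product

        STOPS = {"TAA", "TAG", "TGA"}   # universal genetic code
        NUCS = "ACGT"

        # Toy position-specific nucleotide frequencies (codon positions 1-3);
        # in practice these come from counts observed in the alignment.
        f = [
            {"A": 0.30, "C": 0.20, "G": 0.30, "T": 0.20},
            {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
            {"A": 0.20, "C": 0.30, "G": 0.20, "T": 0.30},
        ]

        # Naive product (F3x4-style) frequencies, then renormalize over sense
        # codons only, so stop codons get zero equilibrium frequency.
        raw = {"".join(c): f[0][c[0]] * f[1][c[1]] * f[2][c[2]]
               for c in product(NUCS, repeat=3)}
        stop_mass = sum(raw[s] for s in STOPS)
        pi = {c: (0.0 if c in STOPS else p / (1.0 - stop_mass))
              for c, p in raw.items()}
        print(round(sum(pi.values()), 6))   # sense-codon frequencies sum to 1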

  18. Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model

    Directory of Open Access Journals (Sweden)

    Andreja Križman

    2010-05-01

    Full Text Available The purpose of this paper is to present the results of a study, conducted in the Slovene logistics market, of conflicts and opportunism as disturbing factors, examining their impact on cooperation in logistics outsourcing performance. Relationship variables that directly or indirectly affect logistics performance are proposed, and hypotheses based on causal linkages among the constructs are conceptualized. On the basis of the extant literature and new arguments derived from in-depth interviews with logistics experts, including providers and customers, the measurement and structural models are empirically analyzed. Existing measurement scales for the constructs are slightly modified for this analysis. Purification testing and measurement of validity and reliability are performed. Multivariate statistical methods are utilized and the hypotheses are tested. The results show that conflicts have a significantly negative impact on cooperation between customers and logistics service providers (LSPs), while opportunism does not play an important role in these relationships. The observed antecedents of logistics outsourcing performance in the model account for 58.4% of the variance of goal achievement and 36.5% of the variance of exceeded goals. KEYWORDS: logistics outsourcing performance; logistics customer-provider relationships; conflicts and cooperation in logistics outsourcing; PLS path modelling

  19. MERGANSER: an empirical model to predict fish and loon mercury in New England lakes.

    Science.gov (United States)

    Shanley, James B; Moore, Richard; Smith, Richard A; Miller, Eric K; Simcox, Alison; Kamman, Neil; Nacci, Diane; Robinson, Keith; Johnston, John M; Hughes, Melissa M; Johnston, Craig; Evers, David; Williams, Kate; Graham, John; King, Susannah

    2012-04-17

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes. We modeled lakes larger than 8 ha (4404 lakes), using 3470 fish (12 species) and 253 loon Hg concentrations from 420 lakes. MERGANSER predictor variables included Hg deposition, watershed alkalinity, percent wetlands, percent forest canopy, percent agriculture, drainage area, population density, mean annual air temperature, and watershed slope. The model returns fish or loon Hg for user-entered species and fish length. MERGANSER explained 63% of the variance in fish and loon Hg concentrations. MERGANSER predicted that 32-cm smallmouth bass had a median Hg concentration of 0.53 μg g⁻¹ (root-mean-square error 0.27 μg g⁻¹) and exceeded EPA's recommended fish Hg criterion of 0.3 μg g⁻¹ in 90% of New England lakes. Common loon had a median Hg concentration of 1.07 μg g⁻¹ and was in the moderate or higher risk category of >1 μg g⁻¹ Hg in 58% of New England lakes. MERGANSER can be applied to target fish advisories to specific unmonitored lakes, and for scenario evaluation, such as the effect of changes in Hg deposition, land use, or warmer climate on fish and loon mercury.
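
    As described, MERGANSER is an ordinary least-squares multiple regression over lake and watershed features. A minimal sketch of fitting such a model with statsmodels follows; the file and column names are hypothetical stand-ins for the predictors listed above, and the log transform of Hg is a common modeling choice assumed here, not a detail stated in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one Hg measurement per row, joined to the watershed
# predictors named above (file and column names are assumptions).
obs = pd.read_csv("lake_hg_observations.csv")

# Least-squares multiple regression of log-transformed tissue Hg; the log
# transform is a common choice for concentration data, assumed here.
fit = smf.ols(
    "np.log(hg_ug_per_g) ~ hg_deposition + alkalinity + pct_wetland"
    " + pct_canopy + pct_agriculture + drainage_area + pop_density"
    " + mean_air_temp + slope + species + length_cm",
    data=obs,
).fit()
print(fit.summary())
```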

  20. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

    There are linear accelerators (the Sagittaire and Saturne accelerators produced by the Compagnie Generale de Radiologie (CGR/MeV) Corporation) which produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the unscanned electron beams, which were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation of calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside the 50% isodose curve, but less than 8% outside the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries.)

  1. Testing an empirically derived mental health training model featuring small groups, distributed practice and patient discussion.

    Science.gov (United States)

    Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J

    2009-02-01

    Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques: repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge of, skills in, and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.

  2. Detailed empirical models for the winds of early-type stars

    International Nuclear Information System (INIS)

    Olson, G.L.; Castor, J.I.

    1981-01-01

    Owing to the recent accumulation of ultraviolet data from the IUE satellite, of X-ray data from the Einstein (HEAO 2) satellite, of visible data from ground-based electronic detectors, and of radio data from the Very Large Array (VLA) telescope, it is becoming possible to build much more complete models for the winds of early-type stars. The present work takes the empirical approach of assuming that there exists a coronal region at the base of a cool wind (T_e ≈ T_eff). This is an extension of previous papers by Olson and by Cassinelli and Olson; however, refinements to the model are presented, and the model is applied to seven O stars and one B0 star. Ionization equilibria are computed to match the line strengths found in UV spectra. The coronal fluxes that are required to produce the observed abundance of O⁵⁺ are compared to the X-ray fluxes observed by the Einstein satellite.

  3. Longitudinal hopping in intervehicle communication: Theory and simulations on modeled and empirical trajectory data

    Science.gov (United States)

    Thiemann, Christian; Treiber, Martin; Kesting, Arne

    2008-09-01

    Intervehicle communication enables vehicles to exchange messages within a limited broadcast range and thus self-organize into dynamic, geographically embedded wireless ad hoc networks. We study the longitudinal hopping mode, in which messages are transported by equipped vehicles driving in the same direction acting as relays. Given a finite communication range, we investigate the conditions under which messages can percolate through the network, i.e., a linked chain of relay vehicles exists between the sender and receiver. We simulate message propagation in different traffic scenarios and for different fractions of equipped vehicles. Simulations are done with both modeled and empirical traffic data. These results are used to test the limits of applicability of an analytical model assuming a Poissonian distance distribution between the relays. We find good agreement for homogeneous traffic scenarios and sufficiently low percentages of equipped vehicles. For higher percentages, the observed connectivity is higher than that of the model, while in stop-and-go traffic situations it is lower. We explain these results in terms of correlations of the distances between the relay vehicles. Finally, we introduce variable transmission ranges and find that this additional stochastic component generally increases connectivity compared to a deterministic transmission with the same mean.
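
    The analytical model's core assumption - Poisson-distributed relay positions, hence exponential gaps - can be probed with a short Monte Carlo sketch. All parameter values below are illustrative assumptions, not figures from the study.

```python
import random

def percolation_prob(density, comm_range, distance, trials=20_000):
    """Monte Carlo estimate of message percolation probability, assuming
    gaps between consecutive equipped vehicles are i.i.d. exponential
    (a Poisson field). A message percolates if no gap on the way from
    sender to receiver exceeds the communication range (the receiver is
    treated as reached once the relay chain passes its position)."""
    hits = 0
    for _ in range(trials):
        pos = 0.0
        while pos < distance:
            gap = random.expovariate(density)  # mean gap = 1/density
            if gap > comm_range:
                break  # chain of relays is broken
            pos += gap
        else:
            hits += 1
    return hits / trials

# e.g. one equipped vehicle per 100 m, 200 m radio range, 5 km to cover
print(percolation_prob(density=0.01, comm_range=200.0, distance=5000.0))
```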

  4. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    Science.gov (United States)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this end, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of the two countries considered. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ an invariant probabilistic result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is long, ranging from 1970 to 2014.
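
    For readers unfamiliar with the building block, a Mixed Poisson Process is a Poisson process whose intensity is itself a random variable. A minimal simulation sketch follows, with a gamma mixing law chosen purely for illustration (the abstract does not specify the mixing distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_poisson_counts(shape, scale, t, n_paths):
    """Counts N(t) of a mixed Poisson process: draw a random intensity
    Lambda ~ Gamma(shape, scale) for each path, then, conditionally on
    Lambda, N(t) ~ Poisson(Lambda * t). With a gamma mixing law the
    marginal of N(t) is negative binomial (overdispersed)."""
    lam = rng.gamma(shape, scale, size=n_paths)
    return rng.poisson(lam * t)

counts = mixed_poisson_counts(shape=2.0, scale=1.5, t=10.0, n_paths=100_000)
print(counts.mean(), counts.var())  # variance well above mean: overdispersion
```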

  5. Energy levies and endogenous technology in an empirical simulation model for the Netherlands

    International Nuclear Information System (INIS)

    Den Butter, F.A.G.; Dellink, R.B.; Hofkes, M.W.

    1995-01-01

    The belief in beneficial green tax swaps has been particularly prevalent in Europe, where high levels of unemployment and strong preferences for a large public sector (and hence high tax levels) accentuate the desire for revenue-neutral, growth-enhancing reductions in labor income taxes. In this context an empirical simulation model is developed for the Netherlands, especially designed to reckon with the effects of changes in prices on the level and direction of technological progress. It appears that the so-called employment double dividend, i.e. increasing employment and decreasing energy use at the same time, can occur. A general levy yields stronger effects than a levy on household use only. However, the stronger effects of a general levy on employment and energy use are accompanied by shrinking production and, in the longer run, by decreasing disposable income of workers or non-workers. 1 fig., 4 tabs., 1 appendix, 20 refs

  6. Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration

    Science.gov (United States)

    Revelle, D. O.; Ceplecha, Z.

    2002-11-01

    A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation including two types of fragmentation is outlined. The FM was applied to observational data (height as function of time and the light curve) of Lost City, Innisfree and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as function of time with ±13 m and to the observed light curve with ±0.17 magnitude. Corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of FM to the Lost City bolide we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.

  7. Empirical modeling of high-intensity electron beam interaction with materials

    Science.gov (United States)

    Koleva, E.; Tsonevska, Ts; Mladenov, G.

    2018-03-01

    The paper proposes an empirical modeling approach for predicting, and subsequently optimizing, the exact cross-sectional shape of a welded seam obtained by electron beam welding. The approach takes into account the electron beam welding process parameters, namely electron beam power, welding speed, and the distances from the magnetic lens of the electron gun to the focus position of the beam and to the surface of the samples treated. The results are verified by comparison with experimental results for type 1H18NT stainless steel samples. The beam power and welding speed ranges considered are 4.2–8.4 kW and 3.333–13.333 mm/s, respectively.
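
    The abstract does not state the exact model form, so the sketch below uses a second-order polynomial response surface - a common choice for this kind of empirical process model - over the two quoted parameter ranges, with the two gun distances held fixed for brevity. The response values are placeholder values for illustration only, not measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical design over the quoted ranges: beam power (kW) and
# welding speed (mm/s).
X = np.array([[4.2, 3.333], [4.2, 8.0], [4.2, 13.333],
              [6.3, 3.333], [6.3, 8.0], [6.3, 13.333],
              [8.4, 3.333], [8.4, 8.0], [8.4, 13.333]])
# Placeholder seam descriptor (e.g. penetration depth, mm).
depth = np.array([4.8, 3.6, 2.9, 6.0, 4.7, 3.8, 7.1, 5.9, 4.9])

# Second-order polynomial response surface in the process parameters.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, depth)
print(model.predict([[7.0, 6.0]]))  # predicted depth at a new operating point
```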

  8. An empirical model for the study of employee participation and its influence on job satisfaction

    Directory of Open Access Journals (Sweden)

    Lucas Joan Pujol Cols

    2015-12-01

    Full Text Available This article analyzes the factors that influence employees' perceived possibilities of engaging in meaningful participation at three levels: the intra-group level, the institutional level, and directly within the leadership team of the organization. Twelve (12) interviews were conducted with teachers from the Social and Economic Sciences School of the Mar del Plata University (Argentina), holding different positions, areas and working hours. Based on the qualitative evidence, an empirical model was constructed that connects the different factors behind each manifestation of participation, establishing hypothetical relations between subgroups. Additionally, the article discusses the implications of participation, its relationship with job satisfaction, and the role of individual expectations regarding the participation opportunities each employee receives. Keywords: Participation, Job satisfaction, University, Expectations, Qualitative Analysis.

  9. Energy and the future of human settlement patterns: theory, models and empirical considerations

    Energy Technology Data Exchange (ETDEWEB)

    Zucchetto, J

    1983-11-01

    A review of the diverse literature pertaining to the organization of human settlements is presented, with special emphasis on the influence that energy may have on the concentration vs. dispersal of human populations. A simple, abstract energy-based model of urban growth is presented in order to capture some of the qualitative behavior of competition between an urban core and peripheral regions. Empirical difficulties associated with the determination of energy consumption and population density are illustrated with an analysis of counties in Florida. There is no hard evidence that large urban systems are inherently more energy efficient than small ones, so a future world of energy scarcity cannot be said to imply a selection for urban agglomeration.

  10. Tax design-tax evasion relationship in Serbia: New empirical approach to standard theoretical model

    Directory of Open Access Journals (Sweden)

    Ranđelović Saša

    2015-01-01

    Full Text Available This paper provides evidence on the impact of changes in income tax rates and in the degree of tax progressivity on the scale of labour tax evasion in Serbia, using a tax-benefit microsimulation model and econometric methods applied to 2007 Living Standard Measurement Survey data. The empirical analysis is based on the novel assumption that an individual's tax evasion decision depends on a change in disposable income, captured by the variation in their Effective Marginal Tax Rate (EMTR), rather than on a change in after-tax income. The results suggest that the elasticity of tax evasion to the EMTR equals -0.3, confirming Yitzhaki's theory, while the propensity to evade is decreasing in the level of wages and increasing in the level of self-employment income. The results also show that the introduction of revenue-neutral, progressive taxation of labour income would increase labour tax evasion by 1 percentage point.
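
    The EMTR at the heart of this identification strategy is straightforward to compute from any tax-benefit calculator. A minimal sketch follows, with a toy two-bracket schedule standing in for the microsimulation model (both the function and the rates are illustrative assumptions).

```python
def emtr(gross_income, disposable_fn, delta=1.0):
    """Effective marginal tax rate: the share of one extra unit of gross
    income lost to taxes and withdrawn benefits. `disposable_fn` stands
    in for the tax-benefit microsimulation model (hypothetical hook)."""
    d0 = disposable_fn(gross_income)
    d1 = disposable_fn(gross_income + delta)
    return 1.0 - (d1 - d0) / delta

# Toy schedule: 10% up to 50,000, 20% above (purely illustrative).
def toy_disposable(gross):
    tax = 0.10 * min(gross, 50_000) + 0.20 * max(gross - 50_000, 0.0)
    return gross - tax

print(emtr(40_000, toy_disposable))  # 0.10
print(emtr(60_000, toy_disposable))  # 0.20
```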

  11. Empirical models of the electron concentration of the ionosphere and their value for radio communications purposes

    International Nuclear Information System (INIS)

    Dudeney, J.R.; Kressman, R.I.

    1986-01-01

    Criteria for the development of empirical models of the ionospheric electron concentration vertical profile for radio communications purposes are discussed and used to evaluate and compare four contemporary schemes. Schemes must be optimized with respect to the quality of the profile match, the availability and simplicity of the external data required for profile specification, and numerical complexity, depending on the application. It is found that the Dudeney (1978) scheme provides the best general performance, while the Booker (1977) technique is optimized for precision radio wave studies where an observed profile is available. The performance of the CCIR (Bradley and Dudeney, 1973) scheme is found to be inferior to the previous two, and it should be superseded except where mathematical simplicity is prioritized. The International Reference Ionosphere profile is seen to have significant disadvantages with respect to all three criteria. 17 references

  12. A DISTANCE EDUCATION MODEL FOR JORDANIAN STUDENTS BASED ON AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ahmad SHAHER MASHHOUR

    2007-04-01

    Full Text Available Distance education is expanding worldwide. The number of students enrolled in distance education is increasing at very high rates. Distance education is said to be the future of education because it addresses the educational needs of the new millennium. This paper presents the findings of an empirical study of a sample of Jordanian distance education students, distilled into a requirements model that addresses the need for such education at the national level. The responses of the sample show that distance education offers a viable and satisfactory alternative for those who cannot enroll in regular residential education. The study also shows that the shortcomings of regular education and of the current form of distance education in Jordan can be overcome by the use of modern information technology.

  13. Empirical tests of pre-main-sequence stellar evolution models with eclipsing binaries

    Science.gov (United States)

    Stassun, Keivan G.; Feiden, Gregory A.; Torres, Guillermo

    2014-06-01

    We examine the performance of standard pre-main-sequence (PMS) stellar evolution models against the accurately measured properties of a benchmark sample of 26 PMS stars in 13 eclipsing binary (EB) systems having masses 0.04-4.0 M⊙ and nominal ages ≈1-20 Myr. We provide a definitive compilation of all fundamental properties for the EBs, with a careful and consistent reassessment of observational uncertainties. We also provide a definitive compilation of the various PMS model sets, including physical ingredients and limits of applicability. No set of model isochrones is able to successfully reproduce all of the measured properties of all of the EBs. In the H-R diagram, the masses inferred for the individual stars by the models are accurate to better than 10% at ≳1 M⊙, but below 1 M⊙ they are discrepant by 50-100%. Adjusting the observed radii and temperatures using empirical relations for the effects of magnetic activity helps to resolve the discrepancies in a few cases, but fails as a general solution. We find evidence that the failure of the models to match the data is linked to the triples in the EB sample; at least half of the EBs possess tertiary companions. Excluding the triples, the models reproduce the stellar masses to better than ∼10% in the H-R diagram, down to 0.5 M⊙, below which the current sample is fully contaminated by tertiaries. We consider several mechanisms by which a tertiary might cause changes in the EB properties and thus corrupt the agreement with stellar model predictions. We show that the energies of the tertiary orbits are comparable to that needed to potentially explain the scatter in the EB properties through injection of heat, perhaps involving tidal interaction. It seems from the evidence at hand that this mechanism, however it operates in detail, has more influence on the surface properties of the stars than on their internal structure, as the lithium abundances are broadly in good agreement with model predictions. The

  14. A semi-empirical model for the prediction of fouling in railway ballast using GPR

    Science.gov (United States)

    Bianchini Ciampoli, Luca; Tosti, Fabio; Benedetto, Andrea; Alani, Amir M.; Loizos, Andreas; D'Amico, Fabrizio; Calvi, Alessandro

    2016-04-01

    The first step in planning the renewal of a railway network consists in gathering information, as effectively as possible, about the state of the railway tracks. Nowadays, this activity is mostly carried out by digging trenches at regular intervals along the whole network to evaluate both the geometrical and the geotechnical properties of the railway track bed. This involves several issues, mainly concerning the invasiveness of the operations, the impact on rail traffic, the high costs, and the low significance of such a discrete data set. Ground-penetrating radar (GPR) can be a useful technique for overcoming these issues, as it can be mounted directly onto a train crossing the railway and collect continuous information along the network. This study is aimed at defining an empirical model for the prediction of fouling in railway ballast using GPR. For this purpose, a thorough laboratory campaign was implemented within the facilities of Roma Tre University. In more detail, a 1.47 m long × 1.47 m wide × 0.48 m high plexiglass framework, representing the domain of investigation, was laid over a perfect electric conductor and filled with several configurations of railway ballast and fouling material (clayey sand), thereby representing different levels of fouling. The set of fouling configurations was then surveyed with several GPR systems: a ground-coupled multi-channel radar (600 MHz and 1600 MHz center-frequency antennas) and three air-launched radar systems (1000 MHz and 2000 MHz center-frequency antennas). By observing the results in both the time and frequency domains, interesting insights are highlighted, and an empirical model is finally proposed relating, in particular, the shape of the frequency spectrum of the signal to the percentage of fouling characterizing the surveyed material. Acknowledgement The Authors thank COST for funding the Action TU1208 "Civil

  15. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
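
    The "separate models" variant discussed above is easy to sketch: fit one mixed model per outcome, pull each subject's empirical Bayes (BLUP) slope, and correlate the slopes in a second stage. A minimal Python sketch with statsmodels follows; the file and column names are hypothetical, and, as the abstract cautions, this two-stage shortcut does not reproduce the joint MGLMM estimates.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Hypothetical long-format data: columns subject, time, y1, y2.
df = pd.read_csv("longitudinal.csv")

def eb_slopes(outcome):
    """Fit a random-intercept/random-slope model for one outcome and
    return each subject's empirical Bayes (BLUP) slope for time."""
    fit = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subject"],
                      re_formula="~time").fit()
    return {g: re["time"] for g, re in fit.random_effects.items()}

s1, s2 = eb_slopes("y1"), eb_slopes("y2")
subjects = sorted(s1)
r, p = pearsonr([s1[g] for g in subjects], [s2[g] for g in subjects])
print(f"second-stage correlation of EB slopes: r={r:.2f}, p={p:.3g}")
```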

  16. EMPIRE-II 2.18, Comprehensive Nuclear Model Code, Nucleons, Ions Induced Cross-Sections

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Panini, Gian Carlo

    2003-01-01

    1 - Description of program or function: EMPIRE-II is a flexible code for the calculation of nuclear reactions in the framework of combined optical, Multi-step Direct (TUL), Multi-step Compound (NVWY) and statistical (Hauser-Feshbach) models. The incident particle can be a nucleon or any nucleus (Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron-induced reactions and extends up to several hundreds of MeV for Heavy Ion induced reactions. IAEA1169/06: This version corrects an error in the Absoft compile procedure. 2 - Method of solution: For projectiles with A<5, EMPIRE calculates the fusion cross section using spherical optical model transmission coefficients. In the case of Heavy Ion induced reactions, the fusion cross section can be determined using various approaches, including a simplified coupled-channels method (code CCFUS). Pre-equilibrium emission is treated in terms of quantum-mechanical theories (TUL-MSD and NVWY-MSC). The MSC contribution to gamma emission is taken into account. These calculations are followed by statistical decay with an arbitrary number of subsequent particle emissions. Gamma-ray competition is considered in detail for every decaying compound nucleus. Different options for level densities are available, including a dynamical approach with collective effects taken into account. EMPIRE contains the following third-party codes converted into subroutines: - SCAT2 by O. Bersillon, - ORION and TRISTAN by H. Lenske and H. Wolter, - CCFUS by C.H. Dasso and S. Landowne, - BARMOM by A. Sierk. 3 - Restrictions on the complexity of the problem: The code can easily be adjusted to the problem by changing dimensions in the dimensions.h file. The actual limits are set by the available memory. In the current formulation up to 4 ejectiles plus gamma are allowed. This limit can be relaxed

  17. Development and evaluation of an empirical diurnal sea surface temperature model

    Science.gov (United States)

    Weihs, R. R.; Bourassa, M. A.

    2013-12-01

    An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean under low mechanical (shear-produced) turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation heats the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed [or bulk] layer on the order of a few degrees. Capturing the diurnal cycle has been shown to be important for a variety of applications, including surface heat flux estimates, which are underestimated when diurnal warming is neglected, and satellite and buoy calibrations, which can be complicated by the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MSG's SEVIRI experimental hourly SST data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because they must incorporate diurnal and air-sea interactive processes, which are vital to ocean surface dynamics, at a high enough temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics, and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), yielding a set of coefficients that describe the diurnal cycle. The coefficient data are combined with
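
    Fitting a Gaussian to an hourly SST series, as described above, is a standard curve-fitting exercise. A single-peak sketch with scipy follows; the synthetic data, initial guesses and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def diurnal_gaussian(hour, amp, peak, width, base):
    """Single-peak Gaussian model of warming above the pre-dawn SST."""
    return base + amp * np.exp(-0.5 * ((hour - peak) / width) ** 2)

hours = np.arange(24.0)  # local solar time
rng = np.random.default_rng(42)
sst_anom = (diurnal_gaussian(hours, 1.2, 14.0, 3.0, 0.0)
            + rng.normal(0.0, 0.05, hours.size))  # synthetic stand-in data

popt, _ = curve_fit(diurnal_gaussian, hours, sst_anom,
                    p0=[1.0, 13.0, 2.0, 0.0])  # rough initial guesses
amp, peak, width, base = popt
print(f"fitted amplitude {amp:.2f} K, peaking near hour {peak:.1f}")
```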

  18. 137Cs applicability to soil erosion assessment: theoretical and empirical model

    International Nuclear Information System (INIS)

    Andrello, Avacir Casanova

    2004-02-01

    The acceleration of soil erosion processes and the increase of soil erosion rates due to anthropogenic perturbation of the soil-weather-vegetation equilibrium have affected soil quality and the environment. The possibility of assessing the amplitude and severity of the impact of soil erosion on the productivity and quality of soil is therefore important at the local scale as well as at regional and global scales. Several models have been developed to assess soil erosion both qualitatively and quantitatively. 137Cs, an anthropogenic radionuclide, has been widely used to assess superficial soil erosion processes. Empirical and theoretical models were developed on the basis of 137Cs redistribution as an indicator of soil movement by erosive processes. These models incorporate many parameters that can influence the quantification of soil erosion rates by 137Cs redistribution. A statistical analysis was performed on the models recommended by the IAEA to determine the influence that each parameter has on the calculated soil redistribution. It was verified that the most important parameter is the 137Cs redistribution itself, indicating the need for a good determination of the 137Cs inventory values with a minimum deviation associated with these values. A 10% deviation was then associated with the reference value of the 137Cs inventory, and 5% with the 137Cs inventory of the sample, and the resulting deviation in the soil redistribution calculated by the models was determined. The soil redistribution results were compared to verify whether there were differences between the models, but no differences were found, except above 70% of 137Cs loss. Analyzing three native forests and an area of undisturbed pasture in the Londrina region, it was verified that the 137Cs spatial variability at the local scale was 15%. Comparing the 137Cs inventory values determined in the three native forests with the 137Cs inventory value determined in the area of undisturbed pasture in the

  19. Empirical tests of the Chicago model and the Easterlin hypothesis: a case study of Japan.

    Science.gov (United States)

    Ohbuchi, H

    1982-05-01

    The objective of this discussion is to test the applicability of economic theories of fertility, with special reference to postwar Japan, and to find a clue for forecasting the future trend of fertility. The theories examined are the "Chicago model" and the "Easterlin hypothesis." The major conclusion common among the leading economic theories of fertility, which have their origin with Gary S. Becker (1960, 1965) and Richard A. Easterlin (1966), is the positive income effect, i.e., that the relationship between income and fertility is positive despite the evidence that higher income families have fewer children and that fertility has declined with economic development. To bridge the gap between theory and fact is the primary purpose of the economic theory of fertility, and each offers a different interpretation for it. The point of the Chicago model, particularly of the household decision-making model of the "new home economics," is the mechanism by which a positive effect of husband's income growth on fertility is offset by a negative price effect caused by the opportunity cost of the wife's time. While the opportunity cost of the wife's time is independent of the female wage rate for an unemployed wife, it is directly associated with the wage rate for a gainfully employed wife. Thus, the fertility response to female wages occurs only among families with an employed wife. The primary concern of empirical efforts to test the Chicago model has been with the determination of income and price elasticities. An attempt is made to test the relevance of the Chicago model and the Easterlin hypothesis in explaining the fertility movement in postwar Japan. In the case of the Chicago model, the statistical results appeared fairly successful but did not match the theory. The effect on fertility of a rise in women's real wage (and, therefore, in the opportunity cost of mother's time) and of a rise in the labor force participation rate of married women of childbearing age in recent years could not

  20. Mass Balance Modelling of Saskatchewan Glacier, Canada Using Empirically Downscaled Reanalysis Data

    Science.gov (United States)

    Larouche, O.; Kinnard, C.; Demuth, M. N.

    2017-12-01

    Observations show that glaciers around the world are retreating. As sites with long-term mass balance observations are scarce, models are needed to reconstruct glacier mass balance and assess its sensitivity to climate. In regions with discontinuous and/or sparse meteorological data, high-resolution climate reanalysis data provide a convenient alternative to in situ weather observations, but can also suffer from strong bias due to the spatial and temporal scale mismatch. In this study we used data from the North American Regional Reanalysis (NARR) project, with a 30 x 30 km spatial resolution and 3-hour temporal resolution, to produce the meteorological forcings needed to drive a physically based, distributed glacier mass balance model (DEBAM, Hock and Holmgren 2005) for the historical period 1979-2016. A two-year record from an automatic weather station (AWS) operated on Saskatchewan Glacier (2014-2016) was used to downscale air temperature, relative humidity, wind speed and incoming solar radiation from the nearest NARR gridpoint to the glacier AWS site. A homogenized historical precipitation record was produced using data from two nearby, low-elevation weather stations and used to downscale the NARR precipitation data. Three bias correction methods were applied (scaling, delta and empirical quantile mapping - EQM) and evaluated using split-sample cross-validation. The EQM method gave better results for precipitation and for air temperature. Only a slight improvement in the relative humidity was obtained using the scaling method, while none of the methods improved the wind speed. The latter correlates poorly with AWS observations, probably because the local glacier wind is decoupled from the larger-scale NARR wind field. The downscaled data were used to drive the DEBAM model in order to reconstruct the mass balance of Saskatchewan Glacier over the past 30 years. The model was validated using recent snow thickness measurements and previously published geodetic mass
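
    Of the three bias-correction methods, empirical quantile mapping is the least obvious to implement, so a compact sketch may help: new model values are placed on the model-training empirical CDF, then passed through the inverse of the observed empirical CDF. The function below is a generic illustration, not the study's exact implementation.

```python
import numpy as np

def eqm_correct(model_train, obs_train, model_new, n_quantiles=100):
    """Empirical quantile mapping: place each new model value on the
    model-training empirical CDF, then read off the observed value at
    the same quantile."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    mod_q = np.quantile(model_train, q)
    obs_q = np.quantile(obs_train, q)
    cdf_vals = np.interp(model_new, mod_q, q)   # value -> quantile
    return np.interp(cdf_vals, q, obs_q)        # quantile -> corrected value
```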

  1. Comparing cycling world hour records, 1967-1996: modeling with empirical data.

    Science.gov (United States)

    Bassett, D R; Kyle, C R; Passfield, L; Broker, J P; Burke, E R

    1999-11-01

    The world hour record in cycling has increased dramatically in recent years. The present study was designed to compare the performances of former and current record holders, after adjusting for differences in aerodynamic equipment and altitude. Additionally, we sought to determine the ideal elevation for future hour record attempts. The first step was constructing a mathematical model to predict the power requirements of track cycling. The model was based on empirical data from wind-tunnel tests, the relationship of body size to frontal surface area, and field power measurements using a crank dynamometer (SRM). The model agreed reasonably well with actual measurements of power output on elite cyclists. Subsequently, the effects of altitude on maximal aerobic power were estimated from published research studies of elite athletes. This information was combined with the power requirement equation to predict what each cyclist's power output would have been at sea level. This allowed us to estimate the distance that each rider could have covered using state-of-the-art equipment at sea level. According to these calculations, when racing under equivalent conditions, Rominger would be first, Boardman second, Merckx third, and Indurain fourth. In addition, about 60% of the increase in hour record distances since Bracke's record (1967) has come from advances in technology and 40% from physiological improvements. To break the current world hour record, field measurements and the model indicate that a cyclist would have to deliver over 440 W for 1 h at sea level, or correspondingly less at altitude. The optimal elevation for future hour record attempts is predicted to be about 2500 m for acclimatized riders and 2000 m for unacclimatized riders.
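
    The abstract's power-requirement equation is not reproduced, but the standard level-track power balance - aerodynamic drag plus rolling resistance, divided by drivetrain efficiency - gives the flavor. All parameter values below are illustrative assumptions, not the authors' fitted coefficients.

```python
def track_power(v, cda=0.19, rho=1.19, crr=0.0025, mass=80.0,
                drivetrain_eff=0.976, g=9.81):
    """Steady-state power (W) at ground speed v (m/s) on a level track:
    aerodynamic drag plus rolling resistance, divided by drivetrain
    efficiency. All parameter values are illustrative assumptions."""
    aero = 0.5 * rho * cda * v ** 3
    rolling = crr * mass * g * v
    return (aero + rolling) / drivetrain_eff

print(round(track_power(55 / 3.6)))  # ~444 W at 55 km/h, near the cited >440 W
```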

  2. An Evaluation Model for Sustainable Development of China’s Textile Industry: An Empirical Study

    Science.gov (United States)

    Zhao, Hong; Lu, Xiaodong; Yu, Ting; Yin, Yanbin

    2018-04-01

    With economy’s continuous rapid growth, textile industry is required to search for new rules and adjust strategies in order to optimize industrial structure and rationalize social spending. The sustainable development of China’s textile industry is a comprehensive research subject. This study analyzed the status of China’s textile industry and constructed the evaluation model based on the economical, ecologic, and social benefits. Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) were used for an empirical study of textile industry. The result of evaluation model suggested that the status of the textile industry has become the major problems in the sustainable development of China’s textile industry. It’s nearly impossible to integrate into the global economy if no measures are taken. The enterprises concerned with the textile industry status should be reformed in terms of product design, raw material selection, technological reform, technological progress, and management, in accordance with the ideas and requirements of sustainable development. The results of this study are benefit for 1) discover the main elements restricting the industry’s sustainable development; 2) seek for corresponding solutions for policy formulation and implementation of textile industry; 3) provide references for enterprises’ development transformation in strategic deployment, fund allocation, and personnel assignment.

  3. Going Global: A Model for Evaluating Empirically Supported Family-Based Interventions in New Contexts.

    Science.gov (United States)

    Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W

    2014-06-01

    The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it is comprised of both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third explanation is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that might affect the effects of ESIs when transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts. © The Author(s) 2012.

  4. Computation and empirical modeling of UV flux reaching Arabian Sea due to O3 hole

    International Nuclear Information System (INIS)

    Yousufzai, M. Ayub Khan

    2008-01-01

    Scientific organizations the world over, such as the European Space Agency, the North Atlantic Treaty Organization, the National Aeronautics and Space Administration, and the United Nations Organization, are deeply concerned about the imbalances caused, to a significant extent, by human interference in the natural make-up of the earth's ecosystem. In particular, ozone layer depletion (OLD) over the South Pole is already a serious hazard. The long-term effect of ozone layer depletion appears to be an increase in the ultraviolet radiation reaching the earth. In order to understand the effects of ozone layer depletion, investigations have been initiated by various research groups. However, to the best of our knowledge, no work appears to be available that treats the problem of computing and constructing an empirical model for the UV flux reaching the Arabian Sea surface due to the O3 hole. This communication presents the results of quantifying the UV flux and modeling future estimates using time-series analysis in a local context to understand the nature of the depletion. (author)

  5. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  6. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

    Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → The performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) in the North of the country, Nakhon Pathom (13.82°N, 100.04°E) in the Centre and Songkhla (7.20°N, 100.60°E) in the South. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data on the Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for periods not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the Northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both global and direct normal irradiances. The performance of the models was also compared with that of other models and compared favorably with that of empirical models. Additionally, the accuracy of the irradiances predicted by the proposed models is comparable with that obtained from some

  7. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  9. A MACROPRUDENTIAL SUPERVISION MODEL. EMPIRICAL EVIDENCE FROM THE CENTRAL AND EASTERN EUROPEAN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Trenca Ioan

    2013-07-01

    Full Text Available One of the positive effects of the financial crises is the increasing concern of supervisors regarding the financial system's stability. There is a need to strengthen the links between the different financial components of the financial system and the macroeconomic environment. Banking systems that have adequate capitalization and liquidity levels may face economic and financial shocks more easily. The purpose of this empirical study is to identify the main determinants of the banking system's stability and soundness in the Central and Eastern European countries. We assess the impact of different macroeconomic variables on the quality of capital and liquidity conditions and examine the behaviour of these financial stability indicators by analyzing a sample of 10 banking systems during 2000-2011. The availability of banking capital signals the banking system's resilience to shocks. The capital adequacy ratio is the main indicator used to assess banking fragility. One of the causes of the 2008-2009 financial crisis was the lack of liquidity in the banking system, which led to the collapse of several banking institutions and to macroeconomic imbalances. Given the importance of liquidity for the banking system, we propose several models in order to determine the macroeconomic variables that have a significant influence on the ratio of liquid reserves to total assets. We found evidence that GDP growth, inflation, domestic credit to the private sector, as well as the money and quasi-money aggregate indicator have a significant impact on banking stability. The empirical regressions confirm the high level of interdependence of the real sector with the financial-banking sector. They also demonstrate the necessity of effective macroprudential supervision at the country level, which enables the supervisory authorities to have adequate control over the macroprudential indicators and to take appropriate decisions at the right time.

  10. Empirical Analysis and Modeling of Stop-Line Crossing Time and Speed at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Keshuang Tang

    2016-12-01

    Full Text Available In China, a flashing green (FG) indication of 3 s followed by a yellow (Y) indication of 3 s is commonly applied to end the green phase at signalized intersections. The stop-line crossing behavior of drivers during such a phase transition period significantly influences the safety performance of signalized intersections. The objective of this study is thus to empirically analyze and model drivers' stop-line crossing time and speed in response to this specific FG-and-Y phase transition period. High-resolution trajectories for 1465 vehicles were collected at three rural high-speed intersections with a speed limit of 80 km/h and two urban intersections with a speed limit of 50 km/h in Shanghai. With the vehicle trajectory data, statistical analyses were performed to examine the general characteristics of stop-line crossing time and speed at the two types of intersections. A multinomial logit model and a multiple linear regression model were then developed to predict the stop-line crossing patterns and speeds, respectively. It was found that the percentage of stop-line crossings during the Y interval is remarkably higher, and the stop-line crossing time approximately 0.7 s longer, at the urban intersections than at the rural intersections. In addition, approaching speed and distance to the stop-line at the onset of FG, as well as area type, significantly affect the percentages of stop-line crossings during the FG and Y intervals. Vehicle type and stop-line crossing pattern were found to significantly influence the stop-line crossing speed, in addition to the above factors. Red-light running seems to occur more frequently at large intersections with a long cycle length.
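
    A minimal sketch of the first of the two models - a multinomial logit over crossing patterns - using scikit-learn; the file, column names and label coding are hypothetical stand-ins for the predictors reported as significant above.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical table: one row per vehicle at the onset of flashing green.
traj = pd.read_csv("stopline_trajectories.csv")

X = traj[["approach_speed", "dist_at_fg_onset", "is_urban"]]
y = traj["crossing_pattern"]  # e.g. "FG", "Y" or "stop" (assumed coding)

# Multinomial logit over the three crossing patterns.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict_proba(X.head()))  # per-pattern probabilities
```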

  11. Cycling empirical antibiotic therapy in hospitals: meta-analysis and models.

    Directory of Open Access Journals (Sweden)

    Pia Abel zur Wiesch

    2014-06-01

    Full Text Available The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating the empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies have come to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and found that cycling reduced the incidence rate per 1000 patient days of total infections by 4.95 [9.43-0.48] and of resistant infections by 7.2 [14.00-0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate meta-regressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first-line treatment is ineffective, which we call "adjustable cycling/mixing". In concordance with earlier theoretical studies, we find that in strict regimens cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that "adjustable cycling" is especially useful for suppressing the emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that overly long cycling periods are detrimental. Our results suggest that

  12. Empirical modeling of single-wake advection and expansion using full-scale pulsed lidar-based measurements

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels

    2015-01-01

    In the present paper, single-wake dynamics have been studied both experimentally and numerically. The use of pulsed lidar measurements allows for validation of basic dynamic wake meandering modeling assumptions. Wake center tracking is used to estimate the wake advection velocity experimentally...... fairly well in the far wake but lacks accuracy in the outer region of the near wake. An empirical relationship, relating maximum wake induction and wake advection velocity, is derived and linked to the characteristics of a spherical vortex structure. Furthermore, a new empirical model for single...

  13. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS, following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiq...

  14. A semi-empirical model for mesospheric and stratospheric NOy produced by energetic particle precipitation

    Directory of Open Access Journals (Sweden)

    B. Funke

    2016-07-01

Full Text Available The MIPAS Fourier transform spectrometer on board Envisat has measured global distributions of the six principal reactive nitrogen (NOy compounds (HNO3, NO2, NO, N2O5, ClONO2, and HNO4 during 2002–2012. These observations were used previously to detect regular polar winter descent of reactive nitrogen produced by energetic particle precipitation (EPP down to the lower stratosphere, often called the EPP indirect effect. It has further been shown that the observed fraction of NOy produced by EPP (EPP-NOy has a nearly linear relationship with the geomagnetic Ap index when taking into account the time lag introduced by transport. Here we exploit these results in a semi-empirical model for computation of EPP-modulated NOy densities and wintertime downward fluxes through stratospheric and mesospheric pressure levels. Since the Ap dependence of EPP-NOy is distorted during episodes of strong descent in Arctic winters associated with elevated stratopause events, a specific parameterization has been developed for these episodes. This model accurately reproduces the observations from MIPAS and is also consistent with estimates from other satellite instruments. Since stratospheric EPP-NOy depositions lead to changes in stratospheric ozone with possible implications for climate, the model presented here can be utilized in climate simulations without the need to incorporate many thermospheric and upper mesospheric processes. By employing historical geomagnetic indices, the model also allows for reconstruction of the EPP indirect effect since 1850. We found secular variations of solar cycle-averaged stratospheric EPP-NOy depositions on the order of 10 %. In particular, we model a reduction of the EPP-NOy deposition rate during the last 3 decades, related to the coincident decline of geomagnetic activity that corresponds to 1.8 % of the NOy production rate by N2O oxidation. As the decline of the geomagnetic activity level is expected to continue in the
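The reported near-linear Ap dependence with a transport time lag suggests a parameterization of the following shape; the scaling coefficient, the lag, and the synthetic Ap series below are placeholders for illustration, not the values fitted to MIPAS data.

```python
import numpy as np

# Hypothetical sketch of an Ap-driven EPP-NOy parameterization:
# NOy(t) ~ a * Ap(t - lag), the lag standing in for the downward transport time.
a_gmol_per_ap = 0.01     # placeholder scaling coefficient [Gmol per Ap unit]
lag_days = 60            # placeholder transport time lag [days]

rng = np.random.default_rng(0)
ap = rng.gamma(shape=2.0, scale=6.0, size=365)   # synthetic daily Ap index

ap_lagged = np.roll(ap, lag_days)    # shift the geomagnetic driver by the lag
ap_lagged[:lag_days] = ap[0]         # crude fill for the spin-up period

epp_noy = a_gmol_per_ap * ap_lagged  # daily EPP-NOy deposition proxy
print(f"mean EPP-NOy proxy: {epp_noy.mean():.3f} (arbitrary Gmol units)")
```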

15. EVOLUTION OF THEORIES AND EMPIRICAL MODELS OF A RELATIONSHIP BETWEEN ECONOMIC GROWTH, SCIENCE AND INNOVATIONS (PART I)

    Directory of Open Access Journals (Sweden)

    Kaneva M. A.

    2017-12-01

Full Text Available This article is the first part of an analytical review of existing theoretical models of the relationship between economic growth/GRP and indicators of scientific development and innovation activity, as well as of empirical approaches to testing this relationship. The aim of the paper is to systematize existing approaches to modeling economic growth driven by science and innovation. The novelty of the review lies in the authors' criterion of the interconnectedness of theoretical and empirical studies, used to systematize a wide range of publications in a final summary table. In this first part, the authors discuss the evolution of theoretical approaches, while the second part examines the time gap between theories and their empirical verification, a gap caused by the state of development of quantitative instruments such as econometric models. The results of this study can be used by researchers and graduate students to become familiar with current scientific approaches that trace the progress from theory to empirical verification of the «economic growth-innovations» relationship, and to improve different types of models in spatial econometrics. For application to management practice, the presented review could be supplemented with new criteria for classifying knowledge production functions and other theories about the effect of science on economic growth.

  16. An empirical model of the topside plasma density around 600 km based on ROCSAT-1 and Hinotori observations

    Science.gov (United States)

    Huang, He; Chen, Yiding; Liu, Libo; Le, Huijun; Wan, Weixing

    2015-05-01

It is an urgent task to improve the ability of ionospheric empirical models to reproduce plasma density variations in the topside ionosphere more precisely. Based on Republic of China Satellite 1 (ROCSAT-1) observations, we developed a new empirical model of the topside plasma density around 600 km under relatively quiet geomagnetic conditions. The model reproduces the ROCSAT-1 plasma density observations with a root-mean-square error of 0.125 in units of lg(Ni(cm-3)) and reasonably describes the temporal and spatial variations of plasma density at altitudes from 550 to 660 km. The model results are also in good agreement with observations from the Hinotori and Coupled Ion-Neutral Dynamics Investigations/Communications/Navigation Outage Forecasting System satellites and from the incoherent scatter radar at Arecibo. Further, we combined ROCSAT-1 and Hinotori data to improve the ROCSAT-1 model and built a new model (the R&H model) after the consistency between the two data sets had been confirmed with the original ROCSAT-1 model. In particular, we studied the solar activity dependence of topside plasma density at a fixed altitude with the R&H model and find that it differs slightly from the case in which the evolution of the orbit altitude is ignored. In addition, the R&H model shows the merging of the two crests of the equatorial ionization anomaly above the F2 peak, while the IRI_Nq topside option always produces two separate crests in this range of altitudes.

  17. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    Science.gov (United States)

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2018-06-01

The relative impact of non-exhaust sources (i.e., road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension generally has the highest impact on PM concentrations, but its spatio-temporal variability has rarely been studied and modeled. Some recent studies have attempted to observe and describe the time variability but, as it is driven by traffic and meteorology, uncertainty remains on the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin) to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zones). Moreover, we provide the first estimates of emission factors in the Po Valley in both summer and winter conditions. Our results showed a good inverse relationship between MF10 and macrotexture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect on road dust emissions, with higher emissions in summer, likely due to the lower pavement moisture. These results allowed building a simple empirical model predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effects, using methods accounting for weather and pavement moisture. This can significantly improve bottom-up emission inventories for the spatial allocation of emissions and air quality management, to select those roads with higher emissions

  18. Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS

    Science.gov (United States)

    Willison, A.; Bedard, D.

This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogeneous surface materials. It represents the overall optical reflectance of objects as an sBRDF, a spectrometric quantity obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and then integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured from the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate the simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects, where

  19. An Improved Empirical Harmonic Model of the Celestial Intermediate Pole Offsets from a Global VLBI Solution

    Science.gov (United States)

    Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald

    2017-10-01

    Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.

  20. An Empirical Outdoor-to-Indoor Path Loss Model from below 6 GHz to cm-Wave Frequency Bands

    DEFF Research Database (Denmark)

    Rodriguez Larrad, Ignacio; Nguyen, Huan Cong; Kovács, István Z.

    2017-01-01

This letter presents an empirical multi-frequency outdoor-to-indoor path loss model. The model is based on measurements performed on the exact same set of scenarios for different frequency bands, ranging from traditional cellular allocations below 6 GHz (0.8, 2, 3.5 and 5.2 GHz) up to cm-wave frequency bands...
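The letter's fitted coefficients are not reproduced in this record, so the snippet below only illustrates the generic alpha-beta-gamma (floating-intercept) form that multi-frequency outdoor-to-indoor path loss models of this kind commonly take; every numeric value, including the 28 GHz cm-wave example, is an invented placeholder.

```python
import math

def path_loss_db(d_m, f_ghz, alpha=30.0, beta=2.2, gamma=2.0, wall_db=10.0):
    """Generic alpha-beta-gamma path loss plus an outdoor-to-indoor wall loss.

    alpha: intercept [dB]; beta: distance exponent; gamma: frequency exponent;
    wall_db: building penetration loss. All values are illustrative only.
    """
    return (alpha + 10 * beta * math.log10(d_m)
            + 10 * gamma * math.log10(f_ghz) + wall_db)

# Bands below 6 GHz from the letter, plus one hypothetical cm-wave point.
for f in (0.8, 2.0, 3.5, 5.2, 28.0):
    print(f"{f:5.1f} GHz @ 50 m: {path_loss_db(50, f):6.1f} dB")
```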

  1. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    Science.gov (United States)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-12-01

Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for the advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.

  2. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    International Nuclear Information System (INIS)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-01-01

Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for the advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents. (topical review)

  3. Empirical Modeling of the Viscosity of Supercritical Carbon Dioxide Foam Fracturing Fluid under Different Downhole Conditions

    Directory of Open Access Journals (Sweden)

    Shehzad Ahmed

    2018-03-01

Full Text Available High-quality supercritical CO2 (sCO2) foam as a fracturing fluid is considered ideal for fracturing shale gas reservoirs. The apparent viscosity of the fracturing fluid plays an important role and governs the efficiency of the fracturing process. In this study, the viscosity of sCO2 foam and its empirical correlations are presented as functions of temperature, pressure, and shear rate. A series of experiments was performed to investigate the effect of temperature, pressure, and shear rate on the apparent viscosity of sCO2 foam generated by a widely used mixed-surfactant system. An advanced high-pressure, high-temperature (HPHT) foam rheometer was used to measure the apparent viscosity of the foam over a wide range of reservoir temperatures (40–120 °C), pressures (1000–2500 psi), and shear rates (10–500 s−1). A well-known power-law model was modified to accommodate the individual and combined effects of temperature, pressure, and shear rate on the apparent viscosity of the foam. The flow indices of the power law were found to be functions of temperature, pressure, and shear rate. Nonlinear regression was also performed on the foam apparent viscosity data to develop these correlations. The newly developed correlations provide an accurate prediction of the foam's apparent viscosity under different fracturing conditions. These correlations can be helpful for evaluating foam-fracturing efficiency by incorporating them into a fracturing simulator.
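A modified power law of the kind described can be written with a consistency index K and a flow behaviour index n that both depend on temperature and pressure; the functional forms and coefficients below are invented placeholders for illustration, not the correlations fitted in the paper.

```python
import numpy as np

def k_index(T_c, p_psi):
    # Hypothetical consistency index K(T, p) [Pa.s^n]: weakens with temperature,
    # strengthens with pressure (denser CO2 tends to stabilize the foam).
    return 2.0 * np.exp(-0.01 * T_c) * (p_psi / 1000.0) ** 0.3

def n_index(T_c, p_psi):
    # Hypothetical flow behaviour index n(T, p); n < 1 means shear-thinning.
    return 0.35 + 0.0005 * T_c + 0.02 * (p_psi / 1000.0)

def apparent_viscosity(T_c, p_psi, shear_rate):
    """Power-law apparent viscosity: mu = K * gamma^(n - 1) [Pa.s]."""
    return k_index(T_c, p_psi) * shear_rate ** (n_index(T_c, p_psi) - 1.0)

# Example: 80 degC and 2000 psi, across the rheometer's 10-500 1/s range.
for rate in (10.0, 100.0, 500.0):
    mu = apparent_viscosity(80.0, 2000.0, rate)
    print(f"{rate:6.1f} 1/s -> {mu * 1000:7.1f} mPa.s")
```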

4. Matrimonios mixtos intraeuropeos: un modelo empírico (Intra-European intermarriage: an empirical model)

    Directory of Open Access Journals (Sweden)

    Alaminos Chica, Antonio Francisco

    2008-06-01

Full Text Available Abstract: The heterogeneity encountered when studying intercultural or mixed couples goes beyond differences in sociocultural origin; factors also intervene such as the role each individual adopts within the couple (for example, who contributes more economically), status, educational level, etc. This article proposes an empirical model that shows the effect of a set of variables expressing social circumstances on the decision to form an interculturally mixed marriage, as well as the consequences for the individual's social life. Intercultural or mixed marriages thus depend upon several factors, not only different cultural origins: other determinants such as the role of the partner (i.e., economic contribution, status, educational level, etc.) or the type of family (modern, traditional, etc.) influence the outcomes. This paper contains a proposal of an empirical model for studying intra-European mixed marriages.

  5. Empirical model for the electron density peak height disturbance in response to solar wind conditions

    Science.gov (United States)

    Blanch, E.; Altadill, D.

    2009-04-01

Geomagnetic storms disturb the quiet behaviour of the ionosphere, its electron density and the electron density peak height, hmF2. Much work has been done to predict the variations of the electron density, but few efforts have been dedicated to predicting the variations of hmF2 under disturbed helio-geomagnetic conditions. We present the results of analyses of the F2-layer peak height disturbances that occurred during intense geomagnetic storms over one solar cycle. The results systematically show a significant peak height increase about 2 hours after the beginning of the main phase of the geomagnetic storm, independently of both the local time position of the station at the onset of the storm and the intensity of the storm. An additional uplift is observed in the post-sunset sector. The duration of the uplift and the height increase depend on the intensity of the geomagnetic storm, the season and the local time position of the station at the onset of the storm. An empirical model has been developed to predict the electron density peak height disturbances in response to solar wind conditions and local time, which can be used for nowcasting and forecasting hmF2 disturbances for the middle-latitude ionosphere, an important output for the operational purposes of the EURIPOS project.

  6. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data-related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performance. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.

  7. Empirical tight-binding modeling of ordered and disordered semiconductor structures

    International Nuclear Information System (INIS)

    Mourad, Daniel

    2010-01-01

In this thesis, we investigate the electronic and optical properties of pure as well as substitutionally alloyed II-VI and III-V bulk semiconductors and corresponding semiconductor quantum dots by means of an empirical tight-binding (TB) model. In the case of alloyed systems of the type A_xB_(1-x), where A and B are the pure compound semiconductor materials, we study the influence of the disorder by means of several extensions of the TB model with different levels of sophistication. Our methods range from rather simple mean-field approaches (virtual crystal approximation, VCA) over a dynamical mean-field approach (coherent potential approximation, CPA) up to calculations where substitutional disorder is incorporated on a finite ensemble of microscopically distinct configurations. In the first part of this thesis, we cover the necessary fundamentals in order to properly introduce the TB model of our choice, the effective bond-orbital model (EBOM). In this model, one s- and three p-orbitals per spin direction are localized on the sites of the underlying Bravais lattice. The matrix elements between these orbitals are treated as free parameters in order to reproduce the properties of one conduction and three valence bands per spin direction and can then be used in supercell calculations in order to model mixed bulk materials or pure as well as mixed quantum dots. Part II of this thesis deals with unalloyed systems. Here, we use the EBOM in combination with configuration interaction calculations for the investigation of the electronic and optical properties of truncated pyramidal GaN quantum dots embedded in AlN with an underlying zincblende structure. Furthermore, we develop a parametrization of the EBOM for materials with a wurtzite structure, which allows for a fit of one conduction and three valence bands per spin direction throughout the whole Brillouin zone of the hexagonal system. In Part III, we focus on the influence of alloying on the electronic and
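EBOM itself fits s- and p-orbital matrix elements so that four bands per spin direction are reproduced across the Brillouin zone; as a much-reduced illustration of the empirical tight-binding idea, the sketch below diagonalizes a nearest-neighbour single-s-band Hamiltonian on a finite 1D chain, with the on-site energy and hopping playing the role of the free, empirically fitted parameters.

```python
import numpy as np

# Minimal empirical tight-binding illustration (a 1D s-band chain, not EBOM):
eps, t, n_sites = 0.0, -1.0, 100   # on-site energy and hopping; the fit parameters

H = np.zeros((n_sites, n_sites))
np.fill_diagonal(H, eps)
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = t  # nearest-neighbour hopping matrix element

energies = np.linalg.eigvalsh(H)
# With periodic boundaries the dispersion is E(k) = eps + 2*t*cos(k*a); the
# finite-chain spectrum approaches that band as n_sites grows.
print(f"band edges: {energies.min():.3f} .. {energies.max():.3f} "
      f"(expected ~ [-2|t|, +2|t|])")
```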

  8. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to a specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources will be shown together with suitable models to describe the underlying physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$-sources) together with some remarks on beam transport.

  9. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. Our aim is to discuss the applicability of existing validation techniques and to present a new method for statistically quantifying the degree of validity, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit the empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and of resulting in a quantitative measure of the degree of validity that penalizes models that predict the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
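In spirit, the method can be reduced to counting how many probabilistic sensitivity analysis (PSA) runs fall inside the pre-established accuracy interval and placing a Beta posterior on that proportion; the sketch below follows that reading with made-up numbers and a uniform prior, and is not the authors' exact formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
psa_outcomes = rng.normal(loc=105.0, scale=12.0, size=1000)  # fake PSA outcomes

observed = 100.0          # empirical value of the outcome
accuracy = 0.25           # accept +/- 25% deviation around the observation
lo, hi = observed * (1 - accuracy), observed * (1 + accuracy)

inside = int(np.sum((psa_outcomes >= lo) & (psa_outcomes <= hi)))
posterior = stats.beta(1 + inside, 1 + len(psa_outcomes) - inside)  # Beta(1,1) prior

print(f"posterior mean P(valid): {posterior.mean():.3f}")
print("95% credible interval:", np.round(posterior.interval(0.95), 3))
```

A narrower accuracy interval shrinks the count of runs falling inside it, so a model with overly wide credible intervals earns a low validity probability, matching the behaviour described above.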

  10. Development of efficient air-cooling strategies for lithium-ion battery module based on empirical heat source model

    International Nuclear Information System (INIS)

    Wang, Tao; Tseng, K.J.; Zhao, Jiyun

    2015-01-01

Thermal modeling is the key issue in the thermal management of lithium-ion battery systems, and cooling strategies need to be carefully investigated to keep the temperature of batteries in operation within a narrow optimal range as well as to provide cost-effective and energy-saving solutions for the cooling system. This article reviews and summarizes past cooling methods, especially forced air cooling, and introduces an empirical heat source model which can be widely applied in battery module/pack thermal modeling. In the development of the empirical heat source model, a three-dimensional computational fluid dynamics (CFD) method is employed, and thermal insulation experiments are conducted to provide the key parameters. A transient thermal model of a 5 × 5 battery module with forced air cooling is then developed based on the empirical heat source model. The thermal behaviors of the battery module under different air cooling conditions, discharge rates and ambient temperatures are characterized and summarized. Various cooling strategies are simulated and compared in order to obtain an optimal cooling method. Besides, battery fault conditions are predicted from transient simulation scenarios. The temperature distributions and variations during the discharge process are quantitatively described, and it is found that the upper limit of ambient temperature for forced air cooling is 35 °C, and that when the ambient temperature is lower than 20 °C, forced air cooling is not necessary. - Highlights: • An empirical heat source model is developed for battery thermal modeling. • Effects of different air-cooling strategies on module thermal characteristics are investigated. • Impacts of different discharge rates on module thermal responses are investigated. • Impacts of ambient temperature on module thermal behaviors are investigated. • Locations of maximum temperatures under different operation conditions are studied.
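The article's CFD module cannot be reconstructed from the abstract, but the role of an empirical heat source inside a thermal model is easy to show with a lumped-capacitance cell: a fitted heat generation term Q(t) drives the cell temperature against convective air cooling. All parameter values below are illustrative assumptions.

```python
# Lumped single-cell sketch: m*c*dT/dt = Q_gen(t) - h*A*(T - T_air).
m, c = 0.8, 900.0        # cell mass [kg] and specific heat [J/(kg K)] (assumed)
h, A = 25.0, 0.05        # convection coefficient [W/(m2 K)] and cooled area [m2]
T_air, T = 25.0, 25.0    # ambient and initial cell temperature [degC]

def q_gen(t_s):
    # Placeholder empirical heat source for a one-hour discharge: a flat 4 W
    # plus an end-of-discharge rise; a real model fits this to experiments.
    return 4.0 + 3.0 * (t_s / 3600.0) ** 2

dt = 1.0                               # time step [s], explicit Euler
for step in range(3600):               # one-hour discharge
    T += dt * (q_gen(step * dt) - h * A * (T - T_air)) / (m * c)

print(f"end-of-discharge cell temperature: {T:.1f} degC")
```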

  11. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    Science.gov (United States)

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the propositions of the Technology Acceptance Model (TAM).

  12. An empirical model for trip distribution of commuters in the Netherlands: Transferability in time and space reconsidered.

    NARCIS (Netherlands)

    Thomas, Tom; Tutert, Bas

    2013-01-01

In this paper, we evaluate the distribution of commute trips in The Netherlands to assess its transferability in space and time. We used Dutch Travel Surveys from 1995 and 2004–2008 to estimate the empirical distribution from a spatial interaction model as a function of travel time and distance. We

  13. Antecedents and Consequences of Individual Performance Analysis of Turnover Intention Model (Empirical Study of Public Accountants in Indonesia)

    OpenAIRE

    Raza, Hendra; Maksum, Azhar; Erlina; Lumban Raja, Prihatin

    2014-01-01

This study aims to examine empirically the antecedents of individual performance and its consequences for turnover intention in public accounting firms. Eight variables are measured, consisting of auditors' empowerment, innovation, professionalism, role ambiguity, role conflict, organizational commitment, individual performance and turnover intention. Data analysis is based on 163 public accountants using structural equation modeling assisted with an appli...

  14. Patient Safety and Satisfaction Drivers in Emergency Departments Re-visited - An Empirical Analysis using Structural Equation Modeling

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Jacobsen, Peter

    2014-01-01

... are entitled safety and satisfaction, waiting time, information delivery, and infrastructure, accordingly. As an empirical foundation, a recently published comprehensive survey in 11 Danish EDs is analysed in depth using structural equation modeling (SEM). Consulting the proposed framework, ED decision makers...

  15. An Empirical Study of Propagation Models for Wireless Communications in Open-pit Mines

    DEFF Research Database (Denmark)

    Portela Lopes de Almeida, Erika; Caldwell, George; Rodriguez Larrad, Ignacio

    2018-01-01

In this paper, we investigate the suitability of the ITU-R 526, Okumura-Hata and COST-Hata propagation models and the Standard Propagation Model (SPM) for predicting path loss in open-pit mines. The models are evaluated by comparing the predicted data with measurements obtained in two operational

  16. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...

  17. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

Full Text Available This paper presents some experiences in modeling financial data with three classes of models as alternatives to Gaussian linear models: dynamic volatility, stable Lévy, and diffusion-with-jumps models. The techniques are illustrated with examples of financial series on currencies, futures and indexes.

  18. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is

  19. A global weighted mean temperature model based on empirical orthogonal function analysis

    Science.gov (United States)

    Li, Qinzheng; Chen, Peng; Sun, Langlang; Ma, Xiaping

    2018-03-01

A global empirical orthogonal function (EOF) model of the tropospheric weighted mean temperature, called GEOFM_Tm, was developed using high-precision Global Geodetic Observing System (GGOS) Atmosphere Tm data from the years 2008-2014. Owing to the quick convergence of the EOF decomposition, it is possible to use the first four EOF series, consisting of base functions Uk and associated coefficients Pk, to represent 99.99% of the overall variance of the original data sets and their spatial-temporal variations. Results show that U1 displays a prominent latitude distribution profile with positive peaks located in the low-latitude region. U2 manifests an asymmetric pattern, with positive values over 30° in the Northern Hemisphere and negative values elsewhere. U3 and U4 display significant anomalies in Tibet and North America, respectively. Annual variation is the major component of the first and second associated coefficients P1 and P2, whereas P3 and P4 mainly reflect both annual and semi-annual variation components. Furthermore, the performance of the constructed GEOFM_Tm was validated by comparison with GTm_III and GTm_N using different kinds of data, including GGOS Atmosphere Tm data for 2015 and radiosonde data from the Integrated Global Radiosonde Archive (IGRA) for 2014. Generally speaking, GEOFM_Tm achieves the same accuracy and reliability as the GTm_III and GTm_N models on a global scale, and even improves on them in the Antarctic and Greenland regions. The MAE and RMS of GEOFM_Tm are 2.49 K and 3.14 K with respect to GGOS Tm data, respectively, and 3.38 K and 4.23 K with respect to IGRA sounding data. In addition, all three models have higher precision at low latitudes than at middle and high latitudes. The magnitude of Tm remains in the range 220-300 K and is highly correlated with geographic latitude. In the Northern Hemisphere, there is a significant enhancement at high latitudes, reaching 270 K during summer
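The construction can be illustrated in a few lines: remove the mean field, take an SVD of the space-time anomaly matrix, keep the four leading modes Uk with their coefficients Pk, and check the variance captured. Synthetic data with an annual signal stand in for the GGOS Tm grids here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_grid = 84, 2592                  # e.g. 7 years of monthly grids (synthetic)
X = rng.normal(size=(n_time, n_grid))
X += 10 * np.sin(2 * np.pi * np.arange(n_time) / 12)[:, None]  # annual signal

mean_field = X.mean(axis=0)
A = X - mean_field                         # anomaly matrix (time x space)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
Pk = U[:, :k] * s[:k]                      # associated (time) coefficients
Uk = Vt[:k]                                # EOF base functions (spatial patterns)

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
Tm_hat = mean_field + Pk @ Uk              # truncated reconstruction
rms = np.sqrt(((X - Tm_hat) ** 2).mean())
print(f"variance captured by {k} modes: {explained:.4f}, reconstruction RMS: {rms:.3f}")
```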

  20. Upscaling Empirically Based Conceptualisations to Model Tropical Dominant Hydrological Processes for Historical Land Use Change

    Science.gov (United States)

    Toohey, R.; Boll, J.; Brooks, E.; Jones, J.

    2009-12-01

Surface runoff and percolation to groundwater are two hydrological processes of concern on the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m2), and field (1-6 ha) scales to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically based distributed hydrological model at the field, watershed (130 km2), and regional (1500 km2) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. However, at the field scale for the sugar cane and pasture sites, saturation-excess runoff increased as

  1. SWIFT: Semi-empirical and numerically efficient stratospheric ozone chemistry for global climate models

    OpenAIRE

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2015-01-01

The SWIFT model is a fast yet accurate chemistry scheme for calculating the chemistry of stratospheric ozone. It is mainly intended for use in Global Climate Models (GCMs), Chemistry Climate Models (CCMs) and Earth System Models (ESMs). For computing-time reasons these models often do not employ full stratospheric chemistry modules, but use prescribed ozone instead. This can lead to an insufficient representation of the coupling between stratosphere and troposphere. The SWIFT stratospheric ozone chem...

  2. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

Land Evaluation (LE) comprises the evaluation procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. According to the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built specifically for a given landscape and for the specific object of the evaluation (e.g. crop). The outcome of this situation is great difficulty in extrapolating LE results spatially, and the rigidity of the system. Modern techniques, instead, rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules into the LE procedure may ease both the spatial extension of the results and changes in the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other side, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario, the LE expert is nowadays asked to choose the best LE methodology considering costs, the complexity of the procedure and the benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 methods employed ranged from standard LE approaches to

  3. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    Science.gov (United States)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

Hard turning is increasingly employed in machining to replace the time-consuming conventional turning followed by grinding. The excessive tool wear encountered in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most were developed for a particular work-tool-environment combination. No aggregate model had been developed that can be used to predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different workpiece hardnesses (HRC40, HRC48 and HRC56) in turning by coated carbide inserts with different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other models, this model includes dummy variables along with the base empirical equation to capture the effect of any change in the input conditions on the response. The base empirical equation for principal flank wear is formulated by adopting the exponential association function using the experimental results. The coefficient of a dummy variable reflects the shift of the response from one set of machining conditions to another, and is determined by simple linear regression. The independent cutting parameters (speed, feed rate, depth of cut) were kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good agreement with the experimental data and the average percentage error is <10 %, this model can be used to predict principal flank wear under the stated conditions.
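The abstract names an exponential-association base curve shifted by dummy variables for each change of work-tool-environment condition; one plausible reading of that structure, with invented coefficients, looks like the sketch below.

```python
import numpy as np

def flank_wear(t_min, vb_max=0.35, k=0.08,
               d_hard=0, d_insert=0, d_coolant=0,
               c_hard=0.05, c_insert=-0.02, c_coolant=-0.04):
    """Exponential-association base curve plus dummy-variable shifts.

    VB(t) = vb_max*(1 - exp(-k*t)) + c_hard*d_hard + c_insert*d_insert
            + c_coolant*d_coolant, with 0/1 dummies encoding e.g. HRC56 vs
    HRC40, SNMG vs SNMM, and coolant vs dry. All coefficients here are
    illustrative assumptions, not the fitted values from the paper.
    """
    base = vb_max * (1.0 - np.exp(-k * t_min))
    return base + c_hard * d_hard + c_insert * d_insert + c_coolant * d_coolant

t = np.array([5.0, 10.0, 20.0, 40.0])     # machining time [min]
print("dry, HRC40, SNMM     :", np.round(flank_wear(t), 3))
print("coolant, HRC56, SNMM :", np.round(flank_wear(t, d_hard=1, d_coolant=1), 3))
```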

  4. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2018-03-01

With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, whether for building integrity or for personal security, grows in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.

  5. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

Empirical melt (EM) models are often preferred to surface energy balance (SEB) models for calculating melt amounts of snow and ice in the hydrological modelling of high-elevation catchments. The most common reasons for this choice are that, in comparison to SEB models, EM models require less meteorological data, complexity and computational cost. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot easily be reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model at dry high-elevation sites where sublimation amounts, which are not explicitly accounted for by the EM model, represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set of four Automatic Weather Stations (AWS), collected during the ablation season 2013-14 at elevations between 3466 and 4775 m asl on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied to each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, the results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we
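For reference, the ETI model used in the study computes melt from air temperature and net shortwave radiation only; the threshold form below follows the standard published formulation, with placeholder values for the two empirical factors, and makes the discussed limitation explicit: there is simply no sublimation term.

```python
def eti_melt(temp_c, sw_in_wm2, albedo, tf=0.05, srf=0.0094, t_threshold=1.0):
    """Enhanced Temperature Index melt rate [mm w.e. per hour].

    M = TF*T + SRF*(1 - albedo)*G  if T > threshold, else 0, where G is the
    incoming shortwave [W m-2]. TF and SRF are empirical factors calibrated
    per site; the values here are placeholders. Sublimation is not
    represented, which is exactly the limitation examined in the study.
    """
    if temp_c <= t_threshold:
        return 0.0
    return tf * temp_c + srf * (1.0 - albedo) * sw_in_wm2

print(f"{eti_melt(5.0, 800.0, 0.3):.2f} mm w.e./h")  # a warm, sunny hour on ice
```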

  6. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    Science.gov (United States)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

  7. Semi empirical model for astrophysical nuclear fusion reactions of 1≤Z≤15

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Seenappa, L.; Sridhar, K.N.

    2017-01-01

The fusion reaction is one of the most important reactions in stellar evolution. Due to the complicated reaction mechanism of fusion, there is great uncertainty in the reaction rate, which limits our understanding of various stellar objects. Low-Z elements are formed through many fusion reactions such as 4He + 12C → 16O, 12C + 12C → 20Ne + 4He, 12C + 12C → 23Na, 12C + 12C → 23Mg, 16O + 16O → 28Si + 4He, 12C + 1H → 13N and 13C + 4He → 16O. A detailed study of the Coulomb and nuclear interactions in the formation of low-Z elements in stars through fusion reactions is therefore required. For astrophysics, the important energy range extends from 1 MeV to 3 MeV in the center-of-mass frame, which is only partially covered by experiments. In the present work, we have studied the basic fusion parameters such as barrier heights (V_B), positions (R_B), curvature of the inverted parabola (ħω_1) for the fusion barrier, cross section and compound nucleus formation probability (P_CN), and the fusion process in the formation of low-Z elements (1≤Z≤15). For each isotope, we have studied all possible projectile-target combinations. We have also studied the astrophysical S(E) factor for these reactions. Based on this study, we have formulated semi-empirical relations for the barrier heights (V_B), positions (R_B) and curvature of the inverted parabola, and hence for the fusion cross section and astrophysical S(E) factor. The values produced by the present model are compared with the experiments and data available in the literature. (author)

  8. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    Science.gov (United States)

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.
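SELDM itself is a published FHWA/USGS application, so the snippet below only caricatures the stochastic-loading idea: sample per-storm runoff volumes and lognormal event mean concentrations (EMCs), accumulate annual loads, and read off the long-term distribution. The statistics are invented, not the Massachusetts values.

```python
import numpy as np

rng = np.random.default_rng(7)
years, storms_per_year = 30, 60      # long-term simulation horizon (illustrative)
area_ha = 1.0                        # contributing pavement area [ha]

annual_yields = []
for _ in range(years):
    runoff_m3 = rng.gamma(shape=1.5, scale=40.0, size=storms_per_year)
    emc_mg_l = rng.lognormal(mean=np.log(150.0), sigma=0.8,
                             size=storms_per_year)    # fake sediment EMC stats
    loads_kg = runoff_m3 * emc_mg_l / 1000.0          # (mg/L * m3) -> g, /1000 -> kg
    annual_yields.append(loads_kg.sum() / area_ha)

annual_yields = np.array(annual_yields)
p10, p90 = np.percentile(annual_yields, [10, 90])
print(f"mean annual yield: {annual_yields.mean():.1f} kg/ha "
      f"(10th-90th percentile: {p10:.1f}-{p90:.1f})")
```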

  9. Assessment of radiological parameters and patient dose audit using semi-empirical model

    International Nuclear Information System (INIS)

    Olowookere, C.J.; Onabiyi, B.; Ajumobi, S. A.; Obed, R.I.; Babalola, I. A.; Bamidele, L.

    2011-01-01

Risk is associated with all human activities, and medical imaging is no exception. The risk in medical imaging is quantified using the effective dose. However, measurement of the effective dose is rather difficult and time-consuming; therefore, the energy imparted and the entrance surface dose are obtained and converted into effective dose using the appropriate conversion factors. In this study, data on exposure parameters and patient characteristics were obtained during routine diagnostic examinations for four common types of X-ray procedures. A semi-empirical model involving the computer software Xcomp5 was used to determine the energy imparted per unit exposure-area product, the entrance skin exposure (ESE) and the incident air kerma, which are radiation dose indices. The value of the energy imparted per unit exposure-area product ranges between 0.60 and 1.21 x 10^-3 J R^-1 cm^-2, the entrance skin exposure ranges from 5.07±1.25 to 36.62±27.79 mR, and the incident air kerma ranges between 43.93 μGy and 265.5 μGy. The filtrations of two of the three machines investigated were lower than the CEC standard requirement for machines used in conventional radiography. The values of energy imparted and ESE obtained in this study are relatively low compared to published data, indicating that patients irradiated during the routine examinations in this study are at lower health risk. The energy imparted per unit exposure-area product can be used to determine the energy delivered to the patient during diagnostic examinations, and it is an approximate indicator of patient risk.

  10. General empirical model for 60Co generation in pressurized water reactors with continuous refueling

    International Nuclear Information System (INIS)

    Urrutia, G.A.; Blesa, M.A.; Fernandez-Prini, R.; Maroto, A.J.G.

    1984-01-01

A simplified model is presented that permits one to calculate the average activity on the fuel elements of a reactor operating under continuous refueling, based on the assumption of crud interchange between the fuel element surface and the coolant in the form of particulate material only, and using the crud specific activity as an empirical parameter determined in plant. The net activity flux from core to out-of-core components is then calculated in the form of parametric curves depending on crud specific activity and the rate of particulate release from the fuel surface. In pressure vessel reactors, the contribution to the out-of-core radionuclide inventory arising from the release of activated materials from core components must be taken into account. The contribution from in situ activation of core components is calculated from the rates of release and the specific activities corresponding to the exposed surface of the component (calculated in a straightforward way on the basis of core geometry and neutron fluxes). The rates of release can be taken from the literature or, in the case of cobalt-rich alloys, can be calculated from experimentally determined cobalt contents of structural components and crud. For pressure vessel reactors operating under continuous refueling, activation of deposited crud and release of activated materials are compared; the latter, in certain cases, may represent a sizable (and even the largest) fraction of the total cobalt activity. It is proposed that the ratio of activities of 59Fe to 54Mn may be used as a diagnostic tool for in situ activation of structural materials; available data indicate ratios close to unity for pressure-tube heavy water reactors (no in situ activation) and ratios of around 4 to 10 for pressure-vessel heavy water reactors

  11. An empirically grounded agent based model for modeling directs, conflict detection and resolution operations in air traffic management.

    Science.gov (United States)

    Bongiorno, Christian; Miccichè, Salvatore; Mantegna, Rosario N

    2017-01-01

    We present an agent based model of the Air Traffic Management socio-technical complex system aiming at modeling the interactions between aircraft and air traffic controllers at a tactical level. The core of the model is given by the conflict detection and resolution module and by the directs module. Directs are flight shortcuts that are given by air controllers to speed up the passage of an aircraft within a certain airspace and therefore to facilitate airline operations. Conflicts between flight trajectories can occur for two main reasons: either the planning of the flight trajectory was not sufficiently detailed to rule out all potential conflicts or unforeseen events during the flight require modifications of the flight plan that can conflict with other flight trajectories. Our model performs a local conflict detection and resolution procedure. Once a flight trajectory has been made conflict-free, the model searches for possible improvements of the system efficiency by issuing directs. We give an example of model calibration based on real data. We then provide an illustration of the capability of our model in generating scenario simulations able to give insights about the air traffic management system. We show that the calibrated model is able to reproduce the existence of a geographical localization of air traffic controllers' operations. Finally, we use the model to investigate the relationship between directs and conflict resolutions (i) in the presence of perfect forecast ability of controllers, and (ii) in the presence of some degree of uncertainty in flight trajectory forecast.
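The conflict-detection core of such a model can be caricatured as a pairwise separation check along forecast trajectories; the sketch below flags any pair of aircraft predicted to come within a minimum horizontal separation, using made-up straight-line tracks and a 5 NM threshold, and ignores altitude, forecast uncertainty and resolution manoeuvres.

```python
import numpy as np
from itertools import combinations

SEP_NM = 5.0                     # minimum horizontal separation [NM]

def detect_conflicts(tracks):
    """tracks: dict id -> (n_steps, 2) array of forecast x/y positions [NM]."""
    conflicts = []
    for (id_a, a), (id_b, b) in combinations(tracks.items(), 2):
        dist = np.linalg.norm(a - b, axis=1)      # separation at each time step
        if (dist < SEP_NM).any():
            conflicts.append((id_a, id_b, int(dist.argmin())))
    return conflicts

t = np.linspace(0.0, 1.0, 61)[:, None]  # one hour of forecast in 1-min steps
tracks = {
    "AC1": np.hstack([480 * t, 0 * t]),              # eastbound at 480 kt
    "AC2": np.hstack([240 + 0 * t, 240 - 480 * t]),  # southbound, crossing AC1
    "AC3": np.hstack([0 * t, 60 + 0 * t]),           # holding, well clear
}
print(detect_conflicts(tracks))          # AC1/AC2 conflict near the crossing
```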

  12. An empirically grounded agent based model for modeling directs, conflict detection and resolution operations in air traffic management.

    Directory of Open Access Journals (Sweden)

    Christian Bongiorno

    Full Text Available We present an agent based model of the Air Traffic Management socio-technical complex system aiming at modeling the interactions between aircraft and air traffic controllers at a tactical level. The core of the model is given by the conflict detection and resolution module and by the directs module. Directs are flight shortcuts that are given by air controllers to speed up the passage of an aircraft within a certain airspace and therefore to facilitate airline operations. Conflicts between flight trajectories can occur for two main reasons: either the planning of the flight trajectory was not sufficiently detailed to rule out all potential conflicts or unforeseen events during the flight require modifications of the flight plan that can conflict with other flight trajectories. Our model performs a local conflict detection and resolution procedure. Once a flight trajectory has been made conflict-free, the model searches for possible improvements of the system efficiency by issuing directs. We give an example of model calibration based on real data. We then provide an illustration of the capability of our model in generating scenario simulations able to give insights about the air traffic management system. We show that the calibrated model is able to reproduce the existence of a geographical localization of air traffic controllers' operations. Finally, we use the model to investigate the relationship between directs and conflict resolutions (i) in the presence of perfect forecast ability of controllers, and (ii) in the presence of some degree of uncertainty in flight trajectory forecast.

  13. Development of ANC-type empirical two-phase pump model for full size CANDU primary heat transport pump

    International Nuclear Information System (INIS)

    Chan, A.M.C.; Huynh, H.M.

    2004-01-01

    The development of an ANC-type empirical two-phase pump model for CANDU (CANada Deuterium Uranium) reactor primary heat transport pumps is described in the present paper. The model was developed based on Ontario Hydro Technologies' full scale Darlington pump first quadrant test data. The functional form of the ANC model, which is widely used, was chosen to facilitate the implementation of the model into existing computer codes. The work is part of a larger test program with the following aims: (1) to produce high quality pump performance data under off-normal operating conditions using both full-size and model scale pumps; (2) to advance our basic understanding of the dominant mechanisms affecting pump performance based on more detailed local measurements; and (3) to develop a 'best-estimate' or improved pump model for use in reactor licensing and safety analyses. (author)

  14. The relative effectiveness of empirical and physical models for simulating the dense undercurrent of pyroclastic flows under different emplacement conditions

    Science.gov (United States)

    Ogburn, Sarah E.; Calder, Eliza S

    2017-01-01

    High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture
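    Of the models compared above, the ΔH/L (energy cone) relationship is simple enough to sketch directly: the flow is assumed to inundate terrain wherever an energy line of slope ΔH/L descending from the source lies above the ground profile. The profile, mobility ratio and source height below are illustrative, not Soufrière Hills values:

```python
import numpy as np

# 1-D energy-cone sketch: runout is where the energy line meets the ground.
x = np.linspace(0.0, 10000.0, 1001)          # distance from source, m
ground = 1000.0 * np.exp(-x / 2500.0)        # idealized volcano flank profile, m
HL = 0.25                                    # assumed mobility ratio dH/L
source_height = ground[0] + 50.0             # energy line starts just above the source
energy_line = source_height - HL * x

inundated = energy_line > ground             # terrain below the energy line
runout = x[inundated].max()
print(f"predicted runout ~ {runout:.0f} m")
```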

  15. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  16. An empirical model for predicting urban roadside nitrogen dioxide concentrations in the UK

    International Nuclear Information System (INIS)

    Stedman, J.R.; Goodwin, J.W.L.; King, K.; Murrells, T.P.; Bush, T.J.

    2001-01-01

    An annual mean concentration of 40 μg m⁻³ has been proposed as a limit value within the European Union Air Quality Directives and as a provisional objective within the UK National Air Quality Strategy for 2010 and 2005, respectively. Emissions reduction measures resulting from current national and international policies are likely to deliver significant reductions in emissions of oxides of nitrogen from road traffic in the near future. It is likely that there will still be exceedances of this target value in 2005 and in 2009 if national measures are considered in isolation, particularly at the roadside. It is envisaged that this 'policy gap' will be addressed by implementing local air quality management to reduce concentrations in locations that are at risk of exceeding the objective. Maps of estimated annual mean NO2 concentrations in both urban background and roadside locations are a valuable resource for the development of UK air quality policy and for the identification of locations at which local air quality management measures may be required. Maps of annual mean NO2 concentrations at both background and roadside locations for 1998 have been calculated using modelling methods, which make use of four mathematically straightforward, empirically derived linear relationships. Maps of projected concentrations in 2005 and 2009 have also been calculated using an illustrative emissions scenario. For this emissions scenario, annual mean urban background NO2 concentrations in 2005 are likely to be below 40 μg m⁻³ in all areas except for inner London, where current national and international policies are expected to lead to concentrations in the range 40-41 μg m⁻³. Reductions in NOx emissions between 2005 and 2009 are expected to reduce background concentrations to the extent that our modelling results indicate that 40 μg m⁻³ is unlikely to be exceeded in background locations by 2009. Roadside NO2 concentrations in urban areas in 2005 and 2009 are expected to be
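    A hedged sketch of the kind of empirically derived linear relationship the mapping method relies on, here regressing roadside NO2 on the urban background concentration and a local road-traffic NOx emission proxy; the data and fitted coefficients are synthetic and purely illustrative:

```python
import numpy as np

# Synthetic calibration of a linear roadside relationship:
# roadside_NO2 = a*background_NO2 + b*road_NOx + c
rng = np.random.default_rng(0)
background_no2 = rng.uniform(15, 35, 50)      # ug/m3 at co-located sites (synthetic)
road_nox = rng.uniform(5, 80, 50)             # local road NOx emission proxy (synthetic)
roadside_no2 = background_no2 + 0.3 * road_nox + rng.normal(0, 2, 50)

X = np.column_stack([background_no2, road_nox, np.ones(50)])
coef, *_ = np.linalg.lstsq(X, roadside_no2, rcond=None)
print("a, b, c =", np.round(coef, 3))
```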

  17. Empirical probability model of cold plasma environment in the Jovian magnetosphere

    Science.gov (United States)

    Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete

    2015-04-01

    We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models have been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced by the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere, 2011 (BD11). These models were scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high latitude orbit of the JUICE spacecraft.

  18. An Empirical Review of the Connection Between Model Viewer Characteristics and the Comprehension of Conceptual Process Models

    NARCIS (Netherlands)

    Mendling, Jan; Recker, Jan; Reijers, Hajo A.; Leopold, Henrik

    2018-01-01

    Understanding conceptual models of business domains is a key skill for practitioners tasked with systems analysis and design. Research in this field predominantly uses experiments with specific user proxy cohorts to examine factors that explain how well different types of conceptual models can be

  19. The demand-induced strain compensation model : renewed theoretical considerations and empirical evidence

    NARCIS (Netherlands)

    de Jonge, J.; Dormann, C.; van den Tooren, M.; Näswall, K.; Hellgren, J.; Sverke, M.

    2008-01-01

    This chapter presents a recently developed theoretical model on job-related stress and performance, the so-called Demand-Induced Strain Compensation (DISC) model. The DISC model predicts in general that adverse health effects of high job demands can best be compensated for by matching job resources

  20. Empirical validation of landscape resistance models: insights from the Greater Sage-Grouse (Centrocercus urophasianus)

    Science.gov (United States)

    Andrew J. Shirk; Michael A. Schroeder; Leslie A. Robb; Samuel A. Cushman

    2015-01-01

    The ability of landscapes to impede species’ movement or gene flow may be quantified by resistance models. Few studies have assessed the performance of resistance models parameterized by expert opinion. In addition, resistance models differ in terms of spatial and thematic resolution as well as their focus on the ecology of a particular species or more generally on the...

  1. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    Science.gov (United States)

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
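    A minimal sketch of the model comparison the abstract describes, fitting Poisson and negative binomial distributions to synthetic overdispersed suspension counts and comparing AIC; data and parameters are illustrative, and the hurdle variants are omitted for brevity:

```python
import numpy as np
from scipy import stats, optimize

# Synthetic, overdispersed "days suspended" counts.
rng = np.random.default_rng(1)
y = rng.negative_binomial(n=1.5, p=0.3, size=500)

# Poisson MLE: the rate is the sample mean.
lam = y.mean()
ll_pois = stats.poisson.logpmf(y, lam).sum()

# Negative binomial MLE via numerical optimization over (n, p).
def nb_negll(params):
    n, p = params
    if n <= 0 or not (0 < p < 1):
        return np.inf
    return -stats.nbinom.logpmf(y, n, p).sum()

res = optimize.minimize(nb_negll, x0=[1.0, 0.5], method="Nelder-Mead")
ll_nb = -res.fun

aic_pois = 2 * 1 - 2 * ll_pois   # one free parameter
aic_nb = 2 * 2 - 2 * ll_nb       # two free parameters
print(f"AIC Poisson: {aic_pois:.1f}, AIC neg. binomial: {aic_nb:.1f}")
```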

  2. An Improved Semi-Empirical Model for Radar Backscattering from Rough Sea Surfaces at X-Band

    Directory of Open Access Journals (Sweden)

    Taekyeong Jin

    2018-04-01

    Full Text Available We propose an improved semi-empirical scattering model for X-band radar backscattering from rough sea surfaces. This new model has a wider validity range of wind speeds than does the existing semi-empirical sea spectrum (SESS) model. First, we retrieved the small-roughness parameters from the sea surfaces, which were numerically generated using the Pierson-Moskowitz spectrum and measurement datasets for various wind speeds. Then, we computed the backscattering coefficients of the small-roughness surfaces for various wind speeds using the integral equation method model. Finally, the large-roughness characteristics were taken into account by multiplying the small-roughness backscattering coefficients with the surface slope probability density function and integrating over all possible surface slopes. The new model includes a wind speed range below 3.46 m/s, which was not covered by the existing SESS model. The accuracy of the new model was verified with two measurement datasets for various wind speeds from 0.5 m/s to 14 m/s.
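    A minimal sketch of the final compositing step, averaging a small-roughness backscattering coefficient over a surface slope probability density; the placeholder coefficient curve, slope variance and incidence angle are assumptions, not the paper's integral-equation-method results:

```python
import numpy as np

# Average a placeholder small-roughness backscattering coefficient over a
# Gaussian surface-slope probability density (all values illustrative).
theta_inc = np.deg2rad(40.0)               # radar incidence angle (assumed)
slopes = np.linspace(-0.5, 0.5, 1001)      # surface slope, tan(tilt angle)
ds = slopes[1] - slopes[0]

s2 = 0.02                                  # slope variance (wind-speed dependent, assumed)
pdf = np.exp(-slopes**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# Placeholder small-roughness coefficient vs local incidence angle (dB falloff).
local = theta_inc - np.arctan(slopes)
sigma0_small = 10.0 ** (-1.5 * np.rad2deg(local) / 10.0)

sigma0 = np.sum(sigma0_small * pdf) * ds   # slope-averaged backscatter
print(f"composite sigma0 ~ {10 * np.log10(sigma0):.1f} dB")
```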

  3. A Novel Multiscale Ensemble Carbon Price Prediction Model Integrating Empirical Mode Decomposition, Genetic Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bangzhu Zhu

    2012-02-01

    Full Text Available Due to the volatility and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and to accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast carbon price. Firstly, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, using the fine-to-coarse reconstruction algorithm, the IMFs and residue are recombined into a high frequency component, a low frequency component and a trend component, each with similar frequency characteristics, simple structure and strong regularity. Finally, those three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecasting results can be obtained by summing these three forecasting results. For verification and testing, two main carbon future prices with different maturities on the European Climate Exchange (ECX) are used to test the effectiveness of the proposed multiscale ensemble forecasting model. The empirical results obtained demonstrate that the proposed multiscale ensemble forecasting model can outperform the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing, as well as the ensemble ARIMA model with EMD preprocessing.
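    A minimal sketch of the fine-to-coarse reconstruction step, assuming pre-computed IMFs (in practice these would come from an EMD implementation such as PyEMD); the synthetic components and the significance threshold are illustrative:

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for EMD output: IMFs of decreasing frequency plus a trend.
t = np.linspace(0, 10, 1000)
imfs = [1.0 * np.sin(2 * np.pi * 8 * t),
        0.8 * np.sin(2 * np.pi * 3 * t),
        0.6 * np.sin(2 * np.pi * 1 * t) + 0.25,   # lower-frequency IMF with drift
        0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.4]
residue = 0.1 * t + 2.0                            # slowly varying trend

# Fine-to-coarse: accumulate IMFs from finest to coarsest; the first partial sum
# whose mean departs significantly from zero marks the high/low boundary.
partial = np.cumsum(imfs, axis=0)
split = len(imfs)
for i, s in enumerate(partial):
    if stats.ttest_1samp(s, 0.0).pvalue < 0.05:
        split = i
        break

high = np.sum(imfs[:split], axis=0)   # high frequency component
low = np.sum(imfs[split:], axis=0)    # low frequency component
trend = residue                       # trend component
print("split at IMF index", split)
# Each of the three components would then be forecast by a GA-trained ANN and
# the final prediction obtained as the sum of the three component forecasts.
```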

  4. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data

    Directory of Open Access Journals (Sweden)

    Graeme Shannon

    2014-08-01

    Full Text Available Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data; explicitly recognizing that given a species occupies an area the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km² of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of the sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common
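    A minimal sketch of the simulation logic, generating detection histories for a given design and recovering (ψ, p) by maximum likelihood; the design and parameter values below are illustrative, not those of the Colorado dataset:

```python
import numpy as np
from scipy.optimize import minimize

# Simulate detections at S sites over K occasions under occupancy psi and
# per-occasion detection probability p (values illustrative: a rare,
# hard-to-detect species), then fit (psi, p) by maximum likelihood.
rng = np.random.default_rng(2)
S, K, psi_true, p_true = 60, 40, 0.3, 0.1
occupied = rng.random(S) < psi_true
detections = rng.binomial(K, p_true, size=S) * occupied   # detections per site

def negll(params):
    psi, p = params
    if not (0 < psi < 1 and 0 < p < 1):
        return np.inf
    # Site likelihood: occupied with binomial detections, or unoccupied with
    # none (the binomial coefficient is constant in the parameters, so omitted).
    lik = psi * p**detections * (1 - p)**(K - detections) + (1 - psi) * (detections == 0)
    return -np.log(lik).sum()

fit = minimize(negll, x0=[0.5, 0.5], method="Nelder-Mead")
print("psi_hat, p_hat =", np.round(fit.x, 3))
# Repeating this over many simulated datasets for each (sites, occasions) design
# gives the error of the occupancy estimate that the survey-design study compares.
```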

  5. Optimizing irrigation and nitrogen for wheat through empirical modeling under semi-arid environment.

    Science.gov (United States)

    Saeed, Umer; Wajid, Syed Aftab; Khaliq, Tasneem; Zahir, Zahir Ahmad

    2017-04-01

    Nitrogen fertilizer availability to plants is strongly linked with water availability. Excessive or insufficient use of nitrogen can cause reductions in the grain yield of wheat as well as environmental problems. The per capita per annum water availability in Pakistan has fallen below 1000 m³ and is expected to reach 800 m³ by 2025. Irrigating crops to a depth of 3 or more inches without measuring the volume of water is no longer a feasible option. Water productivity and the economic return of grain yield can be improved by efficient management of water and nitrogen fertilizer. A study was conducted at the post-graduate agricultural research station, University of Agriculture Faisalabad, during 2012-2013 and 2013-2014 to optimize the volume of water per irrigation and the nitrogen application. A split plot design with three replications was used; four irrigation levels (I300 = 300 mm, I240 = 240 mm, I180 = 180 mm, I120 = 120 mm for the whole growing season, applied at critical growth stages) and four nitrogen levels (N60 = 60 kg ha⁻¹, N120 = 120 kg ha⁻¹, N180 = 180 kg ha⁻¹, and N240 = 240 kg ha⁻¹) were randomized as main and sub-plot factors, respectively. The recorded grain yield data were used to develop empirical regression models. The results based on quadratic equations and economic analysis showed 164, 162, 158, and 107 kg ha⁻¹ nitrogen as the economic optimum with I300, I240, I180, and I120 mm water, respectively, during 2012-2013. During 2013-2014, quadratic equations and economic analysis showed 165, 162, 161, and 117 kg ha⁻¹ nitrogen as the economic optimum with I300, I240, I180, and I120 mm water, respectively. The optimum irrigation level was obtained by fitting the economic optimum nitrogen as a function of total water. The equations predicted 253 mm as the optimum irrigation water for the whole growing season during 2012-2013 and 256 mm as the optimum for 2013-2014. The results also revealed that
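    A minimal sketch of the empirical approach, fitting a quadratic yield response to nitrogen and solving for the economic optimum where the marginal value of yield equals the fertilizer price; yields and prices are synthetic stand-ins for the paper's data:

```python
import numpy as np

# Quadratic yield response Y(N) = c2*N^2 + c1*N + c0 fitted to treatment means.
N = np.array([60, 120, 180, 240])                # kg N/ha treatments
yield_kg = np.array([3200, 4300, 4800, 4700])    # grain yield, kg/ha (synthetic)
c2, c1, c0 = np.polyfit(N, yield_kg, 2)

price_grain = 0.25   # currency per kg grain (assumed)
price_n = 0.90       # currency per kg N (assumed)

# Economic optimum: d(price_grain*Y - price_n*N)/dN = 0
# => price_grain*(c1 + 2*c2*N) = price_n
n_opt = (price_n / price_grain - c1) / (2 * c2)
print(f"economic optimum N ~ {n_opt:.0f} kg/ha")
```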

  6. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    Science.gov (United States)

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs, either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image-based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent, and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. An empirical model of L-band scintillation S4 index constructed by using FORMOSAT-3/COSMIC data

    Science.gov (United States)

    Chen, Shih-Ping; Bilitza, Dieter; Liu, Jann-Yenq; Caton, Ronald; Chang, Loren C.; Yeh, Wen-Hao

    2017-09-01

    Modern society relies heavily on the Global Navigation Satellite System (GNSS) technology for applications such as satellite communication, navigation, and positioning on the ground and/or aviation in the troposphere/stratosphere. However, ionospheric scintillations can severely impact GNSS systems and their related applications. In this study, a global empirical ionospheric scintillation model is constructed with S4-index data obtained by the FORMOSAT-3/COSMIC (F3/C) satellites during 2007-2014 (hereafter referred to as the F3CGS4 model). This model describes the S4-index as a function of local time, day of year, dip-latitude, and solar activity using the index PF10.7. The model reproduces the F3/C S4-index observations well, and yields good agreement with ground-based reception of satellite signals. This confirms that the constructed model can be used to forecast global L-band scintillations on the ground and in the near surface atmosphere.

  8. Semi-empirical model for the threshold voltage of a double implanted MOSFET and its temperature dependence

    Energy Technology Data Exchange (ETDEWEB)

    Arora, N D

    1987-05-01

    A simple and accurate semi-empirical model for the threshold voltage of a small geometry double implanted enhancement type MOSFET, especially useful in a circuit simulation program like SPICE, has been developed. The effect of short channel length and narrow width on the threshold voltage has been taken into account through a geometrical approximation, which involves parameters whose values can be determined from the curve fitting experimental data. A model for the temperature dependence of the threshold voltage for the implanted devices has also been presented. The temperature coefficient of the threshold voltage was found to change with decreasing channel length and width. Experimental results from various device sizes, both short and narrow, show very good agreement with the model. The model has been implemented in SPICE as part of the complete dc model.

  9. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Theoretical Insight Into the Empirical Tortuosity-Connectivity Factor in the Burdine-Brooks-Corey Water Relative Permeability Model

    Science.gov (United States)

    Ghanbarian, Behzad; Ioannidis, Marios A.; Hunt, Allen G.

    2017-12-01

    A model commonly applied to the estimation of water relative permeability krw in porous media is the Burdine-Brooks-Corey model, which relies on a simplified picture of pores as a bundle of noninterconnected capillary tubes. In this model, the empirical tortuosity-connectivity factor is assumed to be a power law function of effective saturation with an exponent (μ) commonly set equal to 2 in the literature. Invoking critical path analysis and using percolation theory, we relate the tortuosity-connectivity exponent μ to the critical scaling exponent t of percolation that characterizes the power law behavior of the saturation-dependent electrical conductivity of porous media. We also discuss the cause of the nonuniversality of μ in terms of the nonuniversality of t and compare model estimations with water relative permeability from experiments. The comparison supports determining μ from the electrical conductivity scaling exponent t, but also highlights limitations of the model.
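    A minimal sketch of the Burdine-Brooks-Corey water relative permeability with the tortuosity-connectivity exponent μ left free, contrasting the conventional μ = 2 with a value standing in for one derived from the electrical conductivity scaling exponent t; λ and the μ values are illustrative:

```python
import numpy as np

# Burdine integral with the Brooks-Corey capillary pressure Pc ~ Se^(-1/lambda)
# yields krw(Se) = Se^(mu + 1 + 2/lambda), with mu the tortuosity-connectivity
# exponent (conventionally set to 2).
Se = np.linspace(0.05, 1.0, 20)   # effective saturation
lam = 2.0                         # Brooks-Corey pore-size distribution index (assumed)

def krw(Se, mu, lam):
    return Se ** (mu + 1.0 + 2.0 / lam)

for mu in (2.0, 1.6):             # 1.6 standing in for a t-derived exponent
    print(f"mu = {mu}: krw(Se = 0.5) = {krw(0.5, mu, lam):.4f}")
```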

  11. Combining empirical and theory-based land-use modelling approaches to assess economic potential of biofuel production avoiding iLUC: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, F.; van Eijck, J.; Verstegen, J.A.; Hilbert, J.; Carballo, S.; Volante, J.; Faaij, A.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  12. THE "MAN INCULTS" AND PACIFICATION DURING BRAZILIAN EMPIRE: A MODEL OF HISTORICAL INTERPRETATION BUILT FROM THE APPROACH TO HUMAN RIGHTS

    Directory of Open Access Journals (Sweden)

    José Ernesto Pimentel Filho

    2011-06-01

    Full Text Available The construction of peace in the Empire of Brazil was one of the forms of monopolization of public space by the dominant sectors of the Empire's society. On the one hand, the Empire built an urban sociability based on patriarchal relations. On the other hand, the Empire struggled against all forms of disorder and social deviance, as in a diptych image. The center of that peace was the capitals of the provinces. We discuss here how to construct a model for approaching the mentality of combating crime in rural areas according to the patriarchal mindset of nineteenth-century Brazil. For this purpose, the case of Ceara has been chosen. A historical hermeneutic is applied to understand the role of poor white men in the social life of the Empire of Brazil. We observe that education, when associated with morality, was seen as able to modify violent behavior and to shape the individual's attitude toward justice and punishment policy. Discrimination and stereotypes are part of our interpretation, as a contribution to the debate on Human Rights in the history of Brazil.

  13. The EZ diffusion model provides a powerful test of simple empirical effects.

    Science.gov (United States)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    2017-04-01

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59-108, 1978), which has been shown to account for data from a wide range of paradigms, including perceptual discrimination, letter identification, lexical decision, recognition memory, and signal detection. Since its original inception, the model has become increasingly complex in order to account for subtle, but reliable, data patterns. The additional complexity of the diffusion model renders it a tool that is only for experts. In response, Wagenmakers et al. (Psychonomic Bulletin & Review, 14, 3-22, 2007) proposed that researchers could use a more basic version of the diffusion model, the EZ diffusion. Here, we simulate experimental effects on data generated from the full diffusion model and compare the power of the full diffusion model and EZ diffusion to detect those effects. We show that the EZ diffusion model, by virtue of its relative simplicity, will be sometimes better able to detect experimental effects than the data-generating full diffusion model.
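    A minimal sketch of the EZ-diffusion computation (closed-form expressions from Wagenmakers et al., 2007), recovering drift rate, boundary separation and non-decision time from accuracy, RT variance and mean RT; the example inputs reproduce the familiar worked example, and the edge correction needed when Pc is exactly 0, 0.5, or 1 is omitted:

```python
import numpy as np

# EZ-diffusion: closed-form estimates of drift rate v, boundary separation a,
# and non-decision time Ter from accuracy Pc, RT variance VRT (s^2), and mean
# RT MRT (s); s = 0.1 is the conventional scaling parameter.
def ez_diffusion(pc, vrt, mrt, s=0.1):
    L = np.log(pc / (1 - pc))                                # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25                      # drift rate
    a = s**2 * L / v                                         # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    ter = mrt - mdt                                          # non-decision time
    return v, a, ter

v, a, ter = ez_diffusion(pc=0.8022, vrt=0.112, mrt=0.723)
print(f"v = {v:.3f}, a = {a:.3f}, Ter = {ter:.3f}")          # ~0.1, ~0.14, ~0.3
```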

  14. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership

    Directory of Open Access Journals (Sweden)

    Kerry Liu

    2017-06-01

    Full Text Available Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis includes hand-collected data from a wide range of data sources. OLS and 2SLS regression analysis are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of large shareholder ownership. The empirical evidence from government ownership is consistent with theoretical analysis. Research limitations/implications: The empirical evidence is based on a small number of hand-collected data sets of government ownership. Further studies can be expanded to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits. In general, this study significantly contributes to the understanding of the effect of large shareholder and corporate governance.

  15. Testing seasonal and long-term controls of streamwater DOC using empirical and process-based models.

    Science.gov (United States)

    Futter, Martyn N; de Wit, Heleen A

    2008-12-15

    Concentrations of dissolved organic carbon (DOC) in surface waters are increasing across Europe and parts of North America. Several mechanisms have been proposed to explain these increases including reductions in acid deposition, change in frequency of winter storms and changes in temperature and precipitation patterns. We used two modelling approaches to identify the mechanisms responsible for changing surface water DOC concentrations. Empirical regression analysis and INCA-C, a process-based model of stream-water DOC, were used to simulate long-term (1986-2003) patterns in stream water DOC concentrations in a small boreal stream. Both modelling approaches successfully simulated seasonal and inter-annual patterns in DOC concentration. In both models, seasonal patterns of DOC concentration were controlled by hydrology and inter-annual patterns were explained by climatic variation. There was a non-linear relationship between warmer summer temperatures and INCA-C predicted DOC. Only the empirical model was able to satisfactorily simulate the observed long-term increase in DOC. The observed long-term trends in DOC are likely to be driven by in-soil processes controlled by SO₄²⁻ and Cl⁻ deposition, and to a lesser extent by temperature-controlled processes. Given the projected changes in climate and deposition, future modelling and experimental research should focus on the possible effects of soil temperature and moisture on organic carbon production, sorption and desorption rates, and chemical controls on organic matter solubility.

  16. Condition monitoring using empirical models: technical review and prospects for nuclear applications

    International Nuclear Information System (INIS)

    Heo, Gyun Young

    2008-01-01

    The purpose of this paper is to extensively review the Condition Monitoring (CM) techniques using empirical models in an effort to reduce or eliminate unexpected downtimes in general industry, and to illustrate the feasibility of applying them to the nuclear industry. CM provides on-time warnings of system states to enable the optimal scheduling of maintenance and, ultimately, plant uptime is maximized. Currently, most maintenance processes tend to be either reactive, or part of scheduled, or preventive maintenance. Such maintenance is being increasingly reported as a poor practice for two reasons: first, the component does not necessarily require maintenance, thus the maintenance cost is wasted, and secondly, failure catalysts are introduced into properly working components, which is worse. This paper first summarizes the technical aspects of CM including state estimation and state monitoring. The mathematical background of CM is mature enough even for commercial use in the nuclear industry. Considering the current computational capabilities of CM, its application is not limited by technical difficulties, but by a lack of desire on the part of industry to implement it. For practical applications in the nuclear industry, it may be more important to clarify and quantify the negative impact of unexpected outcomes or failures in CM than it is to investigate its advantages. In other words, while issues regarding accuracy have been targeted to date, the concerns regarding robustness should now be concentrated on. Standardizing the anticipated failures and the possibly harsh operating conditions, and then evaluating the impact of the proposed CM under those conditions may be necessary. In order to make the CM techniques practical for the nuclear industry in the future, it is recommended that a prototype CM system be applied to a secondary system in which most of the components are non-safety grade. Recently, many activities to enhance the safety and efficiency of the

  17. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  18. On the adequacy of current empirical evaluations of formal models of categorization.

    Science.gov (United States)

    Wills, Andy J; Pothos, Emmanuel M

    2012-01-01

    Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes (principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus).

  19. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

    Full Text Available An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  20. Radiosensitivity of grapevines. Empirical modelling of the radiosensitivity of some clones to x-ray irradiation. Pt. 1

    International Nuclear Information System (INIS)

    Koeroesi, F.; Jezierska-Szabo, E.

    1999-01-01

    Empirical and formal (Poisson) models were utilized, applying experimental growth data, to characterize the radiosensitivity of six grapevine clones to X-ray irradiation. According to the radiosensitivity constants (k), target numbers (n) and volumes, GR37 doses and energy deposition, the following radiosensitivity order was found for the various vine brands: Chardonnay clone type < Harslevelue K. 9 < Koevidinka K. 8 < Muscat Ottonel clone type < Irsai Oliver K. 11 < Cabernet Sauvignon E. 153. The model can be expanded to describe the radiosensitivity of other plant species and varieties, and also the efficiency of various radioprotecting agents and conditions. (author)
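    A hedged sketch of the multi-target single-hit form suggested by the abstract's radiosensitivity constants k and target numbers n, with the GR37 dose (growth reduced to 37% of control) found numerically; the k and n values are illustrative, not the fitted clone parameters:

```python
import numpy as np
from scipy.optimize import brentq

# Poisson target theory, multi-target single-hit form (assumed here):
# growth fraction S(D) = 1 - (1 - exp(-k*D))^n.
k, n = 0.015, 2.0            # Gy^-1 and target number (illustrative)

def S(D):
    return 1.0 - (1.0 - np.exp(-k * D)) ** n

# GR37: the dose at which growth falls to 37% of the control value.
gr37 = brentq(lambda D: S(D) - 0.37, 1e-6, 1e4)
print(f"GR37 dose ~ {gr37:.1f} Gy")
```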

  1. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

    LMFBR fuel elements consist of wire-wrapped rod bundles in a triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed in order to obtain the average bundle friction factor, as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the range P/D = 1.063-1.417, H/D = 4-50, and are considered satisfactory. (Author)

  2. The hydrodynamic basis of the vacuum cleaner effect in continuous-flow PCNL instruments: an empiric approach and mathematical model.

    Science.gov (United States)

    Mager, R; Balzereit, C; Gust, K; Hüsch, T; Herrmann, T; Nagele, U; Haferkamp, A; Schilling, D

    2016-05-01

    Passive removal of stone fragments in the irrigation stream is one of the characteristics of continuous-flow PCNL instruments. So far, the physical principle of this so-called vacuum cleaner effect has not been fully understood. The aim of the study was to empirically prove the existence of the vacuum cleaner effect, to develop a physical hypothesis and to generate a mathematical model for this phenomenon. In an empiric approach, common low-pressure PCNL instruments and conventional PCNL sheaths were tested using an in vitro model. Flow characteristics were visualized by coloring of the irrigation fluid. The influence of irrigation pressure, sheath diameter, sheath design, nephroscope design and position of the nephroscope was assessed. Experiments were digitally recorded for further slow-motion analysis to deduce a physical model. In each tested nephroscope design, we could observe the vacuum cleaner effect. An increase in irrigation pressure and a reduction in the cross section of the sheath sustained the effect. Slow-motion analysis of colored flow revealed a synergism of two effects causing suction and transportation of the stone. For the first time, our model showed a flow reversal in the sheath as an integral part of the origin of the stone transportation during the vacuum cleaner effect. The application of Bernoulli's equation provided the explanation of these effects and confirmed our experimental results. We widen the understanding of PCNL with a conclusive physical model, which explains the fluid mechanics of the vacuum cleaner effect.
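    A minimal sketch of the Bernoulli argument, assuming steady incompressible flow through a narrowed cross-section of the sheath; the flow rate and areas are assumed values, not measurements from the study:

```python
# Continuity plus Bernoulli: where the stream accelerates through a narrow
# section, static pressure drops, producing suction on stone fragments.
rho = 1000.0                 # irrigation fluid density, kg/m^3
q = 0.5e-3 / 60.0            # flow rate, m^3/s (0.5 L/min, assumed)
a1, a2 = 50e-6, 5e-6         # wide and narrow cross-sections, m^2 (assumed)

v1, v2 = q / a1, q / a2      # continuity: v = Q / A
dp = 0.5 * rho * (v2**2 - v1**2)   # static pressure drop in the narrow section
print(f"v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s, suction ~ {dp:.0f} Pa")
```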

  3. Methodological and empirical developments for the Ratcliff diffusion model of response times and accuracy

    NARCIS (Netherlands)

    Wagenmakers, E.-J.

    2009-01-01

    The Ratcliff diffusion model for simple two-choice decisions (e.g., Ratcliff, 1978; Ratcliff & McKoon, 2008) has two outstanding advantages. First, the model generally provides an excellent fit to the observed data (i.e., response accuracy and the shape of RT distributions, both for correct and

  4. Preliminary empirical models to predict reductions in total and low flows resulting from afforestation

    CSIR Research Space (South Africa)

    Scott, DF

    1997-04-01

    Full Text Available Mathematical models to predict runoff reductions due to afforestation are presented. The models are intended to aid decision-makers and planners who need to evaluate the water requirements of competing land uses at a district or regional scale. Five...

  5. Using Empirical Data to Refine a Model for Information Literacy Instruction for Elementary School Students

    Science.gov (United States)

    Nesset, Valerie

    2015-01-01

    Introduction: As part of a larger study in 2006 of the information-seeking behaviour of third-grade students in Montreal, Quebec, Canada, a model of their information-seeking behaviour was developed. To further improve the model, an extensive examination of the literature into information-seeking behaviour and information literacy was conducted…

  6. Asset Pricing Model and the Liquidity Effect: Empirical Evidence in the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    Otávio Ribeiro de Medeiros

    2011-09-01

    Full Text Available This paper aims to analyze whether a liquidity premium exists in the Brazilian stock market. As a second goal, we include liquidity as an extra risk factor in asset pricing models and test whether this factor is priced and whether stock returns are explained not only by systematic risk, as proposed by the CAPM, by Fama and French's (1993) three-factor model, and by Carhart's (1997) momentum-factor model, but also by liquidity, as suggested by Amihud and Mendelson (1986). To achieve this, we used stock portfolios and five measures of liquidity. Among the asset pricing models tested, the CAPM was the least capable of explaining returns. We found that the inclusion of size and book-to-market factors in the CAPM, a momentum factor in the three-factor model, and a liquidity factor in the four-factor model improves their explanatory power for portfolio returns. In addition, we found that the five-factor model is marginally superior to the other asset pricing models tested.
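    A minimal sketch of the time-series regression underlying such factor-model tests, adding a liquidity factor to market, size, value and momentum factors; all series are synthetic placeholders rather than Brazilian market data:

```python
import numpy as np

# Regress portfolio excess returns on market, SMB, HML, WML and a liquidity
# factor (LIQ); monthly frequency assumed, all series synthetic.
rng = np.random.default_rng(3)
T = 240
mkt, smb, hml, wml, liq = rng.normal(0, 0.04, (5, T))
r_excess = (0.9 * mkt + 0.3 * smb + 0.2 * hml
            + 0.1 * wml + 0.25 * liq + rng.normal(0, 0.01, T))

X = np.column_stack([np.ones(T), mkt, smb, hml, wml, liq])
betas, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
print("alpha and factor loadings:", np.round(betas, 3))
# A priced liquidity factor shows up as a significant LIQ loading and a
# reduction in unexplained alpha relative to the CAPM and three-factor fits.
```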

  7. The social networking application success model : An empirical study of Facebook and Twitter

    NARCIS (Netherlands)

    Ou, Carol; Davison, R.M.; Huang, Q.

    2016-01-01

    Social networking applications (SNAs) are among the fastest growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded on DeLone and McLean’s updated information systems (IS) success model. In addition to their original three dimensions

  8. A New Empirical Sewer Water Quality Model for the Prediction of WWTP Influent Quality

    NARCIS (Netherlands)

    Langeveld, J.G.; Schilperoort, R.P.S.; Rombouts, P.M.M.; Benedetti, L.; Amerlinck, Y.; de Jonge, J.; Flameling, T.; Nopens, I.; Weijers, S.

    2014-01-01

    Modelling of the integrated urban water system is a powerful tool to optimise wastewater system performance or to find cost-effective solutions for receiving water problems. One of the challenges of integrated modelling is the prediction of water quality at the inlet of a WWTP. Recent applications

  9. Alternative Specifications for the Lévy Libor Market Model: An Empirical Investigation

    DEFF Research Database (Denmark)

    Skovmand, David; Nicolato, Elisa

    This paper introduces and analyzes specifications of the Lévy Libor Market Model originally proposed by Eberlein and Özkan (2005). An investigation of the term structure of option implied moments rules out the Brownian motion and homogeneous Lévy processes as suitable modeling devices, and consequently a

  10. On the Adequacy of Current Empirical Evaluations of Formal Models of Categorization

    Science.gov (United States)

    Wills, Andy J.; Pothos, Emmanuel M.

    2012-01-01

    Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts.…

  11. Reviewing the effort-reward imbalance model: drawing up the balance of 45 empirical studies

    NARCIS (Netherlands)

    Vegchel, van N.; Jonge, de J.; Bosma, H.; Schaufeli, W.B.

    2005-01-01

    The present paper provides a review of 45 studies on the Effort–Reward Imbalance (ERI) Model published from 1986 to 2003 (inclusive). In 1986, the ERI Model was introduced by Siegrist et al. (Biological and Psychological Factors in Cardiovascular Disease, Springer, Berlin, 1986, pp. 104–126; Social

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible un...

  13. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    We model genome-wide association study (GWAS) test statistics as a mixture distribution. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances, and its properties are examined analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, and the local...
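    A minimal sketch of fitting a scale mixture of two zero-mean normals to GWAS z-scores, here via EM as a simpler stand-in for the paper's discrepancy-minimizing fit; the mixture proportions and variances are illustrative:

```python
import numpy as np

# EM fit of a two-component scale mixture of zero-mean normals to z-scores.
rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0, 1.0, 90000),    # null associations
                    rng.normal(0, 2.5, 10000)])   # non-null, inflated variance

def dens(s):
    return np.exp(-z**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

pi1, s0, s1 = 0.5, 1.5, 3.0                       # initial guesses
for _ in range(200):
    w1 = pi1 * dens(s1)                           # E-step: component weights
    w0 = (1 - pi1) * dens(s0)
    r = w1 / (w0 + w1)                            # non-null responsibility
    pi1 = r.mean()                                # M-step updates
    s1 = np.sqrt((r * z**2).sum() / r.sum())
    s0 = np.sqrt(((1 - r) * z**2).sum() / (1 - r).sum())

print(f"non-null proportion ~ {pi1:.3f}, sigma0 ~ {s0:.2f}, sigma1 ~ {s1:.2f}")
```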

  14. Precision comparison of the erosion rates derived from 137Cs measurements models with predictions based on empirical relationship

    International Nuclear Information System (INIS)

    Yang Mingyi; Liu Puling; Li Liqing

    2004-01-01

    Soil samples were collected from 6 cultivated runoff plots with a grid sampling method, and the soil erosion rates derived from 137Cs measurements were calculated. The precision of the models of Zhang Xinbao, Zhou Weizhi, Yang Hao and Walling was compared with predictions based on an empirical relationship; the data showed that the precision of the 4 models is high within a 50 m slope length, except for slopes with a low angle and short length. Relatively, the precision of Walling's model is better than that of Zhang Xinbao, Zhou Weizhi and Yang Hao. In addition, the relationship between the parameter Γ in Walling's improved model and the slope angle was analyzed; the relation is Y = 0.0109·X^1.0072. (authors)

  15. FARIMA MODELING OF SOLAR FLARE ACTIVITY FROM EMPIRICAL TIME SERIES OF SOFT X-RAY SOLAR EMISSION

    International Nuclear Information System (INIS)

    Stanislavsky, A. A.; Burnecki, K.; Magdziarz, M.; Weron, A.; Weron, K.

    2009-01-01

    A time series of soft X-ray emission observed by the Geostationary Operational Environmental Satellites from 1974 to 2007 is analyzed. We show that in the solar-maximum periods the energy distribution of soft X-ray solar flares for C, M, and X classes is well described by a fractional autoregressive integrated moving average model with Pareto noise. The model incorporates two effects detected in our empirical studies. One effect is a long-term dependence (long-term memory), and the other corresponds to heavy-tailed distributions. The parameters of the model (self-similarity exponent H, tail index α, and memory parameter d) are statistically stable enough during the periods 1977-1981, 1988-1992, 1999-2003. However, when the solar activity tends to its minimum, the parameters vary. We discuss the possible causes of this evolution and suggest a statistically justified model for predicting the solar flare activity.
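    A minimal sketch of simulating a FARIMA(0, d, 0) series with heavy-tailed innovations, combining the fractional-integration moving-average weights with symmetrized Pareto noise; d, α and all sizes are illustrative, not the fitted GOES values:

```python
import numpy as np

# FARIMA(0, d, 0) with heavy-tailed noise: apply the moving-average weights of
# (1 - B)^(-d), psi_k = psi_{k-1} * (k - 1 + d) / k, to Pareto-type innovations.
rng = np.random.default_rng(5)
n, d, alpha = 5000, 0.3, 1.8

psi = np.ones(500)                    # truncated weight sequence
for k in range(1, len(psi)):
    psi[k] = psi[k - 1] * (k - 1 + d) / k

# Symmetric heavy-tailed innovations with tail index alpha.
eps = rng.pareto(alpha, n + len(psi)) * rng.choice([-1, 1], n + len(psi))

x = np.convolve(eps, psi, mode="valid")[:n]   # long-memory, heavy-tailed series
print("simulated FARIMA series:", x[:5])
```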

  16. Oil production responses to price changes. An empirical application of the competitive model to OPEC and non-OPEC countries

    International Nuclear Information System (INIS)

    Ramcharran, Harri

    2002-01-01

    Falling oil prices over the last decade, accompanied by over-production by some OPEC members and the growth of non-OPEC supply, warrant further empirical investigation of the competitive model to ascertain production behavior. A supply function, based on a modification of Griffin's model, is estimated using data from 1973-1997. The sample period, unlike Griffin's, however, includes phases of price increase (1970s) and price decrease (1980s-1990s), thus providing a better framework for examining production behavior using the competitive model. The OPEC results do not support the competitive hypothesis; instead, a negative and significant price elasticity of supply is obtained. This result offers partial support for the target revenue theory. For most of the non-OPEC members, the estimates support the competitive model. OPEC's loss of market share and the drop in the share of oil-based energy should signal adjustments in price and quantity based on a competitive world market for crude oil

  17. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase of the geomagnetic field data volume available due to many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, required at some point the appropriate qualitative transition in the empirical modeling tools. Classical empirical models, such as T96 and T02, used few custom-tailored modules to represent major magnetospheric current systems and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions as well as by the advanced data-mining algorithms searching for events with the global activity parameters, such as the Sym-H index, similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with the individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires the corresponding transition in the use of the new empirical geomagnetic field models. It becomes more similar to runs-on-request offered by the Community Coordinated Modeling Center for many first principles MHD and kinetic codes. To provide this mode of operation for the TS07D model a new web-based modeling tool has been created and tested at the JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, where the magnetospheric part of the macro-parameters involved in the data-binning process (Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.

  19. A control-oriented real-time semi-empirical model for the prediction of NOx emissions in diesel engines

    International Nuclear Information System (INIS)

    D’Ambrosio, Stefano; Finesso, Roberto; Fu, Lezhong; Mittica, Antonio; Spessa, Ezio

    2014-01-01

    Highlights: • New semi-empirical correlation to predict NOx emissions in diesel engines. • Based on a real-time three-zone diagnostic combustion model. • The model is fast to apply and is therefore suitable for control-oriented applications. Abstract: The present work describes the development of a fast control-oriented semi-empirical model that is capable of predicting NOx emissions in diesel engines under steady state and transient conditions. The model takes into account the maximum in-cylinder burned gas temperature of the main injection, the ambient gas-to-fuel ratio, the mass of injected fuel, the engine speed and the injection pressure. The evaluation of the temperature of the burned gas is based on a three-zone real-time diagnostic thermodynamic model that has recently been developed by the authors. Two correlations have also been developed in the present study, in order to evaluate the maximum burned gas temperature during the main combustion phase (derived from the three-zone diagnostic model) on the basis of significant engine parameters. The model has been tuned and applied to two diesel engines that feature different injection systems of the indirect acting piezoelectric, direct acting piezoelectric and solenoid type, respectively, over a wide range of steady-state operating conditions. The model has also been validated in transient operation, over the urban and extra-urban phases of an NEDC. It has been shown that the proposed approach is capable of improving the predictive capability of NOx emissions compared to previous approaches, and is characterized by a very low computational effort, as it is based on a single-equation correlation. It is therefore suitable for real-time applications, and could also be integrated in the engine control unit for closed-loop or feed-forward control tasks
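    A heavily hedged sketch of a single-equation correlation of the general kind described, with an Arrhenius-type dependence on the peak burned-gas temperature scaled by the other named inputs; the functional form and every coefficient here are hypothetical, not the paper's calibrated model:

```python
import numpy as np

# Hypothetical single-equation NOx correlation (form and coefficients assumed):
# an Arrhenius-type term in the peak burned-gas temperature of the main
# injection, with power-law scalings for the remaining inputs.
def nox_ppm(T_burn_max, m_fuel, gfr_ambient, speed_rpm, p_rail,
            C=1.0e6, Ta=21000.0, a=1.0, b=-0.8, c=-0.2, e=0.1):
    # T_burn_max in K, m_fuel in mg/stroke, p_rail in bar (assumed units)
    return (C * np.exp(-Ta / T_burn_max) * m_fuel**a * gfr_ambient**b
            * (speed_rpm / 2000.0)**c * (p_rail / 1000.0)**e)

print(f"NOx ~ {nox_ppm(2600.0, 25.0, 22.0, 2000.0, 1200.0):.0f} ppm")
```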

  20. Application of an empirical model in CFD simulations to predict the local high temperature corrosion potential in biomass fired boilers

    International Nuclear Information System (INIS)

    Gruber, Thomas; Scharler, Robert; Obernberger, Ingwald

    2015-01-01

    To gain reliable data for the development of an empirical model for predicting the local high-temperature corrosion potential in biomass-fired boilers, online corrosion probe measurements have been carried out. The measurements were performed in a specially designed fixed bed/drop tube reactor in order to simulate a superheater boiler tube under well-controlled conditions. The investigated boiler steel, 13CrMo4-5, is commonly used for superheater tube bundles in biomass-fired boilers. During the test runs the flue gas temperature at the corrosion probe was varied between 625 °C and 880 °C, while the steel temperature was varied between 450 °C and 550 °C to simulate typical current and future live steam temperatures of biomass-fired steam boilers. To investigate the dependence on the flue gas velocity, variations from 2 m·s−1 to 8 m·s−1 were considered. The empirical model developed fits the measured data sufficiently well. The model has therefore been applied within a Computational Fluid Dynamics (CFD) simulation of flue gas flow and heat transfer to estimate the local corrosion potential of a wood-chips-fired 38 MW steam boiler. In addition to the analysis of the actual state, two further simulations were carried out to investigate the influence on the local corrosion potential of enhanced steam temperatures and of a change of the flow direction of the final superheater tube bundle from parallel to counter-flow. - Highlights: • Online corrosion probe measurements in a fixed bed/drop tube reactor. • Development of an empirical corrosion model. • Application of the model in a CFD simulation of flow and heat transfer. • Variation of boundary conditions and their effects on the corrosion potential.
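    The abstract does not give the model's functional form; a minimal sketch of fitting an empirical corrosion-rate surface to probe data over the stated temperature and velocity ranges, assuming a simple linear-in-parameters form and synthetic data, might look like this.

```python
# Minimal least-squares fit of an assumed corrosion-rate surface; the model
# form, coefficients, and data are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
t_fg = rng.uniform(625, 880, 40)      # flue gas temperature [deg C]
t_st = rng.uniform(450, 550, 40)      # steel temperature [deg C]
v    = rng.uniform(2, 8, 40)          # flue gas velocity [m/s]
rate = 0.01 * t_fg + 0.03 * t_st + 0.5 * v + rng.normal(0, 1, 40)

# Design matrix for: rate ~ b0 + b1*t_fg + b2*t_st + b3*v
X = np.column_stack([np.ones_like(t_fg), t_fg, t_st, v])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```

    Once such a surface is fitted, each CFD cell's local flue gas temperature, wall temperature, and velocity can be fed through it to map the corrosion potential over the boiler, which is the application described above.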

  1. Adolescent mental health and academic functioning: empirical support for contrasting models of risk and vulnerability.

    Science.gov (United States)

    Lucier-Greer, Mallory; O'Neal, Catherine W; Arnold, A Laura; Mancini, Jay A; Wickrama, Kandauda K A S

    2014-11-01

    Adolescents in military families contend with normative stressors that are universal and exist across social contexts (minority status, family disruptions, and social isolation) as well as stressors reflective of their military life context (e.g., parental deployment, school transitions, and living outside the United States). This study utilizes a social ecological perspective and a stress process lens to examine the relationship between multiple risk factors and relevant indicators of youth well-being, namely depressive symptoms and academic performance, as well as the mediating role of self-efficacy (N = 1,036). Three risk models were tested: an additive effects model (each risk factor uniquely influences outcomes), a full cumulative effects model (the collection of risk factors influences outcomes), and a comparative model (a cumulative effects model exploring the differential effects of normative and military-related risks). This design allowed for the simultaneous examination of multiple risk factors and a comparison of alternative perspectives on measuring risk. Each model was predictive of depressive symptoms and academic performance through persistence; however, each model provides unique findings about the relationship between risk factors and youth outcomes. Discussion pertinent to service providers and researchers is provided on how risk is conceptualized, along with suggestions for identifying at-risk youth. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  2. PREDICTING THE EFFECTIVENESS OF WEB INFORMATION SYSTEMS USING NEURAL NETWORKS MODELING: FRAMEWORK & EMPIRICAL TESTING

    Directory of Open Access Journals (Sweden)

    Dr. Kamal Mohammed Alhendawi

    2018-02-01

    Full Text Available Information systems (IS) assessment studies have traditionally relied on tools such as questionnaires to evaluate dependent variables, particularly system effectiveness. Artificial neural networks (ANN) have recently been accepted as an effective alternative for modeling complicated systems and are widely used for forecasting, yet very little is known about their use in predicting IS effectiveness. For this reason, this study is one of the few to investigate the efficiency and capability of ANNs for forecasting user perceptions of IS effectiveness; MATLAB is utilized for building and training the neural network model. A dataset of 175 subjects collected from an international organization is used for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output). 75% of the subjects are used in the training phase. The results provide evidence that ANN models achieve reasonable accuracy in forecasting IS effectiveness. For prediction, ANNs with PURELIN (ANNP) and TANSIG (ANNTS) transfer functions are used. Both models give reasonable predictions, but the ANNTS model is more accurate than the ANNP model (88.6% versus 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerable cost of sample data collection required by quantitative studies in science, management, education, the arts, and other fields.
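    A rough scikit-learn analogue of the MATLAB setup described above is sketched below: 5 quality factors in, one Boolean effectiveness label out, a 75%/25% split, and hidden layers using identity (PURELIN-like) versus tanh (TANSIG-like) activations. The synthetic data, network size, and thresholds are assumptions.

```python
# Sketch of the two-activation comparison under assumed synthetic data;
# 'identity' and 'tanh' stand in for MATLAB's PURELIN and TANSIG.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(1, 7, size=(175, 5))                  # 5 quality factors
y = (X.mean(axis=1) + rng.normal(0, 0.5, 175)) > 4    # Boolean effectiveness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)
for act in ("identity", "tanh"):
    clf = MLPClassifier(hidden_layer_sizes=(10,), activation=act,
                        max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print(act, "test accuracy:", round(clf.score(X_te, y_te), 3))
```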

  3. Empirical models validation to estimate global solar irradiance on a horizontal plan in Ouargla, Algeria

    Science.gov (United States)

    Gougui, Abdelmoumen; Djafour, Ahmed; Khelfaoui, Narimane; Boutelli, Halima

    2018-05-01

    In this paper, a comparison between three models for predicting the total solar flux falling on a horizontal surface is presented. The Capderou, Perrin & Brichambaut, and Hottel models are used to estimate the global solar radiation; the models are identified and evaluated in the MATLAB environment. The recorded data were obtained from a small weather station installed at the LAGE laboratory of Ouargla University, Algeria. Solar radiation data were recorded on four sample days, the 15th day of each month considered (March, April, May, and October). The Root Mean Square Error (RMSE), Correlation Coefficient (CC), and Mean Absolute Percentage Error (MAPE) were also calculated in order to test the reliability of the proposed models, and comparisons between the measured and calculated values were made. The results obtained in this study show that the Perrin & Brichambaut and Capderou models are more effective for estimating the total solar intensity on a horizontal surface under clear skies over Ouargla city (latitude 31.95° N, longitude 5° 24' E, altitude 0.141 km above mean sea level); these models are derived from meteorological parameters, geographical location, and the number of days since the first of January. The Perrin & Brichambaut and Capderou models give the best agreement, with a CC of 0.985-0.999 and 0.932-0.995 respectively, while the Hottel model gives a CC of 0.617-0.942.
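    The three goodness-of-fit statistics used above are standard and easy to check directly; the helper below computes them for arrays of measured versus modelled irradiance (the sample values are made up).

```python
# RMSE, MAPE, and correlation coefficient for model validation.
import numpy as np

def rmse(obs, mod):
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def mape(obs, mod):
    return float(np.mean(np.abs((mod - obs) / obs)) * 100)  # percent

def cc(obs, mod):
    return float(np.corrcoef(obs, mod)[0, 1])

obs = np.array([310.0, 540.0, 715.0, 820.0])   # measured G [W/m^2] (made up)
mod = np.array([295.0, 560.0, 700.0, 845.0])   # model output (made up)
print(rmse(obs, mod), mape(obs, mod), cc(obs, mod))
```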

  4. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Science.gov (United States)

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, since it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
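    The weaker calibration levels defined above can be checked with a simple logistic recalibration: regressing observed outcomes on the logit of the predicted risks gives a calibration intercept (mean calibration, target 0) and slope (weak calibration, target 1). The sketch below, under simulated data, is a minimal version of that check, not the paper's simulation code.

```python
# Calibration intercept/slope via logistic recalibration on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.05, 0.95, 2000)   # a model's predicted risks
y = rng.binomial(1, p_hat)              # outcomes generated to be calibrated

logit = np.log(p_hat / (1 - p_hat))
fit = sm.Logit(y, sm.add_constant(logit)).fit(disp=0)
intercept, slope = fit.params
print(f"calibration intercept {intercept:.3f}, slope {slope:.3f}")
```

    With perfectly calibrated predictions, as simulated here, the intercept should be near 0 and the slope near 1; substantial deviations indicate miscalibration at the mean or weak level.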

  5. Extracting Knowledge From Time Series An Introduction to Nonlinear Empirical Modeling

    CERN Document Server

    Bezruchko, Boris P

    2010-01-01

    This book addresses the fundamental question of how to construct mathematical models for the evolution of dynamical systems from experimentally-obtained time series. It places emphasis on chaotic signals and nonlinear modeling and discusses different approaches to the forecast of future system evolution. In particular, it teaches readers how to construct difference and differential model equations depending on the amount of a priori information that is available on the system in addition to the experimental data sets. This book will benefit graduate students and researchers from all natural sciences who seek a self-contained and thorough introduction to this subject.

  6. An Empirical Investigation of the Black-Scholes Model: Evidence from the Australian Stock Exchange

    Directory of Open Access Journals (Sweden)

    Zaffar Subedar

    2007-12-01

    Full Text Available This paper evaluates the probability of an exchange-traded European call option being exercised on the ASX200 Options Index. Using single-parameter estimates of factors within the Black-Scholes model, this paper utilises qualitative regression and a maximum likelihood approach. Results indicate that the Black-Scholes model is statistically significant at the 1% level. The results also provide evidence that the use of implied volatility and a jump-diffusion approach, which increases the tail properties of the underlying lognormal distribution, improves the statistical significance of the Black-Scholes model.
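    For reference, under Black-Scholes assumptions the risk-neutral probability that a European call finishes in the money (is exercised) is N(d2), the same quantity the paper models; the standard formulas are shown below with purely illustrative inputs.

```python
# Black-Scholes call value and risk-neutral exercise probability N(d2).
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(s, k, t, r, sigma):
    """s: spot, k: strike, t: years to expiry, r: risk-free rate,
    sigma: volatility. Returns (call value, P(exercise) = N(d2))."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = NormalDist().cdf
    return s * n(d1) - k * exp(-r * t) * n(d2), n(d2)

price, p_ex = bs_call(s=4500.0, k=4600.0, t=0.25, r=0.06, sigma=0.18)
print(f"call value {price:.2f}, exercise probability {p_ex:.3f}")
```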

  7. An Empirical Rate Constant Based Model to Study Capacity Fading in Lithium Ion Batteries

    Directory of Open Access Journals (Sweden)

    Srivatsan Ramesh

    2015-01-01

    Full Text Available A one-dimensional model based on solvent diffusion and kinetics is developed to study the formation of the SEI (solid electrolyte interphase) layer and its impact on the capacity of a lithium-ion battery. The model builds on earlier work on silicon oxidation but focuses on the kinetic limitations of the SEI growth process. The rate constant of the SEI formation reaction at the anode is seen to play a major role in film formation. The kinetics of the capacity-fade reactions are studied for various battery systems and the rate constants are evaluated. The model is used to fit the capacity fade in different battery systems.

  8. Development of nonlinear empirical models to forecast daily PM2.5 and ozone levels in three large Chinese cities

    Science.gov (United States)

    Lv, Baolei; Cobourn, W. Geoffrey; Bai, Yuqi

    2016-12-01

    Empirical regression models for next-day forecasting of PM2.5 and O3 air pollution concentrations have been developed and evaluated for three large Chinese cities: Beijing, Nanjing, and Guangzhou. The forecast models are empirical nonlinear regression models designed for use in an automated data retrieval and forecasting platform. The PM2.5 model includes an upwind air quality variable, PM24, to account for regional transport of PM2.5, and a persistence variable (the previous day's PM2.5 concentration). The models were evaluated in hindcast mode with a two-year air quality and meteorological dataset, using a leave-one-month-out cross-validation method, and in forecast mode with a one-year air quality and forecasted weather dataset that included forecasted air trajectories. The PM2.5 models performed well in hindcast mode, with coefficient of determination (R2) values of 0.54, 0.65, and 0.64, and normalized mean error (NME) values of 0.40, 0.26, and 0.23, respectively, for the three cities. The O3 models also performed well in hindcast mode, with R2 values of 0.75, 0.55, and 0.73, and NME values of 0.29, 0.26, and 0.24 in the three cities. The O3 models performed better in summertime than in winter in Beijing and Guangzhou, and captured the O3 variations well all year round in Nanjing. The overall forecast performance of the PM2.5 and O3 models during the test year varied from fair to good, depending on location. The forecasts were somewhat degraded compared with hindcasts from the same year, depending on the accuracy of the forecasted meteorological input data. For the O3 models, the forecast accuracy was strongly dependent on the maximum temperature forecasts. For the critical forecasts involving air quality standard exceedances, the PM2.5 model forecasts were fair to good, and the O3 model forecasts were poor to fair.

  9. Ploidy frequencies in plants with ploidy heterogeneity: fitting a general gametic model to empirical population data

    Czech Academy of Sciences Publication Activity Database

    Suda, Jan; Herben, Tomáš

    2013-01-01

    Roč. 280, č. 1751 (2013), no. 20122387 ISSN 0962-8452 Institutional support: RVO:67985939 Keywords: cytometry * statistical modelling * polyploidy Subject RIV: EF - Botanics Impact factor: 5.292, year: 2013

  10. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office location, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters that represent physicians' preferences in over- and undersupplied regions.

  11. Semi-empirical model for retrieval of soil moisture using RISAT-1 C ...

    Indian Academy of Sciences (India)

    Kishan Singh Rawat

    2018-03-02

    Mar 2, 2018 ... We have estimated soil moisture (SM) by using circular horizontal polarization backscattering coefficient. (σo ..... samples, 14 were used for model generation and 8 .... mechanisms; this needs to take into account the dynamic ...

  12. Semi-empirical device model for Cu2ZnSn(S,Se)4 solar cells

    Science.gov (United States)

    Gokmen, Tayfun; Gunawan, Oki; Mitzi, David B.

    2014-07-01

    We present a device model for the hydrazine processed kesterite Cu2ZnSn(S,Se)4 (CZTSSe) solar cell with a world record efficiency of ~12.6%. Detailed comparison of the simulation results, performed using wxAMPS software, to the measured device parameters shows that our model captures the vast majority of experimental observations, including VOC, JSC, FF, and efficiency under normal operating conditions, and temperature vs. VOC, sun intensity vs. VOC, and quantum efficiency. Moreover, our model is consistent with material properties derived from various techniques. Interestingly, this model does not have any interface defects/states, suggesting that all the experimentally observed features can be accounted for by the bulk properties of CZTSSe. An electrical (mobility) gap that is smaller than the optical gap is critical to fit the VOC data. These findings point to the importance of tail states in CZTSSe solar cells.

  13. Semi-empirical device model for Cu2ZnSn(S,Se)4 solar cells

    International Nuclear Information System (INIS)

    Gokmen, Tayfun; Gunawan, Oki; Mitzi, David B.

    2014-01-01

    We present a device model for the hydrazine processed kesterite Cu2ZnSn(S,Se)4 (CZTSSe) solar cell with a world record efficiency of ∼12.6%. Detailed comparison of the simulation results, performed using wxAMPS software, to the measured device parameters shows that our model captures the vast majority of experimental observations, including VOC, JSC, FF, and efficiency under normal operating conditions, and temperature vs. VOC, sun intensity vs. VOC, and quantum efficiency. Moreover, our model is consistent with material properties derived from various techniques. Interestingly, this model does not have any interface defects/states, suggesting that all the experimentally observed features can be accounted for by the bulk properties of CZTSSe. An electrical (mobility) gap that is smaller than the optical gap is critical to fit the VOC data. These findings point to the importance of tail states in CZTSSe solar cells.

  14. Development of an empirical dynamic model for a Nexa PEM fuel cell power module

    Energy Technology Data Exchange (ETDEWEB)

    Soltani, Mehdi; Mohammad Taghi Bathaee, S. [Power Systems Laboratory, Department of Electrical Engineering, K.N. Toosi University of Technology, 16317-14191 Tehran (Iran)

    2010-12-15

    The goal of this study is to develop a fuel cell model capable of characterizing fuel cell steady-state performance as well as dynamic behavior. In this paper a new dynamic model of a 1.2 kW Polymer Electrolyte Membrane Fuel Cell (PEMFC) is developed and validated through a series of experiments. The experimental results were obtained from a Nexa™ PEM fuel cell power module under different load conditions. Based on this model, a simulator software package has been developed using MATLAB® and Simulink®, and simulations have been carried out. The proposed model exhibits good agreement with experimental results in steady-state and dynamic performance. (author)

  15. Verification of mechanistic-empirical design models for flexible pavements through accelerated pavement testing : technical summary.

    Science.gov (United States)

    2014-08-01

    Midwest States Accelerated Pavement Testing Pooled-Fund Program, financed by the : highway departments of Kansas, Iowa, and Missouri, has supported an accelerated : pavement testing (APT) project to validate several models incorporated in the NCHRP :...

  16. Verification of mechanistic-empirical design models for flexible pavements through accelerated pavement testing.

    Science.gov (United States)

    2014-08-01

    The Midwest States Accelerated Pavement Testing Pooled Fund Program, financed by the highway : departments of Kansas, Iowa, and Missouri, has supported an accelerated pavement testing (APT) project to : validate several models incorporated in the NCH...

  17. Mechanical Properties of Nanoporous Au: From Empirical Evidence to Phenomenological Modeling

    Directory of Open Access Journals (Sweden)

    Giorgio Pia

    2015-09-01

    Full Text Available The present work focuses on the development of a theoretical model aimed at relating the mechanical properties of nanoporous metals to the bending response of thick ligaments. The model describes the structure of nanoporous metal foams in terms of an idealized regular lattice of massive cubic nodes and thick ligaments with square cross-sections. Following a general introduction to the subject, model predictions are compared with the Young's modulus and yield strength of nanoporous Au foams determined experimentally and available in the literature. It is shown that the model provides a quantitative description of the elastic and plastic deformation behavior of nanoporous metals, reproducing to a satisfactory extent the experimental Young's modulus and yield strength values of nanoporous Au.

  18. Evaluation of empirical heat transfer models using TFG heat flux sensors

    International Nuclear Information System (INIS)

    De Cuyper, T.; Broekaert, S.; Chana, K.; De Paepe, M.; Verhelst, S.

    2017-01-01

    Thermodynamic engine cycle models are used to support the development of the internal combustion engine (ICE) in a cost- and time-effective manner. The sub-model that describes the in-cylinder heat transfer from the working gases to the combustion chamber walls plays an important role in the accuracy of these simulation tools, since the heat transfer affects the power output, engine efficiency, and emissions of the engine. The most common heat transfer models in engine research are those of Annand and Woschni, which provide an instantaneous, spatially averaged heat flux. In this research, prototype thin film gauge (TFG) heat flux sensors are used to capture the transient in-cylinder heat flux behavior within a production spark ignition (SI) engine, as they are small, robust, and able to capture the highly transient temperature swings. An inlet valve and two different zones of the cylinder head are instrumented with multiple TFG sensors. The heat flux traces are used to calculate the convection coefficient, which contains all of the information about the convective heat transfer phenomena inside the combustion chamber. The implementation of TFG sensors inside the combustion chamber and the signal processing technique are discussed. The heat transfer measurements are used to analyze the spatial variation in heat flux under motored and fired operation. Spatial variation in peak heat flux was observed even under motored operation; under fired operation the observed spatial variation is mainly driven by flame propagation. Next, the paper evaluates the models of Annand and Woschni. These models fail to predict the total heat loss even after calibration of the model coefficients using a reference motored operating condition. The effect of engine speed and inlet pressure is analyzed under motored operation after calibration of the models. The models are able to predict the trend in peak heat flux for varying engine speed and inlet pressure. Next, the accuracy of the
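    For reference, the two correlations evaluated above are commonly cited in SI forms such as the ones below (as given in engine textbooks, e.g. Heywood). The coefficients shown are textbook defaults, whereas the study calibrates them against TFG measurements, so this is an illustrative sketch rather than the paper's calibrated model.

```python
# Textbook forms of the Woschni and Annand convective heat transfer
# coefficients; the characteristic gas velocity w and the Annand constants
# are inputs that the paper calibrates.
def h_woschni(bore_m, p_kpa, t_gas_k, w_ms):
    """Woschni heat transfer coefficient [W/m^2/K]; p in kPa, T in K,
    w is the characteristic gas velocity in m/s."""
    return 3.26 * bore_m**-0.2 * p_kpa**0.8 * t_gas_k**-0.55 * w_ms**0.8

def h_annand(bore_m, k_gas, reynolds, a=0.49, b=0.7):
    """Convective part of Annand's correlation [W/m^2/K]; a is typically
    calibrated in the range ~0.35-0.8."""
    return a * (k_gas / bore_m) * reynolds**b

print(h_woschni(bore_m=0.08, p_kpa=4000.0, t_gas_k=1800.0, w_ms=15.0))
print(h_annand(bore_m=0.08, k_gas=0.09, reynolds=8.0e4))
```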

  19. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system that takes climate variability into account and derives its parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, we obtained higher correlations than those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably reduces the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but may also have the potential to improve future projections.

  20. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, a coupling that has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing's flow-weighted transit route network (TRN), and we find that Beijing's TRNs in both 2011 and 2015 exhibit scale-free properties. We therefore propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers' dynamical behaviors triggered by topological change. The model treats the evolution of the TRN as an iterative process: at each time step, a certain number of new routes are generated, driven by travel demand, which leads to dynamical evolution of the new routes' flow and triggers perturbations in nearby routes that in turn impact the next round of new routes. We present a theoretical analysis based on mean-field theory, as well as numerical simulations of this model. The results obtained agree well with our empirical analysis and indicate that the model can simulate TRN evolution with scale-free properties in the distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.
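    A minimal construction of the R-space described above is sketched below: each route becomes a node, and two routes are linked if they share at least one stop, with edge attributes that could later carry transfer flow. The toy routes and attribute names are assumptions for illustration.

```python
# Build a toy R-space transit route network: nodes are routes, edges link
# routes sharing at least one stop.
import itertools
import networkx as nx

routes = {                       # route id -> ordered list of stops (made up)
    "R1": ["a", "b", "c", "d"],
    "R2": ["c", "e", "f"],
    "R3": ["f", "g", "a"],
    "R4": ["h", "i"],
}

g = nx.Graph()
g.add_nodes_from(routes)
for r1, r2 in itertools.combinations(routes, 2):
    shared = set(routes[r1]) & set(routes[r2])
    if shared:
        g.add_edge(r1, r2, shared_stops=len(shared))

print(list(g.edges(data=True)))  # R4 stays isolated: it shares no stops
print(dict(g.degree()))
```

    In a flow-weighted version, the edge attribute would hold passenger transfer volume between the two routes, so that node strength (total incident flow) and degree can both be analyzed, as in the empirical study.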

  1. Empirical potential and elasticity theory modelling of interstitial dislocation loops in UO2 for cluster dynamics application

    International Nuclear Information System (INIS)

    Le-Prioux, Arno

    2017-01-01

    During irradiation in reactor, the microstructure of UO2 changes and deteriorates, causing modifications of its physical and mechanical properties. The kinetic models used to describe these changes, such as cluster dynamics (the CRESCENDO calculation code), consider the main microstructural elements, namely cavities and interstitial dislocation loops, but provide a rather rough description of the loop thermodynamics. To tackle this issue, this work has led to the development of a thermodynamic model of interstitial dislocation loops based on empirical potential calculations. The model considers two types of interstitial dislocation loops on two different size domains. Type 1: dislocation loops similar to Frank partials in F.C.C. materials, which are stable in the smaller size domain. Type 2: perfect dislocation loops of Burgers vector (a/2)(110), which are stable in the larger size domain. The analytical formula used to compute the interstitial dislocation loop formation energies is the one for circular loops, modified to take into account the effects of the dislocation core, which are significant at smaller sizes. The parameters have been determined by empirical potential calculations of the formation energies of prismatic pure edge dislocation loops. The effect of habit plane reorientation on the formation energies of perfect dislocation loops has been taken into account by a simple interpolation method. All the types of loops seen in TEM observations are thus accounted for by the model. (author) [fr
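    The abstract does not reproduce the formula itself; one common analytical form for the elastic self-energy of a circular prismatic loop of radius R, to which a size-dependent core term can be attached, is shown below. The parameterization actually fitted to the empirical-potential data in this work may differ.

```latex
% Elastic self-energy of a circular prismatic dislocation loop (one common
% textbook form), with shear modulus \mu, Poisson ratio \nu, Burgers vector
% magnitude b, core cutoff radius r_c, and core energy per unit length
% E_{core}; the logarithmic constant varies between derivations.
\begin{equation}
  E_{\mathrm{loop}}(R) \simeq
  \frac{\mu b^{2} R}{2(1-\nu)}
  \left[\ln\!\left(\frac{8R}{r_{c}}\right) - 2\right]
  + 2\pi R\, E_{\mathrm{core}}
\end{equation}
```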

  2. The empirical content of models with multiple equilibria in economies with social interactions

    OpenAIRE

    Alberto Bisin; Andrea Moro; Giorgio Topa

    2011-01-01

    We study a general class of models with social interactions that might display multiple equilibria. We propose an estimation procedure for these models and evaluate its efficiency and computational feasibility relative to different approaches taken to the curse of dimensionality implied by the multiplicity. Using data on smoking among teenagers, we implement the proposed estimation procedure to understand how group interactions affect health-related choices. We find that interaction effects a...

  3. Customer orientation on online newspaper business models with paid content strategies: An empirical study

    OpenAIRE

    Goyanes, Manuel; Sylvie, George

    2014-01-01

    This study examines the transformations that trigger business models with paid content strategies in news organizations, under the theoretical framework of market orientation. The results show three main factors: those related to competence, to the organizational culture, and to understanding the needs and wants of the audience. The findings also suggest that online newspapers' business models with paid content strategies are more like experiments or forays than definitive methods of monetization.

  4. An Empirical LTE Smartphone Power Model with a View to Energy Efficiency Evolution

    OpenAIRE

    Lauridsen, Mads; Noël, Laurent; Sørensen, Troels Bundgaard; Mogensen, Preben

    2014-01-01

    Smartphone users struggle with short battery life, and this affects their device satisfaction level and usage of the network. To evaluate how chipset manufacturers and mobile network operators can improve the battery life, we propose a Long Term Evolution (LTE) smartphone power model. The idea is to provide a model that makes it possible to evaluate the effect of different terminal and network settings to the overall user equipment energy consumption. It is primarily intended as an instrument...

  5. An Empirical Based Proposal for Mass Customization Business Model in Footwear Industry

    OpenAIRE

    Pourabdollahian, Golboo; Corti, Donatella; Galbusera, Chiara; Silva, Julio

    2012-01-01

    Part 2: Design, Manufacturing and Production Management; International audience; This research aims at developing a business model for companies in the footwear industry interested in implementing mass customization, with the goal of offering the market products that perfectly match customers' needs. Studies on mass customization have mostly focused on product development and production system aspects; this study extends the business modeling to include Supply Chain aspects...

  6. Gatekeeper Training for Suicide Prevention: A Theoretical Model and Review of the Empirical Literature

    Science.gov (United States)

    2015-01-01

    effectiveness of such trainings or of intervention behaviors. The model is consistent with Bandura's social cognitive theory, which posits that interactions between environmental and personal factors influence the learning of new behavior (Bandura, 2001). The model is depicted in Figure 1.

  7. A new empirical model to estimate hourly diffuse photosynthetic photon flux density

    Science.gov (United States)

    Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.

    2018-05-01

    Knowledge of the photosynthetic photon flux density (Qp) is critical in applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true of its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake; diffuse photosynthetic photon flux density is therefore a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and then partition it into its components. We used measurements of global solar radiation (Rs) in urban Granada (southern Spain) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption, and sky conditions. The model's performance has been validated with experimental measurements from sites with varied climatic conditions. The model provides acceptable results, with the mean bias error varying between −0.3% and −8.8% and the root mean square error between 9.6% and 20.4%. Because direct measurements of this flux are very scarce, modelled estimates are needed, particularly for the diffuse component. We propose a new parameterization that estimates this component using only measured global solar irradiance, which facilitates the construction of long-term PAR data series in regions where continuous PAR measurements are not yet performed.

  8. High resolution modelling of wind fields for optimization of empirical storm flood predictions

    Science.gov (United States)

    Brecht, B.; Frank, H.

    2014-05-01

    High-resolution wind fields are necessary to predict the occurrence of storm flood events and their magnitude. Deutscher Wetterdienst (DWD) created a catalogue of detailed wind fields for 39 historical storms at the German North Sea coast from the years 1962 to 2011. The catalogue is used by the Niedersächsisches Landesamt für Wasser-, Küsten- und Naturschutz (NLWKN) coastal research center to improve their flood alert service. The computation of wind fields and other meteorological parameters is based on the DWD model chain, going from the global model GME via the limited-area model COSMO with 7 km mesh size down to a COSMO model with 2.2 km mesh size. To obtain an improved analysis, the COSMO runs are nudged against observations for the historical storms. The global model GME is initialised from the ERA reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF). As expected, for most storms the nudged runs agree better with observations than the normal forecast runs. We also found during the verification process that different land use data sets could influence the results considerably.

  9. Empirically based models of oceanographic and biological influences on Pacific Herring recruitment in Prince William Sound

    Science.gov (United States)

    Sewall, Fletcher; Norcross, Brenda; Mueter, Franz; Heintz, Ron

    2018-01-01

    Abundances of small pelagic fish can change dramatically over time and are difficult to forecast, partially due to variable numbers of fish that annually mature and recruit to the spawning population. Recruitment strength of age-3 Pacific Herring (Clupea pallasii) in Prince William Sound, Alaska, is estimated in an age-structured model framework as a function of spawning stock biomass via a Ricker stock-recruitment model, and forecasted using the 10-year median recruitment estimates. However, stock size has little influence on subsequent numbers of recruits. This study evaluated the usefulness of herring recruitment models that incorporate oceanographic and biological variables. Results indicated herring recruitment estimates were significantly improved by modifying the standard Ricker model to include an index of young-of-the-year (YOY) Walleye Pollock (Gadus chalcogrammus) abundance. The positive relationship between herring recruits-per-spawner and YOY pollock abundance has persisted through three decades, including the herring stock crash of the early 1990s. Including sea surface temperature, primary productivity, and additional predator or competitor abundances singly or in combination did not improve model performance. We suggest that synchrony of juvenile herring and pollock survival may be caused by increased abundance of their zooplankton prey, or high juvenile pollock abundance may promote prey switching and satiation of predators. Regardless of the mechanism, the relationship has practical application to herring recruitment forecasting, and serves as an example of incorporating ecosystem components into a stock assessment model.
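    As a sketch of how an ecosystem covariate such as the YOY pollock index can enter the stock-recruitment relation, one standard log-linear extension of the Ricker model is shown below; the exact specification used in the paper is not given in the abstract and this form is an assumption.

```latex
% Ricker stock-recruitment with an environmental covariate: R_{t+3} are
% age-3 recruits, S_t spawning stock biomass, P_t the YOY pollock index,
% and gamma the covariate effect; epsilon_t is lognormal process error.
\begin{equation}
  R_{t+3} = \alpha S_{t}\,
  e^{-\beta S_{t} + \gamma P_{t} + \varepsilon_{t}},
  \qquad \varepsilon_{t} \sim N(0, \sigma^{2})
\end{equation}
```

    With this form, a positive estimate of gamma expresses the persistent positive relationship between recruits-per-spawner and YOY pollock abundance reported above.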

  10. Sustainable fisheries in shallow lakes: an independent empirical test of the Chinese mitten crab yield model

    Science.gov (United States)

    Wang, Haijun; Liang, Xiaomin; Wang, Hongzhu

    2017-07-01

    Next to excessive nutrient loading, intensive aquaculture is one of the major anthropogenic impacts threatening lake ecosystems. In China, particularly in the shallow lakes of the mid-lower Changjiang (Yangtze) River, continuous overstocking of the Chinese mitten crab (Eriocheir sinensis) could deteriorate water quality and exhaust natural resources. A series of crab yield models and a general optimum-stocking rate model have been established, which seek to benefit both crab culture and the environment. In this research, independent investigations were carried out to evaluate the crab yield models and modify the optimum-stocking model. Low percentage errors (average 47%, median 36%) between observed and calculated crab yields were obtained. Specific values were defined for adult crab body mass (135 g/ind.) and recapture rate (18% and 30% in lakes with submerged macrophyte biomass above and below 1 000 g/m², respectively) to modify the optimum-stocking model. Analysis based on the modified optimum-stocking model indicated that the actual stocking rates in most lakes were much higher than the calculated optimum-stocking rates. This implies that, for most lakes, the current stocking rates should be greatly reduced to maintain healthy lake ecosystems.

  11. Global characteristics of geomagnetic excursions as seen in global empirical models and a numerical geodynamo simulation

    Science.gov (United States)

    Korte, M. C.; Wardinski, I.; Brown, M. C.

    2016-12-01

    Paleomagnetic results from sediments and lava flows provide observational evidence of numerous geomagnetic excursions throughout Earth's history. Two new spherical harmonic geomagnetic field models covering 50-30 ka, including the Laschamp (~41 ka) and Mono Lake (~32-35 ka) excursions, allow us to characterize the global behaviour of these events, both at Earth's surface and at the core-mantle boundary. We investigate the evolution of dipole and large-scale non-dipole power throughout the duration of the models and the morphology of the large-scale radial field at the core-mantle boundary. The models suggest clear differences in both the decrease in axial dipole strength and the dipole tilt between the two excursions, and unlike the previously published model by Leonhardt et al. (2009), they suggest some increase of non-dipole power during the early and late stages of the Laschamp excursion. Global characteristics from the models can be directly compared with results from numerical simulations. We do so for several excursions generated by a numerical simulation driven by purely compositional convection, which appears Earth-like in terms of excursion and reversal occurrence frequency. Excursions from this simulation show differing characteristics, including differences in spectral power evolution. Some cases show similarities to the Laschamp and Mono Lake excursions in the spherical harmonic models. In particular, they all indicate that excursions are mainly governed by the axial dipole term, with the equatorial dipole terms playing a minor role.

  12. The use of spatial empirical models to estimate soil erosion in arid ecosystems.

    Science.gov (United States)

    Abdullah, Meshal; Feagin, Rusty; Musawi, Layla

    2017-02-01

    The central objective of this project was to utilize geographic information systems and remote sensing to compare soil erosion models, including the Modified Pacific Southwest Inter-Agency Committee (MPSIAC) model, the Erosion Potential Method (EPM), and the Revised Universal Soil Loss Equation (RUSLE), and to determine their applicability to arid regions such as Kuwait. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the demilitarized zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. Results showed that the MPSIAC and EPM models were similar in the spatial distribution of erosion, though the MPSIAC gave a more realistic spatial distribution and finer-level detail, whereas the RUSLE gave unrealistic results. We then predicted soil loss for coastal versus desert areas and for fenced versus unfenced sites under each model. In the MPSIAC and EPM models, soil loss differed between fenced and unfenced sites in the desert areas, being higher at the unfenced sites due to their low vegetation cover. The overall results imply that vegetation cover plays an important role in reducing soil erosion and that fencing is much more important in desert ecosystems to protect against human activities such as overgrazing. We conclude that the MPSIAC model is best for predicting soil erosion in arid regions such as Kuwait. We also recommend the integration of field-based experiments with lab-based spatial analysis and modeling in future research.
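    Of the three models compared above, the RUSLE has the simplest structure, a product of five factors; the helper below shows that standard factor product with made-up values typical of a sparsely vegetated arid site.

```python
# Revised Universal Soil Loss Equation: A = R * K * LS * C * P, with A the
# annual soil loss (t/ha/yr) and the factors in their customary units.
def rusle_soil_loss(r, k, ls, c, p):
    """r: rainfall erosivity, k: soil erodibility, ls: slope length and
    steepness, c: cover management, p: support practice."""
    return r * k * ls * c * p

# Illustrative arid-site values (made up): low erosivity, sparse cover,
# no support practices.
print(rusle_soil_loss(r=40.0, k=0.3, ls=1.2, c=0.45, p=1.0), "t/ha/yr")
```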

  13. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable and is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability), offering a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found not only that the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also that the effects of these characteristics on the overall expected error differ. Most notably, under SQ error the bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pairwise, spatially explicit comparison of each error component showed that SQ error overstates all error components relative to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
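    For reference, the well-known squared-error decomposition referred to above can be written as follows, for a model estimate of a true value observed with additive noise of variance sigma squared; the ABS-error decomposition derived in the study has no such clean additive form.

```latex
% Bias-variance-noise decomposition of expected squared error, where
% \hat{f}(x) is the model estimate, f(x) the true value, and \sigma^2
% the observation noise variance.
\begin{equation}
  \mathbb{E}\big[(y - \hat{f}(x))^{2}\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^{2}\big]}_{\text{variance}}
  + \underbrace{\sigma^{2}}_{\text{noise}}
\end{equation}
```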

  14. Empirically Based Composite Fracture Prediction Model From the Global Longitudinal Study of Osteoporosis in Postmenopausal Women (GLOW)

    Science.gov (United States)

    Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.

    2014-01-01

    Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345

  15. Empirical models of monthly and annual surface albedo in managed boreal forests of Norway

    Science.gov (United States)

    Bright, Ryan M.; Astrup, Rasmus; Strømman, Anders H.

    2013-04-01

    As forest management activities play an increasingly important role in the climate change mitigation strategies of Nordic regions such as Norway, Sweden, and Finland, a more comprehensive understanding of the types and magnitudes of biogeophysical climate effects and their various tradeoffs with the global carbon cycle becomes essential to avoid implementing sub-optimal policy. Forest harvest in these regions reduces the albedo "masking effect" and impacts Earth's radiation budget in ways that oppose the concomitant carbon cycle perturbations; thus, policies based solely on biogeochemical considerations in these regions risk being counterproductive. There is therefore a need to better understand how human disturbances (i.e., forest management activities) affect important biophysical factors like surface albedo. An 11-year remotely sensed surface albedo dataset, coupled with stand-level forest management data for a variety of stands in Norway's most productive logging region, is used to develop regression models describing temporal changes in monthly and annual forest albedo following clear-cut harvest events. Datasets are grouped by dominant tree species and site index (productivity), and two alternative multiple regression models are developed and tested following a potential-plus-modifier approach. This resulted in an annual albedo model with statistically significant parameters that explains a large proportion of the observed variation and requires as few as two predictor variables: i) average stand age, a canopy modifier predictor of albedo, and ii) stand elevation, a local climate predictor of a forest's potential albedo. The same model structure is used to derive monthly albedo models, with models for winter months generally found superior to summer models, and conifer models generally outperforming deciduous ones. We demonstrate how these statistical models can be applied to routine forest inventory data to predict the albedo

  16. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including the coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  17. Comparison of the SASSYS/SAS4A radial core expansion reactivity feedback model and the empirical correlation for FFTF

    International Nuclear Information System (INIS)

    Wigeland, R.A.

    1987-01-01

    The present emphasis on inherent safety in LMR designs has resulted in a need to represent the various reactivity feedback mechanisms as accurately as possible. The dominant negative reactivity feedback has been found to result from radial expansion of the core for most postulated ATWS events. For this reason, a more detailed model for calculating the reactivity feedback from radial core expansion has recently been developed for use with the SASSYS/SAS4A Code System. The purpose of this summary is to present an extension of the model that makes it more suitable for handling a core restraint design such as that used in FFTF, and to compare the SASSYS/SAS4A results using this model with the empirical correlation presently used to account for radial core expansion reactivity feedback in FFTF.

  18. Empirical models for end-use properties prediction of LDPE: application in the flexible plastic packaging industry

    Directory of Open Access Journals (Sweden)

    Maria Carolina Burgos Costa

    2008-03-01

    Full Text Available The objective of this work is to develop empirical models that predict end-use properties of low-density polyethylene (LDPE) resins as functions of two intrinsic properties easily measured in the polymer industry. The most important properties for application in the flexible plastic packaging industry were evaluated experimentally for seven commercial polymer grades. Statistical correlation analysis was performed for all variables and used as the basis for the proper choice of inputs to each model output. The intrinsic properties selected for resin characterization are the fluidity index (FI), which is essentially an indirect measurement of viscosity and weight-average molecular weight (MW), and the density. In general, the models developed are able to reproduce and predict the experimental data within experimental accuracy and show that a significant number of end-use properties improve as the MW and density increase. Optical properties are mainly determined by the polymer morphology.

  19. Empirical model with independent variable moments of inertia for triaxial nuclei applied to 76Ge and 192Os

    Science.gov (United States)

    Sugawara, M.

    2018-05-01

    An empirical model with independent variable moments of inertia for triaxial nuclei is devised and applied to 76Ge and 192Os. Three intrinsic moments of inertia, J1, J2, and J3, are varied independently as a particular function of spin I within a revised version of the triaxial rotor model so as to reproduce the energy levels of the ground-state, γ, and (in the case of 192Os) Kπ = 4+ bands. The staggering in the γ band is well reproduced in both phase and amplitude. Effective γ values are extracted as a function of spin I from the ratios of the three moments of inertia. The eigenfunctions and the effective γ values are subsequently used to calculate the ratios of B(E2) values associated with these bands. Good agreement between the model calculation and the experimental data is obtained for both 76Ge and 192Os.

  20. Modelling short and long-term risks in power markets: Empirical evidence from Nord Pool

    International Nuclear Information System (INIS)

    Nomikos, Nikos K.; Soldatos, Orestes A.

    2010-01-01

    In this paper we propose a three-factor spike model that accounts for different speeds of mean reversion between normal and spiky shocks in the Scandinavian power market. In this model, both the short- and long-run factors are unobservable and are hence estimated as latent variables using the Kalman filter. The proposed model has several advantages. First, it captures in a parsimonious way the most important risks that practitioners face in the market, such as spike risk, short-term risk, and long-term risk. Second, it explains the seasonal risk premium observed in the market and improves the fit between theoretical and observed forward prices, particularly for long-dated forward contracts. Finally, closed-form solutions for forward contracts derived from the model are consistent with the fact that the correlation between contracts of different maturities is imperfect. The resulting model is very promising, providing a very useful policy analysis and financial engineering tool for market participants for risk management and derivative pricing, particularly for long-dated contracts.
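    A toy discretization of the kind of model described above is sketched below: a slowly mean-reverting long-term factor, a faster short-term factor, and a spike factor with very fast reversion plus Poisson-arrival jumps. All parameter values are illustrative assumptions, not the estimated Nord Pool values.

```python
# Euler simulation of a three-factor spot model: short-term OU (x),
# long-term OU (y), and a fast-reverting spike factor (s) with jumps.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 365, 1 / 365
x = np.zeros(n); y = np.zeros(n); s = np.zeros(n)

for t in range(1, n):
    x[t] = x[t-1] - 15.0 * x[t-1] * dt + 0.8 * np.sqrt(dt) * rng.normal()
    y[t] = y[t-1] - 0.5 * y[t-1] * dt + 0.2 * np.sqrt(dt) * rng.normal()
    jump = rng.exponential(0.6) if rng.random() < 4.0 * dt else 0.0
    s[t] = s[t-1] - 120.0 * s[t-1] * dt + jump     # spikes decay quickly

log_price = 3.6 + x + y + s                        # log spot around e^3.6
print("max spike contribution to log price:", round(float(s.max()), 3))
```

    The separation of reversion speeds (fast for x and s, slow for y) is what lets a model of this type reproduce both short-lived spikes and the imperfect correlation between short- and long-dated forwards.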

  1. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    Directory of Open Access Journals (Sweden)

    Jerker eRönnberg

    2013-07-01

    Full Text Available Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms, both of which depend on WMC, albeit in different ways. A revised ELU model is proposed based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.

  2. An empirical model to describe performance degradation for warranty abuse detection in portable electronics

    International Nuclear Information System (INIS)

    Oh, Hyunseok; Choi, Seunghyuk; Kim, Keunsu; Youn, Byeng D.; Pecht, Michael

    2015-01-01

    Portable electronics makers have introduced liquid damage indicators (LDIs) into their products to detect warranty abuse caused by water damage. However, under certain conditions these indicators can be inconsistent in detecting liquid damage. This study is motivated by the fact that the reliability of LDIs in portable electronics is in question. In this paper, first, a life test scheme is devised for LDIs in conjunction with a robust color classification rule. Second, a degradation model is proposed that considers the two physical mechanisms at work in LDIs: (1) phase change from vapor to water and (2) water transport in the porous paper. Finally, the degradation model is validated with additional tests using actual smartphone sets subjected to thermal cycling between −15 °C and 25 °C at a relative humidity of 95%. By employing the innovative life testing scheme and the novel performance degradation model, the performance of LDIs for a particular application can be assessed quickly and accurately. - Highlights: • Devise an efficient life testing scheme for a warranty abuse detector in portable electronics. • Develop a performance degradation model for the warranty abuse detector used in portable electronics. • Validate the performance degradation model with life tests of actual smartphone sets. • Help portable electronics manufacturers make warranty service decisions.

  3. Carbon emissions, logistics volume and GDP in China: empirical analysis based on panel data model.

    Science.gov (United States)

    Guo, Xiaopeng; Ren, Dongfang; Shi, Jiaxing

    2016-12-01

    This paper studies the relationship among carbon emissions, GDP, and logistics by using a panel data model and a combination of statistical and econometric theory. The model is based on the historical data of 10 typical provinces and cities in China during 2005-2014. The model adds logistics as a variable on the basis of previous studies, proxied by the freight turnover of the provinces. Carbon emissions are calculated from the annual consumption of coal, oil, and natural gas; GDP is the gross domestic product. The results show that logistics volume and GDP both contribute to carbon emissions and that the long-term relationships differ across the provinces and cities, mainly reflecting differences in development mode, economic structure, and level of logistics development. After testing the panel model specification, the paper establishes a variable-coefficient panel model, from which the influence of GDP and logistics on carbon emissions is obtained. The paper concludes with the main findings and provides recommendations toward rational planning of urban sustainable development and environmental protection for China.
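
    A minimal sketch of a variable-coefficient panel regression in this spirit, using interaction terms so each province gets its own GDP and freight slopes; the file name and column names are assumptions for illustration.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format panel: one row per province-year,
      # with columns province, year, co2, gdp, freight.
      df = pd.read_csv("china_panel.csv")

      # Interacting province with the regressors lets the GDP and freight
      # coefficients differ across provinces (a variable-coefficient panel).
      model = smf.ols("co2 ~ C(province) + C(province):gdp + C(province):freight",
                      data=df)
      print(model.fit().summary())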

  4. Dynamic Model of Islamic Hybrid Securities: Empirical Evidence From Malaysia Islamic Capital Market

    Directory of Open Access Journals (Sweden)

    Jaafar Pyeman

    2016-12-01

    Full Text Available Capital structure selection is fundamentally important in corporate financial management because it influences both return and risk to stakeholders. Despite Malaysia's position as one of the major players in the Islamic financial market, few studies have been conducted on the capital structure of shariah-compliant firms, especially in relation to hybrid securities. The objective of this study is to determine the hybrid securities issuance model among shariah-compliant firms in Malaysia, thereby expanding the literature with a comprehensive analysis of hybrid capital structure and a dynamic Islamic hybrid securities model for shariah-compliant firms. We use panel data on 50 companies that issued hybrid securities during 2004-2012. The findings are based on dynamic GMM estimation of the determinants of hybrid securities issuance. In our model, risk and growth are the most important determinants of issuing convertible bonds and loan stock: firms with high risk but good growth prospects choose the hybrid security of convertible bonds. The model also supports the backdoor equity listing hypothesis of Stein (1992), whereby hybrid securities enable profitable firms to undertake positive-NPV projects by issuing convertible bonds, which offer a lower coupon rate than straight debt.
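
    For reference, dynamic panel GMM estimations of this kind are typically of the Arellano-Bond form; a generic version, in our notation rather than the paper's exact specification, is

      \[
      \mathrm{Lev}_{it} = \alpha\,\mathrm{Lev}_{i,t-1} + \beta' X_{it} + \eta_i + \varepsilon_{it},
      \qquad
      \mathbb{E}\!\left[\mathrm{Lev}_{i,t-s}\,\Delta\varepsilon_{it}\right] = 0 \quad (s \ge 2),
      \]

    where first-differencing removes the firm effect \eta_i and suitably lagged levels serve as instruments for the differenced equation.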

  5. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    Science.gov (United States)

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-10-01

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model builds on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and the parameters that define its behavior and attributes are drawn from survey data. Community-level parameters (social groups, relationships, and communication variables, also from survey data) are encoded to simulate the social processes that influence community opinion. The model demonstrates its capability to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.
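
    A minimal sketch of this kind of agent-based opinion dynamics; the random mixing, update rule, and parameters below are simplifying assumptions, not the authors' survey-calibrated "risk publics" model.

      import numpy as np

      rng = np.random.default_rng(1)
      n_agents, n_steps = 500, 100
      opinion = rng.uniform(-1, 1, n_agents)   # -1 = reject reuse, +1 = adopt
      talk_prob = 0.3                          # chance an agent discusses per step
      amplify = 0.1                            # pull toward a partner's opinion

      for _ in range(n_steps):
          talkers = rng.random(n_agents) < talk_prob
          partners = rng.integers(0, n_agents, n_agents)
          # Each talking agent moves part of the way toward its partner's view,
          # a crude stand-in for amplification/attenuation through discussion.
          opinion[talkers] += amplify * (opinion[partners[talkers]] - opinion[talkers])

      print("fraction leaning toward adoption:", np.mean(opinion > 0))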

  6. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    Science.gov (United States)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a ground motion duration model, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS of individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of the median predicted response spectra with other regional models indicates that the approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ) makes it a potentially viable alternative to classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled RESORCE-2012 database covering Europe, the Middle East, and the Mediterranean region.
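
    The RVT step that converts a FAS and a duration into response spectral ordinates rests on standard spectral moments; in generic textbook form (not necessarily the paper's exact implementation):

      \[
      m_k = 2\int_0^{\infty} (2\pi f)^k \, |A(f)|^2 \, \mathrm{d}f,
      \qquad
      y_{\mathrm{rms}} = \sqrt{m_0 / D_{\mathrm{rms}}},
      \qquad
      S_a \approx \psi \, y_{\mathrm{rms}},
      \]

    where A(f) is the FAS of the single-degree-of-freedom oscillator response, D_rms an RVT-compatible duration, and \psi a peak factor (e.g., of the Cartwright-Longuet-Higgins type) relating the rms motion to its expected peak.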

  7. Empirical test of Capital Asset Pricing Model on Selected Banking Shares from Borsa Istanbul

    Directory of Open Access Journals (Sweden)

    Fuzuli Aliyev

    2018-03-01

    Full Text Available In this paper we tested the Capital Asset Pricing Model (CAPM) on selected banking stocks of Borsa Istanbul, attempting to explain how financial assets are priced according to their risks in the case of the BIST-100 index. The CAPM is an important model in portfolio management theory, used by economic agents for the selection of financial assets. We used monthly return data for 12 randomly selected banking stocks over the 2001-2010 period. To test the validity of the CAPM, we first derived the regression equation for the relationship between the risk-free interest rate and the risk premium using January 2001-December 2009 data, and then estimated January-December 2010 returns with that equation. Comparing the forecasted returns with the actual returns, we concluded that the CAPM is valid for the portfolio consisting of the 12 banks traded on the ISE, i.e., the model could predict the overall outcome of the portfolio of selected banking shares.
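
    The validity test described amounts to estimating the excess-return regression below and checking that the intercept is near zero; the data file and column names are placeholders.

      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical monthly data with columns r_stock, r_market, r_free.
      df = pd.read_csv("bank_returns.csv")
      excess_stock = df["r_stock"] - df["r_free"]
      excess_market = df["r_market"] - df["r_free"]

      # CAPM: R_i - R_f = alpha + beta * (R_m - R_f) + eps
      fit = sm.OLS(excess_stock, sm.add_constant(excess_market)).fit()
      print(fit.params)   # const ~ alpha (near 0 if CAPM holds), then beta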

  8. Psychological first aid: a consensus-derived, empirically supported, competency-based training model.

    Science.gov (United States)

    McCabe, O Lee; Everly, George S; Brown, Lisa M; Wendelboe, Aaron M; Abd Hamid, Nor Hashidah; Tallchief, Vicki L; Links, Jonathan M

    2014-04-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities.

  9. The Development and Empirical Validation of an E-based Supply Chain Strategy Optimization Model

    DEFF Research Database (Denmark)

    Kotzab, Herbert; Skjoldager, Niels; Vinum, Thorkil

    2003-01-01

    Examines the formulation of supply chain strategies in complex environments. Argues that current state-of-the-art e-business and supply chain management, combined into the concept of e-SCM, as well as the use of transaction cost theory, network theory and resource-based theory, altogether can be used to form a model for analyzing supply chains with the purpose of reducing the uncertainty of formulating supply chain strategies. Presents the e-supply chain strategy optimization model (e-SOM) as a way to analyze supply chains in a structured manner as regards strategic preferences for supply chain design, relations and resources in the chains, with the ultimate purpose of enabling the formulation of optimal, executable strategies for specific supply chains. Uses research results for a specific supply chain to validate the usefulness of the model.

  10. Predictive time-series modeling using artificial neural networks for Linac beam symmetry: an empirical study.

    Science.gov (United States)

    Li, Qiongge; Chan, Maria F

    2017-01-01

    Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural network (ANN) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5 years of daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling offers advantages over ARMA techniques for accurate and effective application in the dosimetry and QA field. © 2016 New York Academy of Sciences.
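
    A sketch of the two competing approaches on a generic daily QA series; the data file, window length, and model orders are assumptions, not the study's settings.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from statsmodels.tsa.arima.model import ARIMA

      y = np.loadtxt("symmetry.txt")       # hypothetical daily beam-symmetry values
      train, test = y[:-30], y[-30:]

      # ARMA(2,1) baseline (order chosen for illustration only).
      arma_pred = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=30)

      # ANN one-step predictor on a sliding window of the previous 7 days.
      w = 7
      Xtr = np.array([train[i:i+w] for i in range(len(train) - w)])
      ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(Xtr, train[w:])
      Xte = np.array([y[i:i+w] for i in range(len(y) - 30 - w, len(y) - w)])
      ann_pred = ann.predict(Xte)

      print("ARMA RMSE:", np.sqrt(np.mean((arma_pred - test) ** 2)))
      print("ANN  RMSE:", np.sqrt(np.mean((ann_pred - test) ** 2)))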

  11. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Directory of Open Access Journals (Sweden)

    Andréa Nunes Carvalho

    2017-09-01

    Full Text Available The academic literature presents a research-practice gap in the application of decision support tools to tactical planning problems in real-world organisations. This paper addresses this gap and extends previous action research on an optimisation model applied to tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein offer new insights for better understanding the practical results that can be achieved with the proposed model. The topics presented include the modelling of objectives, the representation of the production process, and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire ideas for academics and practitioners developing tools for capacity planning problems in similar contexts.
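
    A toy instance of a tactical capacity planning optimisation in the same family, with invented products, capacities, and costs; a real engineer-to-order model would be far richer.

      from scipy.optimize import linprog

      # Decision vector x = [units_A, units_B, overtime_hours].
      # Minimize production cost plus overtime cost.
      cost = [4.0, 6.0, 9.0]
      # Resource use: 2 h per unit A, 3 h per unit B; overtime adds capacity.
      A_ub = [[2.0, 3.0, -1.0]]              # 2A + 3B - OT <= regular hours
      b_ub = [160.0]
      bounds = [(50, None), (30, None), (0, 40)]   # demand floors, overtime cap

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print(res.x, res.fun)                  # plan and total cost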

  12. The Bohm-Penrose-Hameroff model for consciousness and free will: theoretical foundations and empirical evidences

    Directory of Open Access Journals (Sweden)

    Manuel Bejar Gallego

    2013-07-01

    Full Text Available The Bohm-Penrose-Hameroff (BPH) model offers a heuristic explanation of consciousness from the complementary works of Bohm and Penrose-Hameroff. Physically, in the microscopic regime, the quantum neurology of Penrose and the quantum potential of Bohm play a unified role in the BPH model. Both Bohm and Penrose seek an answer to the emergence of the classical regime from the quantum background of reality. At the biological level, Bohm's macroneurons and Penrose's microtubules work together as crucial biophysical elements for understanding the phenomenon of consciousness. The mind as an unconscious neural system that controls the body in its environment, the arising of the conscious subject with self-perception within the whole of reality, and the subjective sensation of free will in a law-ruled world are some traditional philosophical problems that could be partially illuminated by the new biophysics of the BPH model.

  13. Empirical models for predicting wind potential for wind energy applications in rural locations of Nigeria

    Energy Technology Data Exchange (ETDEWEB)

    Odo, F.C. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria); Department of Physics and Astronomy, University of Nigeria, Nsukka (Nigeria); Akubue, G.U.; Offiah, S.U.; Ugwuoke, P.E. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria)

    2013-07-01

    In this paper, we use the correlation between average wind speed and ambient temperature to develop models for predicting wind potentials for two Nigerian locations. Assuming that the troposphere is a typical heterogeneous mixture of ideal gases, we find that for the studied locations wind speed clearly correlates with ambient temperature as a simple polynomial of 3rd degree. The coefficient of determination and root-mean-square error of the models are 0.81 and 0.0024 for Enugu (6.40°N, 7.50°E) and 0.56 and 0.0041 for Owerri (5.50°N, 7.00°E), respectively. These results suggest that the temperature-based model can be used, with acceptable accuracy, to predict the wind potentials needed for preliminary design assessment of wind energy conversion devices at these locations and others with similar meteorological conditions.
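
    The fit itself is an ordinary cubic regression of wind speed on temperature and can be reproduced generically as below; the input file and units are assumptions.

      import numpy as np

      # Hypothetical columns: ambient temperature, mean wind speed.
      T, v = np.loadtxt("enugu_wind.txt", unpack=True)

      coeffs = np.polyfit(T, v, deg=3)       # 3rd-degree polynomial, as in the paper
      v_hat = np.polyval(coeffs, T)

      r2 = 1 - np.sum((v - v_hat)**2) / np.sum((v - v.mean())**2)
      rmse = np.sqrt(np.mean((v - v_hat)**2))
      print(f"R^2 = {r2:.2f}, RMSE = {rmse:.4f}")   # paper: 0.81 and 0.0024 for Enugu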

  14. A Structural Model of Business Performance: An Empirical Study on Tobacco Farmers

    Directory of Open Access Journals (Sweden)

    Sony Heru Priyanto

    2006-01-01

    The results of the analysis indicate that personal aspects, together with the physical, economic, and institutional environments, affect farmers' entrepreneurship. Personal aspects turn out to be the dominant factor determining entrepreneurship and farm performance. This study also shows that farmers' entrepreneurship is affected by their management capacity, which in turn affects farm performance. While there is no doubt about the adequacy of the model for estimating farm performance, this finding invites further investigation to validate it in other fields and scales of business, such as small and medium enterprises and other companies. Furthermore, to evaluate the goodness of fit of the model in various contexts, further research using this model should be conducted in both cross-cultural and cross-national contexts.

  15. Sea ice thermohaline dynamics and biogeochemistry in the Arctic Ocean: Empirical and model results

    Science.gov (United States)

    Duarte, Pedro; Meyer, Amelie; Olsen, Lasse M.; Kauko, Hanna M.; Assmy, Philipp; Rösel, Anja; Itkin, Polona; Hudson, Stephen R.; Granskog, Mats A.; Gerland, Sebastian; Sundfjord, Arild; Steen, Harald; Hop, Haakon; Cohen, Lana; Peterson, Algot K.; Jeffery, Nicole; Elliott, Scott M.; Hunke, Elizabeth C.; Turner, Adrian K.

    2017-07-01

    Large changes in the sea ice regime of the Arctic Ocean have occurred over the last decades justifying the development of models to forecast sea ice physics and biogeochemistry. The main goal of this study is to evaluate the performance of the Los Alamos Sea Ice Model (CICE) to simulate physical and biogeochemical properties at time scales of a few weeks and to use the model to analyze ice algal bloom dynamics in different types of ice. Ocean and atmospheric forcing data and observations of the evolution of the sea ice properties collected from 18 April to 4 June 2015, during the Norwegian young sea ICE expedition, were used to test the CICE model. Our results show the following: (i) model performance is reasonable for sea ice thickness and bulk salinity; good for vertically resolved temperature, vertically averaged Chl a concentrations, and standing stocks; and poor for vertically resolved Chl a concentrations. (ii) Improving current knowledge about nutrient exchanges, ice algal recruitment, and motion is critical to improve sea ice biogeochemical modeling. (iii) Ice algae may bloom despite some degree of basal melting. (iv) Ice algal motility driven by gradients in limiting factors is a plausible mechanism to explain their vertical distribution. (v) Different ice algal bloom and net primary production (NPP) patterns were identified in the ice types studied, suggesting that ice algal maximal growth rates will increase, while sea ice vertically integrated NPP and biomass will decrease as a result of the predictable increase in the area covered by refrozen leads in the Arctic Ocean.

  16. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where substantial human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (the empirical approach) is based on statistical modelling of the empirical error of perfect forecasts, using streamflow sub-samples stratified by quantile class and lead time. The second method (the dynamical approach) is based on streamflow sub-samples stratified by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, yielding a clear improvement in the reliability, skill, and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
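
    A schematic of the empirical dressing step as described: archive the errors of "perfect" reruns, stratify them by quantile class and lead time, and add the matching error sample to each new deterministic forecast. The binning, data layout, and names below are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      q_edges = np.array([5.0, 20.0, 50.0])   # class boundaries in m3/s (illustrative)
      # err_bank: (lead_time, quantile_class) -> archived (obs - sim) errors,
      # here filled with synthetic values; in practice it comes from reruns
      # of the hydrological model driven by observed meteorology.
      err_bank = {(l, c): rng.normal(0.0, 1.0 + c, 200)
                  for l in range(1, 8) for c in range(4)}

      def dress_forecast(q_fcst, lead):
          """Turn one deterministic forecast into an ensemble by adding
          archived errors from the matching quantile class and lead time."""
          cls = int(np.searchsorted(q_edges, q_fcst))
          return q_fcst + err_bank[(lead, cls)]

      ensemble = dress_forecast(q_fcst=12.0, lead=3)   # 200-member dressed ensemble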

  17. An empirical exploration of the world oil price under the target zone model

    International Nuclear Information System (INIS)

    Tang, Linghui; Hammoudeh, Shawkat

    2002-01-01

    This paper investigates the behavior of the world oil price based on the first-generation target zone model. Using anecdotal data during the period of 1988-1999, we found that OPEC has tried to maintain a weak target zone regime for the oil price. Our econometric tests suggest that the movement of the oil price is not only manipulated by actual and substantial interventions by OPEC but also tempered by market participants' expectations of interventions. As a consequence, the non-linear model based on the target zone theory has very good forecasting ability when the oil price approaches the upper or lower limit of the band

  18. The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Fernández-Villaverde, Jesús; Rubio-Ramírez, Juan F.

    and impulse response functions. Thus, our analysis introduces GMM estimation for DSGE models approximated up to third order and provides the foundation for indirect inference and SMM when simulation is required. We illustrate the usefulness of our approach by estimating a New Keynesian model with habits and Epstein-Zin preferences by GMM when using first and second unconditional moments of macroeconomic and financial data, and by SMM when using additional third and fourth unconditional moments and non-Gaussian innovations.
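
    For reference, the pruned state-space recursion at second order, which this line of work extends to third order, takes the standard form

      \[
      x^{f}_{t} = h_x\, x^{f}_{t-1} + \sigma \eta\, \epsilon_t,
      \qquad
      x^{s}_{t} = h_x\, x^{s}_{t-1}
      + \tfrac{1}{2} H_{xx}\!\left(x^{f}_{t-1} \otimes x^{f}_{t-1}\right)
      + \tfrac{1}{2} h_{\sigma\sigma}\, \sigma^2,
      \]

    with the pruned second-order state given by x^f_t + x^s_t; because the quadratic term is evaluated on the first-order state only, higher-order terms never feed back into themselves and simulated paths remain stable. (The notation follows the common perturbation-DSGE convention and is our gloss, not a quotation from the paper.)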

  19. Probing the (empirical) quantum structure embedded in the periodic table with an effective Bohr model

    Directory of Open Access Journals (Sweden)

    Wellington Nardin Favaro

    2013-01-01

    Full Text Available The atomic shell structure can be observed by inspecting the experimental periodic properties of the Periodic Table. The (quantum) shell structure emerges from these properties, and in this way quantum mechanics can be explicitly shown by considering the (semi-quantitative) periodic properties. These periodic properties can be obtained with a simple effective Bohr model. An effective Bohr model with an effective quantum defect (u) was considered as a probe in order to show the quantum structure embedded in the Periodic Table. u(Z) shows a quasi-smooth dependence on Z, i.e., u(Z) ≈ Z^(2/5) − 1.
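
    A standard rendering of an effective Bohr model with a quantum defect, consistent with the u used in this abstract (our gloss, not necessarily the paper's exact formula):

      \[
      E_n = -\,\frac{R\, Z_{\mathrm{eff}}^{2}}{(n - u)^{2}},
      \]

    where R is the Rydberg constant; the empirical defect u(Z) ≈ Z^(2/5) − 1 then absorbs the screening effects that the bare hydrogenic model misses.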