WorldWideScience

Sample records for empirical model aplicabilidade

  1. FASES DA TEORIA HUMANÍSTICA: ANÁLISE DA APLICABILIDADE EM PESQUISA

    Directory of Open Access Journals (Sweden)

    Ana Luiza Paula de Aguiar Lélis

    2014-01-01

    Full Text Available The objective was to analyze the applicability of the phases of the Humanistic Theory in research, according to a Theory Analysis Model. This is a critical-analysis study of a nursing theory, with emphasis on applicability. A search was carried out in April and May 2013 in the SCOPUS and Cumulative Index of Nursing and Allied Health Literature databases, and in those available through the Virtual Health Library, using the descriptors "Teoria de Enfermagem" and "Pesquisa em Enfermagem" as well as "Nursing Theory" and "Nursing Research", for publications from 2002 to May 2013. Seven articles were selected, of which five covered all the stages in their development, demonstrating the applicability of the Humanistic Theory as methodological support in nursing research.

  2. Teorías sobre la exclusión social: reflexionando acerca de su aplicabilidad

    Directory of Open Access Journals (Sweden)

    Santiago Bachiller

    2013-08-01

    Full Text Available Theories of social exclusion: reflections on their applicability to the analysis of the processes of social precariousness affecting the informal waste pickers of a municipal dump.

  3. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  4. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  5. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  6. Estudio de soluciones metálicas. (I) Aplicabilidade y limitaciones de algunos modelos termodinámicos

    Directory of Open Access Journals (Sweden)

    Jaime Valderrama N.

    2009-07-01

    Full Text Available The central features of energetic thermodynamic models based on pairwise interaction between an atom and its nearest neighbours are presented and analyzed. The applicability of these models to binary metallic systems is discussed. The paper concludes by suggesting possible alternatives for refining these models.

  7. Aplicabilidad de la teoría de colas al fenómeno hospitalario

    OpenAIRE

    Llano Monelos, Pablo de

    2017-01-01

    The objective of this thesis is to analyze the congestion problems of hospital systems, specifically in the catchment area of the city of A Coruña, from the perspective of queueing theory. [Abstract] The aim is to establish the applicability of queueing theory as a valid tool for making predictions about hospital congestion problems, with simulation used as a tool for validating the results obtained...

  8. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.
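As an illustration of the kind of empirical NOx modelling discussed above, the sketch below fits a linear model of NOx emissions to two operating parameters by ordinary least squares. All data, variable choices and coefficients here are synthetic assumptions for illustration; they are not values from the paper.

```python
# Minimal empirical-model sketch: least-squares fit of NOx emissions
# against two operating parameters (bed temperature, excess air ratio).
# Data and coefficients are synthetic illustrations.

def fit_linear(X, y):
    """Ordinary least squares for y ~ b0 + b1*x1 + b2*x2 via normal equations."""
    n = len(X)
    A = [[1.0, x1, x2] for x1, x2 in X]
    # Normal equations: (A^T A) b = A^T y
    ATA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)] for i in range(3)]
    ATy = [sum(A[k][i] * y[k] for k in range(n)) for i in range(3)]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    M = [row[:] + [rhs] for row, rhs in zip(ATA, ATy)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    b = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        b[i] = (M[i][3] - sum(M[i][j] * b[j] for j in range(i + 1, 3))) / M[i][i]
    return b

# Synthetic operating data: (bed temperature in degC, excess air ratio)
X = [(820, 1.15), (840, 1.30), (860, 1.20), (880, 1.35), (900, 1.25)]
# Synthetic NOx readings (mg/m^3), generated exactly as 0.5*T + 100*lam - 300
y = [225.0, 250.0, 250.0, 275.0, 275.0]

b0, b1, b2 = fit_linear(X, y)
pred = b0 + b1 * 850 + b2 * 1.22  # prediction at an intermediate operating point
```

Because the synthetic readings lie exactly on the assumed plane, the fit recovers the generating coefficients, which makes the sketch easy to sanity-check before swapping in real boiler operation data.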

  9. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR)

  10. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of the boiler. Accurate prediction of the acid dew point is therefore essential. By investigating the previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is derived first. A semi-empirical prediction model is then proposed, which is validated with both field-test and experimental data and compared with the previous models.
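The improved correlation derived in the paper is not reproduced in the abstract. To illustrate the general form such semi-empirical acid dew point models take, the sketch below implements an earlier, widely cited correlation of this type, attributed to Verhoff and Banchero (1974); the constants are as commonly quoted in the flue-gas literature and should be checked against the original source before any engineering use.

```python
import math

def acid_dew_point_K(p_h2o_mmhg, p_so3_mmhg):
    """Sulfuric acid dew point via the Verhoff-Banchero (1974) correlation.

    Partial pressures are in mmHg; the result is in kelvin. This classic
    correlation is shown only to illustrate the semi-empirical form; it is
    not the improved correlation derived in the paper above.
    """
    lw = math.log(p_h2o_mmhg)
    ls = math.log(p_so3_mmhg)
    inv_T = (2.276 - 0.02943 * lw - 0.0858 * ls + 0.0062 * lw * ls) / 1000.0
    return 1.0 / inv_T

# Flue gas with ~10 vol% moisture (76 mmHg) and ~10 ppm SO3 (0.0076 mmHg)
t_dew = acid_dew_point_K(76.0, 0.0076)
```

For these illustrative inputs the correlation gives a dew point around 410 K (roughly 137 degC), which is the temperature range boiler designers typically guard against.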

  11. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability
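The two-stage machinery of the paper is not detailed in the abstract, but its first stage rests on the standard conjugate update for a Poisson intensity with a gamma prior, which can be sketched as follows (the prior parameters and observed data below are illustrative assumptions, not values from the paper):

```python
def gamma_poisson_update(alpha, beta, events, exposure_time):
    """Conjugate Bayesian update for a Poisson intensity with a Gamma(alpha, beta) prior.

    Observing `events` occurrences over `exposure_time` yields a
    Gamma(alpha + events, beta + exposure_time) posterior.
    """
    return alpha + events, beta + exposure_time

# Illustrative prior: mean intensity alpha/beta = 2/1000 per hour
alpha, beta = 2.0, 1000.0
# Illustrative observation: 3 events over 500 hours
alpha_post, beta_post = gamma_poisson_update(alpha, beta, 3, 500.0)
posterior_mean = alpha_post / beta_post  # (2 + 3) / (1000 + 500)
```

The contaminated-gamma class of the paper generalizes this single-gamma case, and the empirical Bayesian layer then weighs such models by their posterior probability.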

  12. APLICABILIDADE DA TEORIA DE VIRGINIA HENDERSON PARA FUNDAMENTAÇÃO NA ENFERMAGEM: FRAGILIDADES E POTENCIALIDADES

    OpenAIRE

    Ferrari, Roberta Fernanda Rogonni; Rodrigues, Daysi Mara Murio Ribeiro; Baldissera, Vanessa Denardi Antoniassi; Pelloso, Sandra Marisa; Carreira, Lígia

    2015-01-01

    This study aimed to analyze the applicability of Virginia Henderson's theory of the 14 components of care by surveying its use in national and international research. An integrative review was carried out; the articles for the bibliographic survey were selected from the LILACS, SciELO and BDENF databases, using two controlled descriptors from the BIREME (DeCS) vocabulary, "teoria de enfermagem" and "enfermagem", and, as uncontrolled descriptors, "teoria ...

  13. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise, developed for one bias point, and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs consists of the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.

  14. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  15. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as a tool for hydrological analysis by probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecasting, so a combination of rainfall-runoff models was needed with which it would be possible to create our own data and use them to estimate the flow. The estimated design floods are a good illustration of the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is rare. A climate-hydrological model based on frequency analysis was established to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is not sufficient. The method was found to be a powerful tool for estimating the design flood of the watershed and also other hydrological elements (runoff, water volumes, etc.). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
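The climate-hydrological model itself is not given in the abstract. As a minimal sketch of the frequency-analysis step used to estimate a design flood for a given return period, the following fits a Gumbel (EV1) distribution by the method of moments, a common choice in such studies. The discharge data are synthetic and the method is a generic illustration, not the Anseghmir model.

```python
import math

def gumbel_design_flood(annual_maxima, return_period):
    """T-year design flood from a Gumbel (EV1) fit by the method of moments.

    x_T = u + a * y_T, with reduced variate y_T = -ln(-ln(1 - 1/T)),
    scale a = sqrt(6)/pi * s, and location u = mean - 0.5772 * a
    (0.5772 is the Euler-Mascheroni constant).
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in annual_maxima) / (n - 1))
    a = math.sqrt(6.0) / math.pi * s
    u = mean - 0.5772 * a
    y_T = -math.log(-math.log(1.0 - 1.0 / return_period))
    return u + a * y_T

# Synthetic annual maximum discharges (m^3/s), for illustration only
peaks = [120, 95, 150, 80, 200, 110, 175, 90, 130, 160]
q100 = gumbel_design_flood(peaks, 100)  # 100-year design flood
```

By construction the 100-year estimate exceeds every observed peak, which is the usual situation when extrapolating a short record to a long return period.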

  16. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

    The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature, and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and the size of intermetallic particles have been found to significantly affect the magnitude of oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion as a function of an acceleration factor for tin content and accumulative annealing. The developed model has been incorporated into a fuel performance computer code. The cladding oxide thickness obtained from the semi-empirical corrosion model has been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO 2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Key words: Corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)

  17. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  18. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

    Full Text Available Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to both its distance and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
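A minimal sketch of the two objective functions being compared may help: in the classic p-median model each customer is served by the nearest open facility, while in the gravity p-median model demand is split among facilities with Huff-style probabilities that decay with distance. The toy instance, the exponential decay form, and the parameter `beta` below are illustrative assumptions, not the formulation from the paper.

```python
import math
from itertools import combinations

def p_median_cost(dist, demand, open_sites):
    """Classic p-median objective: each customer uses the nearest open facility."""
    return sum(w * min(dist[i][j] for j in open_sites)
               for i, w in enumerate(demand))

def gravity_p_median_cost(dist, demand, open_sites, beta=0.5):
    """Gravity variant: demand splits among facilities with Huff-style
    probabilities proportional to exp(-beta * distance); attractiveness is
    taken as equal for all facilities in this sketch."""
    total = 0.0
    for i, w in enumerate(demand):
        weights = {j: math.exp(-beta * dist[i][j]) for j in open_sites}
        z = sum(weights.values())
        total += w * sum(wt / z * dist[i][j] for j, wt in weights.items())
    return total

# Toy instance: 4 customers, 3 candidate sites, choose p = 2 facilities
dist = [[1, 4, 6],
        [3, 2, 5],
        [7, 3, 1],
        [5, 6, 2]]
demand = [10, 20, 15, 5]

best = min(combinations(range(3), 2),
           key=lambda s: p_median_cost(dist, demand, list(s)))
```

Since the gravity allocation averages over all open facilities rather than using only the nearest one, its cost for a given facility set is never below the p-median cost, which is one way to see why the two models can nonetheless pick similar locations.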

  19. Instrumentos de avaliação de linguagem infantil: aplicabilidade em deficientes

    Directory of Open Access Journals (Sweden)

    Cristhiane Ferreira Guimarães

    2013-12-01

    Full Text Available This study analyzes tests and instruments for assessing child language in order to discuss their applicability to populations with physical, hearing, visual, mental and multiple disabilities. In the second half of 2011, the literature on disabilities and on direct assessments of children's oral, gestural and written language, whether national or translated, was surveyed. Articles and theses were consulted in online databases, along with published books and assessments. Twenty-eight assessments were selected, grouped by purpose of application, described according to the expected stimulus and response, and analyzed by the following criteria: assessment modalities, required abilities, and code conversion. Twenty-three assessment modalities were found, and their analysis suggests that individuals who are able to use vision, upper limbs and cognition, and who can understand and use images and oral or written Portuguese as codes, will probably have a wider range of assessments available to them. The semantic and pragmatic dimensions appeared to be the most accessible, corroborating the applications found in the literature. Regarding the possibility of a complete assessment, only the pair of abilities "vision/upper limbs" would allow it. Comparing information about the examinee's communicative profile with the communicative profile required by the assessment helps in deciding whether the two are compatible and hence whether the assessment is applicable. In general, considering the particularities of each case and each assessment, pre-selected instruments may be applicable to individuals with disabilities. However, for some patients it may not be possible to carry out a complete assessment using direct instruments alone.

  20. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  1. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, performance of the Baranyi model was compared with those of the empirical modified Gompertz and logistic models and Huang models. Higher values of R2, modeling efficiency and lower absolute values of mean bias error, root mean square error, mean percentage error and chi-square were obtained with ...

  2. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  3. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the companies' possession rates in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model, each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain more possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process which agrees qualitatively and quantitatively with our empirical investigation results.

  4. Esferas y pliegues: la aplicabilidad de la biopolítica de Fichte a Deleuze

    Directory of Open Access Journals (Sweden)

    Ferreyra, Julian

    2018-01-01

    Full Text Available This paper aims to construct an affirmative biopolitics through the confrontation of the political ontologies of Johann Fichte and Gilles Deleuze. We will show the affinity between the Thathandlung and four Deleuzian concepts that are interwoven throughout his work: eternal return, intensity, affectus and fold. However, both transcendental philosophies differ in their conception of time: Fichte opposes the agility of the pure activity of the I (eternal) to the inflexibility of the practical-political determinations (temporal), whereas Deleuze's philosophy realizes the immanence of both realms. Fichtean inflexibility is observable in the realm of applicability (the body as the sphere of action) and determines his conception of the State. Therefore, to conclude, an applicability based on the form of the fold enables us to propose a Deleuzian form of State that may be effective in the face of the challenges of the contemporary world.

  5. On the empirical relevance of the transient in opinion models

    International Nuclear Information System (INIS)

    Banisch, Sven; Araujo, Tanya

    2010-01-01

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows transient opinion configurations to be related to the electoral performance of candidates for which data are available. The election procedure, based on the well-established principle of proximity voting, is repeatedly performed during the transient period, and remarkable statistical agreement with the empirical data is observed.
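A minimal sketch of the bit-vector opinion dynamics described above follows, under the assumptions that the probability of interaction equals the similarity of the two agents and that an interaction copies one differing bit; the exact update rule of the paper may differ, and the population size, bit count and step count are arbitrary choices.

```python
import random

def similarity(a, b):
    """Fraction of matching bits between two opinion vectors."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def step(agents, rng):
    """One interaction attempt: similarity gates the chance of interacting,
    and an interaction copies one differing bit (more similarity)."""
    i, j = rng.sample(range(len(agents)), 2)
    s = similarity(agents[i], agents[j])
    if s < 1.0 and rng.random() < s:
        diff = [k for k in range(len(agents[i])) if agents[i][k] != agents[j][k]]
        agents[i][rng.choice(diff)] = agents[j][rng.choice(diff)] if False else agents[j][diff[rng.randrange(len(diff))]]

def mean_similarity(agents):
    n = len(agents)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(similarity(agents[i], agents[j]) for i, j in pairs) / len(pairs)

rng = random.Random(42)
k_bits, n_agents = 8, 30
agents = [[rng.randint(0, 1) for _ in range(k_bits)] for _ in range(n_agents)]

before = mean_similarity(agents)   # ~0.5 for random initial opinions
for _ in range(5000):
    step(agents, rng)
after = mean_similarity(agents)    # homogenization raises mean similarity
```

Tracking `mean_similarity` during, rather than after, the run is what corresponds to the paper's focus on transient opinion profiles.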

  6. On the empirical relevance of the transient in opinion models

    Energy Technology Data Exchange (ETDEWEB)

    Banisch, Sven, E-mail: sven.banisch@universecity.d [Mathematical Physics, Physics Department, Bielefeld University, 33501 Bielefeld (Germany); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal); Araujo, Tanya, E-mail: tanya@iseg.utl.p [Research Unit on Complexity in Economics (UECE), ISEG, TULisbon, 1249-078 Lisbon (Portugal); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal)

    2010-07-12

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows transient opinion configurations to be related to the electoral performance of candidates for which data are available. The election procedure, based on the well-established principle of proximity voting, is repeatedly performed during the transient period, and remarkable statistical agreement with the empirical data is observed.

  7. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  8. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicate that the model of Van der Burgh (1972) is the most suitable empirical model...

  9. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa prediction methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...

  10. ¹³⁷Cs applicability to soil erosion assessment: theoretical and empirical model; Aplicabilidade do ¹³⁷Cs para medir erosao do solo: modelos teoricos e empiricos

    Energy Technology Data Exchange (ETDEWEB)

    Andrello, Avacir Casanova

    2004-02-15

    The acceleration of soil erosion processes and the increase of soil erosion rates due to anthropogenic perturbation of the soil-weather-vegetation equilibrium have affected soil quality and the environment. The ability to assess the magnitude and severity of the impact of soil erosion on soil productivity and quality is therefore important at local as well as regional and global scales. Several models have been developed to assess soil erosion both qualitatively and quantitatively. ¹³⁷Cs, an anthropogenic radionuclide, has been widely used to assess surface soil erosion processes. Empirical and theoretical models were developed on the basis of ¹³⁷Cs redistribution as an indicator of soil movement by erosive processes. These models incorporate many parameters that can influence the quantification of soil erosion rates by ¹³⁷Cs redistribution. A statistical analysis was carried out on the models recommended by the IAEA to determine the influence of each parameter on the soil redistribution results. It was verified that the most important parameter is the ¹³⁷Cs redistribution itself, indicating the need for a good determination of the ¹³⁷Cs inventory values with a minimum deviation associated with these values. Subsequently, a deviation of 10% was associated with the reference value of the ¹³⁷Cs inventory and of 5% with the ¹³⁷Cs inventory of the sample, and the resulting deviation in the soil redistribution calculated by the models was determined. The soil redistribution results were compared to verify whether there were differences between the models; no difference was found in the results determined by the models, except above 70% ¹³⁷Cs loss. Analyzing three native forests and an area of undisturbed pasture in the Londrina region, it was verified that the ¹³⁷Cs spatial variability at local scale was 15%.
    Comparing the ¹³⁷Cs inventory values determined in the three native forests with the ¹³⁷Cs inventory

  11. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  12. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  13. APLICABILIDADE DAS NORMAS PENAIS NAS CONDUTAS ILÍCITAS DE CYBERBULLYING COMETIDAS EM REDES SOCIAIS NA INTERNET

    Directory of Open Access Journals (Sweden)

    Rafael Giordano Gonçalves Brito

    2016-08-01

    Full Text Available The number of Internet users has grown considerably in recent years, mainly because of the use of social networks. Many of these users employ the world wide web to commit unlawful acts, commonly called "computer crimes". Among the various illicit acts, such as pedophilia, dissemination of computer viruses, racism, and apology for and incitement to crimes against life, only one was chosen to be addressed in this work: cyberbullying. The commission of such conduct is studied in order to verify the applicability of criminal law to these acts.

  14. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing their static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as a way to evaluate the present value of payoffs or cash flows discounted for risk and time lags. The difficulty in the discounting process is that the relevant factors affecting the payoffs vary over time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  15. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for the analysis of practice, such a model is meant to support the development of vocational teachers’ professionalism in courses and in organizational contexts...

  16. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Barabási–Albert preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
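    The entropy-based comparison described above can be sketched in a few lines. The sketch below uses only the standard library; the graph sizes, edge probability, and the restriction to degree centrality alone are illustrative choices, not the authors' exact setup. It generates a G(n, p) random graph as a stand-in "empirical" graph and picks the candidate model whose degree-distribution entropy is closest:

```python
import math
import random
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a degree distribution."""
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in Counter(degrees).values())

def er_degrees(n, p, rng):
    """Degree sequence of an Erdős–Rényi G(n, p) random graph."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

rng = random.Random(42)
empirical = er_degrees(200, 0.05, rng)          # stand-in "empirical" graph
candidates = {
    "erdos-renyi": er_degrees(200, 0.05, rng),  # same generative family
    "star": [199] + [1] * 199,                  # structurally very different
}
h_emp = degree_entropy(empirical)
best = min(candidates, key=lambda m: abs(degree_entropy(candidates[m]) - h_emp))
print(best)  # → erdos-renyi
```

    In a fuller version, betweenness and closeness entropies would be compared the same way and fed to an unsupervised classifier.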

  17. Estudo eletrofisiológico do nervo cutâneo dorsal lateral: aplicabilidade técnica e valores de referência

    OpenAIRE

    DIAS, RAFAEL JOSÉ SOARES; CARNEIRO, ARMANDO PEREIRA

    2000-01-01

    The study of nerve conduction in the most distal segments of the longest nerves may make it possible to recognize earlier the alterations arising from most polyneuropathies. The objective of this study was to verify the technical applicability of the orthodromic conduction examination of the dorsal cutaneous branch of the sural nerve (lateral dorsal cutaneous nerve) in healthy people, to standardize normal values to be used as a reference, and to compare these values with those of the sural nerve in the leg. Quare...

  18. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between the bond strength and the reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement - a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, and later modified by Coronelli [Coronelli, D., 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model has been proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are proposed by considering a wide range of published experimental investigations related to bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models are capable of providing estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other reported analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  19. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  20. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    Full Text Available The diagnostics of the energy conversion systems’ operation is realised as a result of collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state. It requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The worked-out mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  1. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    Science.gov (United States)

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled, as well as humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling from observations of real behaviors. Differences in the dynamic behaviors of individual PAM actuators are due not only to the structures of the PAM actuators themselves, but also to variations in their material properties introduced during manufacturing. To overcome these difficulties, the proposed empirical models are derived experimentally from the real physical behaviors of the PAM actuators being implemented. In case studies, the simulated results, in good agreement with experimental results, show that the proposed methodology can be applied to describe the dynamic behaviors of real PAM actuators.

  2. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination Rc² ≥ 0.96 and a mean absolute percentage deviation of 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
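    As a minimal illustration of the regression step, the sketch below fits a quadratic efficiency map to synthetic turbine data by solving the normal equations with the standard library only. The data and coefficients are hypothetical, and the quadratic form is a simplification of the paper's model, which involves the polytropic exponent:

```python
def polyfit2(xs, ys):
    """Fit y = a + b*x + c*x^2 by solving the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]   # sums of powers of x
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = list(t)
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for c in range(col, 3):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        coef[row] = (b[row] - sum(A[row][c] * coef[c]
                                  for c in range(row + 1, 3))) / A[row][row]
    return coef  # [a, b, c]

# hypothetical efficiency map: eta peaks at a velocity ratio of 0.7
xs = [i / 20.0 for i in range(1, 20)]
ys = [0.85 - 1.2 * (x - 0.7) ** 2 for x in xs]
a, b1, c1 = polyfit2(xs, ys)
print(round(a, 3), round(b1, 3), round(c1, 3))
```

    With noise-free quadratic data the fit recovers the generating coefficients, which is a useful sanity check before fitting measured efficiency data.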

  3. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  4. Proposta e aplicabilidade de modelo para avaliação da gestão municipal do Programa Nacional de Alimentação Escolar

    Directory of Open Access Journals (Sweden)

    Cristine Garcia Gabriel

    2014-08-01

    Full Text Available A model is proposed for evaluating the municipal management of the Brazilian National School Feeding Program (PNAE), verifying its applicability in the ten municipalities of Santa Catarina, Brazil, with more than 100,000 inhabitants. The model was constructed through workshops with specialists, and its adequacy was established using the Delphi method with the participation of 14 collaborators. The model covered two dimensions of municipal management: the political-organizational dimension, organized into the subdimensions of resources, intersectoral action, and social control; and the technical-operational dimension, comprising the subdimensions of food and nutritional efficacy, food and nutritional monitoring, and pedagogical action for healthy eating. In total, 22 indicators were defined, collected through interviews with the nutritionists responsible for the PNAE. In the applicability test, the indicators proved viable for covering the responsibilities of municipal management, and the model should be employed in the future to qualify the management of the PNAE.

  5. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  6. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measures in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.

  7. Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure

    International Nuclear Information System (INIS)

    Sorin Zaharia; Cheng, C.Z.

    2002-01-01

    In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇ · (J x B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J x B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
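    The 1-D profile step amounts to integrating dP/dx = (J x B)·x̂ along the Sun-Earth axis. A toy version of that integration (trapezoid rule, with a hypothetical exponentially decaying force density in arbitrary units; this is not the T96 field itself):

```python
import math

def integrate_pressure(xs, fx, p0):
    """Integrate dP/dx = fx (the axial component of J x B) by the trapezoid rule."""
    p = [p0]
    for i in range(1, len(xs)):
        p.append(p[-1] + 0.5 * (fx[i] + fx[i - 1]) * (xs[i] - xs[i - 1]))
    return p

xs = [1.0 + 0.01 * i for i in range(901)]   # 1 to 10 (in Earth radii, say)
fx = [-2.0 * math.exp(-x) for x in xs]      # hypothetical inward force density
ps = integrate_pressure(xs, fx, p0=5.0)

# analytic check for this toy profile: P(x) = p0 + 2*(exp(-x) - exp(-1))
exact = 5.0 + 2.0 * (math.exp(-10.0) - math.exp(-1.0))
print(abs(ps[-1] - exact) < 1e-4)  # → True
```

    The paper's actual computation uses the model field's J x B in flux coordinates; the sketch only shows why a non-physical force density leads directly to an inconsistent pressure profile.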

  8. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France)]; Guyon, G. [Electricite de France, Moret-sur-Loing (France)]

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, were presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can actually be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)
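    The first step of such a parameter-space analysis is often a one-at-a-time sensitivity scan. The sketch below computes relative sensitivities on a hypothetical steady-state building heat-loss model; the model form, parameter names, and values are illustrative stand-ins, not the Task 22 tooling:

```python
def relative_sensitivity(model, params, delta=0.05):
    """One-at-a-time relative sensitivity: (dQ/Q) / (dp/p) for each parameter."""
    base = model(params)
    sens = {}
    for name, val in params.items():
        bumped = dict(params, **{name: val * (1.0 + delta)})
        sens[name] = (model(bumped) - base) / (base * delta)
    return sens

# hypothetical heat-loss model: Q = (UA + vent) * dT
def heat_loss(p):
    return (p["UA"] + p["vent"]) * p["dT"]

s = relative_sensitivity(heat_loss, {"UA": 200.0, "vent": 50.0, "dT": 20.0})
print(s)  # dT dominates (sensitivity 1.0); UA contributes 0.8; vent 0.2
```

    Parameters with near-zero sensitivity cannot be tested against the available data, which is exactly the screening role sensitivity analysis plays in the methodology above.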

  9. Model and Empirical Study on Several Urban Public Transport Networks in China

    Science.gov (United States)

    Ding, Yimin; Ding, Zhuo

    2012-07-01

    In this paper, we present the empirical investigation results on urban public transport networks (PTNs) and propose a model to understand the results obtained. We investigate several urban public traffic networks in China, namely those of Beijing, Guangzhou, Wuhan, and other cities. The empirical results on the big cities show that the accumulative act-degree distributions of PTNs take neither power-function forms nor exponential-function forms; rather, they are described by a shifted power function, and the accumulative act-degree distributions of PTNs in medium-sized or small cities follow the same law. In the end, we propose a model to show a possible evolutionary mechanism for the emergence of such networks. The analytic results obtained from this model are in good agreement with the empirical results.
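    The "shifted power function" finding can be illustrated directly: for P(≥k) ∝ (k + α)^(−γ), the log-log slope is constant only when α = 0, while α > 0 flattens the head of the distribution and leaves a power-law tail. A small sketch with hypothetical parameters (not the fitted values from the paper):

```python
import math

def shifted_power(k, alpha, gamma, c=1.0):
    """Cumulative act-degree distribution P(>=k) = c * (k + alpha)^(-gamma)."""
    return c * (k + alpha) ** (-gamma)

def loglog_slope(f, k1, k2):
    """Slope of log f versus log k between two degrees."""
    return (math.log(f(k2)) - math.log(f(k1))) / (math.log(k2) - math.log(k1))

pure = lambda k: shifted_power(k, alpha=0.0, gamma=2.0)     # plain power law
shifted = lambda k: shifted_power(k, alpha=5.0, gamma=2.0)  # shifted power law

s_pure_head = loglog_slope(pure, 1, 2)
s_pure_tail = loglog_slope(pure, 100, 200)
s_shift_head = loglog_slope(shifted, 1, 2)
s_shift_tail = loglog_slope(shifted, 100, 200)
# pure power law: straight line in log-log; shifted: flatter head, power-law tail
print(round(s_pure_head, 2), round(s_shift_head, 2), round(s_shift_tail, 2))
```

    This head-flattening is the qualitative signature that distinguishes the empirical PTN distributions from both pure power laws and exponentials.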

  10. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural network based empirical models in some cases appear to provide a serious alternative to first principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted.

  11. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performances. Existing models, physical, semi-empirical, or empirical, do not allow for a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that are validated in numerous studies. Never before has a backscattering model been built and validated on such an extensive dataset as the one proposed in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involving many SAR sensors. The results show that the new model shows a very good performance for different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.
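    The claim that such a model "is easy to invert" can be illustrated with a toy semi-empirical form. The functional shape and all coefficients below are hypothetical stand-ins, not the published model; the point is only that a backscatter signal expressed as a simple function of moisture and roughness can be inverted by a scan over candidate moisture values:

```python
import math

def sigma0_db(theta_rad, ks, mv, coef=(-28.0, 0.35, 6.0, 3.0)):
    """Toy semi-empirical backscatter in dB (coefficients are hypothetical):
    sigma0 = a + b*mv + c*log10(ks) + d*log10(cos(theta))."""
    a, b, c, d = coef
    return a + b * mv + c * math.log10(ks) + d * math.log10(math.cos(theta_rad))

theta = math.radians(35.0)
ks = 1.2                              # normalized surface roughness
obs = sigma0_db(theta, ks, mv=22.0)   # "observed" signal at 22 vol.% moisture

# invert for soil moisture with a simple grid scan (5 to 40 vol.%)
grid = [m / 10.0 for m in range(50, 401)]
mv_hat = min(grid, key=lambda mv: abs(sigma0_db(theta, ks, mv) - obs))
print(mv_hat)  # → 22.0
```

    A real inversion would fit the coefficients to the SAR dataset first and handle the ambiguity between moisture and roughness, e.g. with multi-polarization observations.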

  12. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of methods to implement models, the models can normally be classified into two categories in terms of their origins: using physical principles or historical observations. I have focused on the latter method (sometimes referred to as the empirical model based on statistical learning) because of some practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can accept not only statistical correlation but also some degree of physical knowledge of a system. Detailed examples of problems are as follows: (1) missing important signals in a group caused by the lack of observations, (2) problems with signals with time delays, and (3) problems in choosing the optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy is presented, together with case studies illustrating the performance of the method.

  13. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.

  14. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering]; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)]

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both the agency cost and the road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model was described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or the Riding Comfort Index.

  15. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Scott, Paul [TransPower, Poway, CA (United States)]

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.
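    A generic empirical storage model of this kind can be reduced to a state-of-charge update driven by measured one-way efficiencies. The sketch below is a minimal stand-in: the capacity and efficiency values are hypothetical, not GridSaver parameters, and a real model would make the efficiencies fitted functions of power and state of charge:

```python
class EmpiricalStorage:
    """State-of-charge model with fitted one-way efficiencies (hypothetical values)."""

    def __init__(self, capacity_kwh, eta_charge=0.93, eta_discharge=0.95):
        self.capacity = capacity_kwh
        self.eta_c = eta_charge
        self.eta_d = eta_discharge
        self.soc = 0.5  # state of charge, as a fraction of capacity

    def step(self, power_kw, dt_h):
        """Positive power charges, negative discharges; SOC is clamped to [0, 1]."""
        if power_kw >= 0:
            delta = power_kw * dt_h * self.eta_c / self.capacity
        else:
            delta = power_kw * dt_h / (self.eta_d * self.capacity)
        self.soc = min(max(self.soc + delta, 0.0), 1.0)
        return self.soc

ess = EmpiricalStorage(capacity_kwh=1000.0)
ess.step(+100.0, 1.0)           # charge at 100 kW for one hour
final = ess.step(-100.0, 1.0)   # discharge the same terminal energy
print(final < 0.5)  # → True: round-trip losses leave the SOC below its start
```

    The test procedures referenced above would supply the efficiency parameters; the model structure itself stays agnostic of chemistry and scale.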

  16. A semi-empirical two phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented. A comparison is made of the computed peak stress and velocity versus scaled range and that measured on several recent tuff events

  17. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
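    To make the modeling question concrete, here is a sketch of a classic empirical form (Ball-Berry style) extended with an illustrative sigmoidal down-regulation by leaf water potential. The vulnerability parameters are hypothetical, not the paper's fitted values, and the sigmoid is only one plausible way to encode the sensitivity the study argues for:

```python
import math

def ball_berry(a_net, rh, ca, g0=0.01, g1=9.0):
    """Classic empirical stomatal model: gs = g0 + g1 * A * rh / ca."""
    return g0 + g1 * a_net * rh / ca

def water_potential_factor(psi_mpa, psi50=-2.0, slope=3.0):
    """Illustrative sigmoid: ~1 near psi = 0, approaching 0 at very negative
    water potentials (parameters hypothetical)."""
    return 1.0 / (1.0 + math.exp(slope * (psi50 - psi_mpa)))

def gs(a_net, rh, ca, psi_mpa):
    """Conductance with the water-potential limitation applied."""
    return ball_berry(a_net, rh, ca) * water_potential_factor(psi_mpa)

wet = gs(12.0, 0.7, 400.0, psi_mpa=-0.5)   # well-watered leaf
dry = gs(12.0, 0.7, 400.0, psi_mpa=-3.0)   # droughted leaf
print(wet > dry)  # → True: the unmodified Ball-Berry form would predict wet == dry
```

    The contrast shows the over-prediction bias described above: without the water-potential term, the empirical model returns the same conductance for wet and droughted leaves at equal A, humidity, and CO2.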

  18. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature...... of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests....

  19. APLICABILIDADE DA SISTEMATIZAÇÃO DA ASSISTÊNCIA DE ENFERMAGEM NA ESTRATÉGIA DE SAÚDE DA FAMÍLIA: UMA REVISÃO BIBLIOGRÁFICA

    Directory of Open Access Journals (Sweden)

    Anna Paula Mendonça Barros

    2016-01-01

Full Text Available This is a literature review covering the period from 2002 to 2012, whose objective was to review and analyze publications related to the applicability of the Systematization of Nursing Care (SAE) in the Family Health Strategy (ESF). The selected studies were grouped by similarity into three thematic categories: the nurse's care functions in the ESF; the applicability of the SAE in ESF nursing practice; and the implementation of the ESF and the SAE. The review showed an increase in publications in recent years and greater concern among nurses about the topic, and found that using the SAE in the ESF establishes a bond between nurse, user, and family, strengthening ties with the community, so that the nurse can link the available information and guide the care provided at the individual and family level. Applying the SAE in the ESF demands dedication, boldness, and determination from nurses in the face of the difficulties of combining management and care, two important functions that nurses perform daily.

  20. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    OpenAIRE

    Zee, van der, F.A.

    1997-01-01

This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economies. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy developmen...

  1. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models,

  2. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networks. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.
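The verification statistics named in the abstract (correlation, root-mean-square error, absolute error, bias) can be computed directly; a minimal sketch with toy rainfall series standing in for predicted versus observed values:

```python
import math

def verify(pred, obs):
    """Correlation, RMSE, mean absolute error and bias between
    predicted and observed series of equal length."""
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    corr = cov / (sp * so)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n
    bias = mp - mo
    return corr, rmse, mae, bias

# Toy series (mm) standing in for predicted vs. observed seasonal rainfall.
corr, rmse, mae, bias = verify([410, 550, 300, 620], [400, 560, 320, 600])
```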

  3. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

While investigations into both theories and models have remained a major strand of engineering design research, current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models, and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; current models of design, from a function-behavior-structure model to an integrated model; important empirical research findings from studies into design; and philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  4. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5- to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles, with 67 meeting inclusion criteria, and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
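The abstract's consistency criterion (model uncertainty ranges overlapping study confidence intervals) reduces to a simple interval-overlap test; a sketch using the 30-year figures quoted above:

```python
def ranges_overlap(model_lo, model_hi, study_lo, study_hi):
    """True when the model's uncertainty range and the study's
    confidence interval share at least one value."""
    return model_lo <= study_hi and study_lo <= model_hi

# 30-year cumulative risk of invasive cancer (%), figures from the abstract:
consistent_untreated = ranges_overlap(30.9, 49.7, 28.4, 48.3)  # model vs. study
consistent_treated = ranges_overlap(0.7, 1.3, 0.4, 3.3)
disjoint = ranges_overlap(1.0, 2.0, 3.0, 4.0)  # counter-example: no overlap
```

Both comparisons reported in the abstract satisfy the criterion, which is why the microsimulation is judged consistent with the progression study.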

  5. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as a computational term in an embodied conversational agent system by employing empirical and theoretical approach. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures...... in different social relationship. So, first, we introduce the theories of social and cultural characteristics. Then, we did corpus analysis of human interaction of two cultures in two different social situations and extracted empirical data and finally, by integrating socio-cultural characteristics...... with empirical data, we establish a parameterized network model that generates culture specific non-verbal expressions in different social relationships....

  6. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    Science.gov (United States)

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  7. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us...... to explore the process model in its entirety, including different drivers of subsidiary absorptive capacity (organizational mechanisms and contextual drivers), the three original dimensions of absorptive capacity (recognition, assimilation, application), and related outcomes (implementation...... and internalization of the best practice). The study’s findings reveal that managers have discretion in promoting absorptive capacity through the application of specific organizational mechanisms and that the impact of contextual drivers on subsidiary absorptive capacity is not direct, but mediated...

  8. An empirically based model for knowledge management in health care organizations.

    Science.gov (United States)

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of

  9. Application of GIS to Empirical Windthrow Risk Model in Mountain Forested Landscapes

    Directory of Open Access Journals (Sweden)

    Lukas Krejci

    2018-02-01

Full Text Available Norway spruce dominates mountain forests in Europe. Natural variations in the mountainous coniferous forests are strongly influenced by all the main components of forest and landscape dynamics: species diversity, the structure of forest stands, nutrient cycling, carbon storage, and other ecosystem services. This paper deals with an empirical windthrow risk model based on the integration of logistic regression into GIS to assess forest vulnerability to wind disturbance in the mountain spruce forests of Šumava National Park (Czech Republic). It is an area where forest management has been the focus of international discussions by conservationists, forest managers, and stakeholders. The authors developed the empirical windthrow risk model, which involves designing an optimized data structure containing the dependent and independent variables entering the logistic regression. The results from the model, visualized in the form of map outputs, outline the probability of risk to forest stands from wind in the examined territory of the national park. Such an application of the empirical windthrow risk model could be used as a decision support tool for the mountain spruce forests in the study area. Future development of these models could be useful for other protected European mountain forests dominated by Norway spruce.
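A logistic windthrow-risk surface of the kind described can be sketched as follows; the predictors and coefficients here are invented for illustration, not the fitted values from the paper:

```python
import math

def windthrow_probability(intercept, coefs, predictors):
    """Logistic-regression risk: P = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
    In a GIS application this would be evaluated per raster cell."""
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: stand height (m), soil wetness index, exposure.
coefs = [0.08, 0.6, 0.9]
p_sheltered = windthrow_probability(-6.0, coefs, [22.0, 0.3, 0.2])
p_exposed = windthrow_probability(-6.0, coefs, [30.0, 0.8, 0.9])
```

Mapping the per-cell probabilities produces exactly the kind of risk map output the abstract describes.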

  10. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

  11. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

In this paper, we propose a new empirical version of the Fama and French model, based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance, and the protection of investors.

  12. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

Empirical modeling techniques that use model structures motivated from neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure, along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative to train an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model.

  13. Proposta de aplicabilidade da preservação digital ao prontuário eletrônico do paciente

    Directory of Open Access Journals (Sweden)

    Virginia Bentes Pinto

    2017-04-01

Full Text Available This paper presents the results of a study based on a literature review of digital preservation and digital curation, and of their possible applicability to health documentation as part of the memory of health care. It sought to answer the following question: How can the Open Archival Information System (OAIS) standard be applied to electronic patient records, from the perspective of preserving digital content, so as to ensure confidentiality, reliability, authenticity, and rightful access to the information recorded in these documents? The basic objective is to study the literature on digital preservation and curation, with emphasis on the Open Archival Information System (OAIS) standard, considering its applicability to electronic patient records, with a view to confidentiality, reliability, authenticity, and access to information retrieval, observing the legal framework governing this type of document. This is an exploratory study based on a state-of-the-art survey of the topic. The corpus consisted of one patient record (5 volumes) from the nephrology specialty, restricted to the year 1970. The empirical study was carried out in the Medical Records and Statistics Service of the Walter Cantídio University Hospital of the Federal University of Ceará. The findings show that, although several initiatives on the digital preservation of scientific, technological, and cultural documentation already exist, no experiences with patient records were found. Moreover, the OAIS model can be applied to the electronic patient record context, provided that the particular legal restrictions on access to these documents are observed.

  14. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  15. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  16. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  17. Análisis de la aplicabilidad de la técnica Value Stream Mapping en el rediseño de sistemas productivos

    OpenAIRE

    Serrano Lasa, Ibon

    2007-01-01

The production systems of companies must adapt to the demands of their markets. Value Stream Mapping (VSM) is a technique developed within Lean Production and oriented toward the redesign of such production systems. Although theoretical literature on the technique exists, as well as publications of successful practical cases, there is a lack of an analysis that explores in depth the applicability of the technique in production environments related to flow lines des...

  18. Correlação das escalas de avaliação utilizadas na doença de Parkinson com aplicabilidade na fisioterapia

    OpenAIRE

    Mello,Marcella Patrícia Bezerra de; Botelho,Ana Carla Gomes

    2010-01-01

INTRODUCTION: Parkinson's disease (PD) is a chronic, degenerative neurological disorder of the central nervous system that affects the basal ganglia, whose main characteristics are tremor, rigidity, and bradykinesia. With therapeutic progress, several scales have been developed to monitor disease progression and treatment efficacy. The objective of this literature review is to characterize the main scales used to assess PD, discussing their applicability to ...

  19. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process
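The paper's dynamic gradient descent algorithm for recurrent multilayer perceptrons is not reproduced here; the toy below only illustrates the general idea of fitting a recurrent empirical model by gradient descent, using a one-neuron network and numerical gradients (a deliberately simple stand-in, not the accelerated algorithm the abstract describes):

```python
import math

def simulate(params, inputs):
    """One-neuron recurrent model: h_t = tanh(w*h_{t-1} + u*x_t), y_t = v*h_t."""
    w, u, v = params
    h, outputs = 0.0, []
    for x in inputs:
        h = math.tanh(w * h + u * x)
        outputs.append(v * h)
    return outputs

def mse(params, inputs, targets):
    preds = simulate(params, inputs)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

# Toy "plant" to identify: y_t = 0.5*y_{t-1} + 0.4*x_t.
inputs = [0.1 * (i % 10) for i in range(20)]
targets, y = [], 0.0
for x in inputs:
    y = 0.5 * y + 0.4 * x
    targets.append(y)

params, lr, eps = [0.1, 0.1, 0.1], 0.1, 1e-6
err_before = mse(params, inputs, targets)
for _ in range(400):  # plain gradient descent with numerical gradients
    base = mse(params, inputs, targets)
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((mse(bumped, inputs, targets) - base) / eps)
    params = [p - lr * g for p, g in zip(params, grads)]
err_after = mse(params, inputs, targets)
```

The point of the paper is precisely that naive training like this is slow; its dynamic algorithm reuses past error-gradient information and runs two forward passes in parallel to cut the iteration count from tens of thousands to hundreds.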

  20. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk, an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (Kd) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
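Given a partition coefficient Kd (L/kg) and an SPM concentration, the dissolved fraction follows from mass balance as 1/(1 + Kd·SPM); a sketch with hypothetical values (the paper's fitted functions of salinity and SPM are not reproduced here):

```python
def dissolved_fraction(kd_l_per_kg, spm_mg_per_l):
    """Fraction of the total metal in the dissolved phase for a
    partition coefficient Kd (L/kg) and SPM concentration (mg/L)."""
    spm_kg_per_l = spm_mg_per_l * 1e-6  # mg/L -> kg/L
    return 1.0 / (1.0 + kd_l_per_kg * spm_kg_per_l)

# Hypothetical values, order-of-magnitude typical for estuarine waters.
f_low_spm = dissolved_fraction(1e5, 20.0)    # clearer water
f_high_spm = dissolved_fraction(1e5, 200.0)  # turbidity-maximum conditions
```

The monotonic drop in dissolved fraction with rising SPM is why SPM is such a useful predictor in the empirical functions the abstract describes.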

  1. Empirical Model for Predicting Rate of Biogas Production | Adamu ...

    African Journals Online (AJOL)

    Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...
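The modified Gompertz and modified logistic equations referred to in the abstract have standard forms; a sketch that evaluates them with illustrative parameters (the paper obtains the asymptote P, the maximum rate Rm, and the lag time by non-linear regression against the measured data):

```python
import math

E = math.e

def gompertz(t, P, Rm, lam):
    """Modified Gompertz cumulative biogas yield: asymptote P,
    maximum production rate Rm, lag time lam."""
    return P * math.exp(-math.exp(Rm * E / P * (lam - t) + 1.0))

def logistic(t, P, Rm, lam):
    """Modified logistic alternative used in the abstract."""
    return P / (1.0 + math.exp(4.0 * Rm / P * (lam - t) + 2.0))

# Illustrative parameters only (e.g. mL biogas, mL/day, days).
P, Rm, lam = 320.0, 25.0, 3.0
early = gompertz(1.0, P, Rm, lam)
late = gompertz(40.0, P, Rm, lam)
```

Fitting either curve to cumulative production data (e.g. with a non-linear least-squares routine) recovers P, Rm and the lag phase, which is the "rate of biogas production" model the title refers to.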

  2. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small scale neighbourhood conditions are hard to find. If there are important explanatory variables missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki in Finland, we find the motivation for a spatial Durbin model, we estimate the model and interpret the estimates for the summary measures of impacts. By the analysis we show that the model structure makes it possible to model and find small scale neighbourhood effects, when we know that they exist, but we are lacking proper variables to measure them.
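A spatial Durbin model takes the form y = rho·W·y + X·beta + W·X·theta + eps, so (setting eps = 0) expected prices follow from the reduced form y = (I − rho·W)⁻¹(X·beta + W·X·theta); a toy sketch with an invented four-house neighbourhood, all numbers illustrative:

```python
import numpy as np

# Row-standardised spatial weight matrix W for four houses on a line.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

X = np.array([[80.0], [120.0], [100.0], [60.0]])  # e.g. floor area (m^2)
rho = 0.4                                         # spatial autocorrelation
beta = np.array([[2.0]])                          # direct effect of X
theta = np.array([[0.5]])                         # effect of neighbours' X

# Reduced form: y = (I - rho*W)^(-1) (X beta + W X theta).
n = W.shape[0]
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + W @ X @ theta)
```

The W·X term is exactly what lets the model pick up small-scale neighbourhood effects through the neighbours' observed characteristics, even when no direct neighbourhood variable is available.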

  3. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements will change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B, so that the atom with the larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B, respectively, are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of the two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of the corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for the signs of deviations according to the electron density at the Wigner–Seitz cell from the Thomas–Fermi–Dirac–Cheng model.
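Vegard's law itself is just a linear mixing rule for the lattice parameter, and the deviation the abstract discusses is the measured departure from that line; a sketch (the Cu-Ni lattice parameters are standard handbook values, and the deviation function is purely illustrative of how the sign would be read off):

```python
def vegard(x, a_A, a_B):
    """Vegard's law: lattice parameter varies linearly with the
    mole fraction x of component A."""
    return x * a_A + (1.0 - x) * a_B

def deviation(a_measured, x, a_A, a_B):
    """Signed deviation from Vegard's law; the abstract's model
    predicts its sign from the components' electron densities."""
    return a_measured - vegard(x, a_A, a_B)

# Cu-Ni, an often-quoted near-ideal pair (lattice parameters in angstroms).
a_cu, a_ni = 3.615, 3.524
a_linear = vegard(0.5, a_cu, a_ni)
```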

  4. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    Science.gov (United States)

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycle ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98). A model for hindered settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.

  5. Empirical STORM-E Model: I. Theoretical and Observational Basis

    Science.gov (United States)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  6. Empirical model of subdaily variations in the Earth rotation from GPS and its stability

    Science.gov (United States)

    Panafidina, N.; Kurdubov, S.; Rothacher, M.

    2012-12-01

    The model recommended by the IERS for subdaily variations in the Earth rotation at diurnal and semidiurnal periods has been computed from an ocean tide model and comprises 71 terms in polar motion and Universal Time. In the present study we compute an empirical model of variations in the Earth rotation at tidal frequencies from homogeneously re-processed GPS observations over 1994-2007, available as free daily normal equations. We discuss the reliability of the obtained amplitudes of the ERP variations and compare results from GPS and VLBI data to identify technique-specific problems and instabilities of the empirical tidal models.

  7. Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples

    Directory of Open Access Journals (Sweden)

    Tanongkankit Yardfon

    2016-01-01

    This research aimed to study the drying kinetics and determine an empirical model for fresh pineapple and pineapple pre-treated with sucrose solution at different concentrations during drying. Samples 3 mm thick were immersed into 30, 40 and 50 Brix sucrose solution before hot air drying at temperatures of 60, 70 and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time, while an increase in sucrose concentration led to longer drying time. According to the statistical criteria of the highest coefficient of determination (R²), the lowest chi-square (χ²) and the lowest root mean square error (RMSE), the Logarithmic model was the best model for describing the drying behavior of samples soaked in 30, 40 and 50 Brix sucrose solution.
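The Logarithmic thin-layer drying model selected above has the form MR = a·exp(−k·t) + c, where MR is the moisture ratio and t the drying time. A hedged sketch of how such a model is fitted and ranked by R² (the data here are synthetic, not the study's measurements):

```python
# Fitting the Logarithmic thin-layer drying model MR = a*exp(-k t) + c
# to moisture-ratio data by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def logarithmic_model(t, a, k, c):
    """Moisture ratio as a function of drying time t (min)."""
    return a * np.exp(-k * t) + c

t = np.linspace(0, 300, 31)                        # drying time, min
mr_obs = logarithmic_model(t, 0.95, 0.012, 0.05)   # synthetic "observations"

popt, _ = curve_fit(logarithmic_model, t, mr_obs, p0=[1.0, 0.01, 0.0])
a, k, c = popt

# Goodness of fit (R^2), one of the criteria used to rank candidate models
residuals = mr_obs - logarithmic_model(t, *popt)
r2 = 1 - np.sum(residuals**2) / np.sum((mr_obs - mr_obs.mean())**2)
```

In practice χ² and RMSE would be computed alongside R² for each candidate model (Newton, Page, Logarithmic, ...) and the best-scoring form retained.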

  8. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements...... of N uptake in spring barley and ryegrass catch crops, supplemented with data from the literature on the temporal release of organic residues in soil. The model estimates a faster mineralisation rate for organic N in pig slurry compared with cattle slurry, and the description includes an initial N...

  9. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
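The simplest case mentioned above, a confidence region for a univariate mean, can be sketched directly. The empirical likelihood ratio for a candidate mean mu is maximised via a Lagrange multiplier, and −2 log R(mu) is compared to a chi-square(1) quantile (a minimal sketch of Owen's construction, with illustrative data):

```python
# Empirical likelihood for a univariate mean: weights w_i = 1/(n*(1+lam*z_i))
# with z_i = x_i - mu, where lam solves sum(z_i/(1+lam*z_i)) = 0.
import numpy as np
from scipy.optimize import brentq

def neg2_log_elr(x, mu, eps=1e-8):
    """-2 log empirical likelihood ratio for mean mu (needs min(x) < mu < max(x))."""
    z = x - mu
    n = len(x)
    # Bracket for the Lagrange multiplier keeping all weights positive
    lo = (1.0 / n - 1.0) / z.max() + eps
    hi = (1.0 / n - 1.0) / z.min() - eps
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

x = np.array([1.2, 0.7, 2.3, 1.9, 0.4, 1.1, 1.6, 0.9])
stat_at_mean = neg2_log_elr(x, x.mean())  # ~0: the sample mean maximises R(mu)
stat_off = neg2_log_elr(x, 1.5)           # > 0: mu away from the sample mean
```

A 95% confidence region is then {mu : −2 log R(mu) ≤ 3.84}, with the data, not a parametric family, determining its shape.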

  10. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    In this paper, we choose a suitable indoor-outdoor propagation model from the existing models by considering path loss and distance as parameters. Path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC) that receives beacon messages from the indoor nodes. As per our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
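Of the candidates listed, the Log-Distance model is the easiest to state and shows the general shape all of them share. A sketch with illustrative parameter values (not the paper's fitted numbers):

```python
# Log-Distance path loss model: loss grows with 10*n dB per decade of distance.
import math

def log_distance_path_loss(d, d0=1.0, pl0=40.0, n=2.7):
    """Path loss in dB at distance d (m).

    d0  - reference distance (m)
    pl0 - measured path loss at the reference distance (dB)
    n   - path loss exponent (2 in free space; larger indoors/through walls)
    """
    return pl0 + 10.0 * n * math.log10(d / d0)
```

Models such as COST-231 Hata or SUI add correction terms for antenna heights and environment type; the evaluation in the paper then selects whichever model minimises the standard deviation from the measured loss.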

  11. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    A model of a proton exchange membrane fuel cell based on semi-empirical equations is proposed, providing a tool for the design and analysis of complete fuel cell systems. The focus of this study is to derive an empirical model, including process variations, to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations, such as the gas pressure, temperature, humidity, and utilization, to cover operating processes, which are important factors in determining the real performance of a fuel cell. The modelling results compare well with known experimental results; the comparison shows good agreement between the modelling results and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)
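Semi-empirical PEM fuel cell models of this family typically express cell voltage as the open-circuit value minus activation, ohmic, and mass-transport losses. The sketch below uses a widely cited generic form (Kim-style), with illustrative coefficients; it is not the specific set of equations or fitted parameters from this paper:

```python
# Generic semi-empirical PEM fuel cell polarization curve:
# V(i) = E0 - b*log10(i) - R*i - m*exp(n*i), valid for i > 0.
import math

def cell_voltage(i, e0=1.0, b=0.05, r=0.25, m=3e-4, n=8.0):
    """Cell voltage (V) at current density i (A/cm^2).

    e0   - fitted open-circuit-like constant (V)
    b    - Tafel slope for activation losses (V/decade)
    r    - area-specific ohmic resistance (ohm*cm^2)
    m, n - empirical mass-transport loss parameters
    """
    return e0 - b * math.log10(i) - r * i - m * math.exp(n * i)
```

The process variations the abstract mentions (pressure, temperature, humidity, utilization) enter such models by making e0, b, r, m and n fitted functions of those operating conditions.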

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351


  14. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for an interval of three years (2010 – 2012), measured using various instruments for Yola, from recorded data collected at the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  15. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Secondly, analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.

  16. The gravity model specification for modeling international trade flows and free trade agreement effects: a 10-year review of empirical studies

    OpenAIRE

    Kepaptsoglou, Konstantinos; Karlaftis, Matthew G.; Tsamboulas, Dimitrios

    2010-01-01

    The gravity model has been extensively used in international trade research for the last 40 years because of its considerable empirical robustness and explanatory power. Since their introduction in the 1960's, gravity models have been used for assessing trade policy implications and, particularly recently, for analyzing the effects of Free Trade Agreements on international trade. The objective of this paper is to review the recent empirical literature on gravity models, highlight best practic...
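The gravity specification reviewed in these studies is conventionally estimated in log-linear form, relating bilateral trade to the economic masses of the two partners and the distance between them. A sketch on synthetic, noise-free data (the coefficient values are illustrative, not estimates from the review):

```python
# Log-linear gravity model of trade:
# ln(T_ij) = b0 + b1*ln(GDP_i) + b2*ln(GDP_j) + b3*ln(dist_ij) + e, fit by OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 200
gdp_i = rng.uniform(1e2, 1e4, n)   # exporter GDP (synthetic)
gdp_j = rng.uniform(1e2, 1e4, n)   # importer GDP (synthetic)
dist = rng.uniform(100, 10000, n)  # bilateral distance (synthetic)

# Trade flows generated from known coefficients, so OLS should recover them
ln_t = 1.0 + 0.8 * np.log(gdp_i) + 0.7 * np.log(gdp_j) - 1.1 * np.log(dist)

X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, ln_t, rcond=None)
```

FTA effects are typically studied by adding a dummy variable (1 when both partners belong to the agreement) to the regressor matrix and reading its coefficient as the trade-creation effect.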

  17. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. The empirical validation methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed, and a set of parameters of the thermal model was identified for optimization. The generic optimization program GenOpt®, coupled to the building simulation code, made it possible to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.

  18. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes

  19. Empirical Modeling of Lithium-ion Batteries Based on Electrochemical Impedance Spectroscopy Tests

    International Nuclear Information System (INIS)

    Samadani, Ehsan; Farhad, Siamak; Scott, William; Mastali, Mehrdad; Gimenez, Leonardo E.; Fowler, Michael; Fraser, Roydon A.

    2015-01-01

    Highlights: • Two commercial lithium-ion batteries are studied through HPPC and EIS tests. • An equivalent circuit model is developed for a range of operating conditions. • The model improves on current battery empirical models for vehicle applications. • The model is shown to predict HPPC test resistances efficiently. Abstract: An empirical model for commercial lithium-ion batteries is developed based on electrochemical impedance spectroscopy (EIS) tests. An equivalent circuit is established according to EIS test observations at various battery states of charge and temperatures. A Laplace-transform-based time-domain model is developed from the circuit, which can predict the battery operating output potential difference in battery electric and plug-in hybrid vehicles at various operating conditions. This model demonstrates up to 6% improvement compared to simple resistance and Thevenin models and is suitable for modeling and on-board controller purposes. Results also show that this model can predict the battery internal resistance obtained from hybrid pulse power characterization (HPPC) tests to within 20 percent, making it suitable for low- to medium-fidelity powertrain design purposes. In total, this simple battery model can be employed as a real-time model in electrified vehicle battery management systems.
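The Thevenin model used here as a baseline is a series resistance plus one RC pair. A minimal simulation sketch of that baseline (parameter values are illustrative; an EIS-derived model like the paper's would make R0, R1 and C1 functions of state of charge and temperature):

```python
# First-order Thevenin equivalent-circuit battery model under constant
# discharge current, integrated by forward Euler.

def simulate_thevenin(current, dt=0.1, t_end=500.0,
                      ocv=3.7, r0=0.015, r1=0.020, c1=2000.0):
    """Terminal voltage (V) after t_end seconds of constant current (A).

    ocv - open-circuit voltage; r0 - series resistance (ohm);
    r1, c1 - RC pair for the charge-transfer/diffusion dynamics.
    """
    v1 = 0.0  # voltage across the RC pair
    for _ in range(int(t_end / dt)):
        v1 += (-v1 / (r1 * c1) + current / c1) * dt
    return ocv - current * r0 - v1

# After ~12 time constants (r1*c1 = 40 s), v1 -> I*r1, so the terminal
# voltage settles near ocv - I*(r0 + r1) = 3.7 - 2*0.035 = 3.63 V:
v_term = simulate_thevenin(2.0)
```

The simple-resistance baseline drops the RC pair entirely (V = OCV − I·R0), which is why it misses the relaxation behaviour that HPPC pulses expose.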

  20. A stochastic empirical model for heavy-metal balances in agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,

  1. Space evolution model and empirical analysis of an urban public transport network

    Science.gov (United States)

    Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing

    2012-07-01

    This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on that data. Public transport patterns primarily depend on the spatial distribution of traffic, the demands of passengers and the expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors based on a given traffic spatial distribution. The temporal change of the urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses of topological and spatial attributes suggest that an evolving network whose traffic demands follow a power-law distribution arranged in concentric circles tallies well with these nine cities.

  2. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  3. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    Science.gov (United States)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.

  4. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated positive and significant linear correspondence (R² = 0.75) between the two estimates (in MJ m−2 yr−1). Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of the energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  5. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    Science.gov (United States)

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer used only for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices, articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real-world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  6. Empirical high-latitude electric field models

    International Nuclear Information System (INIS)

    Heppner, J.P.; Maynard, N.C.

    1987-01-01

    Electric field measurements from the Dynamics Explorer 2 satellite have been analyzed to extend the empirical models previously developed from dawn-dusk OGO 6 measurements (J.P. Heppner, 1977). The analysis embraces large quantities of data from polar crossings entering and exiting the high latitudes in all magnetic local time zones. Paralleling the previous analysis, the modeling is based on the distinctly different polar cap and dayside convective patterns that occur as a function of the sign of the Y component of the interplanetary magnetic field. The objective, which is to represent the typical distributions of convective electric fields with a minimum number of characteristic patterns, is met by deriving one pattern (model BC) for the northern hemisphere with a +Y interplanetary magnetic field (IMF) and southern hemisphere with a -Y IMF and two patterns (models A and DE) for the northern hemisphere with a -Y IMF and southern hemisphere with a +Y IMF. The most significant large-scale revisions of the OGO 6 models are (1) on the dayside where the latitudinal overlap of morning and evening convection cells reverses with the sign of the IMF Y component, (2) on the nightside where a westward flow region poleward from the Harang discontinuity appears under model BC conditions, and (3) magnetic local time shifts in the positions of the convection cell foci. The modeling above was followed by a detailed examination of cases where the IMF Z component was clearly positive (northward). Neglecting the seasonally dependent cases where irregularities obscure pattern recognition, the observations range from reasonable agreement with the new BC and DE models, to cases where different characteristics appeared primarily at dayside high latitudes

  7. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.

  8. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credits. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles, in which case one constraint must be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using some numerical method for integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations (k a positive integer) of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a strong dependence on the choice of the parameter k: if we choose too small a value, we obtain an overestimate of the Information value, and vice versa. An adjusted square root of the number of bad clients seems to be a reasonable compromise.

  9. An improved empirical model for diversity gain on Earth-space propagation paths

    Science.gov (United States)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.

  10. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

    Full Text Available Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using the Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and the total economic and insured catastrophe losses in seven regions of the world in the period 2009-2015. The estimated credible premiums by world region provide information on how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
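    The classical Bühlmann model referenced above blends each region's own loss experience with the collective mean via a credibility factor Z = n/(n + s²/a), where s² is the within-region variance and a the between-region variance. The following is a textbook equal-weights sketch on made-up data, not the article's Bühlmann-Straub calculation on the Sigma Swiss Re dataset.

```python
def buhlmann_premiums(losses_by_region):
    # losses_by_region: dict region -> list of annual losses (equal lengths assumed)
    r = len(losses_by_region)
    n = len(next(iter(losses_by_region.values())))
    means = {k: sum(v) / n for k, v in losses_by_region.items()}
    grand = sum(means.values()) / r

    # s2: pooled within-region variance; a: between-region variance estimate
    s2 = sum(sum((x - means[k]) ** 2 for x in v)
             for k, v in losses_by_region.items()) / (r * (n - 1))
    a = sum((m - grand) ** 2 for m in means.values()) / (r - 1) - s2 / n
    a = max(a, 0.0)

    # Credibility factor Z = n / (n + s2/a); Z = 0 when regions look homogeneous
    z = n * a / (n * a + s2) if a > 0 else 0.0
    return {k: z * means[k] + (1 - z) * grand for k in losses_by_region}
```

With strongly heterogeneous regions, Z approaches 1 and each region's premium tracks its own mean; with homogeneous regions, the collective mean dominates.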

  11. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
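    The core idea — rescale a test statistic so that its null-distribution mean matches the nominal degrees of freedom — can be demonstrated with a Monte Carlo toy. This is only a schematic analogue: the paper derives its correction from SEM quantities, whereas here the null is a deliberately inflated chi-square standing in for a misbehaving T_ML, and `simulate_null` is a hypothetical helper.

```python
import random

def empirical_bartlett_correction(t_observed, df, simulate_null, n_sim=2000, seed=1):
    # Monte Carlo analogue of the Bartlett correction: rescale the statistic
    # so that its simulated null mean equals the nominal degrees of freedom.
    rng = random.Random(seed)
    sims = [simulate_null(rng) for _ in range(n_sim)]
    c = df / (sum(sims) / n_sim)
    return c * t_observed

# Toy null: a chi-square with df = 5, inflated by 30% (null mean 6.5, not 5)
df = 5
inflated = lambda rng: 1.3 * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
```

Applying the correction shrinks an inflated statistic of 13.0 toward 13.0 × (5/6.5) ≈ 10, restoring the nominal chi-square scale.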

  12. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enable the high-throughput survey of mRNA expression profiles at the genomic level; however, the data present a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene-by-gene models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure, which can be regarded as an 'evidence-based' weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, f_β(y_t|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10⁻⁵) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model
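    The β-likelihood weighting described above downweights observations in proportion to a power of their density. The sketch below assumes a simple normal working model (the paper's models are hierarchical, so this captures only the weighting principle): outliers receive small weights, and β = 0 recovers the unweighted case.

```python
import math

def beta_weights(y, mu, sigma, beta):
    # w_t = f(y_t | mu, sigma)^beta : a power function of the likelihood,
    # so low-likelihood (outlying) observations get small weights.
    def normal_pdf(x):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return [normal_pdf(v) ** beta for v in y]
```

A value near the center of the working model keeps a relatively large weight, an outlier is nearly ignored, and setting β = 0 makes every weight exactly 1, reproducing ordinary (unweighted) likelihood inference.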

  13. Integrating social science into empirical models of coupled human and natural systems

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Kline

    2017-09-01

    Full Text Available Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social scientists: (1) how to represent human behavior as influenced by biophysical factors and integrate this into CHANS empirical models; (2) how to organize and function as a multidisciplinary social science team to accomplish that task. We reflect on these challenges regarding our CHANS research that investigated human adaptation to fire-prone landscapes. Our project sought to characterize the forest management activities of land managers and landowners (or "actors") and their influence on wildfire behavior and landscape outcomes by focusing on biophysical and socioeconomic feedbacks in central Oregon (USA). We used an agent-based model (ABM) to compile biophysical and social information pertaining to actor behavior, and to project future landscape conditions under alternative management scenarios. Project social scientists were tasked with identifying actors' forest management activities and the biophysical and socioeconomic factors that influence them, and with developing decision rules for incorporation into the ABM to represent actor behavior. We (1) briefly summarize what we learned about actor behavior on this fire-prone landscape and how we represented it in an ABM, and (2) more significantly, report our observations about how we organized and functioned as a diverse team of social scientists to fulfill these CHANS research tasks. We highlight several challenges we experienced, involving quantitative versus qualitative data and methods, distilling complex behavior into empirical models, varying sensitivity of biophysical models to social factors, synchronization of research tasks, and the need to

  14. Prediction of early summer rainfall over South China by a physical-empirical model

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2014-10-01

    In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and superbly represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34 years of observations (1979-2012) reveals three physically consequential predictors. Plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical high and Okhotsk high in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than the four dynamical models' ensemble prediction for the 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.

  15. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electron-hole pairs along the trajectory, and in a space-charge zone the collected charge is considerably increased. The chronology of this charge funneling is described by a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to directly evaluate the transient current, the collected charge and the funneling length with good agreement. The model can be extrapolated to more complex structures.

  16. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  17. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q = (fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  18. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Development and empirical exploration of an extended model of intragroup conflict

    OpenAIRE

    Hjertø, Kjell B.; Kuvaas, Bård

    2009-01-01

    This is the post-print of the article published in the International Journal of Conflict Management. Purpose - The purpose of this study was to develop and empirically explore a model of four intragroup conflict types (the 4IC model), consisting of an emotional person conflict, a cognitive task conflict, an emotional task conflict, and a cognitive person conflict. The first two conflict types are similar to existing conceptualizations, whereas the latter two represent new dimensions of group conflict. Design/m...

  20. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…

  1. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  2. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  3. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used, the Virkler data (aluminium alloy) and data...... established at the Laboratory of Structural Engineering at Aalborg University, the AUC-data, (mild steel). The model, which is based on the assumption, that the crack propagation process can be described by a discrete Space Markov theory, is applicable to constant as well as random loading. It is shown...

  4. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine the positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The simulations are performed for the case of a constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, comparable with the ones in the empirical system of popular posts. In view of purely emotion-driven agent actions, this type of comparison provides a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate post popularity with the emotion dynamics and the prevalence of negative

  5. Empirical modeling of single-wake advection and expansion using full-scale pulsed lidar-based measurements

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels

    2015-01-01

    In the present paper, single-wake dynamics have been studied both experimentally and numerically. The use of pulsed lidar measurements allows for validation of basic dynamic wake meandering modeling assumptions. Wake center tracking is used to estimate the wake advection velocity experimentally...... fairly well in the far wake but lacks accuracy in the outer region of the near wake. An empirical relationship, relating maximum wake induction and wake advection velocity, is derived and linked to the characteristics of a spherical vortex structure. Furthermore, a new empirical model for single...

  6. Correlation of the assessment scales used in Parkinson's disease with applicability in physical therapy

    Directory of Open Access Journals (Sweden)

    Marcella Patrícia Bezerra de Mello

    Full Text Available INTRODUCTION: Parkinson's disease (PD) is a chronic, degenerative neurological disorder of the central nervous system that affects the basal ganglia, whose main characteristics are tremor, rigidity and bradykinesia. With therapeutic progress, several scales have been developed to monitor the evolution of the disease and the efficacy of treatments. The objective of this literature review is to characterize the main scales used for the assessment of PD, discussing their applicability to physical therapy practice. MATERIALS AND METHOD: Literature survey of databases such as Scielo, Medline, Lilacs and PubMed, covering 1990 to 2005. RESULTS: Six scales are covered: the Hoehn and Yahr Stages of Disability Scale; the Unified Parkinson's Disease Rating Scale (UPDRS); the Sydney Scale; the Parkinson's Disease Questionnaire (PDQ-39); the Nottingham Health Profile quality-of-life measure (PSN); and the Parkinson Activity Scale (PAS). The PDQ-39 stands out for capturing the patient's perception of his or her quality of life. The PAS best meets the specific objectives of physical therapy, since it assesses the main problems of functional mobility. The Hoehn and Yahr scale and the UPDRS, owing to their reliability, can also be used by physical therapists for a better assessment of the patient's clinical and functional status. CONCLUSION: The need to monitor patient evolution and the results of physical therapy intervention requires the physical therapist to know and use systematized, easily applied measures to assess patients with PD, choosing the one that allows clinical decision-making compatible with the workplace, the patient's needs and the environment in which the patient lives.

  7. EVOLUTION OF THEORIES AND EMPIRICAL MODELS OF A RELATIONSHIP BETWEEN ECONOMIC GROWTH, SCIENCE AND INNOVATIONS (PART I

    Directory of Open Access Journals (Sweden)

    Kaneva M. A.

    2017-12-01

    Full Text Available This article is the first chapter of an analytical review of existing theoretical models of the relationship between economic growth/GRP and indicators of scientific development and innovation activity, as well as empirical approaches to testing this relationship. The aim of the paper is to systematize existing approaches to modeling economic growth driven by science and innovations. The novelty of the current review lies in the authors' criterion of interconnectedness between theoretical and empirical studies, used to systematize a wide range of publications in a final summary table. In the first part of the article the authors discuss the evolution of theoretical approaches, while the second part examines the time gap between theories and their empirical verification, caused by the level of development of quantitative instruments such as econometric models. The results of this study can be used by researchers and graduate students to become familiar with current scientific approaches tracing the progress from theory to empirical verification of the «economic growth-innovations» relationship, and for improving different types of models in spatial econometrics. To apply these models to management practice, the presented review could be supplemented with new criteria for the classification of knowledge production functions and other theories about the effect of science on economic growth.

  8. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: a comparative approach and an empirical approach. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  9. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    Full Text Available This paper considers tests of parameter instability and structural change with known, unknown or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation, with strong rules for detecting the number of breaks in a time series. For that, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (an ARMA model and a simple linear regression model).

  10. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption was assessed using analysis of variance. The developed empirical model was validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.

  11. Patents on biotechnological inventions: European legal and case-law criteria and their applicability in Mexican law

    Directory of Open Access Journals (Sweden)

    Carlos Ernesto Arcudia Hernández

    2015-01-01

    Full Text Available The first applications for patent protection of biotechnological inventions met resistance in patent offices and in the courts. Such inventions were considered products of nature, or the patent system was deemed unsuitable for living matter. It was also argued that biotechnological inventions did not meet the patentability requirements, or that legal prohibitions prevented their protection by patent. The European Patent Office and the European courts, through a series of decisions, progressively set aside the exceptions to the patentability of living matter. This case-law development served as the basis for the drafting of Directive 98/44. Given that the regulation of the European patent system closely resembles that of the Mexican patent system, we propose the applicability of the European criteria to the examination of patent applications on living matter.

  12. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, the HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable-distribution, power-law and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.

  13. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating...... their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling...

  14. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    Science.gov (United States)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

  15. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = 1/5 chi/sub total/, where chi/sub total/ ≅ chi/sub e/ is the thermal diffusivity, and then use the κ/sub e/ = n/sub e/chi/sub e/ values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles

  16. Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming

    2015-06-01

    Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons: the Meiyu in May-June and the typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important in producing significant fluctuations of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at lead times of 0 (end of April), 1 and 2 months, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) a tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow an enhanced Philippine Sea anticyclonic anomaly and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increased Taiwan Meiyu rainfall. Note that the identified precursors at various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability, with significant cross-validated temporal correlation coefficient skills of 0.75, 0.64 and 0.61 for 1979-2012 at the 0-, 1- and 2-month lead times, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.
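    A cross-validated skill of this kind can be computed in skeleton form by refitting an ordinary least-squares model with each year held out and correlating the held-out predictions with observations. The predictors below are synthetic placeholders, not the paper's SST-tendency indices.

```python
import numpy as np

def loo_skill(X, y):
    """Leave-one-out hindcast skill: refit OLS with each year held out,
    then correlate the held-out predictions with the observations."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])  # intercept + predictors
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ beta
    return np.corrcoef(preds, y)[0, 1]

# Synthetic stand-in: 34 "years" of three hypothetical precursor indices
rng = np.random.default_rng(1)
X = rng.normal(size=(34, 3))
y = X @ np.array([1.0, -0.7, 0.5]) + 0.5 * rng.normal(size=34)
skill = loo_skill(X, y)
```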

  17. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence, such as the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework for social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only the contacts but also the times of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past; that is, individuals are affected only by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
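    A minimal sketch of such a time-window threshold rule, in our own simplified reading rather than the authors' implementation, can be run on a list of timestamped contacts:

```python
def run_threshold(contacts, n, seeds, phi, window):
    """Watts-style threshold cascade on a temporal contact list.

    contacts: (t, i, j) events sorted by time; an agent adopts once the
    fraction of its contacts inside the trailing time window that are
    already adopters reaches the threshold phi."""
    adopted = set(seeds)
    recent = {a: [] for a in range(n)}  # (time, partner) pairs per agent
    for t, i, j in contacts:
        for a, b in ((i, j), (j, i)):
            recent[a].append((t, b))
            # keep only contacts still inside the time window
            recent[a] = [(s, c) for s, c in recent[a] if t - s <= window]
            if a not in adopted:
                frac = sum(c in adopted for _, c in recent[a]) / len(recent[a])
                if frac >= phi:
                    adopted.add(a)
    return adopted

# A chain of contacts spreading adoption outward from seed node 0
cascade = run_threshold([(0, 0, 1), (1, 1, 2), (2, 2, 3)], 4, {0}, 0.5, 5)
```

    The count-based variant mentioned in the abstract would replace the fraction test with a threshold on the number of adopting contacts in the window.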

  18. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2013-01-01

    The stochastic volatility model is one of the volatility models that infer the latent volatility of asset returns. The Bayesian inference of the stochastic volatility (SV) model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. Then we calculate the accuracy of the volatility measurement using the realized volatility as a proxy of the true volatility, and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that, empirically, the SV model performs better than the GARCH model.
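    The GARCH benchmark above filters a conditional variance recursively from past returns. A minimal GARCH(1,1) recursion, with illustrative rather than estimated parameters, looks like:

```python
import numpy as np

def garch_filter(returns, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) conditional variance:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = sigma2_0
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# With zero returns the recursion decays toward omega / (1 - beta)
sigma2 = garch_filter(np.zeros(200), omega=0.1, alpha=0.1, beta=0.8, sigma2_0=1.0)
```

    In an accuracy comparison of the kind reported above, the filtered sigma2 series would be scored against realized volatility; the zero-return input here merely exhibits the mean reversion of the recursion.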

  19. Integrating social science into empirical models of coupled human and natural systems

    Science.gov (United States)

    Jeffrey D. Kline; Eric M. White; A Paige Fischer; Michelle M. Steen-Adams; Susan Charnley; Christine S. Olsen; Thomas A. Spies; John D. Bailey

    2017-01-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social...

  20. An Improved Semi-Empirical Model for Radar Backscattering from Rough Sea Surfaces at X-Band

    Directory of Open Access Journals (Sweden)

    Taekyeong Jin

    2018-04-01

    Full Text Available We propose an improved semi-empirical scattering model for X-band radar backscattering from rough sea surfaces. This new model has a wider validity range of wind speeds than does the existing semi-empirical sea spectrum (SESS) model. First, we retrieved the small-roughness parameters from sea surfaces that were numerically generated using the Pierson-Moskowitz spectrum and measurement datasets for various wind speeds. Then, we computed the backscattering coefficients of the small-roughness surfaces for various wind speeds using the integral equation method model. Finally, the large-roughness characteristics were taken into account by integrating the small-roughness backscattering coefficients, weighted by the surface slope probability density function, over all possible surface slopes. The new model covers wind speeds below 3.46 m/s, a range not covered by the existing SESS model. The accuracy of the new model was verified with two measurement datasets for various wind speeds from 0.5 m/s to 14 m/s.

  1. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity of the system dynamics, the model parameters are first obtained from experimental data. In the study, the dynamic characteristics of the indoor temperature and the superheat are assumed to follow a first-order model with time delay. As the compressor frequency and the opening angle of the electronic expansion valve vary, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model is proposed for each to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available
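    A first-order-plus-dead-time response of the kind assumed above can be simulated with a simple Euler recursion; the gain, time constant and delay below are illustrative values, not the identified VSRS parameters.

```python
import numpy as np

def fopdt_step(K, tau, theta, dt, t_end):
    """Euler simulation of tau * y' = -y + K * u(t - theta) for a unit step u."""
    n = int(t_end / dt)
    delay = int(theta / dt)
    y = np.zeros(n)
    for k in range(1, n):
        u_delayed = 1.0 if k - delay > 0 else 0.0  # step arrives after theta
        y[k] = y[k - 1] + dt * (-y[k - 1] + K * u_delayed) / tau
    return y

y = fopdt_step(K=2.0, tau=5.0, theta=1.0, dt=0.01, t_end=60.0)
```

    Identifying (K, tau, theta) for each input-output pair from step-test data is what the paper's empirical modeling stage amounts to.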

  2. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS, following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiq...

  3. Adaptation and applicability of a diuretic algorithm for patients with heart failure

    Directory of Open Access Journals (Sweden)

    Maria Karolina Echer Ferreira Feijó

    2013-06-01

    Full Text Available BACKGROUND: Congestive states can be identified and managed through algorithms such as the Diuretic Treatment Algorithm (DTA), which adjusts diuretics by telephone with a focus on clinical assessment. However, the DTA is available only in English. OBJECTIVE: To adapt the DTA and test its applicability for use in Brazil in outpatients with heart failure (HF). METHODS: The stages of translation, synthesis, back-translation, evaluation by an expert committee and pretest (clinical applicability by means of a randomized clinical trial) were followed. In the Brazilian version, the DTA was named the diuretic adjustment algorithm (AAD). Patients were randomized to the intervention group (IG - diuretic adjustment according to the AAD) or the control group (CG - conventional adjustment). The clinical congestion score (CCS) and weight were assessed in both groups. RESULTS: Twelve modifications were made to the DTA. Thirty-four patients were included. For congested patients, the AAD-guided diuretic increase resulted in greater resolution of congestion, with a two-point reduction in the CCS for 50% of the sample, -2 (-3.5; -1.0), whereas the median for the CG was 0 (-1.25; -1.0) (p < 0.001). The median weight change was greater in the IG, -1.4 (-1.7; -0.5), than in the CG, 0.1 (1.2; -0.6), p = 0.001. CONCLUSIONS: The AAD proved applicable in clinical practice after adaptation and appears to result in better control of congestion in patients with HF. The clinical effectiveness of the tool should be tested in a larger sample of patients, aiming at its validation for use in Brazil (Universal Trial Number: U1111-1130-5749) (Arq Bras Cardiol. 2013; [online]. ahead of print, PP.0-0.

  4. Empirical models of wind conditions on Upper Klamath Lake, Oregon

    Science.gov (United States)

    Buccola, Norman L.; Wood, Tamara M.

    2010-01-01

    Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate-Adaptive Regressive Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded as compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds. The simulated tracer concentration was similar between model results when

  5. Assessing and improving the quality of modeling : a series of empirical studies about the UML

    NARCIS (Netherlands)

    Lange, C.F.J.

    2007-01-01

    Assessing and Improving the Quality of Modeling A Series of Empirical Studies about the UML This thesis addresses the assessment and improvement of the quality of modeling in software engineering. In particular, we focus on the Unified Modeling Language (UML), which is the de facto standard in

  6. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
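    The best-fitting (KL)E nesting described above can be written as two CES nests; the share and elasticity values below are placeholders, not the paper's estimates.

```python
def ces(x1, x2, share, sigma):
    """One CES nest with elasticity of substitution sigma (sigma != 1)."""
    rho = (sigma - 1.0) / sigma
    return (share * x1 ** rho + (1.0 - share) * x2 ** rho) ** (1.0 / rho)

def kl_e(K, L, E, a=0.6, b=0.7, sigma_kl=0.8, sigma_top=0.5):
    """(KL)E structure: capital and labour combined first, energy on top."""
    return ces(ces(K, L, b, sigma_kl), E, a, sigma_top)
```

    Both nests are homogeneous of degree one, so the composite exhibits constant returns to scale; setting either sigma to one would require the Cobb-Douglas limit instead, which the paper's estimates reject.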

  7. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wider project aimed at establishing a methodology for determining the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation and iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings was considered. By comparing the methodologies involved

  8. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wider project aimed at establishing a methodology for determining the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation and iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings was considered. By comparing the methodologies involved by the

  9. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow over an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
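    The POD step above reduces to a truncated SVD of a snapshot matrix. A minimal sketch on synthetic "flow-like" data (the two space-time mode shapes are invented for illustration, not taken from the paper's obstacle flow):

```python
import numpy as np

def pod_reconstruct(snapshots, r):
    """Project a snapshot matrix (space x time) onto its r leading
    POD modes (left singular vectors) and reconstruct."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    Ur = U[:, :r]
    return Ur @ (Ur.T @ snapshots)

# Synthetic data: two coherent space-time modes plus weak noise
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.linspace(0.0, 1.0, 100)[:, None]
snaps = np.sin(2 * np.pi * x) * np.sin(t) + 0.5 * np.cos(4 * np.pi * x) * np.cos(3 * t)
snaps += 0.01 * rng.normal(size=snaps.shape)

err1 = np.linalg.norm(snaps - pod_reconstruct(snaps, 1))
err2 = np.linalg.norm(snaps - pod_reconstruct(snaps, 2))
```

    With two coherent structures present, two POD modes capture nearly all of the energy; the balanced-truncation step in the paper then further reduces the Galerkin model built on such a basis.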

  10. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...

  11. An Empirical Outdoor-to-Indoor Path Loss Model from below 6 GHz to cm-Wave Frequency Bands

    DEFF Research Database (Denmark)

    Rodriguez Larrad, Ignacio; Nguyen, Huan Cong; Kovács, István Z.

    2017-01-01

    This letter presents an empirical multi-frequency outdoor-to-indoor path loss model. The model is based on measurements performed on the exact same set of scenarios for different frequency bands, ranging from traditional cellular allocations below 6 GHz (0.8, 2, 3.5 and 5.2 GHz) up to cm-wave frequency bands...
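    Multi-frequency outdoor-to-indoor models typically build on a log-distance backbone plus a frequency-dependent penetration term. The sketch below is a hypothetical parameterization of that generic form, not the letter's fitted model; every coefficient is invented for illustration.

```python
import numpy as np

def o2i_path_loss(d_m, f_ghz, n=2.8, pl0_db=38.0, wall_db_per_ghz=1.6):
    """Illustrative outdoor-to-indoor path loss in dB: a log-distance term,
    a free-space-like frequency term, and a penetration loss that grows
    with carrier frequency. All coefficients are made up for illustration."""
    return (pl0_db
            + 10.0 * n * np.log10(d_m)
            + 20.0 * np.log10(f_ghz)
            + wall_db_per_ghz * f_ghz)
```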

  12. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users' battery life at the system level. The model is based on empirical measurements on a smartphone using a second-generation LTE chipset, and the model includes functions of receive and transmit data rates and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported, together with cell bandwidth, screen and CPU power consumption. The transmit power level and, to some extent, the receive data rate dominate the overall power consumption, while DRX proves...

  13. Empirical atom model of Vegard's law

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lei, E-mail: zhleile2002@163.com [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China); School of Electromechanical Automobile Engineering, Yantai University, Yantai 264005 (China); Li, Shichun [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China)

    2014-02-01

    Vegard's law seldom holds true for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of the electron density at the interface between component atoms A and B, so that the atom with the larger electron density expands and the atom with the smaller one contracts. If the expansion and contraction of the atomic radii of A and B, respectively, are equal in magnitude, Vegard's law holds true. However, the expansion and contraction of the two component atoms are not equal in most situations. The magnitude of the variation depends on the cohesive energy of the corresponding element crystals. An empirical atom model of Vegard's law is proposed to account for the signs of the deviations according to the electron density at the Wigner–Seitz cell from the Thomas–Fermi–Dirac–Cheng model.
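    Vegard's linear interpolation and its deviation can be stated in two lines; the optional "bowing" parameter is one common way to encode a negative deviation of the kind the model above explains (the lattice-parameter values in the usage below are illustrative).

```python
def vegard(x, a_A, a_B, bowing=0.0):
    """Lattice parameter of an A(1-x)B(x) solid solution: linear Vegard
    interpolation minus an optional bowing (deviation) term."""
    return (1.0 - x) * a_A + x * a_B - bowing * x * (1.0 - x)
```

    At the endpoints the deviation vanishes and the pure-element lattice parameters are recovered exactly.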

  14. Empirical analysis of uranium spot prices

    International Nuclear Information System (INIS)

    Morman, M.R.

    1988-01-01

    The objective is to empirically test a market model of the uranium industry that incorporates the notion that, if the resource is viewed as an asset by economic agents, then its own rate of return, along with the rate of return of a competing asset, would be a major factor in setting the price of the resource. The model tested is based on a market model of supply and demand. The supply model incorporates the notion that the decision criterion used by uranium mine owners is to select the extraction rate that maximizes the net present value of their extraction receipts. The demand model uses a concept that allows explicit recognition of the prospect of arbitrage between a natural-resource market and the market for other capital goods. The empirical approach used for estimation was a recursive, or causal, model. The empirical results were consistent with the theoretical models. The coefficients of the demand and supply equations had the appropriate signs. Tests for causality were conducted to validate the use of the causal model, and the results obtained were favorable. The implications of the findings for future studies of exhaustible resources are: (1) in some cases causal models are the appropriate specification for empirical analysis; (2) supply models should incorporate a measure to capture depletion effects

  15. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Vol. 17, No. 3 (2008), pp. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  16. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for modeling maneuverable aircraft motion as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and its environmental exposure. The suggested approach is based on merging theoretical knowledge of the plant with the training tools of the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and the identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  17. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  18. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influence of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets, covering the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed a substantial reduction in the fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.
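    The fractional differencing parameter d estimated in such models enters through the binomial expansion of (1 - L)^d, whose weights obey a one-line recursion:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients w_k of (1 - L)^d = sum_k w_k L^k:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w
```

    For d = 1 the weights collapse to ordinary first differencing (1, -1, 0, ...), while for 0 < d < 1 they decay hyperbolically, producing the long memory that the structural-break adjustment above partly absorbs.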

  19. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  20. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    Science.gov (United States)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk-free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk-free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk-free forward interest rates were discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk-free forward interest rates is extended here to the case of risky forward interest rates. The Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as driven by the US forward interest rates plus a spread - having its own term structure - above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested in an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  1. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  2. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S.L.

    2017-01-01

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  3. Applicability of special quasi-random structure models in thermodynamic calculations using semi-empirical Debye–Grüneisen theory

    International Nuclear Information System (INIS)

    Kim, Jiwoong

    2015-01-01

    In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model has not previously been applied to semi-empirical thermodynamic calculations; here, Debye–Grüneisen theory (DGT), a semi-empirical method, is used for that purpose. Model reliability was assessed by comparing supercell models of various sizes. The results for chemical bonds, pair correlations, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models; DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments aimed at obtaining reliable SQS models and calculating the thermodynamic properties of the many materials with a random atomic distribution. - Highlights: • Various material properties are used to examine the reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.

  4. A New Empirical Model for Short-Term Forecasting of the Broadband Penetration: A Short Research in Greece

    Directory of Open Access Journals (Sweden)

    Salpasaranis Konstantinos

    2011-01-01

    Full Text Available The objective of this paper is to present a short study of overall broadband penetration in Greece. A new empirical deterministic model is proposed for the short-term forecasting of cumulative broadband adoption. The fitting performance of the model is compared with that of several widely used diffusion models for the cumulative adoption of new telecommunication products, namely the Logistic, Gompertz, Flexible Logistic (FLOG), Box-Cox, Richards, and Bass models. The fitting process uses official broadband penetration data for Greece. Comparing these models with the empirical model, it can be argued that the latter yields sufficiently good statistical indicators of fitting and forecasting performance. The results also stress the need for further research and performance analysis of the model in other, more mature broadband markets.
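The diffusion models named in the abstract have standard closed forms; the sketch below shows illustrative parameterizations of three of them (Logistic, Gompertz, Bass). The parameter values are hypothetical, chosen only to demonstrate the curves' behavior; the paper's own empirical model is not reproduced here.

```python
import math

# Illustrative closed forms for three of the diffusion models named above.
# Parameter values in the example are hypothetical.

def logistic(t, m, a, b):
    """Logistic diffusion: saturation m, inflection time a, growth rate b."""
    return m / (1.0 + math.exp(-b * (t - a)))

def gompertz(t, m, a, b):
    """Gompertz diffusion: asymmetric S-curve approaching saturation m."""
    return m * math.exp(-a * math.exp(-b * t))

def bass(t, m, p, q):
    """Bass model: p = coefficient of innovation, q = coefficient of imitation."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# At the inflection time the logistic curve sits at half its ceiling.
print(logistic(10.0, 50.0, 10.0, 0.6))  # -> 25.0
```

Fitting any of these to observed penetration data is then a nonlinear least-squares problem over (m, a, b) or (m, p, q).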

  5. An empirical model for the melt viscosity of polymer blends

    International Nuclear Information System (INIS)

    Dobrescu, V.

    1981-01-01

    On the basis of experimental data for blends of polyethylene with different polymers, an empirical equation is proposed to describe the dependence of the melt viscosity of blends on the component viscosities and the composition. The model ensures the continuity of the viscosity vs. composition curves throughout the whole composition range, admits extremum values higher or lower than the viscosities of the components, and allows the flow curves of blends to be calculated from the flow curves of the components and their volume fractions. (orig.)
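The abstract does not reproduce the proposed equation itself. As a point of reference, a common baseline for blend viscosity is the log-additive mixing rule sketched below; this baseline is continuous over the whole composition range but cannot produce extrema outside the component range, which is precisely the behavior the proposed empirical model adds.

```python
import math

def log_additive_viscosity(viscosities, volume_fractions):
    """Baseline log-additive mixing rule: ln(eta) = sum(phi_i * ln(eta_i)).

    A generic reference rule, not the empirical equation of the paper,
    which additionally allows extrema outside the component range.
    """
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return math.exp(sum(phi * math.log(eta)
                        for eta, phi in zip(viscosities, volume_fractions)))

# Continuity at the composition endpoints: pure components are recovered.
print(round(log_additive_viscosity([100.0, 10.0], [1.0, 0.0]), 6))  # -> 100.0
```

A 50/50 blend under this rule gives the geometric mean of the component viscosities.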

  6. Development of efficient air-cooling strategies for lithium-ion battery module based on empirical heat source model

    International Nuclear Information System (INIS)

    Wang, Tao; Tseng, K.J.; Zhao, Jiyun

    2015-01-01

    Thermal modeling is the key issue in the thermal management of lithium-ion battery systems, and cooling strategies need to be carefully investigated to keep the temperature of batteries in operation within a narrow optimal range as well as to provide cost-effective and energy-saving solutions for the cooling system. This article reviews and summarizes past cooling methods, especially forced air cooling, and introduces an empirical heat source model that can be widely applied in battery module/pack thermal modeling. In the development of the empirical heat source model, a three-dimensional computational fluid dynamics (CFD) method is employed, and thermal insulation experiments are conducted to provide the key parameters. A transient thermal model of a 5 × 5 battery module with forced air cooling is then developed based on the empirical heat source model. Thermal behaviors of the battery module under different air cooling conditions, discharge rates, and ambient temperatures are characterized and summarized. Various cooling strategies are simulated and compared in order to obtain an optimal cooling method, and battery fault conditions are predicted from transient simulation scenarios. The temperature distributions and variations during the discharge process are quantitatively described; it is found that the upper limit of ambient temperature for forced air cooling is 35 °C, and that when the ambient temperature is lower than 20 °C, forced air cooling is not necessary. - Highlights: • An empirical heat source model is developed for battery thermal modeling. • The effects of different air-cooling strategies on module thermal characteristics are investigated. • The impact of different discharge rates on module thermal responses is investigated. • The impact of ambient temperatures on module thermal behaviors is investigated. • Locations of maximum temperatures under different operating conditions are studied.

  7. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
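The Gelman-Rubin potential scale reduction factor used above as the convergence criterion can be computed from parallel chains as follows. This is the standard statistic, not code from the paper.

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for equal-length chains.

    Standard Gelman-Rubin construction: values near 1 suggest the chains
    have mixed; values well above 1 indicate non-convergence.
    """
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    w = mean(variance(c) for c in chains)   # within-chain variance W
    b = n * variance(chain_means)           # between-chain variance B
    var_plus = (n - 1) / n * w + b / n      # pooled posterior variance estimate
    return (var_plus / w) ** 0.5

mixed = [[0, 1] * 5, [1, 0] * 5]    # same distribution -> R-hat near 1
stuck = [[0, 1] * 5, [10, 11] * 5]  # separated chains  -> R-hat >> 1
print(gelman_rubin(mixed) < 1.1, gelman_rubin(stuck) > 2.0)  # -> True True
```

In practice the factor is computed per parameter, and sampling continues until all values fall below a threshold such as 1.1.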

  8. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

    An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ m³ to 2.0 × 10⁶ m³) along the coasts...
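The longshore energy flux approach underlying such models is commonly written in CERC form. The sketch below is a generic illustration with textbook default coefficients (K = 0.39, breaker index 0.78, quartz sand in seawater), not the calibrated model of this study.

```python
import math

# Generic CERC-type longshore transport sketch; coefficients are textbook
# defaults, not this study's calibration.
RHO_W, RHO_S, G = 1025.0, 2650.0, 9.81   # water/sediment density, gravity
POROSITY, GAMMA_B, K = 0.4, 0.78, 0.39   # in-place porosity, breaker index, K

def longshore_transport(hb, alpha_b_deg):
    """Volumetric longshore transport rate (m^3/s) from breaking wave
    height hb (m) and breaker angle alpha_b (degrees)."""
    alpha = math.radians(alpha_b_deg)
    cg = math.sqrt(G * hb / GAMMA_B)                       # group speed at breaking
    energy = RHO_W * G * hb ** 2 / 8.0                     # wave energy density
    p_l = energy * cg * math.sin(alpha) * math.cos(alpha)  # longshore energy flux
    return K * p_l / ((RHO_S - RHO_W) * G * (1.0 - POROSITY))
```

Transport vanishes at normal incidence and peaks near a 45° breaker angle; annual gross rates like those quoted above come from integrating such instantaneous estimates over the yearly wave climate.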

  9. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected-return models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way; empirical studies on Brazilian stock market data are generally concentrated only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor, and 4-factor models using a predictive methodology considering two steps (time-series and cross-section regressions) with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence not based on predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of a value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly because of the novelty of this methodology in the local market and because the subject is still incipient and polemical in the Brazilian academic environment.

  10. Computational optogenetics: empirically-derived voltage- and light-sensitive channelrhodopsin-2 model.

    Directory of Open Access Journals (Sweden)

    John C Williams

    Full Text Available Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically derived voltage and irradiance dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: (1) accurate inward rectification in the current-voltage response across irradiances; (2) empirically derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and (3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and

  11. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin-layer drying experiments was conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on the drying rate of switchgrass. An environmental chamber that can simulate field drying conditions was fabricated, and an empirical drying model based on the maturity stage of switchgrass was developed during the study. Solar radiation was the most significant factor in improving the drying rate of switchgrass at the seed shattering and seed shattered maturity stages; drying switchgrass in a wide swath to intercept the maximum amount of radiation at these stages is therefore recommended. Moreover, under low radiation intensity, wind speed helps to improve the drying rate, so field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, the effect of individual weather parameters on the drying rate was found to depend on maturity stage: vapor pressure deficit was strongly correlated with the drying rate during the seed development stage, whereas it was only weakly correlated during the seed shattering and seed shattered stages. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass from forecasted weather conditions so that appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate the drying rate of switchgrass from forecasted weather conditions. • Separate equations were developed for each maturity stage of switchgrass. • The designed environmental chamber can be used to evaluate the effect of other parameters that affect the drying of crops
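A minimal sketch of the thin-layer idea: the moisture ratio decays roughly exponentially (the Lewis/Newton model), with a rate constant driven by weather variables. The linear weather response and its coefficients below are hypothetical placeholders, not the fitted equations of this study.

```python
import math

def newton_drying(t_hours, k):
    """Lewis (Newton) thin-layer model: moisture ratio MR = exp(-k t)."""
    return math.exp(-k * t_hours)

def drying_rate_constant(solar_w_m2, vpd_kpa, wind_m_s, coeffs):
    """Hypothetical linear weather response for the rate constant k.

    The coefficients are placeholders for illustration only; the study fits
    separate relations for each switchgrass maturity stage.
    """
    b0, b_solar, b_vpd, b_wind = coeffs
    return b0 + b_solar * solar_w_m2 + b_vpd * vpd_kpa + b_wind * wind_m_s

b = (0.01, 0.0005, 0.02, 0.01)  # made-up coefficients
k_sunny = drying_rate_constant(800.0, 2.0, 1.5, b)
k_cloudy = drying_rate_constant(200.0, 2.0, 1.5, b)
print(k_sunny > k_cloudy)  # -> True (more radiation, faster drying)
```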

  12. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  13. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

    Highlights: → New semi-empirical models for predicting clear-sky irradiance were developed. → The proposed models compare favorably with other empirical models. → The performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear-sky conditions in the tropics. The models are based on a one-year period of clear-sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) in the north of the country, Nakhon Pathom (13.82°N, 100.04°E) in the centre and Songkhla (7.20°N, 100.60°E) in the south. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data on the Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for periods not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both. The performance of the models compared favorably with that of other empirical models. Additionally, the accuracy of irradiances predicted from the proposed model is comparable with that obtained from some

  14. Applicability of the actions recommended by the Kangaroo Method (Aplicabilidade das ações preconizadas pelo método canguru)

    Directory of Open Access Journals (Sweden)

    Alessandra Patricia Stelmak

    2017-07-01

    Full Text Available Objective: To identify the prevalence of the actions recommended by the Kangaroo Method (KM) in the care of preterm and/or low-birth-weight newborns by the nursing team of a neonatal intensive care unit that is a state reference center for the KM. Method: Quantitative descriptive study, carried out by applying a structured questionnaire to 37 mid-level nursing professionals in a Neonatal Intensive Care Unit from February to April 2014. Results: Welcoming the family, encouraging touch, breastfeeding and environmental control are the actions most often performed by the team, each with 97% practical applicability; the least performed are diaper changing in lateral decubitus (83%) and swaddled bathing (58%). Conclusion: This team performs the humanized care actions recommended by the KM and understands the importance of this care for the newborns' development. There is a need for a continuing in-service education process.

  15. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  16. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of independent empirical data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Several performance indices are described in the present paper. Among these, the so-called 'functional distance' d between the logarithm of the model output and the logarithm of the experimental data, defined by d² = Σ_{i=1}^{n} (ln M_i − ln O_i)²/n, where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of pairs (experimental value, predicted value), is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. The method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurements of environmental contamination levels are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this undesired effect may, in some circumstances, be corrected by means of simple formulae.
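The functional distance defined in the abstract translates directly into code; a minimal sketch (the statistical machinery for turning the index distribution into confidence limits is not reproduced):

```python
import math

def functional_distance_sq(experimental, modelled):
    """EBUA functional distance: d^2 = sum((ln M_i - ln O_i)^2) / n."""
    n = len(experimental)
    return sum((math.log(m_i) - math.log(o_i)) ** 2
               for m_i, o_i in zip(experimental, modelled)) / n

# A perfect model gives d^2 = 0; a uniform factor-of-e mismatch gives d^2 = 1.
print(functional_distance_sq([1.0, 2.0], [1.0, 2.0]))  # -> 0.0
```

Working on logarithms makes the index symmetric in over- and under-prediction by a given factor, which suits environmental quantities spanning orders of magnitude.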

  17. An accuracy assessment of an empirical sine model, a novel sine model and an artificial neural network model for forecasting illuminance/irradiance on horizontal plane of all sky types at Mahasarakham, Thailand

    International Nuclear Information System (INIS)

    Pattanasethanon, Singthong; Lertsatitthanakorn, Charoenporn; Atthajariyakul, Surat; Soponronnarit, Somchart

    2008-01-01

    The results of a study on all-sky modeling and forecasting of daylight availability for the tropical climate found in the central region of the northeastern part of Thailand (16°14′N, 103°15′E) are presented. The required sky quantities, namely global and diffuse horizontal irradiance and global horizontal illuminance, for saving energy used in buildings are estimated. Empirical sinusoidal models are validated: the A and B values of the empirical sinusoidal model for all sky conditions are determined and developed into a function of the sky conditions. In addition, a novel sinusoidal model consisting of polynomial or exponential functions is validated; its A and B values for all sky conditions are determined and developed into a new polynomial or exponential function of the sky conditions. An artificial intelligence approach, namely an artificial neural network (ANN) model, is also identified. Back-propagation learning algorithms were used in the networks, and one year of data and the following half-year of data were used to train and test the neural network, respectively. Observations from a full year of data indicate that the luminosity and energy from the sky on a horizontal plane in the area around Mahasarakham are frequently brighter than those of Bangkok. The accuracy of the validated models is determined in terms of the mean bias deviation (MBD), the root mean square deviation (RMSD) and the coefficient of correlation (R²). A comparison of the estimated solar irradiation values and the observed values revealed a small error in the empirical sinusoidal model as well. In addition, some results of the sky quantity forecast by the ANN model indicate that the ANN model is more accurate than the empirical models and the novel sinusoidal models. This study confirms the ability of the ANN to predict highly accurate solar radiance/illuminance values. We believe
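The accuracy measures used above (MBD, RMSD, R²) have standard definitions; a minimal sketch, with MBD and RMSD expressed as percentages of the mean observed value as is conventional in solar radiation studies:

```python
import math

def mbd_percent(predicted, observed):
    """Mean bias deviation as a percentage of the mean observed value."""
    n = len(observed)
    obs_mean = sum(observed) / n
    return 100.0 * sum(p - o for p, o in zip(predicted, observed)) / (n * obs_mean)

def rmsd_percent(predicted, observed):
    """Root mean square deviation as a percentage of the mean observed value."""
    n = len(observed)
    obs_mean = sum(observed) / n
    mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n
    return 100.0 * math.sqrt(mse) / obs_mean

def r_squared(predicted, observed):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs_mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [420.0, 515.0, 610.0]
pred = [430.0, 500.0, 615.0]
print(round(mbd_percent(pred, obs), 3))  # -> 0.0 here (the biases cancel)
```

MBD reveals systematic over- or under-prediction that RMSD alone would hide, which is why the two are reported together.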

  18. Empirical evaluation of the conceptual model underpinning a regional aquatic long-term monitoring program using causal modelling

    Science.gov (United States)

    Irvine, Kathryn M.; Miller, Scott; Al-Chokhachy, Robert K.; Archer, Erik; Roper, Brett B.; Kershner, Jeffrey L.

    2015-01-01

    Conceptual models are an integral facet of long-term monitoring programs. Proposed linkages between drivers, stressors, and ecological indicators are identified within the conceptual model of most mandated programs. We empirically evaluate a conceptual model developed for a regional aquatic and riparian monitoring program using causal models (i.e., Bayesian path analysis). We assess whether data gathered for regional status and trend estimation can also provide insights on why a stream may deviate from reference conditions. We target the hypothesized causal pathways for how anthropogenic drivers of road density, percent grazing, and percent forest within a catchment affect instream biological condition. We found instream temperature and fine sediments in arid sites and only fine sediments in mesic sites accounted for a significant portion of the maximum possible variation explainable in biological condition among managed sites. However, the biological significance of the direct effects of anthropogenic drivers on instream temperature and fine sediments were minimal or not detected. Consequently, there was weak to no biological support for causal pathways related to anthropogenic drivers’ impact on biological condition. With weak biological and statistical effect sizes, ignoring environmental contextual variables and covariates that explain natural heterogeneity would have resulted in no evidence of human impacts on biological integrity in some instances. For programs targeting the effects of anthropogenic activities, it is imperative to identify both land use practices and mechanisms that have led to degraded conditions (i.e., moving beyond simple status and trend estimation). Our empirical evaluation of the conceptual model underpinning the long-term monitoring program provided an opportunity for learning and, consequently, we discuss survey design elements that require modification to achieve question driven monitoring, a necessary step in the practice of

  19. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were: (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, producing charts such as Figure 1 and empirical distributions of the various flow parameters, to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
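The ancillary values listed in objective (3) follow from standard open-channel definitions; a minimal sketch using generic formulas, not the project's regional regression tool:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(velocity_m_s, depth_m):
    """Fr = V / sqrt(g d); Fr < 1 subcritical flow, Fr > 1 supercritical."""
    return velocity_m_s / math.sqrt(G * depth_m)

def stream_power_w_per_m(rho_kg_m3, discharge_m3_s, slope):
    """Total stream power per unit channel length: Omega = rho g Q S."""
    return rho_kg_m3 * G * discharge_m3_s * slope

def sinuosity(channel_length, valley_length):
    """Channel length divided by straight-line valley length (>= 1)."""
    return channel_length / valley_length

# Example: 10 m^3/s on a 0.1% slope carries about 98 W per metre of channel.
print(round(stream_power_w_per_m(1000.0, 10.0, 0.001), 1))  # -> 98.1
```

Such quick computations give an order-of-magnitude sanity check on hydraulic model output, in the spirit of objective (1).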

  20. Psychosocial instrument to approach the person and family in home care: conditions of applicability (Instrumento para a abordagem psicossocial do indivíduo e da família na assistência domiciliar: condições de aplicabilidade)

    Directory of Open Access Journals (Sweden)

    Vilanice Alves de Araújo Püschel

    2005-06-01

    Full Text Available In Home Care (HC), the clinical, hospital-centered model has been hegemonic. A psychosocial approach proves necessary for the problems identified in the home setting. This article aims to present an instrument for the psychosocial approach to the individual and the family in home care, and to show the conditions of applicability of the psychosocial model in the home based on the use of this instrument. The instrument was applied by participants in a training course on HC with a psychosocial approach, with ill persons and family members who were receiving care at home. It proved feasible; however, the omnipotence characteristic of clinical practice is overlaid by an almost absolute impotence of clinical-psychosocial reasoning. There is a need to validate this instrument and to build a significant field of research to qualify teaching and care practice in HC.

  1. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2018-03-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability and a new promising formulation is proposed for scaled heights of burst ranging from 24.6-172.9 cm/kg^{1/3}.
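The "scaled height of burst" quoted above (cm/kg^{1/3}) is Hopkinson-Cranz scaling of the physical burst height by the cube root of the charge mass; a minimal sketch, with illustrative numbers that are not taken from the study:

```python
def scaled_height_of_burst(hob_cm, charge_kg):
    """Hopkinson-Cranz scaled height of burst, in cm/kg^(1/3)."""
    return hob_cm / charge_kg ** (1.0 / 3.0)

# Example: a 125 cm burst height with an 8 kg charge
z = scaled_height_of_burst(125.0, 8.0)  # cube root of 8 is 2, so z ≈ 62.5
```

A target at a given ground range then sees regular or Mach reflection depending on where it lies relative to the triple point path for that scaled height.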

  2. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ... and hence to reduce the Sharpe Ratio, a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of government bond returns without compromising the ability of the price-dividend ratio to predict excess returns.

  3. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone

    International Nuclear Information System (INIS)

    Lee, Kang Il; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

    The Biot and the modified Biot-Attenborough (MBA) models have been found useful for understanding ultrasonic wave propagation in cancellous bone. However, neither model, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and with in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of the phase velocity of the fast wave. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s^-1 (angle-dependent Biot model) and 36.1 m s^-1 (angle-dependent MBA model). The models also predicted that the slow wave is nearly independent of propagation angle for angles up to about 50°, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s^-1 (angle-dependent Biot model) and 240.8 m s^-1 (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone.

  4. INDICADORES DE AVALIAÇÃO DE GOVERNANÇA EM DESTINOS TURÍSTICOS – uma análise da aplicabilidade dos modelos propostos

    Directory of Open Access Journals (Sweden)

    Doris Van de Meene Ruschmann

    2017-06-01

    Full Text Available This study aims to identify the concepts related to tourism governance and, on that basis, to relate different proposed indicator models and typologies applied to governance that attempt the construction of an evaluation tool for measuring tourism governance. The work used bibliographic research to identify evaluation indicators found in the governance literature. As a result, four studies were identified: two related to tourism governance, one to public governance, and one to typologies of governance applied to tourism. The analysis also points to the need for a truly applicable instrument that can evaluate tourism governance in different destinations.

  5. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data on 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility mediates the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning, and exploitative learning plays a significant role in both radical and incremental business model innovation.

  6. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  7. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
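The weighted average between an event's own rate and a homogenised pool rate can be sketched as follows; the shrinkage weight below is illustrative, not the estimator derived in the paper:

```python
def eb_rate(event_count, event_time, pool_counts, pool_times, factors):
    """Empirical-Bayes-style shrinkage of a Poisson event rate toward a
    homogenised pool rate. `factors` play the role of the elicited
    homogenisation constants that rescale each pool member's frequency
    onto the scale of the target event. (Illustrative weighting scheme.)"""
    # Homogenise the pool: scale each member's observed rate by its factor
    pool_rates = [f * c / t for f, c, t in zip(factors, pool_counts, pool_times)]
    pool_rate = sum(pool_rates) / len(pool_rates)
    event_rate = event_count / event_time
    # More exposure time for the event itself means less borrowing from the pool
    mean_pool_time = sum(pool_times) / len(pool_times)
    w = event_time / (event_time + mean_pool_time)
    return w * event_rate + (1.0 - w) * pool_rate
```

With a well-homogenised pool the pooled and individual rates agree, and the estimate is insensitive to the weight; misspecified factors pull the estimate away, which is the sensitivity the paper examines.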

  8. Multiscale empirical modeling of the geomagnetic field: From storms to substorms

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Korth, H.; Gkioulidou, M.; Ukhorskiy, A. Y.; Merkin, V. G.

    2017-12-01

    An advanced version of the TS07D empirical geomagnetic field model, herein called SST17, is used to model the global picture of the geomagnetic field and its characteristic variations on both storm and substorm scales. The new SST17 model uses two regular expansions describing the equatorial currents with each having distinctly different scales, one corresponding to a thick and one to a thin current sheet relative to the thermal ion gyroradius. These expansions have an arbitrary distribution of currents in the equatorial plane that is constrained only by magnetometer data. This multi-scale description allows one to reproduce the current sheet thinning during the growth phase. Additionally, the model uses a flexible description of field-aligned currents that reproduces their spiral structure at low altitudes and provides a continuous transition from region 1 to region 2 current systems. The empirical picture of substorms is obtained by combining magnetometer data from Geotail, THEMIS, Van Allen Probes, Cluster II, Polar, IMP-8, GOES 8, 9, 10 and 12 and then binning this data based on similar values of the auroral index AL, its time derivative and the integral of the solar wind electric field parameter (from ACE, Wind, and IMP-8) in time over substorm scales. The performance of the model is demonstrated for several events, including the 3 July 2012 substorm, which had multi-probe coverage, and a series of substorms during the March 2008 storm. It is shown that the AL binning helps reproduce dipolarization signatures in the northward magnetic field Bz, while the solar wind electric field integral allows one to capture the current sheet thinning during the growth phase. The model allows one to trace the substorm dipolarization from the tail to the inner magnetosphere, where the dipolarization of strongly stretched tail field lines causes a redistribution of the tail current, resulting in an enhancement of the partial ring current in the premidnight sector.

  9. Understanding users’ motivations to engage in virtual worlds: A multipurpose model and empirical testing

    NARCIS (Netherlands)

    Verhagen, T.; Feldberg, J.F.M.; van den Hooff, B.J.; Meents, S.; Merikivi, J.

    2012-01-01

    Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds ...

  10. MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes

    Science.gov (United States)

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...

  11. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, so different models that use the climatological parameters of a location are required for the estimation of global solar radiation. Long-period solar radiation records are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar), which together almost encompass the country's different geographical features. For this reason, this study estimates mean monthly global solar radiation using the empirical models of Angstrom, FAO, Glover and McCulloch, and Sangeeta & Tiwari, chosen for their diversity of approach and of the climatic and geographical parameters used. Empirical constants for these models have been estimated, and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation for locations with similar conditions.
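The oldest of the models listed, the Angstrom (Angstrom-Prescott) relation, regresses the clearness index on relative sunshine duration, H/H0 = a + b(n/N); a minimal sketch of estimating its empirical constants by least squares (the data values would be the measured monthly means for a station):

```python
def fit_angstrom(h_over_h0, n_over_big_n):
    """Least-squares estimate of the Angstrom-Prescott constants (a, b) in
    H/H0 = a + b * (n/N), where H/H0 is the monthly clearness index and
    n/N the relative sunshine duration."""
    m = len(h_over_h0)
    sx = sum(n_over_big_n)
    sy = sum(h_over_h0)
    sxx = sum(x * x for x in n_over_big_n)
    sxy = sum(x * y for x, y in zip(n_over_big_n, h_over_h0))
    b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    a = (sy - b * sx) / m
    return a, b
```

The fitted (a, b) pair is what varies from city to city, which is why each location needs its own calibration.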

  12. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    ... variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from ...

  13. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    Science.gov (United States)

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Detailed empirical models for the winds of early-type stars

    International Nuclear Information System (INIS)

    Olson, G.L.; Castor, J.I.

    1981-01-01

    Owing to the recent accumulation of ultraviolet data from the IUE satellite, of X-ray data from the Einstein (HEAO 2) satellite, of visible data from ground-based electronic detectors, and of radio data from the Very Large Array (VLA) telescope, it is becoming possible to build much more complete models for the winds of early-type stars. The present work takes the empirical approach of assuming that there exists a coronal region at the base of a cool wind (T_e ≈ T_eff). This is an extension of previous papers by Olson and by Cassinelli and Olson; however, refinements to the model are presented, and the model is applied to seven O stars and one B0 star. Ionization equilibria are computed to match the line strengths found in UV spectra. The coronal fluxes required to produce the observed abundance of O^5+ are compared to the X-ray fluxes observed by the Einstein satellite.

  15. An empirical test of stage models of e-government development: evidence from Dutch municipalities

    NARCIS (Netherlands)

    Rooks, G.; Matzat, U.; Sadowski, B.M.

    2017-01-01

    In this article we empirically test stage models of e-government development. We use Lee's classification to make a distinction between four stages of e-government: informational, requests, personal, and e-democracy. We draw on a comprehensive data set on the adoption and development of e-government

  16. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)]

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  17. Evaluation of empirical atmospheric diffusion data

    International Nuclear Information System (INIS)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion, over level, homogeneous terrain, of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.
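Point-source dispersion estimates of the kind compared in the report generally reduce to a Gaussian plume with empirically derived dispersion parameters; a hedged sketch (the sigma values are plain inputs here, standing in for whatever stability-class curves a given technique recommends):

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration for a continuous,
    non-buoyant point source: emission rate q (g/s), wind speed u (m/s),
    crosswind offset y (m), receptor height z (m), release height h (m),
    and dispersion parameters sigma_y, sigma_z (m). Returns g/m^3."""
    crosswind = math.exp(-0.5 * (y / sigma_y) ** 2)
    # Direct plume plus its mirror image reflected off the ground
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical
```

Competing estimation techniques differ mainly in how sigma_y and sigma_z grow with downwind distance and stability, which is where the scatter and bias discussed above come from.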

  18. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  19. Sources of Currency Crisis: An Empirical Analysis

    OpenAIRE

    Weber, Axel A.

    1997-01-01

    Two types of currency crisis models coexist in the literature: first generation models view speculative attacks as being caused by economic fundamentals which are inconsistent with a given parity. Second generation models claim self-fulfilling speculation as the main source of a currency crisis. Recent empirical research in international macroeconomics has attempted to distinguish between the sources of currency crises. This paper adds to this literature by proposing a new empirical approach ...

  20. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
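The "different durations of time history" compared above amount to windowed feature vectors built from the driver time series; a minimal sketch (the two inputs and the concatenated layout are illustrative, not the model's actual feature set):

```python
def time_history_features(index_series, sw_series, window):
    """Build one input vector per time step from the trailing `window`
    samples of a geomagnetic index and a solar wind parameter, mimicking
    the time-history inputs compared when selecting the network's drivers."""
    features = []
    for t in range(window, len(index_series)):
        # Concatenate the two trailing windows into a single input vector
        features.append(index_series[t - window:t] + sw_series[t - window:t])
    return features
```

Training networks on feature sets with different `window` lengths, and comparing validation error, is one way to identify how much driver history the plasmasphere dynamics actually retain.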

  1. Aplicabilidade da classificação de alcoolismo tipo A/tipo B Applicability of the type A/type B classification of alcoholics

    Directory of Open Access Journals (Sweden)

    Mário Sérgio Ribeiro

    2009-01-01

    Full Text Available OBJECTIVE: To test the applicability of the type A/type B typology and to characterize the identified subtypes. METHODS: Characteristics of 300 alcoholic men attending an outpatient treatment program were submitted to cluster analysis for identification of two subgroups (clusters), according to the typology of Babor et al. Cross-tabulations were then performed to test for possible associations of the identified clusters with demographic and clinical features; statistical significance was assessed by Pearson chi-square tests. RESULTS: Compared to the other group, one of the identified clusters was characterized by a more severe clinical profile. Patients of the milder subtype were more frequently (65.3%) referred to symbolic treatments, whereas patients of the more severe subtype were predominantly (58.5%) treated with an exclusively pharmacological approach and adhered better to the proposed treatment. CONCLUSIONS: Since the results identified subtypes of alcoholics with distinct characteristics, this study demonstrates the clinical applicability of the typology of Babor et al. in our sociocultural setting. It also points to the relevance of typological studies that may contribute to a broader understanding of the etiological, preventive and therapeutic aspects of alcoholism.
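The two-group cluster split described above can be sketched with a toy one-dimensional k-means; the single composite "severity" score and the sample values are illustrative, not the clinical variables used in the study:

```python
def two_cluster_split(severity_scores, iters=20):
    """One-dimensional k-means with k=2, the kind of cluster analysis used
    to split patients into a milder and a more severe subtype. Returns the
    two groups, seeded at the minimum and maximum scores."""
    c1, c2 = min(severity_scores), max(severity_scores)
    g1, g2 = [], []
    for _ in range(iters):
        # Assign each patient to the nearest cluster centre
        g1 = [s for s in severity_scores if abs(s - c1) <= abs(s - c2)]
        g2 = [s for s in severity_scores if abs(s - c1) > abs(s - c2)]
        # Recompute the centres as group means
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return g1, g2
```

The cross-tabulation step then tests whether membership in the resulting clusters is associated with the clinical and demographic variables.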

  2. O papel da internet como canal de marketing e vendas : modelos de aplicabilidade da internet como canal de marketing e vendas sob o ponto de vista dos fabricantes de eletroeletrônicos

    OpenAIRE

    Silva, Flavio Dias Fonseca da

    2010-01-01

    This dissertation analyzes the applicability of the Internet as a marketing channel for manufacturers of consumer electronics. Starting from a survey of the growing importance of the Internet as a marketing and sales tool in Brazil and worldwide, three possible strategic postures that manufacturers may adopt regarding the Internet for their business are set out: not engaging in e-commerce, adopting ...

  3. Investigação sobre a satisfação do usuário dos serviços prestados pelo Metrô de São Paulo: um estudo exploratório, descritivo e ilustrativo com a utilização do modelo de equações estruturais User satisfaction with the Metro of Sao Paulo: an exploratory, descriptive and illustrative study using structural equation modeling

    Directory of Open Access Journals (Sweden)

    André Castilho Ferreira da Costa

    2008-01-01

    Full Text Available The aim of this research was to discuss the applicability of the American Customer Satisfaction Index (ACSI) methodology to situations in the Brazilian context, using the structural equation modeling technique. An empirical verification of the model's behavior was carried out by applying it to a data sample collected among users of the services provided by the Companhia do Metropolitano de São Paulo - Metrô. The results confirmed the model's relationships only in part, suggesting chiefly that, to increase its applicability to situations similar to the one studied, the concept of customer expectations and its relation to the model's other latent variables must be revised. The impact of expectations on the other latent variables is the point that diverges most from the model proposed by Fornell, since no sufficiently relevant structural coefficient was obtained. These results generally corroborate other empirical work with this model showing that customer satisfaction is driven much more by Quality and Value than by Expectations. The research also computed a global customer satisfaction score, which can be compared with scores obtained among users of similar services.

  4. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
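Fitting the Freundlich isotherm q = K_f c^n described above amounts to a linear regression in log space; a minimal sketch, with made-up concentration and sorbed-mass data rather than the INEEL measurements:

```python
import math

def fit_freundlich(c, q):
    """Fit the Freundlich isotherm q = Kf * c**n by linear least squares
    in log space: log q = log Kf + n * log c. Returns (Kf, n)."""
    x = [math.log(ci) for ci in c]
    y = [math.log(qi) for qi in q]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    kf = math.exp((sy - n * sx) / m)
    return kf, n
```

With n statistically identical across samples and K_f predictable from surface area, only K_f needs to be parameterized spatially in the transport model.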

  5. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

    The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) for explaining individuals' continuance of mobile data service usage. After reviewing the ECM and technology readiness, an integrated model was demonstrated via empirical data. Compared with the original ECM, the findings of this study show that the integrated model may offer an ameliorated way to clarify what factors and how they influence the continuous intention toward mobile services. Finally, the major findings are summarized, and future research directions are suggested.

  6. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    Science.gov (United States)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
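A toy version of the coupling described above, a smooth capacity-fade law multiplied by a sigmoid electrolyte-depletion factor that forces the sudden end-of-life drop, can be sketched as follows (all parameter values are illustrative, not those fitted to the 20 Ah graphite/LiFePO4 cells):

```python
import math

def capacity_retention(cycle, b=0.0005, z=0.55, n_dep=1500, k=0.01):
    """Toy semi-empirical capacity retention vs. cycle number: a power-law
    fade term (temperature/c-rate dependence folded into b) multiplied by a
    sigmoid 'electrolyte depletion' factor centred at cycle n_dep, which
    produces the abrupt capacity drop observed at the end of cycling."""
    fade = 1.0 - b * cycle ** z          # gradual loss, e.g. SEI growth
    depletion = 1.0 / (1.0 + math.exp(k * (cycle - n_dep)))  # sudden drop
    return fade * depletion
```

Sweeping such a function over c-rate and depth-of-discharge grids is what produces the cycle-life color maps mentioned in the abstract.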

  7. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics was made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal-time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied, and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  8. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

Roč. 34, č. 9 (2004), s. 1998-2003 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Institutional research plan: CEZ:AV0Z3042911 Keywords: plasma temperatures * topside ionosphere * empirical models Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.548, year: 2004

  9. Multimission empirical ocean tide modeling for shallow waters and polar seas

    DEFF Research Database (Denmark)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2011-01-01

    A new global ocean tide model named DTU10 (developed at Technical University of Denmark) representing all major diurnal and semidiurnal tidal constituents is proposed based on an empirical correction to the global tide model FES2004 (Finite Element Solutions), with residual tides determined using...... tide gauge sets show that the new tide model fits the tide gauge measurements favorably to other state of the art global ocean tide models in both the deep and shallow waters, especially in the Arctic Ocean and the Southern Ocean. One example is a comparison with 207 tide gauge data in the East Asian...... marginal seas where the root-mean-square agreement improved by 35.12%, 22.61%, 27.07%, and 22.65% (M-2, S-2, K-1, and O-1) for the DTU10 tide model compared with the FES2004 tide model. A similar comparison in the Arctic Ocean with 151 gauge data improved by 9.93%, 0.34%, 7.46%, and 9.52% for the M-2, S-2...

  10. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership

    Directory of Open Access Journals (Sweden)

    Kerry Liu

    2017-06-01

Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis includes hand-collected data from a wide range of data sources. OLS and 2SLS regression analysis are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of large shareholder ownership. The empirical evidence from government ownership is consistent with the theoretical analysis. Research limitations/implications: The empirical evidence is based on a small number of hand-collected data sets of government ownership. Further studies can be expanded to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits. In general, this study significantly contributes to the understanding of the effects of large shareholders and corporate governance.
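The estimation strategy described (OLS with Huber-White robust standard errors) can be sketched directly in NumPy. The data-generating process below is a synthetic heteroskedastic example, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Heteroskedastic errors: variance grows with |x|, the case robust SEs address.
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))

X = np.column_stack([np.ones(n), x])           # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS coefficients
resid = y - X @ beta

# HC0 (Huber-White) sandwich estimator: (X'X)^-1 X' diag(e^2) X (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (resid ** 2)[:, None])
robust_cov = XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(robust_cov))
```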

  11. EMERGE - an empirical model for the formation of galaxies since z ˜ 10

    Science.gov (United States)

    Moster, Benjamin P.; Naab, Thorsten; White, Simon D. M.

    2018-06-01

    We present EMERGE, an Empirical ModEl for the foRmation of GalaxiEs, describing the evolution of individual galaxies in large volumes from z ˜ 10 to the present day. We assign a star formation rate to each dark matter halo based on its growth rate, which specifies how much baryonic material becomes available, and the instantaneous baryon conversion efficiency, which determines how efficiently this material is converted to stars, thereby capturing the baryonic physics. Satellites are quenched following the delayed-then-rapid model, and they are tidally disrupted once their subhalo has lost a significant fraction of its mass. The model is constrained with observed data extending out to high redshift. The empirical relations are very flexible, and the model complexity is increased only if required by the data, assessed by several model selection statistics. We find that for the same final halo mass galaxies can have very different star formation histories. Galaxies that are quenched at z = 0 typically have a higher peak star formation rate compared to their star-forming counterparts. EMERGE predicts stellar-to-halo mass ratios for individual galaxies and introduces scatter self-consistently. We find that at fixed halo mass, passive galaxies have a higher stellar mass on average. The intracluster mass in massive haloes can be up to eight times larger than the mass of the central galaxy. Clustering for star-forming and quenched galaxies is in good agreement with observational constraints, indicating a realistic assignment of galaxies to haloes.
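The "instantaneous baryon conversion efficiency" in this family of models is commonly parametrized as a double power law in halo mass. The sketch below assumes that form with illustrative parameter values, which are not the ones fitted in EMERGE.

```python
def conversion_efficiency(m_halo, m1=10**11.6, eps_n=0.15, beta=1.8, gamma=0.6):
    """Double power-law baryon conversion efficiency eps(M): rises as
    (M/M1)^beta below the characteristic mass M1 and falls as (M/M1)^-gamma
    above it, peaking at eps_n for M = M1. Parameter values illustrative."""
    x = m_halo / m1
    return 2.0 * eps_n / (x ** -beta + x ** gamma)

eff_peak = conversion_efficiency(10**11.6)    # at M1 the efficiency equals eps_n
eff_dwarf = conversion_efficiency(10**10)     # inefficient low-mass haloes
eff_cluster = conversion_efficiency(10**14)   # inefficient cluster-scale haloes
```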

  12. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, which would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
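The reliability figures quoted (Cronbach's alpha between 0.60 and 0.83) come from the standard alpha formula, which is easy to reproduce. The item scores below are made-up toy data, not the study's questionnaire responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as a list of item-score columns
    (one equal-length list per item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three positively correlated toy items -> alpha should come out high.
i1 = [3, 4, 5, 2, 4, 5, 3, 4]
i2 = [2, 4, 5, 2, 3, 5, 3, 4]
i3 = [3, 3, 5, 1, 4, 4, 3, 5]
alpha = cronbach_alpha([i1, i2, i3])
```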

  13. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  14. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    Science.gov (United States)

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  15. Evaluation of empirical atmospheric diffusion data

    Energy Technology Data Exchange (ETDEWEB)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.
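Dispersion estimates of this kind are typically built on the Gaussian plume solution for an elevated point source over level terrain. The sketch below assumes that textbook form with illustrative dispersion coefficients; it is not the specific model recommended in the report.

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Ground-reflected Gaussian plume concentration (g/m^3) at crosswind
    offset y (m) and height z (m), for source strength q (g/s), wind speed
    u (m/s), and effective release height h (m). Sigmas are illustrative."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image-source reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline vs. off-axis ground-level concentration for a 50 m release.
c_center = plume_concentration(q=10.0, u=5.0, sigma_y=60.0, sigma_z=30.0,
                               y=0.0, z=0.0, h=50.0)
c_offset = plume_concentration(q=10.0, u=5.0, sigma_y=60.0, sigma_z=30.0,
                               y=100.0, z=0.0, h=50.0)
```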

  16. Empirical membrane lifetime model for heavy duty fuel cell systems

    Science.gov (United States)

    Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik

    2016-12-01

    Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
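A minimal sketch of a log-linear life-stress model of the type described: an inverse power law in cell voltage combined with an Arrhenius-like exponential in temperature. The coefficients are made-up placeholders chosen to land near the quoted lifetime scale, not the fitted values from the paper.

```python
import math

def membrane_life_hours(voltage, temp_k, a=1.3e4, n=4.0, b=4000.0):
    """Illustrative log-linear life-stress model: inverse power law in cell
    voltage, exponential (Arrhenius-like) dependence on temperature,
    referenced to 353 K. Coefficients a, n, b are made-up placeholders."""
    return a * voltage ** (-n) * math.exp(b / temp_k - b / 353.0)

life_base = membrane_life_hours(voltage=0.85, temp_k=353.0)
life_hot = membrane_life_hours(voltage=0.85, temp_k=363.0)    # hotter -> shorter life
life_lowv = membrane_life_hours(voltage=0.70, temp_k=353.0)   # lower voltage -> longer life
```

Extrapolation from accelerated to use-level conditions then amounts to evaluating the same expression at field stresses.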

  17. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    Science.gov (United States)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

    We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
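The distributional claims can be checked numerically: a Laplace body is fitted by the median and mean absolute deviation, and heavy tails show up as excess kurtosis (3 for a pure Laplace versus 0 for a Gaussian). The growth-rate data below are synthetic, not the firm-level data of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
growth = rng.laplace(loc=0.0, scale=0.1, size=20000)  # synthetic growth rates

# Maximum-likelihood Laplace fit: location = median, scale = mean |deviation|.
loc_hat = float(np.median(growth))
scale_hat = float(np.mean(np.abs(growth - loc_hat)))

# Tail-heaviness check: excess kurtosis is 3 for a Laplace, 0 for a Gaussian.
z = (growth - growth.mean()) / growth.std()
excess_kurtosis = float(np.mean(z ** 4) - 3.0)
```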

  18. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    Science.gov (United States)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

Hard turning is increasingly employed in machining to replace the time-consuming process of conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most of them were developed for a particular work-tool-environment combination. No aggregate model has been developed that can be used to predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different workpiece hardnesses (HRC40, HRC48 and HRC56) in turning by coated carbide inserts with different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other developed models, this model includes dummy variables along with the base empirical equation to capture the effect of any change in the input conditions on the response. The base empirical equation for principal flank wear is formulated by adopting the Exponential Association function and using the experimental results. The coefficient of each dummy variable reflects the shift of the response from one set of machining conditions to another, determined by simple linear regression. The independent cutting parameters (speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good agreement with the experimental data and the average percentage error is <10 %, this model can be used to predict principal flank wear under the stated conditions.
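A minimal sketch of the modelling idea: an exponential-association base curve for VB plus an additive dummy-variable shift between machining conditions. Parameter values are illustrative, not the fitted ones from the experiments.

```python
import math

def flank_wear(t_min, a=0.32, b=0.015, dummy_shift=0.0):
    """Exponential-association wear curve VB(t) = a*(1 - exp(-b*t)) in mm,
    plus an additive dummy-variable shift representing a change of machining
    condition (e.g. dry vs. high-pressure coolant). Values are illustrative."""
    return a * (1.0 - math.exp(-b * t_min)) + dummy_shift

vb_dry_30 = flank_wear(30.0)                           # dry cutting, 30 min
vb_coolant_30 = flank_wear(30.0, dummy_shift=-0.05)    # coolant lowers wear
```

The asymptote `a` plays the role of the saturation wear level; the dummy coefficient shifts the whole curve between condition sets, as described above.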

  19. A percepção dos formandos a respeito dos instrumentos básicos de enfermagem e sua aplicabilidade La percepción de los estudiantes del ultimo curso con respecto a los instrumentos basicos de enfermería y sú aplicabilidad The senior students perception about basic nursing tools and their applicability

    Directory of Open Access Journals (Sweden)

    Anesilda Alves de Almeida Ribeiro

    2005-12-01

This paper presents a descriptive exploratory study of nursing students' perceptions of the applicability of the Basic Nursing Tools (IBE) in daily professional practice. The results highlight the factors that contribute to the persistence of the dichotomy between theory and practice and the possibilities for overcoming/transforming this reality. In addition, the study shows that realizing know-how in nursing requires each nurse to be aware of his/her personal, professional and social responsibility to provide high-quality care through the full application of professional knowledge.

  20. Review essay: empires, ancient and modern.

    Science.gov (United States)

    Hall, John A

    2011-09-01

This essay draws attention to two books on empires by historians which deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth-century empires, noting that their demise was caused most of all by the failure of balance-of-power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.

  1. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: {yields} We developed an informatics-based methodology to minimize the necessary information. {yields} We applied this methodology to descriptors from semi-empirical calculations. {yields} We developed a validation approach for maintaining information from screening. {yields} We classified intermetallics and identified patterns of composition and structure.
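The descriptor-screening step can be illustrated with PCA via SVD: given data driven by a few latent factors, the cumulative explained variance reveals how many components capture essentially all the information. The synthetic 40 x 21 descriptor matrix below is a stand-in for the actual electronic-structure descriptors, with 5 latent factors planted by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the 21 electronic-structure descriptors:
# 40 alloy samples whose 21 descriptors are driven by 5 latent factors.
latent = rng.normal(size=(40, 5))
mixing = rng.normal(size=(5, 21))
descriptors = latent @ mixing + 1e-6 * rng.normal(size=(40, 21))

# PCA via SVD on the mean-centered data.
centered = descriptors - descriptors.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s ** 2 / np.sum(s ** 2)

# Number of components needed to capture 99.9% of the variance.
n_components = int(np.searchsorted(np.cumsum(explained), 0.999) + 1)
```

On such data the screening correctly recovers 5 informative directions out of 21, mirroring the 5-of-21 descriptor reduction reported above.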

  2. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  3. Autonomous e-coaching in the wild: Empirical validation of a model-based reasoning system

    OpenAIRE

    Kamphorst, B.A.; Klein, M.C.A.; van Wissen, A.

    2014-01-01

    Autonomous e-coaching systems have the potential to improve people's health behaviors on a large scale. The intelligent behavior change support system eMate exploits a model of the human agent to support individuals in adopting a healthy lifestyle. The system attempts to identify the causes of a person's non-adherence by reasoning over a computational model (COMBI) that is based on established psychological theories of behavior change. The present work presents an extensive, monthlong empiric...

  4. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.

  5. Testing seasonal and long-term controls of streamwater DOC using empirical and process-based models.

    Science.gov (United States)

    Futter, Martyn N; de Wit, Heleen A

    2008-12-15

    Concentrations of dissolved organic carbon (DOC) in surface waters are increasing across Europe and parts of North America. Several mechanisms have been proposed to explain these increases including reductions in acid deposition, change in frequency of winter storms and changes in temperature and precipitation patterns. We used two modelling approaches to identify the mechanisms responsible for changing surface water DOC concentrations. Empirical regression analysis and INCA-C, a process-based model of stream-water DOC, were used to simulate long-term (1986--2003) patterns in stream water DOC concentrations in a small boreal stream. Both modelling approaches successfully simulated seasonal and inter-annual patterns in DOC concentration. In both models, seasonal patterns of DOC concentration were controlled by hydrology and inter-annual patterns were explained by climatic variation. There was a non-linear relationship between warmer summer temperatures and INCA-C predicted DOC. Only the empirical model was able to satisfactorily simulate the observed long-term increase in DOC. The observed long-term trends in DOC are likely to be driven by in-soil processes controlled by SO4(2-) and Cl(-) deposition, and to a lesser extent by temperature-controlled processes. Given the projected changes in climate and deposition, future modelling and experimental research should focus on the possible effects of soil temperature and moisture on organic carbon production, sorption and desorption rates, and chemical controls on organic matter solubility.
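The empirical-regression arm of such a study reduces to ordinary least squares of DOC concentration on hydrological and climatic drivers. Below is a sketch on synthetic data with assumed coefficients, not the catchment data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
flow = rng.gamma(shape=2.0, scale=1.0, size=n)   # stream discharge proxy
temp = rng.normal(loc=10.0, scale=5.0, size=n)   # temperature proxy
doc_obs = 4.0 + 1.5 * flow + 0.2 * temp + rng.normal(scale=0.5, size=n)

# Empirical model: OLS regression of DOC on the hydrological and
# climatic drivers (intercept, flow, temperature).
X = np.column_stack([np.ones(n), flow, temp])
coef, *_ = np.linalg.lstsq(X, doc_obs, rcond=None)
doc_pred = X @ coef
r2 = 1 - np.sum((doc_obs - doc_pred) ** 2) / np.sum((doc_obs - doc_obs.mean()) ** 2)
```

A process-based model like INCA-C replaces the linear terms with explicit in-soil production, sorption and desorption dynamics; the regression above is only the empirical baseline.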

  6. Combining empirical and theory-based land-use modelling approaches to assess economic potential of biofuel production avoiding iLUC: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, F.; van Eijck, J.; Verstegen, J.A.; Hilbert, J.; Carballo, S.; Volante, J.; Faaij, A.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  7. Theoretical and Empirical Descriptions of Thermospheric Density

    Science.gov (United States)

    Solomon, S. C.; Qian, L.

    2004-12-01

The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of changes in the ephemerides of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.
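The pressure-to-altitude registration problem mentioned above follows from hydrostatic equilibrium, dz = -(RT/Mg) d ln p, integrated over the model's pressure levels. The sketch below assumes a fixed mean molar mass and gravity, a simplification of what a real thermosphere model must handle (both vary with altitude).

```python
import math

def pressure_to_altitude(p_levels, temps_k, z0=100e3, m_bar=0.028):
    """Integrate the hydrostatic relation dz = (R*T / (M*g)) * ln(p[i-1]/p[i])
    upward from a reference altitude z0 (m) at the first (highest-pressure)
    level. Mean molar mass m_bar (kg/mol) and gravity are held fixed here,
    which is only a toy assumption for the real thermosphere."""
    r_gas, g = 8.314, 9.5  # J/(mol K); reduced gravity at ~100-400 km
    z = [z0]
    for i in range(1, len(p_levels)):
        t_mid = 0.5 * (temps_k[i - 1] + temps_k[i])
        scale_height = r_gas * t_mid / (m_bar * g)
        z.append(z[-1] + scale_height * math.log(p_levels[i - 1] / p_levels[i]))
    return z

# Toy grid: pressure dropping by a factor of e per level, isothermal 800 K.
p = [math.exp(-i) for i in range(5)]
altitudes = pressure_to_altitude(p, [800.0] * 5)
```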

  8. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids in a firm’s effort to create a superior ‘innovation capability’. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts as well as poses significant challenges for management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations for the composition and the structures of the resource compositions have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose in assisting the decision-making that must consider the resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorized list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to gain a model to assess the performance of different business model configurations. Furthermore, different applications for the developed resource configuration may be envisioned.

  9. Development and evaluation of an empirical diurnal sea surface temperature model

    Science.gov (United States)

    Weihs, R. R.; Bourassa, M. A.

    2013-12-01

    An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using observations of high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean from low mechanical or shear produced turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation causes heating of the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed [or bulk] layer on the order of a few degrees. It has been shown that capturing the diurnal cycle is important for a variety of applications, including surface heat flux estimates, which have been shown to be underestimated when neglecting diurnal warming, and satellite and buoy calibrations, which can be complicated because of the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MTG's SEVIRI SST experimental hourly data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because the inputs need to incorporate diurnal and air-sea interactive processes, which are vital to the ocean surface dynamics, with a high enough temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), forming a set of coefficients used to fit the data. 
The coefficient data are combined with
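With the peak time and width of the diurnal Gaussian fixed, the warming amplitude reduces to a linear least-squares projection of the SST anomalies onto the Gaussian basis. The hourly series below is synthetic and the fitting choices are illustrative; this is not the SEVIRI/MERRA pipeline itself.

```python
import numpy as np

hours = np.arange(24, dtype=float)
# Synthetic hourly SST anomaly: diurnal warming peaking mid-afternoon.
true_amp, peak_hour, width = 0.8, 14.0, 3.0
basis = np.exp(-((hours - peak_hour) ** 2) / (2 * width ** 2))
sst_anom = true_amp * basis + np.random.default_rng(3).normal(scale=0.05, size=24)

# With peak time and width fixed, the diurnal amplitude is the linear
# least-squares projection of the observed anomalies onto the basis.
amp_hat = float(basis @ sst_anom / (basis @ basis))
```

In the two-peak case described above, the same projection generalizes to solving a small linear system with one basis column per Gaussian.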

  10. ANÁLISIS SOBRE LA APLICABILIDAD DE LAS HERRAMIENTAS DE GESTIÓN AMBIENTAL PARA EL MANEJO DE LOS HUMEDALES NATURALES INTERIORES DE COLOMBIA

    Directory of Open Access Journals (Sweden)

    HERRERA ARANGO MARÍA ALEJANDRA

    2010-05-01

As a result of research into the available information on the environmental management of Colombia's inland natural wetlands, this paper analyzes the applicability of the management tools employed in the integrated management of these ecosystems. To evaluate this information, the main categories and subcategories of analysis were identified, starting from a classification of the inland natural wetlands present in the country and their current state, a review of the environmental legislation in force, the management plans formulated by different entities and, finally, the identification of the Colciencias research groups studying this topic. Based on these results, the information found is systematized, yielding an outline proposal for analysis intended to help guide future research and decision-making on the rational management of the country's wetlands.

  11. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    NARCIS (Netherlands)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-01-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of

  12. Empirical modeling of high-intensity electron beam interaction with materials

    Science.gov (United States)

    Koleva, E.; Tsonevska, Ts; Mladenov, G.

    2018-03-01

    The paper proposes an empirical modeling approach to the prediction, followed by optimization, of the exact shape of the cross-section of a welded seam obtained by electron beam welding. The approach takes into account the electron beam welding process parameters, namely, electron beam power, welding speed, and the distances from the magnetic lens of the electron gun to the focus position of the beam and to the surface of the samples treated. The results are verified by comparison with experimental results for type 1H18NT stainless steel samples. The beam power and welding speed ranges considered are 4.2–8.4 kW and 3.333–13.333 mm/s, respectively.

  13. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. To discuss the applicability of existing validation techniques and to present a new method for quantifying the degrees of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit to empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
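
The core of the proposed method can be sketched as a toy computation: count how many probabilistic sensitivity analysis (PSA) runs fall inside a pre-established accuracy interval around the empirical value, then form a Beta posterior for the probability that a model outcome is valid. All numbers below are invented for illustration; this is not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical PSA outcomes for one model endpoint (e.g. patients with ESRD)
psa = rng.normal(loc=100.0, scale=15.0, size=1000)

observed = 95.0          # empirical (validation) value
deviation = 0.25         # pre-established accuracy interval: +/- 25%
lo, hi = observed * (1 - deviation), observed * (1 + deviation)

n = psa.size
k = int(np.sum((psa >= lo) & (psa <= hi)))   # PSA runs inside the interval

# uniform Beta(1, 1) prior on the probability of a "valid" model outcome
alpha, beta = 1 + k, 1 + (n - k)
posterior_mean = alpha / (alpha + beta)
print(k, n, round(posterior_mean, 3))
```

Because validity is judged against the whole PSA cloud, a model that hits the mean but with overly wide credible intervals is penalized, as the abstract notes.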

  14. Business models of micro businesses: Empirical evidence from creative industries

    Directory of Open Access Journals (Sweden)

    Pfeifer Sanja

    2017-01-01

    Full Text Available Business model describes how a business identifies and creates value for customers and how it organizes itself to capture some of this value in a profitable manner. Previous studies of business models in creative industries have only recently identified the unresolved issues in this field of research. The main objective of this article is to analyse the structure and diversity of business models and to deduce how these components interact or change in the context of micro and small businesses in creative services such as advertising, architecture and design. The article uses a qualitative approach. Case studies and semi-structured, in-depth interviews with six owners/managers of micro businesses in Croatia provide rich data. Structural coding in data analysis has been performed manually. The qualitative analysis has indicative relevance for the assessment and comparison of business models, however, it provides insights into which components of business models seem to be consolidated and which seem to contribute to the diversity of business models in creative industries. The article contributes to the advancement of empirical evidence and conceptual constructs that might lead to more advanced methodological approaches and proposition of the core typologies or classifications of business models in creative industries. In addition, a more detailed mapping of different choices available in managing value creation, value capturing or value networking might be a valuable help for owners/managers who want to change or cross-fertilize their business models.

  15. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.
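
The flavor of such an empirical noise model can be illustrated with a quadratic axial-noise law of the kind fitted to Kinect v1 data in the literature. The coefficients and the `add_axial_noise` helper below are illustrative assumptions, not the Letter's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_axial_noise(depth_m):
    """Perturb a synthetic depth image with depth-dependent Gaussian noise.
    sigma(z) = a + b * (z - z0)^2 is a quadratic axial-noise law of the form
    commonly fitted to Kinect v1 data (coefficients here are illustrative)."""
    a, b, z0 = 0.0012, 0.0019, 0.4
    sigma = a + b * (depth_m - z0) ** 2
    return depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma

near = np.full((100, 100), 1.0)   # synthetic 1 m plane
far = np.full((100, 100), 4.0)    # synthetic 4 m plane
noisy_near, noisy_far = add_axial_noise(near), add_axial_noise(far)
print(noisy_near.std() < noisy_far.std())
```

As expected from the quadratic law, simulated noise grows markedly with distance from the sensor.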

  16. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    Science.gov (United States)

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  17. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Directory of Open Access Journals (Sweden)

    E. A

    2011-08-01

    Full Text Available A similar-parameters interpolation method and an empirical orthogonal function analysis are used to construct empirical models for the ionospheric foF2, using observational data from three ground-based ionosonde stations in Japan: Wakkanai (Geographic 45.4° N, 141.7° E), Kokubunji (Geographic 35.7° N, 140.1° E) and Yamagawa (Geographic 31.2° N, 130.6° E) during 1971–1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that missing data in the original foF2 can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term alteration tendency. The 1st-order EOF coefficient A1 reflects the components with solar cycle variation. A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, both of which are less pronounced than the solar cycle variation. The 2nd-order coefficient A2 contains mainly annual variation components. The 3rd-order coefficient A3 and 4th-order coefficient A4 contain both annual and semi-annual variation components. The seasonal variation, solar rotation oscillation and small-scale irregularities are also included in the 4th-order coefficient A4. The amplitude range and developing tendency of all these coefficients depend on the level of solar and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the
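
The EOF decomposition itself is standard and can be sketched with an SVD on a synthetic hours-by-days matrix. All data below are synthetic stand-ins; the paper's station data, proxies and refilling method are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "foF2" matrix: rows = 24 hours (diurnal base function),
# columns = 500 days (slowly varying coefficient), plus weak noise
hours = np.arange(24)
days = np.arange(500)
diurnal = np.cos(2 * np.pi * (hours - 13) / 24)      # diurnal base function
coeff = 6.0 + np.sin(2 * np.pi * days / 365.25)      # annual modulation
data = np.outer(diurnal, coeff) + 0.05 * rng.standard_normal((24, 500))

# EOF analysis = SVD of the mean-removed data matrix
mean = data.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)   # variance explained per EOF mode
A1 = s[0] * Vt[0]                     # 1st-order EOF coefficient time series
print(round(float(explained[0]), 3))
```

Here `U[:, 0]` plays the role of the diurnal base function and `A1` the role of the long-term coefficient series analysed in the abstract.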

  18. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    Full Text Available This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed here for multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used to predict each individual subband of any time series data. All the predicted subband signals are then summed to obtain the overall prediction. The ARMA model works better for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow-band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of the signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
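
The predict-per-band-and-sum pipeline can be sketched as follows. A crude moving-average two-band split stands in for MEMD (a library such as PyEMD would supply true intrinsic mode functions), and a pure least-squares AR fit stands in for ARMA; everything here is an illustrative assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)

def ar_forecast(x, order=4):
    """One-step-ahead forecast from an AR(order) model fit by least squares."""
    X = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    y = x[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(x[-order:][::-1] @ coefs)

# toy "price" series: trend + cycle + noise
n = 400
t = np.arange(n)
x = 0.01 * t + np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(n)

# crude two-band split standing in for MEMD's multiband representation
w = 8
low = np.convolve(x, np.ones(w) / w, mode="same")   # slow (narrow-band) part
high = x - low                                      # residual fast part

# predict each band separately, then sum the subband forecasts
pred = ar_forecast(low) + ar_forecast(high)
print(np.isfinite(pred))
```

Each band is closer to stationary than the full series, which is the rationale the abstract gives for the improved prediction.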

  19. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    Full Text Available The Global Navigation Satellite System Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signals from SNR data while extracting the frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically by approximating the direct and reflected signals with a second-order and a fourth-order polynomial, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and allows soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed how noise affects the receiver SNR estimation, and thus the model performance, through simulations under the bare soil assumption. Results showed that the reconstructed signals with a grazing angle of 5°–15° were better suited for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information; however, the improvement on phase estimation was negligible. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model ensures good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground truth measurements. A deeper insight into the Larson model and the proposed model was given at this stage, forming a possible explanation of this fact. Furthermore, frequency and phase information
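
The basic reconstruction idea (a low-order polynomial for the direct signal plus a sinusoid in sin(elevation) whose frequency encodes the reflector geometry) can be sketched as below. The grid search over reflector height, the antenna height of 1.5 m and all constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

LAMBDA = 0.1903  # GPS L1 carrier wavelength, m

def fit_snr(sin_e, snr, h):
    """Linear least-squares fit of SNR = poly2(sin E) + multipath sinusoid
    whose frequency in sin(elevation) is set by the reflector height h."""
    f = 2.0 * h / LAMBDA
    B = np.column_stack([np.ones_like(sin_e), sin_e, sin_e ** 2,
                         np.cos(2 * np.pi * f * sin_e),
                         np.sin(2 * np.pi * f * sin_e)])
    c, *_ = np.linalg.lstsq(B, snr, rcond=None)
    resid = snr - B @ c
    return resid @ resid, c

# synthetic SNR arc: direct signal (2nd-order polynomial in sin E) plus a
# reflection from an antenna assumed to sit 1.5 m above the soil surface
sin_e = np.sin(np.deg2rad(np.linspace(5, 15, 300)))
true_h = 1.5
snr = (40 + 30 * sin_e - 20 * sin_e ** 2
       + 2.0 * np.cos(4 * np.pi * true_h / LAMBDA * sin_e + 0.8))

heights = np.arange(0.5, 3.01, 0.01)
sse = [fit_snr(sin_e, snr, h)[0] for h in heights]
h_hat = heights[int(np.argmin(sse))]
print(round(float(h_hat), 2))
```

Once the best-fitting frequency (reflector height) is found, the sinusoid's amplitude and phase fall out of the same linear fit, which is the information the abstract links to soil moisture.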

  20. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of the carbon atoms were analyzed for each of the three potentials. The ReaxFF potential is capable of modeling the change of the structure of amorphous carbon, and the MD results are in good agreement with experimental results and density functional theory (DFT) at densities of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ or below. Considering the computational efficiency, the Tersoff potential is recommended for modeling amorphous carbon at high densities of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.

  1. Quantitative analyses of empirical fitness landscapes

    International Nuclear Information System (INIS)

    Szendro, Ivan G; Franke, Jasper; Krug, Joachim; Schenk, Martijn F; De Visser, J Arjan G M

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well. (paper)
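
The rough Mt. Fuji comparison can be illustrated by generating such a landscape and counting local optima, a standard ruggedness measure. This is a minimal sketch with arbitrary parameters, not the review's quantitative analysis.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

def local_optima(fitness, L):
    """Count genotypes fitter than all of their single-mutant neighbours."""
    count = 0
    for g in product((0, 1), repeat=L):
        f = fitness[g]
        if all(f > fitness[g[:i] + (1 - g[i],) + g[i + 1:]] for i in range(L)):
            count += 1
    return count

def rmf_landscape(L, noise):
    """Rough Mt. Fuji landscape: additive fitness plus an i.i.d. random
    epistatic ('house of cards') component scaled by `noise`."""
    a = rng.uniform(0.5, 1.0, L)      # additive selection coefficients
    return {g: float(np.dot(a, g)) + noise * rng.standard_normal()
            for g in product((0, 1), repeat=L)}

L = 5
smooth = local_optima(rmf_landscape(L, 0.0), L)   # purely additive landscape
rugged = local_optima(rmf_landscape(L, 2.0), L)   # strong epistatic noise
print(smooth, rugged)
```

A purely additive landscape has exactly one local optimum; dialing up the random component makes the landscape increasingly multi-peaked, which is the ruggedness axis the review measures.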

  2. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    Science.gov (United States)

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217

  3. Empirical Descriptions of Criminal Sentencing Decision-Making

    Directory of Open Access Journals (Sweden)

    Rasmus H. Wandall

    2014-05-01

    Full Text Available The article addresses the widespread use of statistical causal modelling to describe criminal sentencing decision-making empirically in Scandinavia. The article describes the characteristics of this model and, on this basis, discusses three aspects of sentencing decision-making that the model does not capture: (1) the role of law and legal structures in sentencing, (2) the processes of constructing law and facts as they occur in the handling of criminal cases, and (3) reflecting newer organisational changes to sentencing decision-making. The article argues for a stronger empirically based design of sentencing models and for a more balanced use of different social scientific methodologies and models of sentencing decision-making.

  4. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE-II is a nuclear reaction code, comprising various nuclear models, designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or heavy ion. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. Heavy-ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data; relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)

  5. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Full Text Available Modeling forest fire behavior is an important task that can assist in fire prevention and suppression operations. However, according to previous studies, the common fire behavior models used worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were conducted, in which the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured. Dependent and independent variables were fitted through multiple regression analysis. The proposed fire rate of spread model is based on wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r² = 0.86); the flame length model is based on fuel bed depth, 1-h dead fuel moisture content and wind speed (r² = 0.72); the proposed fuel consumption model has 1-h dead fuel moisture, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r² = 0.80). These models were used to develop a new fire behavior software, the “Eucalyptus Fire Safety System”.
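
The model-building step is ordinary multiple linear regression; it can be sketched on synthetic burn data as below. The coefficients, units and noise level are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic burn experiments: wind speed (m/s), fuel bed bulk density
# (kg/m^3), 1-h dead fuel moisture (%) -> rate of spread (m/min)
n = 105
wind = rng.uniform(0, 4, n)
density = rng.uniform(20, 60, n)
moisture = rng.uniform(5, 25, n)
ros = (0.8 + 0.5 * wind - 0.01 * density - 0.03 * moisture
       + 0.05 * rng.standard_normal(n))

# fit ros = b0 + b1*wind + b2*density + b3*moisture by least squares
X = np.column_stack([np.ones(n), wind, density, moisture])
beta, *_ = np.linalg.lstsq(X, ros, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((ros - pred) ** 2) / np.sum((ros - ros.mean()) ** 2)
print(round(float(r2), 3))
```

The r² reported for each of the paper's three models is computed in exactly this way from the fitted regression.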

  6. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite imagery such as Landsat-8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that a physics-based topographic correction model requires. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands, and the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
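
One widely used scheme in this semi-empirical family is the C-correction, sketched below on synthetic data. This shows the generic method (regress reflectance on the illumination cosine, then rescale each pixel to a flat-terrain geometry), not necessarily the exact model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def c_correction(refl, cos_i, cos_sz):
    """Semi-empirical C-correction: regress reflectance on the illumination
    cosine cos(i), form C = intercept/slope, and rescale every pixel to a
    flat-terrain geometry."""
    slope, intercept = np.polyfit(cos_i, refl, 1)
    C = intercept / slope
    return refl * (cos_sz + C) / (cos_i + C)

# synthetic rugged-terrain band: true reflectance 0.3 everywhere, observed
# signal proportional to the local illumination cosine plus sensor noise
cos_sz = np.cos(np.deg2rad(30.0))       # solar zenith cosine (flat reference)
cos_i = rng.uniform(0.2, 1.0, 5000)     # per-pixel illumination cosine
observed = 0.3 * cos_i / cos_sz + 0.002 * rng.standard_normal(5000)

corrected = c_correction(observed, cos_i, cos_sz)
print(corrected.std() < observed.std())
```

After correction the slope-to-slope spread collapses, which is the "reduced reflectance difference between sunlit and shaded slopes" effect reported in the abstract.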

  7. Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter

    Science.gov (United States)

    Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.

    1993-01-01

    A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
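
The second-order ODE representation of unsteady lift can be sketched as follows, with a generic form and illustrative parameters (natural frequency, damping and lift-curve slope are assumptions, not the paper's identified model).

```python
import numpy as np

def lift_response(alpha, dt, omega=20.0, zeta=0.7, cl_alpha=2 * np.pi):
    """Second-order ODE model of unsteady lift driven by the pitch angle:
    L'' + 2*zeta*omega*L' + omega^2 * L = omega^2 * cl_alpha * alpha(t),
    integrated with a semi-implicit Euler scheme."""
    L, Ldot = 0.0, 0.0
    out = np.empty(alpha.size)
    for i, a in enumerate(alpha):
        Lddot = omega ** 2 * (cl_alpha * a - L) - 2 * zeta * omega * Ldot
        Ldot += Lddot * dt
        L += Ldot * dt
        out[i] = L
    return out

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
alpha = np.where(t > 0.1, np.deg2rad(2.0), 0.0)   # 2 degree step in pitch
cl = lift_response(alpha, dt)
print(cl[-1] > 0.0)
```

For flutter analysis, as in the abstract, this aerodynamic ODE would be integrated simultaneously with the structural equations so that `alpha` is itself a state of the coupled system.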

  8. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, our model yielded higher correlations than other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably reduces the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but also may have the potential to improve future projections.
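
A minimal illustration of a semi-empirical sea-level/temperature link is a Rahmstorf-style relation dH/dt = a(T − T0), used here as a simplified stand-in for the paper's dynamic control system model. All data and parameter values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)

# synthetic annual series, 1880-2001: temperature anomaly T and the sea level
# rise rate generated from dH/dt = a * (T - T0); a and T0 are then recovered
years = np.arange(1880, 2002)
T = 0.006 * (years - 1880) + 0.02 * rng.standard_normal(years.size)
a_true, T0_true = 3.4, -0.5        # mm/yr per K, equilibrium temperature (K)
dH = a_true * (T - T0_true) + 0.1 * rng.standard_normal(years.size)

# ordinary least squares on dH/dt = a*T - a*T0
A = np.column_stack([T, np.ones_like(T)])
(a_hat, c_hat), *_ = np.linalg.lstsq(A, dH, rcond=None)
T0_hat = -c_hat / a_hat
print(abs(a_hat - a_true) < 0.2, abs(T0_hat - T0_true) < 0.2)
```

The paper's refinement replaces this single fit with Monte Carlo cross-validation over many random train/test splits, which stabilizes the estimated parameters against the choice of initial values.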

  9. Empirical Modeling of ICMEs Using ACE/SWICS Ionic Distributions

    Science.gov (United States)

    Rivera, Y.; Landi, E.; Lepri, S. T.; Gilbert, J. A.

    2017-12-01

    Coronal Mass Ejections (CMEs) are some of the largest, most energetic events in the solar system releasing an immense amount of plasma and magnetic field into the Heliosphere. The Earth-bound plasma plays a large role in space weather, causing geomagnetic storms that can damage space and ground based instrumentation. As a CME is released, the plasma experiences heating, expansion and acceleration; however, the physical mechanism supplying the heating as it lifts out of the corona still remains uncertain. From previous work we know the ionic composition of solar ejecta undergoes a gradual transition to a state where ionization and recombination processes become ineffective rendering the ionic composition static along its trajectory. This property makes them a good indicator of thermal conditions in the corona, where the CME plasma likely receives most of its heating. We model this so-called `freeze-in' process in Earth-directed CMEs using an ionization code to empirically determine the electron temperature, density and bulk velocity. `Frozen-in' ions from an ensemble of independently modeled plasmas within the CME are added together to fit the full range of observational ionic abundances collected by ACE/SWICS during ICME events. The models derived using this method are used to estimate the CME energy budget to determine a heating rate used to compare with a variety of heating mechanisms that can sustain the required heating with a compatible timescale.

  10. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)
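
The finite-difference solution of the classic point kinetics equations can be sketched with one delayed-neutron group; the parameter values below are illustrative, not the paper's.

```python
def point_kinetics(rho, T=1.0, dt=1e-4, beta=0.0065, lam=0.08, Lambda=1e-3):
    """Explicit finite-difference solution of one-delayed-group point kinetics:
    dn/dt = ((rho - beta)/Lambda) n + lam C,  dC/dt = (beta/Lambda) n - lam C.
    Starts from delayed-precursor equilibrium with n = 1."""
    n = 1.0
    C = beta * n / (Lambda * lam)
    for _ in range(int(T / dt)):
        dn = ((rho - beta) / Lambda * n + lam * C) * dt
        dC = (beta / Lambda * n - lam * C) * dt
        n, C = n + dn, C + dC
    return n

# zero reactivity keeps the neutron density at its equilibrium value;
# a small positive reactivity insertion makes it grow
print(round(point_kinetics(0.0), 6), point_kinetics(0.001) > 1.0)
```

The study's adjustment factor would multiply terms of these equations so that their solution tracks a reference model that retains the time variation of the neutron currents.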

  11. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)

  12. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

    Full Text Available The aim of a price index for road freight transportation (RFT) is to reflect changes of price in the road transport market. First, a price index model for RFT is built based on sample data from the Alibaba logistics platform. This model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
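
The Laspeyres calculation used at each level of the index system is simple; a sketch with invented road-freight lane data (prices and volumes are hypothetical):

```python
def laspeyres_index(p0, pt, q0):
    """Laspeyres price index: current prices weighted by base-period
    quantities, expressed relative to 100."""
    return 100.0 * sum(p * q for p, q in zip(pt, q0)) \
                 / sum(p * q for p, q in zip(p0, q0))

# hypothetical RFT lanes: base prices, current prices, base-period volumes
p0 = [2.0, 3.5, 1.8]     # price per tonne-km, base period
pt = [2.2, 3.5, 2.0]     # price per tonne-km, current period
q0 = [100, 50, 80]       # tonne-km shipped in the base period

print(round(laspeyres_index(p0, pt, q0), 2))
```

Because the weights are frozen at the base period, only price movements (not volume shifts) move the index, which is the property that makes it suitable for a freight price index.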

  13. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and IEM simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).
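
The calibration pattern (replacing a hard-to-measure input with a single fitted parameter that minimizes the misfit to SAR observations) can be sketched with a stand-in forward model. The `toy_backscatter` function below is NOT the IEM; its functional form and every number are invented purely to show the fitting pattern.

```python
import numpy as np

rng = np.random.default_rng(13)

def toy_backscatter(moisture, roughness, corr_len):
    """Stand-in forward model (NOT the IEM): backscatter in dB as a smooth
    function of soil moisture, rms height and correlation length."""
    return (-25 + 30 * moisture + 8 * np.log10(roughness)
            - 4 * np.log10(corr_len))

# "observed" SAR data generated with an effective correlation length of 6 cm
mv = rng.uniform(0.05, 0.35, 200)        # volumetric soil moisture
s = rng.uniform(0.5, 3.0, 200)           # rms surface height, cm
obs = toy_backscatter(mv, s, 6.0) + 0.5 * rng.standard_normal(200)

# semi-empirical calibration: replace the measured correlation length by the
# single fitted value that minimizes the RMSE against the observations
grid = np.arange(1.0, 15.0, 0.1)
rmse = [np.sqrt(np.mean((obs - toy_backscatter(mv, s, L)) ** 2)) for L in grid]
L_opt = grid[int(np.argmin(rmse))]
print(abs(float(L_opt) - 6.0) < 1.0)
```

In the paper this fitted parameter absorbs the notorious difficulty of measuring the correlation length in the field, which is what brings the RMSE down from about 3.5 dB to about 2.2 dB.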

  14. A semi-empirical molecular orbital model of silica, application to radiation compaction

    International Nuclear Information System (INIS)

    Tasker, P.W.

    1978-11-01

    Semi-empirical molecular-orbital theory is used to calculate the bonding in a cluster of two SiO₄ tetrahedra, with the outer bonds saturated with pseudo-hydrogen atoms. The basic properties of the cluster, bond energies and band gap, are calculated using a very simple parameterisation scheme. The resulting cluster is used to study the rebonding that occurs when an oxygen vacancy is created. It is suggested that a vacancy model is capable of producing the observed differences between quartz and vitreous silica, and the calculations show that the compaction effect observed in the glass is of a magnitude compatible with the relaxations around the vacancy. More detailed lattice models will be needed to examine this mechanism further. (author)

  15. Aplicabilidad del Impuesto Diferido en el Ecuador con los efectos introducidos a través de la Ley Orgánica de Incentivos a la Producción y Prevención del Fraude Fiscal y su Reglamento

    OpenAIRE

    Beltrán Benalcázar, Delia Alexandra

    2015-01-01

    In order to establish the applicability of deferred taxes in Ecuador, with the effects introduced through the Organic Law on Production Incentives and Tax Fraud Prevention and its regulations, Chapter One of this research work compiles the fundamental concepts related to deferred taxes and, additionally, interrelates the cases of deferred taxes recognized by the Tax Administration in the unnumbered article placed after...

  16. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  17. An Empirical Model and Ethnic Differences in Cultural Meanings Via Motives for Suicide.

    Science.gov (United States)

    Chu, Joyce; Khoury, Oula; Ma, Johnson; Bahn, Francesca; Bongar, Bruce; Goldblum, Peter

    2017-10-01

    The importance of cultural meanings via motives for suicide - what is considered acceptable to motivate suicide - has been advocated as a key step in understanding and preventing development of suicidal behaviors. There have been limited systematic empirical attempts to establish different cultural motives ascribed to suicide across ethnic groups. We used a mixed methods approach and grounded theory methodology to guide the analysis of qualitative data querying for meanings via motives for suicide among 232 Caucasians, Asian Americans, and Latino/a Americans with a history of suicide attempts, ideation, intent, or plan. We used subsequent logistic regression analyses to examine ethnic differences in suicide motive themes. This inductive approach of generating theory from data yielded an empirical model of 6 cultural meanings via motives for suicide themes: intrapersonal perceptions, intrapersonal emotions, intrapersonal behavior, interpersonal, mental health/medical, and external environment. Logistic regressions showed ethnic differences in intrapersonal perceptions (low endorsement by Latino/a Americans) and external environment (high endorsement by Latino/a Americans) categories. Results advance suicide research and practice by establishing 6 empirically based cultural motives for suicide themes that may represent a key intermediary step in the pathway toward suicidal behaviors. Clinicians can use these suicide meanings via motives to guide their assessment and determination of suicide risk. Emphasis on environmental stressors rather than negative perceptions like hopelessness should be considered with Latino/a clients. © 2017 Wiley Periodicals, Inc.

  18. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

    LMFBR fuel elements consist of wire-wrapped rod bundles in a triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed to obtain the average bundle friction factor as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the ranges P/D = 1.063-1.417 and H/D = 4-50, and are considered satisfactory. (Author)

  19. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While a conventional acoustic analogy scrutinizes only the Reynolds stress components for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is then written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and can be evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios, from 1.0 to 3.20.

  20. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    Double-skin facade (DSF) buildings are being built as an attractive, innovative and energy efficient solution. Nowadays, several design tools are used for assessment of thermal and energy performance of DSF buildings. Existing design tools are well-suited for performance assessment of conventional buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF, performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114, TRNSYS-TUD and BSim), was carried out in the framework of IEA SHC Task 34/ECBCS Annex 43 "Testing and Validation of Building Energy Simulation Tools". The experimental data for the validation was gathered in a full-scale outdoor test facility. The empirical data sets comprise the key-functioning modes...

  1. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  2. Empirical Analysis of Closed-Loop Duopoly Advertising Strategies

    OpenAIRE

    Gary M. Erickson

    1992-01-01

    Closed-loop (perfect) equilibria in a Lanchester duopoly differential game of advertising competition are used as the basis for empirical investigation. Two systems of simultaneous nonlinear equations are formed, one from a general Lanchester model and one from a constrained model. Two empirical applications are conducted. In one involving Coca-Cola and Pepsi-Cola, a formal statistical testing procedure is used to detect whether closed-loop equilibrium advertising strategies are used by the c...

  3. Patient Safety and Satisfaction Drivers in Emergency Departments Re-visited - An Empirical Analysis using Structural Equation Modeling

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Jacobsen, Peter

    2014-01-01

    are entitled safety and satisfaction, waiting time, information delivery, and infrastructure accordingly. As an empirical foundation, a recently published comprehensive survey in 11 Danish EDs is analysed in depth using structural equation modeling (SEM). Consulting the proposed framework, ED decision makers...

  4. An empirical analysis of Diaspora bonds

    OpenAIRE

    AKKOYUNLU, Şule; STERN, Max

    2018-01-01

    Abstract. This study is the first to investigate theoretically and empirically the determinants of Diaspora Bonds for eight developing countries (Bangladesh, Ethiopia, Ghana, India, Lebanon, Pakistan, the Philippines, and Sri-Lanka) and one developed country - Israel for the period 1951 and 2008. Empirical results are consistent with the predictions of the theoretical model. The most robust variables are the closeness indicator and the sovereign rating, both on the demand-side. The spread is ...

  5. An empirically tractable model of optimal oil spills prevention in Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Tsirlin, A. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems

    2001-07-01

    Based on previous theoretical work by Gottinger (1997, 1998), we propose a simple model of optimal monitoring of oil-related activities in harbour areas that is suitable for empirical estimation within the Russian-Ukrainian context, in spite of the poor availability of data in these countries. Specifically, the model indicates how to best allocate at the steady state a given monitoring budget between different monitoring activities. An approximate analytical solution to the optimization problem is derived, and a simple procedure for estimating the model on the basis of the actually available data is suggested. An application using data obtained for several harbours of the Black and Baltic Seas is given. It suggests that the current Russian monitoring practice could be much improved by better allocating the available monitoring resources. (Author)

  6. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase in the volume of geomagnetic field data available from many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, eventually required a qualitative transition in empirical modeling tools. Classical empirical models, such as T96 and T02, used a few custom-tailored modules to represent the major magnetospheric current systems, with simple data binning or loading-unloading inputs for fitting to data and for subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions, as well as by advanced data-mining algorithms that search for events whose global activity parameters, such as the Sym-H index, are similar to those at the time of interest, as is done in the TS07D model (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with an individual subset of the database used to reproduce the geomagnetic field pattern at every new moment in time, requires a corresponding transition in how the new empirical geomagnetic field models are used: it becomes more similar to the runs-on-request offered by the Community Coordinated Modeling Center for many first-principles MHD and kinetic codes. To provide this mode of operation for the TS07D model, a new web-based modeling tool has been created and tested at JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, in which the magnetospheric macro-parameters involved in the data-binning process (the Sym-H index and its trend parameter) are replaced by solar wind-based analogs obtained using the Burton-McPherron-Russell approach.
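The event-selection idea behind this data-mining approach, picking the historical records whose global activity state most resembles the current one, can be sketched as follows. This is a toy nearest-neighbor illustration, not the model's actual implementation; the record fields and the two-parameter (Sym-H, trend) state are simplifying assumptions.

```python
def select_events(database, sym_h_now, trend_now, k=5):
    """Return the k historical records whose (Sym-H, trend) state is
    closest in Euclidean distance to the current state -- a toy version
    of the nearest-neighbor data mining used to build a fitting subset."""
    def dist(rec):
        return ((rec["sym_h"] - sym_h_now) ** 2
                + (rec["trend"] - trend_now) ** 2) ** 0.5
    # sort the whole database by similarity and keep the k best matches
    return sorted(database, key=dist)[:k]
```

The selected subset would then be used to fit the model coefficients for that particular moment, which is what makes the scheme a run-on-request rather than a one-time global fit.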

  7. A new model of Social Support in Bereavement (SSB): An empirical investigation with a Chinese sample.

    Science.gov (United States)

    Li, Jie; Chen, Sheying

    2016-01-01

    Bereavement can be an extremely stressful experience, while the protective effect of social support is expected to facilitate adjustment after loss. The ingredients of social support as illustrated by a new model of Social Support in Bereavement (SSB), however, require empirical evidence. Who might be the most effective providers of social support in bereavement has also been understudied, particularly within specific cultural contexts. The present study uses both qualitative and quantitative analyses to explore these two important issues among bereaved Chinese families and individuals. The results show that the three major types of social support described by the SSB model were frequently acknowledged by the participants in this study. Aside from relevant books, family and friends were the primary sources of social support, who in turn received support from their workplaces. Helping professionals turned out to be the least significant source of social support in the Chinese cultural context. Differences by gender, age, and bereavement time were also found. The findings provide empirical evidence for the conceptual model of Social Support in Bereavement and also offer culturally relevant guidance for providing effective support to the bereaved.

  8. Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band

    Science.gov (United States)

    Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.

    2013-12-01

    In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In past decades, several remote sensing techniques have been developed to estimate surface soil moisture. In most studies associated with radar sensing of soil moisture, the proposed algorithms focus on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5 cm. However, for densely covered vegetated surfaces such as forests, the problem becomes more challenging because the vegetation canopy is a complex scattering environment. For this reason there have been only a few studies in the literature focusing on retrieving soil moisture under a vegetation canopy. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their study area, double-bounce scattering between trunks and ground appeared to be the most important mechanism, so they implemented parametric models of radar backscatter for double-bounce using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating soil moisture under agricultural vegetation using L-band polarimetric SAR data, applying polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on a physical formulation of the dominant scattering mechanisms and three parameters that integrate the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of the forest canopy based on the Distorted Born Approximation (DBA). The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface for three polarized backscattering

  9. Application of Generalized Student’s T-Distribution In Modeling The Distribution of Empirical Return Rates on Selected Stock Exchange Indexes

    Directory of Open Access Journals (Sweden)

    Purczyński, Jan

    2014-07-01

    Full Text Available This paper examines the application of the so-called generalized Student's t-distribution in modeling the distribution of empirical return rates on selected Warsaw Stock Exchange indexes. Distribution parameters are estimated by means of the method of logarithmic moments, the maximum likelihood method and the method of moments. The generalized Student's t-distribution ensures a better fit to empirical data than the classical Student's t-distribution.
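As a small illustration of the moment-based estimation mentioned above, the classical (three-parameter) Student's t-distribution can be fitted by the method of moments: the degrees of freedom follow from the sample excess kurtosis via kurtosis = 3 + 6/(df - 4), valid for df > 4. This is a stdlib-only sketch of the classical case, not the paper's generalized distribution or its logarithmic-moment estimator.

```python
import math
import random

def fit_t_by_moments(returns):
    """Estimate location, scale and degrees of freedom of a classical
    Student's t-distribution by the method of moments.  Valid only when
    the sample excess kurtosis is positive (i.e. df > 4)."""
    n = len(returns)
    mu = sum(returns) / n
    m2 = sum((x - mu) ** 2 for x in returns) / n
    m4 = sum((x - mu) ** 4 for x in returns) / n
    excess_kurt = m4 / m2 ** 2 - 3.0
    if excess_kurt <= 0:
        raise ValueError("sample too thin-tailed for a t fit by moments")
    df = 6.0 / excess_kurt + 4.0              # from kurtosis = 3 + 6/(df-4)
    scale = math.sqrt(m2 * (df - 2.0) / df)   # Var = scale^2 * df/(df-2)
    return mu, scale, df

def simulate_t(df, n, rng):
    """Draw n standard Student's t variates as N(0,1)/sqrt(chi2_df/df)."""
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        out.append(z / math.sqrt(chi2 / df))
    return out
```

On a simulated t(10) sample the fitted df lands near 10, though the kurtosis estimator is noisy for heavy-tailed data, which is one reason maximum likelihood is usually preferred in practice.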

  10. An empirical comparison of alternate regime-switching models for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Janczura, Joanna [Hugo Steinhaus Center, Institute of Mathematics and Computer Science, Wroclaw University of Technology, 50-370 Wroclaw (Poland); Weron, Rafal [Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw (Poland)

    2010-09-15

    One of the most profound features of electricity spot prices are the price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either been not examined thoroughly or the signs of a bad fit ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)

  11. An empirical comparison of alternate regime-switching models for electricity spot prices

    International Nuclear Information System (INIS)

    Janczura, Joanna; Weron, Rafal

    2010-01-01

    One of the most profound features of electricity spot prices are the price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either been not examined thoroughly or the signs of a bad fit ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)
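The regime-switching mechanics described in the two records above can be illustrated with a toy simulator. This is a deliberately simplified 2-regime version with constant transition probabilities and invented parameters, not the paper's calibrated 3-regime specification with time-varying probabilities.

```python
import random

def simulate_mrs_prices(n, seed=0):
    """Simulate log-prices from a simplified 2-regime Markov
    regime-switching model: a mean-reverting base regime plus an
    i.i.d. shifted-lognormal-style spike regime (illustrative
    parameters only)."""
    rng = random.Random(seed)
    p_base_to_spike = 0.02   # constant here; the paper uses time-varying
    p_spike_to_base = 0.60   # probabilities and a third (drop) regime
    regime, x, path = "base", 3.0, []
    for _ in range(n):
        if regime == "base":
            # mean reversion around 3.0 with small Gaussian noise
            x += 0.3 * (3.0 - x) + 0.05 * rng.gauss(0.0, 1.0)
            if rng.random() < p_base_to_spike:
                regime = "spike"
        else:
            # shifted spike distribution; staying in this regime
            # produces the consecutive spikes seen in market data
            x = 3.0 + abs(rng.gauss(0.8, 0.4))
            if rng.random() < p_spike_to_base:
                regime = "base"
        path.append(x)
    return path
```

Calibration of such a model (the subject of the paper) is typically done with an EM-type algorithm over the latent regime sequence; the simulator above only shows why the resulting paths exhibit a spiky base-plus-excursion structure.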

  12. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Y. [CancerCare Manitoba (Canada); University of Calgary (Canada); Sommerville, M.; Johnstone, C.D. [San Diego State University (United States); Gräfe, J.; Nygren, I.; Jacso, F. [Tom Baker Cancer Centre (Canada); Khan, R.; Villareal-Barajas, J.E. [University of Calgary (Canada); Tom Baker Cancer Centre (Canada); Tambasco, M. [University of Calgary (Canada); San Diego State University (United States)

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.

  13. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries, and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, only a few research efforts have been paid to study what drives employee energy saving behavior in organizations. To fill this gap, drawing on norm activation model (NAM), we built a research model to study antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior which is understudied in the current literature. Based on the empirical results, implications on how to promote employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employee's electricity saving behavior. • Electricity saving climate negatively moderates personal norm's effect. • This research enhances our understanding of employee electricity saving behavior

  14. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    Science.gov (United States)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  15. EMPIRICAL RECONSTRUCTION AND NUMERICAL MODELING OF THE FIRST GEOEFFECTIVE CORONAL MASS EJECTION OF SOLAR CYCLE 24

    International Nuclear Information System (INIS)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-01-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80 deg. from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10 deg. (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  16. Downside Risk And Empirical Asset Pricing

    NARCIS (Netherlands)

    P. van Vliet (Pim)

    2004-01-01

    textabstractCurrently, the Nobel prize winning Capital Asset Pricing Model (CAPM) celebrates its 40th birthday. Although widely applied in financial management, this model does not fully capture the empirical riskreturn relation of stocks; witness the beta, size, value and momentum effects. These

  17. Empirical Results of Modeling EUR/RON Exchange Rate using ARCH, GARCH, EGARCH, TARCH and PARCH models

    Directory of Open Access Journals (Sweden)

    Andreea – Cristina PETRICĂ

    2017-03-01

    Full Text Available The aim of this study is to examine changes in the volatility of daily returns of the EUR/RON exchange rate using symmetric GARCH models (ARCH and GARCH) on the one hand and asymmetric GARCH models (EGARCH, TARCH and PARCH) on the other, since the conditional variance is time-varying. The analysis takes into account daily quotations of the EUR/RON exchange rate over the period 4 January 1999 to 13 June 2016. We model heteroscedasticity by applying different specifications of GARCH models, then look for significant parameters and low information criteria (minimum Akaike Information Criterion). All models are estimated using the maximum likelihood method under several assumed distributions of the innovation terms: the Normal (Gaussian) distribution, Student's t distribution, the Generalized Error Distribution (GED), Student's t distribution with fixed degrees of freedom, and GED with a fixed parameter. The predominant models turned out to be the EGARCH and PARCH models, and the empirical results indicate that the best model for estimating daily returns of the EUR/RON exchange rate is EGARCH(2,1) with asymmetric order 2 under the assumption of Student's t distributed innovation terms. This can be explained by the fact that in the EGARCH model the positivity restriction on the conditional variance is automatically satisfied.
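The object common to all the specifications named above is the conditional-variance recursion. As a minimal sketch (not the paper's code, and with hypothetical parameter values), the plain GARCH(1,1) filter computes sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}; EGARCH instead models log sigma2_t with an asymmetry term, which is why its positivity restriction is automatic.

```python
def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Filter a return series through the GARCH(1,1) recursion
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    and return the conditional variance series.  Parameter values are
    illustrative; in practice they are chosen by maximizing the
    likelihood under an assumed innovation distribution."""
    # initialise at the unconditional variance omega / (1 - alpha - beta)
    sigma2 = omega / (1.0 - alpha - beta)
    out = []
    for r in returns:
        out.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return out
```

Model selection as described in the abstract then amounts to running such filters for each (model, distribution) pair, maximizing the likelihood, and comparing information criteria.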

  18. A one-dimensional semi-empirical model considering transition boiling effect for dispersed flow film boiling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yu-Jou [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Pan, Chin, E-mail: cpan@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Low Carbon Energy Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China)

    2017-05-15

    Highlights: • Seven heat transfer mechanisms are studied numerically by the model. • A semi-empirical method is proposed to account for the transition boiling effect. • The parametric effects on the heat transfer mechanisms are investigated. • The thermal non-equilibrium phenomenon between vapor and droplets is investigated. - Abstract: The objective of this paper is to develop a one-dimensional semi-empirical model for dispersed flow film boiling considering transition boiling effects. The proposed model consists of conservation equations, i.e., vapor mass, vapor energy, droplet mass and droplet momentum conservation, and a set of closure relations to address the interactions among wall, vapor and droplets. The results show that the transition boiling effect is of vital importance in the dispersed flow film boiling regime, since the flow conditions downstream are influenced by the conditions upstream. In addition, through evaluating the vapor temperature and the amount of heat transferred to droplets, the present paper investigates the thermal non-equilibrium phenomenon under different flow conditions. Comparison of the wall temperature predictions with 1394 experimental data points from the literature, covering system pressures of 30-140 bar, heat fluxes of 204-1837 kW/m² and mass fluxes of 380-5180 kg/m² s, shows very good agreement, with an RMS error of 8.80% and a standard deviation of 8.81%. Moreover, the model well depicts the thermal non-equilibrium phenomenon for dispersed flow film boiling.

  19. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    Science.gov (United States)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, sensors with wide spatial coverage and high observation frequency are usually designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry within time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances. The experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry with the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.
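
The normalization step described in this record hinges on the fact that a kernel-driven BRDF model is linear in its coefficients, so it can be fitted by ordinary least squares and then re-evaluated at a reference geometry. The sketch below illustrates that idea only; the kernel values, coefficients and reference angles are synthetic placeholders, not the paper's actual Ross-Thick/Li-Sparse kernels.

```python
import numpy as np

rng = np.random.default_rng(2)

# A kernel-driven BRDF model is linear in its coefficients:
#   R(sun, view) ≈ f_iso + f_vol * K_vol + f_geo * K_geo.
# K holds stand-ins for precomputed volumetric/geometric kernel values
# at each observation geometry (synthetic numbers, for illustration).
K = rng.uniform(-0.5, 0.5, (40, 2))
f_true = np.array([0.30, 0.12, -0.05])        # iso, vol, geo coefficients
refl = f_true[0] + K @ f_true[1:] + rng.normal(0.0, 0.002, 40)

# Fit the coefficients by ordinary least squares.
A = np.column_stack([np.ones(len(K)), K])
f_hat, *_ = np.linalg.lstsq(A, refl, rcond=None)

# "Normalize" an observation to a standard sun-target-sensor geometry
# by evaluating the fitted model with the kernels of that geometry.
K_std = np.array([0.02, -0.10])               # kernel values at reference angles
r_std = f_hat[0] + K_std @ f_hat[1:]
```

Because the model is linear, the whole fit-and-reproject step is a single least-squares solve per pixel, which is what makes this practical for long MODIS time series.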

  20. An empirical model for estimating solar radiation in the Algerian Sahara

    Science.gov (United States)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work aims to determine the empirical model R.sun that will allow us to evaluate the solar radiation flues on a horizontal plane and in clear-sky on the located Adrar city (27°18 N and 0°11 W) of Algeria and compare with the results measured at the localized site. The expected results of this comparison are of importance for the investment study of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using the solar energy. Statistical indicators used to evaluate the accuracy of the model where the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984. The mean absolute percentage error is 9.44 %. The daily mean bias error is -7.94 %. The daily root mean square error is 12.31 %.
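
The validation statistics quoted in this record (relative MBE, relative RMSE, correlation) are standard for clear-sky irradiance models and are simple to compute. The helper below is a generic sketch of those formulas, with MBE and RMSE expressed as percentages of the mean measured value; the sample numbers are made up for illustration.

```python
import numpy as np

def validation_stats(measured, predicted):
    """Relative MBE and RMSE (in %) plus the correlation coefficient,
    as commonly used to validate solar-radiation models."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mbe = 100.0 * np.mean(p - m) / np.mean(m)                 # bias, % of mean
    rmse = 100.0 * np.sqrt(np.mean((p - m) ** 2)) / np.mean(m)  # scatter, % of mean
    r = np.corrcoef(m, p)[0, 1]                               # correlation
    return mbe, rmse, r

# Toy irradiance values in W/m^2 (illustrative only).
mbe, rmse, r = validation_stats([500.0, 700.0, 900.0], [480.0, 690.0, 910.0])
```

A negative MBE, as reported in the record (-7.94%), indicates that the model on average underestimates the measured radiation.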

  1. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    An urban public transit system is a typical mixed complex network with dynamic flow, and its evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper presents a comparative empirical analysis, in R-space, of Beijing’s flow-weighted transit route network (TRN) and finds that the Beijing TRNs of both 2011 and 2015 exhibit scale-free properties. We therefore propose a flow-driven evolution model to simulate the development of TRNs, taking into account the passengers’ dynamical behaviors triggered by topological change. The model treats the evolution of a TRN as an iterative process: at each time step, a certain number of new routes are generated, driven by travel demands, which leads to dynamical evolution of the new routes’ flow and triggers perturbations in nearby routes that in turn impact the next round of new routes. We present a theoretical analysis based on mean-field theory, as well as numerical simulations of the model. The results agree well with our empirical analysis, indicating that the model can reproduce TRN evolution with scale-free distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be exploited in planning and design strategies for real TRNs.

  2. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for wireless indoor short-range office environments in the 4.3–7.3 GHz band is presented. The model is developed from experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. Using linear regression, the model characterizes path loss as a function of distance plus a Gaussian random variable X that accounts for shadow fading. The path-loss exponent n is fitted as a power function of frequency, as is the standard deviation σ of X. The presented work should be useful for research on wireless channel characteristics in typical indoor short-range environments in the Internet of Things (IoT).
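
The model family described here is the classic log-distance path-loss model with log-normal shadowing, PL(d) = PL(d0) + 10·n·log10(d/d0) + X. The sketch below shows how the exponent n can be recovered by linear regression on synthetic measurements; all numeric values (reference loss, exponent, shadowing spread) are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-distance path-loss model with shadow fading:
#   PL(d) = PL(d0) + 10 * n * log10(d / d0) + X,  X ~ N(0, sigma).
# The parameter values below are illustrative only.
d0, pl_d0, n_true, sigma = 1.0, 46.0, 1.8, 3.0        # m, dB, -, dB
d = rng.uniform(1.0, 20.0, 500)                        # measurement distances
pl = pl_d0 + 10.0 * n_true * np.log10(d / d0) + rng.normal(0.0, sigma, d.size)

# Linear regression on 10*log10(d/d0) recovers the path-loss exponent
# as the slope and PL(d0) as the intercept.
x = 10.0 * np.log10(d / d0)
n_hat, intercept = np.polyfit(x, pl, 1)
```

The regression residuals would then give an estimate of σ, which the paper models, like n, as a function of frequency.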

  3. A Novel Multiscale Ensemble Carbon Price Prediction Model Integrating Empirical Mode Decomposition, Genetic Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bangzhu Zhu

    2012-02-01

    Full Text Available Due to the volatility and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and to accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast carbon prices. Firstly, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, using the fine-to-coarse reconstruction algorithm, the IMFs and residue are composed into a high-frequency component, a low-frequency component and a trend component, each with similar frequency characteristics and strong regularity. Finally, these three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecast is obtained as the sum of the three component forecasts. For verification and testing, two main carbon futures prices with different maturities on the European Climate Exchange (ECX) are used to test the effectiveness of the proposed model. The empirical results demonstrate that the proposed multiscale ensemble forecasting model can outperform the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing, as well as the ensemble ARIMA model with EMD preprocessing.
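
The fine-to-coarse reconstruction step mentioned above groups IMFs by accumulating them from the fastest to the slowest and stopping when the partial sum's mean first departs significantly from zero. The sketch below illustrates that rule on toy data; it substitutes a simple two-standard-error criterion for the significance test used in the literature, and the toy "IMFs" are hand-built sinusoids rather than the output of a real EMD.

```python
import numpy as np

def fine_to_coarse_split(imfs, residue):
    """Group IMFs into high- and low-frequency components using the
    fine-to-coarse reconstruction rule: accumulate IMFs from fastest
    to slowest and stop when the partial sum's mean first departs
    significantly from zero (here a 2-standard-error rule stands in
    for the t-test commonly used)."""
    k = len(imfs)
    partial = np.zeros_like(imfs[0])
    for i, imf in enumerate(imfs):
        partial = partial + imf
        sem = partial.std(ddof=1) / np.sqrt(partial.size)
        if abs(partial.mean()) > 2.0 * sem:
            k = i            # low-frequency part starts at this IMF
            break
    high = np.sum(imfs[:k], axis=0)
    low = np.sum(imfs[k:], axis=0)
    return high, low, residue  # residue acts as the trend component

# Toy IMFs: a fast zero-mean oscillation and a slow one with an offset.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
imfs = [np.sin(10.0 * t), np.sin(t) + 1.0]
high, low, trend = fine_to_coarse_split(imfs, 0.001 * t)
```

In the full method, each of the three components would then be forecast separately by a GA-trained ANN and the forecasts summed.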

  4. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  5. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified, and a parametric estimator that is based on specified prior distributions.

  6. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended to include the coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of these notes concentrates on the use of the code in practical calculations, with emphasis on aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  7. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is neither wholly prescriptive nor wholly descriptive, since it assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions concerning some of the future challenges of integrated empirical ethics.

  8. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 to 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ˜1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  9. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)n-NH2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Of the many empirical surfaces determined, the 13Cα ICS(φ,ψ) surface seems most promising for identifying the major secondary structure types: α-helix, β-strand, left-handed helix (αD), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein; thus they should provide useful constraints for structure determinations of proteins.

  10. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain result in a significant spread in the single bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch’s position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios, resulting in different peak and integrated luminosities. The most important options for after the long shutdown (LS) 1 and 2 are evaluated and compared.

  11. Aplicabilidade dos resultados de enfermagem em pacientes com insuficiência cardíaca e volume de líquidos excessivo

    Directory of Open Access Journals (Sweden)

    Joelza Celesilvia Chisté Linhares

    Full Text Available ABSTRACT Objective To test the clinical applicability of the Nursing Outcomes Classification in patients with decompensated heart failure and the nursing diagnosis Excess Fluid Volume. Methods A longitudinal study conducted in two stages at a university hospital in 2013. In the first stage, expert consensus validation was used to select the nursing outcomes and indicators related to the nursing diagnosis; in the second, a longitudinal study was carried out for the clinical evaluation of the patients, using an instrument containing the outcomes and indicators produced in the consensus. Results Seventeen patients were evaluated. In the clinical evaluation, the nursing outcomes were measured through the assessment of their indicators. Six outcomes showed increased scores when the means of the first and last evaluations were compared. The use of the Nursing Outcomes Classification in clinical practice demonstrated improvement in patients hospitalized for decompensated heart failure. Conclusion The Nursing Outcomes Classification was sensitive to changes in the patients' clinical status.

  12. Ensemble empirical mode decomposition and neuro-fuzzy conjunction model for middle and long-term runoff forecast

    Science.gov (United States)

    Tan, Q.

    2017-12-01

    Forecasting runoff over longer periods, such as months and years, is one of the important tasks for hydrologists and water resource managers seeking to maximize the potential of limited water. However, due to the nonlinear and nonstationary characteristics of natural runoff, it is hard to forecast middle and long-term runoff with satisfactory accuracy. It has been shown that forecast performance can be improved by using signal decomposition techniques to produce cleaner signals as model inputs. In this study, a new conjunction model (EEMD-neuro-fuzzy) with adaptive ability is proposed. Ensemble empirical mode decomposition (EEMD) is used to decompose the runoff time series into several components, which have different frequencies and are cleaner than the original series. A neuro-fuzzy model is then developed for each component, and the final forecast is obtained by summing the outputs of all neuro-fuzzy models. Unlike a conventional forecast model, the decomposition and forecast models in this study are adjusted adaptively as new runoff information is added. The proposed models are applied to forecast the monthly runoff of Yichang station, located on the Yangtze River in China. The results show that the proposed adaptive forecast model outperforms the conventional forecast model; its Nash-Sutcliffe efficiency coefficient reaches 0.9392. Owing to its ability to process nonstationary data, the forecast accuracy, especially in the flood season, is improved significantly.
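
The Nash-Sutcliffe efficiency coefficient used to score the runoff forecasts above is a standard hydrology metric and is straightforward to compute; the helper below is a generic sketch of its definition, not code from the paper.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency (NSE): 1 minus the ratio of the model's
    squared error to the variance of the observations. NSE = 1 is a
    perfect fit; NSE = 0 means no better than the observed mean."""
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
```

A value of 0.9392, as reported here, means the model explains about 94% of the variance of the observed monthly runoff.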

  13. Modelling of volumetric properties of binary and ternary mixtures by CEOS, CEOS/GE and empirical models

    Directory of Open Access Journals (Sweden)

    BOJAN D. DJORDJEVIC

    2007-12-01

    Full Text Available Although many cubic equations of state coupled with van der Waals one-fluid mixing rules, including temperature-dependent interaction parameters, are sufficient for representing phase equilibria and excess properties (excess molar enthalpy HE, excess molar volume VE, etc.), difficulties appear in the correlation and prediction of thermodynamic properties of complex mixtures over various temperature and pressure ranges. Great progress has been made with a new approach based on CEOS/GE models. This paper reviews the progress achieved over the last six years in modelling the volumetric properties of complex binary and ternary systems of non-electrolytes by the CEOS and CEOS/GE approaches. In addition, the vdW1 and TCBT models were used to estimate the excess molar volume VE of the ternary systems methanol + chloroform + benzene and 1-propanol + chloroform + benzene, as well as the corresponding binaries methanol + chloroform, chloroform + benzene, 1-propanol + chloroform and 1-propanol + benzene at 288.15–313.15 K and atmospheric pressure. Prediction of VE for both ternaries by empirical models (Radojković, Kohler, Jacob–Fitzner, Colinet, Tsao–Smith, Toop, Scatchard, Rastogi) was also performed.

  14. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogeneous nature of blood itself; in addition, there is considerable variation in the refractive indices reported in the literature. These considerations motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on hemoglobin-mimicking phantom samples using an Abbemat refractometer. Analysis of the results revealed a linear relationship between refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed through empirical modelling, which suggests that temperature and wavelength coefficients be added to the Barer formula. Verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
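
The linear dependence on concentration and temperature reported above can be captured with a plain least-squares fit. The sketch below fits a Barer-type relation extended with a temperature term, n = b0 + b1·C + b2·T; every numeric value is a synthetic illustration, not the paper's clinical data or fitted coefficients.

```python
import numpy as np

# Synthetic example of the kind of correlation described above:
# a Barer-type linear dependence on hemoglobin concentration C,
# extended with a linear temperature term, n = b0 + b1*C + b2*T.
C = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])    # g/dL
T = np.array([20.0, 25.0, 30.0, 20.0, 25.0, 30.0, 20.0])  # deg C
n = 1.335 + 0.0019 * C - 0.0001 * T                        # "measured" index

# Recover the three coefficients by ordinary least squares.
A = np.column_stack([np.ones_like(C), C, T])
(b0, b1, b2), *_ = np.linalg.lstsq(A, n, rcond=None)
```

The same fit run in reverse, solving for C given a measured n and T, is what allows such a model to estimate hematocrit from a refractometer reading.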

  15. Empirical and theoretical challenges in aboveground-belowground ecology

    DEFF Research Database (Denmark)

    W.H. van der Putten,; R.D. Bardgett; P.C. de Ruiter

    2009-01-01

    of the current conceptual succession models into more predictive models can help targeting empirical studies and generalising their results. Then, we discuss how understanding succession may help to enhance managing arable crops, grasslands and invasive plants, as well as provide insights into the effects...... and environmental settings, we explore where and how they can be supported by theoretical approaches to develop testable predictions and to generalise empirical results. We review four key areas where a combined aboveground-belowground approach offers perspectives for enhancing ecological understanding, namely...

  16. An empirical model of the Earth's bow shock based on an artificial neural network

    Science.gov (United States)

    Pallocchia, Giuseppe; Ambrosino, Danila; Trenchi, Lorenzo

    2014-05-01

    All past empirical models of the Earth's bow shock shape were obtained by best-fitting given surfaces to sets of observed crossings. However, bow shock modeling can also be addressed by means of artificial neural networks (ANNs). Here we present a perceptron, a simple feedforward network, which computes the bow shock distance along a given direction using the two angular coordinates of that direction, the bow shock distance RF79 predicted by Formisano's model (F79), and the upstream Alfvénic Mach number Ma. After a brief description of the ANN architecture and training method, we discuss the results of a statistical comparison, performed over a test set of 1140 IMP 8 crossings, between the prediction accuracies of the ANN and F79 models.
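
A feedforward network of the kind described, four inputs mapped through one hidden layer to a single distance output, can be sketched in a few lines of numpy. The architecture, layer sizes, and training data below are assumptions for illustration; the toy target is a simple function of the RF79-like and Ma-like inputs, not real bow shock crossings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 4 inputs per sample, standing in for the two angular
# coordinates, the F79-predicted distance, and the Alfvénic Mach number.
X = rng.uniform(-1.0, 1.0, (256, 4))
y = (14.0 + 2.0 * X[:, 2] - 0.5 * X[:, 3]).reshape(-1, 1)  # synthetic target

# One hidden tanh layer, linear output; trained by gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.05

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)          # dLoss/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0, keepdims=True)
_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)
```

Feeding the physics-based F79 prediction in as an input, as the paper does, lets the network learn only a correction to an already reasonable model rather than the full shape from scratch.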

  17. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Full Text Available Few studies have investigated the influence of “information capital,” through IT-enabled dynamic capability, on corporate performance, particularly during economic turbulence. Our study investigates the causal relationships between the performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following results: operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in, and benefits for, corporate performance.

  18. A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.

    Science.gov (United States)

    Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E

    2017-12-01

    The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.

  19. THE "MAN INCULTS" AND PACIFICATION DURING BRAZILIAN EMPIRE: A MODEL OF HISTORICAL INTERPRETATION BUILT FROM THE APPROACH TO HUMAN RIGHTS

    Directory of Open Access Journals (Sweden)

    José Ernesto Pimentel Filho

    2011-06-01

    Full Text Available The construction of peace in the Empire of Brazil was one of the forms of monopoly of public space by the dominant sectors of imperial society. On the one hand, the Empire built an urban sociability based on patriarchal relations; on the other, as in a diptych image, it struggled against all forms of disorder and social deviance. The center of that peace was the capitals of the provinces. We discuss here how to construct a model for approaching the mentality of combating crime in rural areas under the patriarchal mindset of nineteenth-century Brazil, taking the case of Ceará. A historical hermeneutic is applied to understand the role of poor white men in the social life of the Empire of Brazil. We observe that education, when associated with morals, was seen as able to modify violent behavior and to shape individual attitudes toward justice and punishment policy. Discrimination and stereotypes are part of our interpretation, as a contribution to the debate on human rights in the history of Brazil.

  20. Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data

    Science.gov (United States)

    Hedin, A. E.; Spencer, N. W.; Killeen, T. L.

    1988-01-01

    Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to altitudes above approximately 220 km, where the data coverage is best and wind variations with height are reduced by viscosity. The database is not yet adequate to detect solar cycle (F10.7) effects but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not in full detail.

  1. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  2. An empirical model of the topside plasma density around 600 km based on ROCSAT-1 and Hinotori observations

    Science.gov (United States)

    Huang, He; Chen, Yiding; Liu, Libo; Le, Huijun; Wan, Weixing

    2015-05-01

    It is an urgent task to improve the ability of ionospheric empirical models to reproduce more precisely the plasma density variations in the topside ionosphere. Based on Republic of China Satellite 1 (ROCSAT-1) observations, we developed a new empirical model of the topside plasma density around 600 km under relatively quiet geomagnetic conditions. The model reproduces the ROCSAT-1 plasma density observations with a root-mean-square error of 0.125 in units of lg(Ni(cm-3)) and reasonably describes the temporal and spatial variations of plasma density at altitudes from 550 to 660 km. The model results are also in good agreement with observations from the Hinotori and Coupled Ion-Neutral Dynamics Investigations/Communications/Navigation Outage Forecasting System satellites and from the incoherent scatter radar at Arecibo. Further, after confirming the consistency between the two data sets with the original ROCSAT-1 model, we combined ROCSAT-1 and Hinotori data to build an improved model (the R&H model). In particular, we used the R&H model to study the solar activity dependence of the topside plasma density at a fixed altitude, and find that it differs slightly from the result obtained when the evolution of the orbit altitude is ignored. In addition, the R&H model shows the merging of the two crests of the equatorial ionization anomaly above the F2 peak, while the IRI_Nq topside option always produces two separate crests in this range of altitudes.

  3. Monthly and Fortnightly Tidal Variations of the Earth's Rotation Rate Predicted by a TOPEX/POSEIDON Empirical Ocean Tide Model

    Science.gov (United States)

    Desai, S.; Wahr, J.

    1998-01-01

    Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.

  4. Flexible Modeling of Epidemics with an Empirical Bayes Framework

    Science.gov (United States)

    Brooks, Logan C.; Farrow, David C.; Hyun, Sangwon; Tibshirani, Ryan J.; Rosenfeld, Roni

    2015-01-01

    Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic’s behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the “Predict the Influenza Season Challenge”, with the task of predicting key epidemiological measures for the 2013–2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctors visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to some other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013–2014 U.S. influenza season, and compare the framework’s cross-validated prediction error on historical data to

  5. Instrumentos de avaliação da postura dinâmica: aplicabilidade ao ambiente escolar

    Directory of Open Access Journals (Sweden)

    Matias Noll

    Full Text Available INTRODUCTION: Before dynamic posture assessment can be carried out, it is first necessary to know the various instruments, available and validated in the literature, that are appropriate for this purpose. OBJECTIVE: The objective of this systematic review article was to describe, synthesize, and critically analyze the instruments found in the literature that aim to assess dynamic posture, in both adults and schoolchildren, and to reflect on the possibility of using these methods in the school environment. MATERIALS AND METHODS: A systematic search was performed for articles published from the 1980s onward in databases (Scopus, ScienceDirect, PubMed, SciELO) and in the Capes Theses and Dissertations Database. The keywords used in the article search were back, spine, back injuries, school, back school, postural hygiene program, education, child, student, posture, in combination with the keywords evaluation, assessment, measurement, and the corresponding terms in Portuguese. The proposed instruments had to meet the following criteria: (a) assess body posture during the performance of activities of daily living (ADLs); (b) use predefined criteria for assessing dynamic posture; and (c) assess posture by observation, either direct or from video recordings. RESULTS: Eight original articles were identified that present instruments for assessing dynamic posture, evaluating the performance of ADLs against predefined biomechanical criteria using numerical scales; of these, only four instruments were designed to assess the performance of ADLs by schoolchildren. FINAL CONSIDERATIONS: In general, the instruments have some methodological limitations, although they are easy to apply.

  6. Aplicabilidade do mismatch negativity em crianças e adolescentes: uma revisão descritiva

    Directory of Open Access Journals (Sweden)

    Mirtes Bruckmann

    Full Text Available ABSTRACT The Mismatch Negativity (MMN) is a cortical potential that occurs in response to a change in an acoustic stimulus within a sequence of repeated stimuli, reflecting the brain's ability to discriminate sound passively, that is, without requiring the individual's attention to the sound stimulus. The objective of this study was therefore to conduct a descriptive review of the MMN in order to identify its applications in children and adolescents over the last five years. To this end, a search was performed in the Lilacs, SciELO, Medline, and Pubmed databases using the following descriptors: auditory cortex, electrophysiology, auditory evoked potentials, and the words Mismatch and Negativity. This review found 14 studies that assessed children and/or adolescents with speech articulation difficulties, specific language impairment, auditory processing disorder, Attention Deficit Hyperactivity Disorder (ADHD), dyslexia, autism, risk for schizophrenia, psychosis, amusia, phenylketonuria, and selective attention. It was thus possible to carry out the descriptive review of the application of the MMN in children and adolescents, concluding that over the last five years there has been considerable output of articles on the topic, although studies on the subject remain scarce in Brazil. Hence, although there is a variety of applications for the MMN, with respect to the Brazilian population scientific evidence is still needed to establish the behavior of this potential across different age groups. It was also found that searches for MMN studies in the cited databases could only be performed using the words Mismatch and Negativity.

  7. The frontiers of empirical science: A Thomist-inspired critique of ...

    African Journals Online (AJOL)

    2016-07-08

    Jul 8, 2016 ... of scientism, is, however, self-destructive of scientism because contrary to its ... The theory that only empirical facts have epistemic meaning is supported by the ..... (2002:1436). The cyclic model lacks empirical verification,.

  8. An extended technology acceptance model for detecting influencing factors: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Mohamd Hakkak

    2013-11-01

    Full Text Available The rapid diffusion of the Internet has radically changed the delivery channels applied by the financial services industry. The aim of this study is to identify the influencing factors that encourage customers to adopt online banking in Khorramabad. The research constructs are developed based on the technology acceptance model (TAM) and incorporate some extra important control variables. The model is empirically verified to study the factors influencing the online banking adoption behavior of 210 customers of Tejarat Banks in Khorramabad. The findings of the study suggest that the quality of the internet connection, the awareness of online banking and its benefits, the social influence and computer self-efficacy have significant impacts on the perceived usefulness (PU) and perceived ease of use (PEOU) of online banking acceptance. Trust and resistance to change also have a significant impact on the attitude towards the likelihood of adopting online banking.

  9. An empirical model for trip distribution of commuters in the Netherlands: Transferability in time and space reconsidered.

    NARCIS (Netherlands)

    Thomas, Tom; Tutert, Bas

    2013-01-01

    In this paper, we evaluate the distribution of commute trips in The Netherlands, to assess its transferability in space and time. We used Dutch Travel Surveys from 1995 and 2004–2008 to estimate the empirical distribution from a spatial interaction model as function of travel time and distance. We

  10. Empirical psychology, common sense, and Kant's empirical markers for moral responsibility.

    Science.gov (United States)

    Frierson, Patrick

    2008-12-01

    This paper explains the empirical markers by which Kant thinks that one can identify moral responsibility. After explaining the problem of discerning such markers within a Kantian framework I briefly explain Kant's empirical psychology. I then argue that Kant's empirical markers for moral responsibility--linked to higher faculties of cognition--are not sufficient conditions for moral responsibility, primarily because they are empirical characteristics subject to natural laws. Next, I argue that these markers are not necessary conditions of moral responsibility. Given Kant's transcendental idealism, even an entity that lacks these markers could be free and morally responsible, although as a matter of fact Kant thinks that none are. Given that they are neither necessary nor sufficient conditions, I discuss the status of Kant's claim that higher faculties are empirical markers of moral responsibility. Drawing on connections between Kant's ethical theory and 'common rational cognition' (4:393), I suggest that Kant's theory of empirical markers can be traced to ordinary common sense beliefs about responsibility. This suggestion helps explain both why empirical markers are important and what the limits of empirical psychology are within Kant's account of moral responsibility.

  11. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

    Machining of steel material having hardness above 45 HRC (Hardness-Rockwell C) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance of hard turning. Various models in hard turning by cubic boron nitride tool have been reviewed, in an attempt to utilize appropriate empirical and analytical models. Validation of the steady-state flank and crater wear models, Usui's wear model, forces due to oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear has been depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force based on the different cutting conditions and tool geometries so that an appropriate model can be used according to user requirement in hard turning.

  12. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    Full Text Available An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed from the fitting of monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of annual mean wind, amplitudes and phases of annual and semiannual harmonics of wind variations are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, a good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and Atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)
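  The annual and semiannual harmonic amplitudes and phases mentioned above can be extracted from monthly mean winds by ordinary least squares. A minimal sketch with invented station data (the wind values below are illustrative, not from the model):

```python
import numpy as np

# Hypothetical monthly mean zonal winds at one station (m/s).
months = np.arange(12)
u = np.array([-8., -6., -2., 4., 10., 14., 15., 12., 6., -1., -5., -9.])

# Design matrix: annual mean plus annual and semiannual harmonics.
t = 2 * np.pi * months / 12.0
X = np.column_stack([np.ones(12),
                     np.cos(t), np.sin(t),           # annual harmonic
                     np.cos(2 * t), np.sin(2 * t)])  # semiannual harmonic
coef, *_ = np.linalg.lstsq(X, u, rcond=None)

mean_wind = coef[0]
annual_amp = np.hypot(coef[1], coef[2])
semiannual_amp = np.hypot(coef[3], coef[4])
annual_phase = np.arctan2(coef[2], coef[1])   # u ~ A * cos(t - phase)
print("mean %.1f m/s, annual amp %.1f m/s, semiannual amp %.1f m/s"
      % (mean_wind, annual_amp, semiannual_amp))
```

Because the monthly sampling points are equally spaced over a full period, the harmonic columns are orthogonal to the constant column, so the fitted mean equals the simple annual mean.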

  13. A Comprehensive Comparison Study of Empirical Cutting Transport Models in Inclined and Horizontal Wells

    Directory of Open Access Journals (Sweden)

    Asep Mohamad Ishaq Shiddiq

    2017-07-01

    Full Text Available In deviated and horizontal drilling, hole-cleaning issues are a common and complex problem. This study explored the effect of various parameters in drilling operations and how they affect the flow rate required for effective cutting transport. Three models, developed following an empirical approach, were employed: Rudi-Shindu’s model, Hopkins’, and Tobenna’s model. Rudi-Shindu’s model needs iteration in the calculation. Firstly, the three models were compared using a sensitivity analysis of drilling parameters affecting cutting transport. The result shows that the models have similar trends but different values for minimum flow velocity. Analysis was conducted to examine the feasibility of using Rudi-Shindu’s, Hopkins’, and Tobenna’s models. The result showed that Hopkins’ model is limited by cutting size and revolution per minute (RPM. The minimum flow rate from Tobenna’s model is affected only by well inclination, drilling fluid weight and drilling fluid rheological property. Meanwhile, Rudi-Shindu’s model is limited by inclinations above 45°. The study showed that the investigated models are not suitable for horizontal wells because they do not include the effect of lateral section.

  14. Protein-Ligand Empirical Interaction Components for Virtual Screening.

    Science.gov (United States)

    Yan, Yuna; Wang, Weijun; Sun, Zhaoxi; Zhang, John Z H; Ji, Changge

    2017-08-28

    A major shortcoming of empirical scoring functions is that they often fail to predict binding affinity properly. Removing false positives of docking results is one of the most challenging works in structure-based virtual screening. Postdocking filters, making use of all kinds of experimental structure and activity information, may help in solving the issue. We describe a new method based on detailed protein-ligand interaction decomposition and machine learning. Protein-ligand empirical interaction components (PLEIC) are used as descriptors for support vector machine learning to develop a classification model (PLEIC-SVM) to discriminate false positives from true positives. Experimentally derived activity information is used for model training. An extensive benchmark study on 36 diverse data sets from the DUD-E database has been performed to evaluate the performance of the new method. The results show that the new method performs much better than standard empirical scoring functions in structure-based virtual screening. The trained PLEIC-SVM model is able to capture important interaction patterns between ligand and protein residues for one specific target, which is helpful in discarding false positives in postdocking filtering.

  15. Empirical Bayes Approaches to Multivariate Fuzzy Partitions.

    Science.gov (United States)

    Woodbury, Max A.; Manton, Kenneth G.

    1991-01-01

    An empirical Bayes-maximum likelihood estimation procedure is presented for the application of fuzzy partition models in describing high dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)

  16. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocities back to the corona. Using simple mass flux conservation, we show a very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations which arise from these empirical observational models.

  17. Agency Theory and Franchising: Some Empirical Results

    OpenAIRE

    Francine Lafontaine

    1992-01-01

    This article provides an empirical assessment of various agency-theoretic explanations for franchising, including risk sharing, one-sided moral hazard, and two-sided moral hazard. The empirical models use proxies for factors such as risk, moral hazard, and franchisors' need for capital to explain both franchisors' decisions about the terms of their contracts (royalty rates and up-front franchise fees) and the extent to which they use franchising. In this article, I exploit several new sources...

  18. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard estimators.
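The bias described above is easy to see in a toy version of the product-of-position-frequencies estimator: multiplying per-position nucleotide frequencies assigns probability mass to the three stop codons of the universal genetic code. The minimal repair shown below, with invented frequencies, simply renormalises over sense codons; the paper's corrected estimator goes further and re-estimates the underlying nucleotide frequencies themselves.

```python
from itertools import product

# Observed position-specific nucleotide frequencies from an alignment
# (hypothetical numbers; each position sums to 1).
freq = [
    {"T": 0.20, "C": 0.30, "A": 0.25, "G": 0.25},  # codon position 1
    {"T": 0.25, "C": 0.25, "A": 0.30, "G": 0.20},  # codon position 2
    {"T": 0.30, "C": 0.20, "A": 0.25, "G": 0.25},  # codon position 3
]
STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code

# Product frequencies over sense codons only, renormalised so that
# no equilibrium probability mass falls on stop codons.
codons = ["".join(c) for c in product("TCAG", repeat=3)
          if "".join(c) not in STOPS]
raw = {c: freq[0][c[0]] * freq[1][c[1]] * freq[2][c[2]] for c in codons}
total = sum(raw.values())
pi = {c: p / total for c, p in raw.items()}

print(len(pi), "sense codons; frequencies sum to", round(sum(pi.values()), 6))
```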

  19. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents a SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
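A compact sketch of the decompose-model-recombine idea behind such hybrids, using numpy stand-ins: a centred moving average replaces the full EMD, an AR(2) least-squares fit models the smooth component, and an RBF kernel ridge regressor (standing in for SVR) models the oscillatory remainder. The load series and all hyperparameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly load: trend + daily cycle + noise.
n, period = 300, 24
t = np.arange(n)
load = (100 + 0.05 * t + 20 * np.sin(2 * np.pi * t / period)
        + rng.normal(0, 2, n))

# Step 1: decompose. A centred moving average stands in for EMD;
# 'valid' mode avoids zero-padded edges.
k = 25
sm = np.convolve(load, np.ones(k) / k, mode="valid")   # smooth part
detail = load[k // 2 : k // 2 + len(sm)] - sm          # oscillatory part
m = len(sm)

# Step 2a: AR(2) on the smooth component via least squares.
A = np.column_stack([sm[1:-1], sm[:-2]])
ar, *_ = np.linalg.lstsq(A, sm[2:], rcond=None)
pred_smooth = ar[0] * sm[-1] + ar[1] * sm[-2]

# Step 2b: RBF kernel ridge regression on the detail, lagged features.
p = period
F = np.column_stack([detail[i : m - p + i] for i in range(p)])
y = detail[p:]
gamma = 1.0 / (2 * p * detail.var())

def rbf(Amat, Bmat):
    d2 = ((Amat[:, None, :] - Bmat[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

alpha = np.linalg.solve(rbf(F, F) + 1.0 * np.eye(len(F)), y)
pred_detail = (rbf(detail[-p:][None, :], F) @ alpha).item()

# Step 3: recombine the component forecasts.
forecast = pred_smooth + pred_detail
print("one-step forecast: %.1f" % forecast)
```

Modeling each decomposed component with a method suited to its character is exactly what makes the hybrid interpretable: the AR part tracks the slowly varying level, the kernel part the repeating intraday shape.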

  20. Empirical modelling to predict the refractive index of human blood

    International Nuclear Information System (INIS)

    Yahya, M; Saghir, M Z

    2016-01-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy. (paper)
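The extended relation suggested above can be written as the Barer form n = n_medium + alpha * C with temperature and wavelength terms added to the baseline. In the sketch below, N0 and the specific refraction increment ALPHA follow commonly quoted values for water and hemoglobin, while BETA and GAMMA are assumed placeholder coefficients, not the paper's fitted values.

```python
# Extended Barer-type relation (coefficients BETA, GAMMA are assumed):
# n(C, T, lam) = n_water(T, lam) + ALPHA * C
N0    = 1.3330      # water refractive index at T0 = 20 C, LAM0 = 589 nm
ALPHA = 0.00193     # specific refraction increment per g/dL of hemoglobin
BETA  = -1.0e-4     # temperature coefficient per degree C (assumed)
GAMMA = 3.0e3       # Cauchy-type wavelength coefficient in nm^2 (assumed)
T0, LAM0 = 20.0, 589.0

def refractive_index(conc_g_dl, temp_c, lam_nm):
    """Predicted refractive index of a hemoglobin solution."""
    baseline = (N0 + BETA * (temp_c - T0)
                + GAMMA * (1.0 / lam_nm ** 2 - 1.0 / LAM0 ** 2))
    return baseline + ALPHA * conc_g_dl

print(round(refractive_index(15.0, 37.0, 632.8), 4))
```

Inverting the linear concentration term then gives a hematocrit-style readout from a measured refractive index, which is the clinical use case the abstract mentions.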

  1. Use of empirically based corrosion model to aid steam generator life management

    Energy Technology Data Exchange (ETDEWEB)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W

    2000-07-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both susceptibility and rates of corrosion and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combination of such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that, under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  2. Use of empirically based corrosion model to aid steam generator life management

    International Nuclear Information System (INIS)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W.

    2000-01-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both susceptibility and rates of corrosion and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combination of such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that, under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  3. Intubação difícil em crianças: aplicabilidade do índice de Mallampati

    Directory of Open Access Journals (Sweden)

    Ana Paula S Vieira Santos

    2011-04-01

    Full Text Available BACKGROUND AND OBJECTIVES: Concern about facing a difficult airway brought to light the need to develop predictive tests for difficult intubation. Such tests were primarily developed for adult populations. In pediatric patients, the existing studies have always dealt with patients with congenital malformations, multiple trauma, and newborns. The objective of this study was to verify, in patients aged 4 to 8 years, the applicability of the predictive test for difficult intubation most commonly used in adults, the Mallampati score, correlating it with the Cormack-Lehane grade. METHOD: We studied 108 patients aged between 4 and 8 years, ASA I, without any type of anatomical malformation, genetic syndrome, or cognitive deficit. During the preanesthetic evaluation, patients were assessed with the Mallampati score. After anesthetic induction, the Cormack-Lehane grade was assessed. In the statistical tests, p < 0.05 was considered significant. RESULTS: The Mallampati score showed a significant correlation with the Cormack-Lehane grade. The sensitivity and specificity of the Mallampati score were 75.8% and 96.2%, respectively, but the confidence interval for the sensitivity was very wide. CONCLUSIONS: The Mallampati score proved applicable in children aged 4 to 8 years.
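The wide sensitivity confidence interval reported above is a direct consequence of the small number of difficult laryngoscopies in a cohort of 108. A hypothetical 2x2 table consistent with the reported 75.8% sensitivity makes this concrete (the counts below are invented for illustration, not taken from the study):

```python
import math

# Hypothetical contingency table: 33 difficult laryngoscopies, of which
# the Mallampati score flagged 25 (25/33 = 75.8%); 75 easy airways.
tp, fn = 25, 8     # difficult airways: predicted / missed
tn, fp = 72, 3     # easy airways: correctly cleared / false alarms

def prop_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

sens = prop_ci(tp, tp + fn)
spec = prop_ci(tn, tn + fp)
print("sensitivity %.3f (%.3f-%.3f)" % sens)
print("specificity %.3f (%.3f-%.3f)" % spec)
```

With only 33 positives, the sensitivity interval spans roughly 30 percentage points, while the specificity interval, resting on 75 negatives near 1.0, is much tighter.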

  4. Identifying mechanisms that structure ecological communities by snapping model parameters to empirically observed tradeoffs.

    Science.gov (United States)

    Thomas Clark, Adam; Lehman, Clarence; Tilman, David

    2018-04-01

    Theory predicts that interspecific tradeoffs are primary determinants of coexistence and community composition. Using information from empirically observed tradeoffs to augment the parametrisation of mechanism-based models should therefore improve model predictions, provided that tradeoffs and mechanisms are chosen correctly. We developed and tested such a model for 35 grassland plant species using monoculture measurements of three species characteristics related to nitrogen uptake and retention, which previous experiments indicate as important at our site. Matching classical theoretical expectations, these characteristics defined a distinct tradeoff surface, and models parameterised with these characteristics closely matched observations from experimental multi-species mixtures. Importantly, predictions improved significantly when we incorporated information from tradeoffs by 'snapping' characteristics to the nearest location on the tradeoff surface, suggesting that the tradeoffs and mechanisms we identify are important determinants of local community structure. This 'snapping' method could therefore constitute a broadly applicable test for identifying influential tradeoffs and mechanisms. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
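The 'snapping' step described above amounts to projecting each species' measured characteristics onto the nearest point of the empirically observed tradeoff surface. A minimal numpy sketch, assuming for simplicity that the surface is a plane in trait space (the paper's surface need not be planar, and the trait values here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monoculture measurements of three traits for 8 species.
traits = rng.normal(0, 1, (8, 3))

# Fit the plane minimising orthogonal distance via SVD: the normal is
# the right singular vector of the centred data with the smallest
# singular value.
centroid = traits.mean(axis=0)
_, _, Vt = np.linalg.svd(traits - centroid)
normal = Vt[-1]

def snap(x):
    """Orthogonal projection of a trait vector onto the fitted plane."""
    return x - np.dot(x - centroid, normal) * normal

snapped = np.array([snap(x) for x in traits])
# All snapped points lie exactly on the plane:
print(bool(np.allclose((snapped - centroid) @ normal, 0.0)))
```

Model predictions are then re-run with the snapped trait values; if predictions improve, the tradeoff encoded by the surface is implicated as a real constraint on the community.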

  5. Relative performance of empirical and physical models in assessing the seasonal and annual glacier surface mass balance of Saint-Sorlin Glacier (French Alps)

    Science.gov (United States)

    Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime

    2018-04-01

    This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.

  6. Relative performance of empirical and physical models in assessing the seasonal and annual glacier surface mass balance of Saint-Sorlin Glacier (French Alps

    Directory of Open Access Journals (Sweden)

    M. Réveillet

    2018-04-01

    Full Text Available This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996–2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.

  7. Development of Specialization Scales for the MSPI: A Comparison of Empirical and Inductive Strategies

    Science.gov (United States)

    Porfeli, Erik J.; Richard, George V.; Savickas, Mark L.

    2010-01-01

    An empirical measurement model for interest inventory construction uses internal criteria whereas an inductive measurement model uses external criteria. The empirical and inductive measurement models are compared and contrasted, and then the two models are assessed through tests of the effectiveness and economy of scales for the Medical Specialty…

  8. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  9. Semi-empirical model for the threshold voltage of a double implanted MOSFET and its temperature dependence

    Energy Technology Data Exchange (ETDEWEB)

    Arora, N D

    1987-05-01

    A simple and accurate semi-empirical model for the threshold voltage of a small geometry double implanted enhancement type MOSFET, especially useful in a circuit simulation program like SPICE, has been developed. The effect of short channel length and narrow width on the threshold voltage has been taken into account through a geometrical approximation, which involves parameters whose values can be determined from the curve fitting experimental data. A model for the temperature dependence of the threshold voltage for the implanted devices has also been presented. The temperature coefficient of the threshold voltage was found to change with decreasing channel length and width. Experimental results from various device sizes, both short and narrow, show very good agreement with the model. The model has been implemented in SPICE as part of the complete dc model.

  10. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and an empirical study of a competency model for rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire on rural general practitioner competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed with the questionnaire. The common factors were extracted using the principal component method of exploratory factor analysis and verified with confirmatory factor analysis. The influence of the competency characteristics on work performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ²/df=1.88, GFI=0.94, NFI=0.96, NNFI=0.98, PNFI=0.91, RMSEA=0.068, CFI=0.97, IFI=0.97, RFI=0.96, suggesting good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, targeted cultivation of medical students for rural service, and competency-based performance management of rural general practitioners.

  11. Experimental validation of new empirical models of the thermal properties of food products for safe shipping

    Science.gov (United States)

    Hamid, Hanan H.; Mitchell, Mark; Jahangiri, Amirreza; Thiel, David V.

    2018-04-01

    Temperature controlled food transport is essential for human safety and to minimise food waste. The thermal properties of food are important for determining the heat transfer during the transient stages of transportation (door opening during loading and unloading processes). For example, the temperature of most dairy products must be confined to a very narrow range (3-7 °C). If a predefined critical temperature is exceeded, the food is defined as spoiled and unfit for human consumption. An improved empirical model for the thermal conductivity and specific heat capacity of a wide range of food products was derived based on the food composition (moisture, fat, protein, carbohydrate and ash). The models, developed using linear regression analysis, were compared with published measured parameters as well as with previously published theoretical and empirical models. It was found that the maximum variation in the predicted thermal properties leads to less than 0.3 °C temperature change. The correlation coefficient for these models was 0.96. The t-Stat test (P-value >0.99) demonstrated that the model results are an improvement on previous works. The transient heat transfer based on the food composition and the temperature boundary conditions was found for a Camembert cheese (short cylindrical shape) using a multi-dimensional finite difference method code. The result was verified using the Heat Transfer Today (HTT) educational software, which is based on the finite volume method. The rise of the core temperature from the initial temperature (2.7 °C) to the maximum safe temperature in ambient air (20.24 °C) was predicted to take about 35.4 ± 0.5 min. The simulation results agree very well (+0.2 °C) with the measured temperature data. This improved model impacts temperature estimation during loading and unloading of trucks and provides a clear direction for temperature control in all refrigerated transport applications.
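A minimal sketch of the kind of composition-based linear model the abstract describes, predicting thermal conductivity from mass fractions; the per-component coefficients and the cheese composition below are hypothetical placeholders, not the regression coefficients fitted in the paper.

```python
# Hypothetical per-component conductivities (W/m·K); a real model would fit
# these by linear regression against measured food data.
COEF = {"water": 0.58, "fat": 0.18, "protein": 0.20,
        "carbohydrate": 0.25, "ash": 0.33}

def thermal_conductivity(mass_fractions):
    """Predict k as a mass-fraction-weighted sum of component conductivities."""
    total = sum(mass_fractions.values())
    assert abs(total - 1.0) < 1e-6, "mass fractions must sum to 1"
    return sum(COEF[c] * x for c, x in mass_fractions.items())

camembert = {"water": 0.52, "fat": 0.24, "protein": 0.20,
             "carbohydrate": 0.01, "ash": 0.03}  # illustrative composition
print(round(thermal_conductivity(camembert), 3))  # ≈ 0.397
```

The predicted k would then feed the boundary-value problem that the finite difference simulation solves for the core temperature.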

  12. The relative effectiveness of empirical and physical models for simulating the dense undercurrent of pyroclastic flows under different emplacement conditions

    Science.gov (United States)

    Ogburn, Sarah E.; Calder, Eliza S

    2017-01-01

    High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture

  13. Collective Labour Supply, Taxes, and Intrahousehold Allocation: An Empirical Approach

    NARCIS (Netherlands)

    Bloemen, H.G.

    2017-01-01

    Most empirical studies of the impact of labour income taxation on the labour supply behaviour of households use a unitary modelling approach. In this paper we empirically analyze income taxation and the choice of working hours by combining the collective approach for household behaviour and the

  14. EMPIRICAL MODELS FOR PERFORMANCE OF DRIPPERS APPLYING CASHEW NUT PROCESSING WASTEWATER

    Directory of Open Access Journals (Sweden)

    KETSON BRUNO DA SILVA

    2016-01-01

    The objective of this work was to develop empirical models for the hydraulic performance of drippers operating with cashew nut processing wastewater as a function of operating time, operating pressure and effluent quality. The experiment consisted of two factors: type of dripper (D1 = 1.65 L h-1, D2 = 2.00 L h-1 and D3 = 4.00 L h-1) and operating pressure (70, 140, 210 and 280 kPa), with three replications. The flow variation coefficient (FVC), distribution uniformity coefficient (DUC) and the physicochemical and biological characteristics of the effluent were evaluated every 20 hours up to 160 hours of operation. Data were interpreted through simple and multiple linear stepwise regression models. The regression models fitted to the FVC and DUC as a function of operating time were square root, linear and quadratic, with 17%, 17% and 8%, and 17%, 17% and 0%, respectively. The regression models fitted to the FVC and DUC as a function of operating pressure were square root, linear and quadratic, with 11%, 22% and 0%, and 0%, 22% and 11%, respectively. Multiple linear regressions showed that the dissolved solids content is the main wastewater characteristic interfering with the FVC and DUC values of the drip units D1 (1.65 L h-1) and D3 (4.00 L h-1) operating at a working pressure of 70 kPa (P1).
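The model-selection step described above, choosing among square-root, linear and quadratic forms by least squares, can be sketched on synthetic data; the coefficients and noise level below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

t = np.arange(0, 161, 20, dtype=float)  # operating time, h (0 to 160 in 20 h steps)
# Synthetic DUC data generated from a square-root decline plus small noise
duc = 98.0 - 0.5 * np.sqrt(t) + np.random.default_rng(1).normal(0.0, 0.1, t.size)

designs = {
    "square root": np.column_stack([np.ones_like(t), np.sqrt(t)]),
    "linear":      np.column_stack([np.ones_like(t), t]),
    "quadratic":   np.column_stack([np.ones_like(t), t, t ** 2]),
}

def rss(X, y):
    """Residual sum of squares of the ordinary least-squares fit of X to y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

best = min(designs, key=lambda name: rss(designs[name], duc))
print(best)  # the square-root form should win on data generated from it
```

Repeating this comparison per dripper and pressure level would yield the kind of "which form fits how often" tallies the abstract reports.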

  15. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
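The corner-frequency bias the authors describe can be reproduced with a toy spectral fit; the generalized Brune-type spectrum, the synthetic data, and the grid-search fit in log amplitude below are simplifying assumptions, not the Bayesian hierarchical method of the study.

```python
import numpy as np

def spectrum(f, fc, n):
    """Brune-type source spectrum: flat below the corner fc, falling as f^-n above."""
    return 1.0 / (1.0 + (f / fc) ** n)

f = np.logspace(-2, 1, 200)            # 0.01-10 Hz
observed = spectrum(f, fc=0.2, n=1.6)  # synthetic data: fc = 0.2 Hz, n = 1.6

def best_fc(n, grid=np.linspace(0.05, 1.0, 2000)):
    """Grid search for fc, least squares in log amplitude, falloff n held fixed."""
    errors = [np.sum((np.log(spectrum(f, fc, n)) - np.log(observed)) ** 2)
              for fc in grid]
    return float(grid[int(np.argmin(errors))])

fc_free = best_fc(1.6)   # fitting with the true falloff recovers fc ≈ 0.2 Hz
fc_fixed = best_fc(2.0)  # forcing n = 2 biases the recovered corner upward
print(fc_free, fc_fixed)
```

Because the n = 2 model falls off faster than the true spectrum, the fit compensates by pushing fc upward, the direction of bias the abstract reports for the Kumamoto earthquake.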

  16. Inglorious Empire

    DEFF Research Database (Denmark)

    Khair, Tabish

    2017-01-01

    Review of 'Inglorious Empire: What the British did to India' by Shashi Tharoor, London, Hurst Publishers, 2017, 296 pp., £20.00.

  17. A Semi-empirical Model of the Stratosphere in the Climate System

    Science.gov (United States)

    Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.

    2014-12-01

    Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmosphere-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.

  18. A DISTANCE EDUCATION MODEL FOR JORDANIAN STUDENTS BASED ON AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ahmad SHAHER MASHHOUR

    2007-04-01

    Distance education is expanding worldwide. Numbers of students enrolled in distance education are increasing at very high rates. Distance education is said to be the future of education because it addresses the educational needs of the new millennium. This paper presents the findings of an empirical study of a sample of Jordanian distance education students, distilled into a requirements model that addresses the need for such education at the national level. The responses of the sample show that distance education offers a viable and satisfactory alternative to those who cannot enroll in regular residential education. The study also shows that the shortcomings of the regular and the current form of distance education in Jordan can be overcome by the use of modern information technology.

  19. Conceptual modeling in systems biology fosters empirical findings: the mRNA lifecycle.

    Directory of Open Access Journals (Sweden)

    Dov Dori

    One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to model the structure and behavior of biological systems concurrently. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems both diagrammatically and textually, formally and intuitively, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated consequently while maintaining consistency.

  20. EMPIRE-II 2.18, Comprehensive Nuclear Model Code, Nucleons, Ions Induced Cross-Sections

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Panini, Gian Carlo

    2003-01-01

    1 - Description of program or function: EMPIRE-II is a flexible code for calculation of nuclear reactions in the frame of combined optical, Multi-step Direct (TUL), Multi-step Compound (NVWY) and statistical (Hauser-Feshbach) models. The incident particle can be a nucleon or any nucleus (Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron induced reactions and extends up to several hundreds of MeV for the Heavy Ion induced reactions. IAEA1169/06: This version corrects an error in the Absoft compile procedure. 2 - Method of solution: For projectiles with A<5, EMPIRE calculates the fusion cross section using spherical optical model transmission coefficients. In the case of Heavy Ion induced reactions the fusion cross section can be determined using various approaches, including a simplified coupled channels method (code CCFUS). Pre-equilibrium emission is treated in terms of quantum-mechanical theories (TUL-MSD and NVWY-MSC). The MSC contribution to the gamma emission is taken into account. These calculations are followed by statistical decay with an arbitrary number of subsequent particle emissions. Gamma-ray competition is considered in detail for every decaying compound nucleus. Different options for level densities are available, including a dynamical approach with collective effects taken into account. EMPIRE contains the following third-party codes converted into subroutines: SCAT2 by O. Bersillon; ORION and TRISTAN by H. Lenske and H. Wolter; CCFUS by C.H. Dasso and S. Landowne; BARMOM by A. Sierk. 3 - Restrictions on the complexity of the problem: The code can be easily adjusted to the problem by changing dimensions in the dimensions.h file. The actual limits are set by the available memory. In the current formulation, up to 4 ejectiles plus gamma are allowed. This limit can be relaxed

  1. Aplicabilidade do Brums: estados de humor em atletas de voleibol e tênis no alto rendimento

    Directory of Open Access Journals (Sweden)

    Tatiana Marcela Rotta

    2014-12-01

    Introduction: Mood states are indicators that strongly support athletic performance and the protection of athletes' health. Objective: To analyze the applicability of the BRUMS instrument in assessing the mood-state profile of high-performance male athletes in volleyball (n=59) and tennis (n=69). Methods: The independent variables sport (volleyball and tennis), time of high-performance practice (up to 2 years; more than 2 years) and age category (youth and adult) were compared against the dependent mood-profile variables (tension, depression, anger, vigor, fatigue and mental confusion). This causal-comparative study used the BRUMS instrument, validated in Brazil with the participation of the authors of this study, for data collection. Results: MANOVA showed differences between sports (F=4.289, p=0.001; Hotelling's Trace = 0.216) and time of practice (F=5.845, p<0.001; Hotelling's Trace = 0.295) for vigor. The interaction of sport versus time of practice was significant (p=0.003), accounting for about 7% of the variance. Tension also differed at p=0.05 (volleyball vs. time of practice) and anger at p=0.001 (volleyball vs. age category). In tennis, depression (p=0.001), anger (p=0.04) and mental confusion (p<0.02) showed higher means among adult, more experienced athletes. Conclusion: Sport accounted for 11% of the variation in mood profile (p=0.001).

  2. A control-oriented real-time semi-empirical model for the prediction of NOx emissions in diesel engines

    International Nuclear Information System (INIS)

    D’Ambrosio, Stefano; Finesso, Roberto; Fu, Lezhong; Mittica, Antonio; Spessa, Ezio

    2014-01-01

    Highlights: • New semi-empirical correlation to predict NOx emissions in diesel engines. • Based on a real-time three-zone diagnostic combustion model. • The model is of fast application, and is therefore suitable for control-oriented applications. - Abstract: The present work describes the development of a fast control-oriented semi-empirical model that is capable of predicting NOx emissions in diesel engines under steady state and transient conditions. The model takes into account the maximum in-cylinder burned gas temperature of the main injection, the ambient gas-to-fuel ratio, the mass of injected fuel, the engine speed and the injection pressure. The evaluation of the temperature of the burned gas is based on a three-zone real-time diagnostic thermodynamic model that has recently been developed by the authors. Two correlations have also been developed in the present study, in order to evaluate the maximum burned gas temperature during the main combustion phase (derived from the three-zone diagnostic model) on the basis of significant engine parameters. The model has been tuned and applied to two diesel engines that feature different injection systems of the indirect acting piezoelectric, direct acting piezoelectric and solenoid type, respectively, over a wide range of steady-state operating conditions. The model has also been validated in transient operation conditions, over the urban and extra-urban phases of an NEDC. It has been shown that the proposed approach is capable of improving the predictive capability of NOx emissions, compared to previous approaches, and is characterized by a very low computational effort, as it is based on a single-equation correlation. It is therefore suitable for real-time applications, and could also be integrated in the engine control unit for closed-loop or feed-forward control tasks

  3. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who continuously had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.

  5. Empirical potential and elasticity theory modelling of interstitial dislocation loops in UO2 for cluster dynamics application

    International Nuclear Information System (INIS)

    Le-Prioux, Arno

    2017-01-01

    During in-reactor irradiation, the microstructure of UO2 changes and deteriorates, causing modifications of its physical and mechanical properties. The kinetic models used to describe these changes, such as cluster dynamics (CRESCENDO calculation code), consider the main microstructural elements, namely cavities and interstitial dislocation loops, and provide a rather rough description of loop thermodynamics. In order to tackle this issue, this work has led to the development of a thermodynamic model of interstitial dislocation loops based on empirical potential calculations. The model considers two types of interstitial dislocation loops on two different size domains: Type 1: dislocation loops similar to Frank partials in F.C.C. materials, which are stable in the smaller size domain; Type 2: perfect dislocation loops of Burgers vector (a/2)(110), stable in the larger size domain. The analytical formula used to compute the interstitial dislocation loop formation energies is the one for circular loops, modified in order to take into account the effects of the dislocation core, which are significant at smaller sizes. The parameters have been determined by empirical potential calculations of the formation energies of prismatic pure edge dislocation loops. The effect of habit plane reorientation on the formation energies of perfect dislocation loops has been taken into account by a simple interpolation method. All the different types of loops seen during TEM observations are thus accounted for by the model. (author)

  6. An empirical model describing the postnatal growth of organs in ICRP reference humans: Pt. 1

    International Nuclear Information System (INIS)

    Walker, J.T.

    1991-01-01

    An empirical model is presented for describing the postnatal mass growth of lungs in ICRP reference humans. A combined exponential and logistic function containing six parameters is fitted to ICRP 23 lung data using a weighted non-linear least squares technique. The results indicate that the model delineates the data well. Further analysis shows that reference male lungs attain a higher pubertal peak velocity (PPV) and adult mass size than female lungs, although the latter reach their PPV and adult mass size first. Furthermore, the model shows that lung growth rates in infants are two to three orders of magnitude higher than those in mature adults. This finding is important because of the possible association between higher radiation risks in infants' organs that have faster cell turnover rates compared to mature adult organs. The significance of the model for ICRP dosimetric purposes will be discussed. (author)
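A combined exponential and logistic mass-growth curve of the general shape described can be sketched as follows; this five-parameter form and its parameter values are hypothetical illustrations (the paper's function has six parameters fitted to ICRP 23 lung data).

```python
import math

def organ_mass(t, a=300.0, b=0.5, c=700.0, k=0.8, t0=13.0):
    """Hypothetical postnatal mass curve (grams) at age t (years):
    a saturating exponential infant term plus a logistic pubertal term
    peaking in growth velocity near age t0."""
    return a * (1.0 - math.exp(-b * t)) + c / (1.0 + math.exp(-k * (t - t0)))

# Early growth is much faster than late-adolescent growth, and the curve
# saturates at the adult mass a + c.
print(organ_mass(1.0) - organ_mass(0.0))  # mass gained in the first year
print(organ_mass(40.0))                   # ≈ 1000 g adult plateau
```

Fitting such a curve to organ-mass data with weighted non-linear least squares, as the abstract describes, would replace these placeholder values with estimates and standard errors.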

  7. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  8. Empirical methods for estimating future climatic conditions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from model simulations of forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreement. The premise underlying this approach for predicting anthropogenic climate change is based on associating the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships leading to the regional variations in air temperature and other meteorological elements can be deduced and interpreted based on the use of empirical data describing climatic conditions for past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from factors contributing to past climate changes that differ from those driving future changes and, in other cases, result from the possible influences of changes in orography and geography on regional climatic conditions over time.

  9. Application of an empirical model in CFD simulations to predict the local high temperature corrosion potential in biomass fired boilers

    International Nuclear Information System (INIS)

    Gruber, Thomas; Scharler, Robert; Obernberger, Ingwald

    2015-01-01

    To gain reliable data for the development of an empirical model for the prediction of the local high temperature corrosion potential in biomass fired boilers, online corrosion probe measurements have been carried out. The measurements have been performed in a specially designed fixed bed/drop tube reactor in order to simulate a superheater boiler tube under well-controlled conditions. The investigated boiler steel 13CrMo4-5 is commonly used as steel for superheater tube bundles in biomass fired boilers. Within the test runs the flue gas temperature at the corrosion probe has been varied between 625 °C and 880 °C, while the steel temperature has been varied between 450 °C and 550 °C to simulate typical current and future live steam temperatures of biomass fired steam boilers. To investigate the dependence on the flue gas velocity, variations from 2 m·s−1 to 8 m·s−1 have been considered. The empirical model developed fits the measured data sufficiently well. Therefore, the model has been applied within a Computational Fluid Dynamics (CFD) simulation of flue gas flow and heat transfer to estimate the local corrosion potential of a wood chips fired 38 MW steam boiler. Additionally to the actual state analysis, two further simulations have been carried out to investigate the influence of enhanced steam temperatures and of a change of the flow direction of the final superheater tube bundle from parallel to counter-flow on the local corrosion potential. - Highlights: • Online corrosion probe measurements in a fixed bed/drop tube reactor. • Development of an empirical corrosion model. • Application of the model in a CFD simulation of flow and heat transfer. • Variation of boundary conditions and their effects on the corrosion potential

  10. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. When the response depends on a covariate, it may do so through some function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
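
The pool-adjacent-violators algorithm (PAVA) named above is short enough to sketch; below is a minimal, unweighted pure-Python version for a non-decreasing fit (the function name and formulation are illustrative, not taken from the paper):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares fit of a sequence under a
    monotone (non-decreasing) constraint. Returns the fitted values."""
    # Each block stores [sum, count]; adjacent blocks whose means
    # violate monotonicity are merged (pooled).
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the last block's mean falls below its predecessor's
        # (compared via cross-multiplication to avoid division).
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fitted = []
    for total, count in blocks:
        fitted.extend([total / count] * count)
    return fitted

print(pava([1, 3, 2, 4]))  # → [1.0, 2.5, 2.5, 4.0]
```

Each observation starts as its own block; whenever a block's mean drops below its predecessor's, the two are pooled into their common mean, which is exactly how the monotonicity constraint is enforced.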

  11. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    Science.gov (United States)

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

    This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the proposition of Technology Acceptance…

  12. Establishment of Grain Farmers' Supply Response Model and Empirical Analysis under Minimum Grain Purchase Price Policy

    OpenAIRE

    Zhang, Shuang

    2012-01-01

    Based on farmers' supply behavior theory and price expectations theory, this paper establishes grain farmers' supply response model of two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy can have price-oriented effect on grain production and supply in the major producing areas. Empirical analysis shows that the minimum purchase price published annually by the government has significant positive imp...

  13. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained from all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  14. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  15. On the Complete Instability of Empirically Implemented Dynamic Leontief Models

    NARCIS (Netherlands)

    Steenge, A.E.

    1990-01-01

    On theoretical grounds, real world implementations of forward-looking dynamic Leontief systems were expected to be stable. Empirical work, however, showed the opposite to be true: all investigated systems proved to be unstable. In fact, an extreme form of instability ('complete instability')

  16. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution from an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (i.e., 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C⁻¹ to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment were examined in depth at various time scales during 2005-2012.
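
The linear relationships reported above (e.g., GPSPWV against Td) come down to ordinary least-squares fits of slope, intercept and correlation; a minimal sketch with made-up data (the values below are illustrative, not the study's measurements):

```python
import math

def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b; returns (a, b, r)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx               # slope
    b = my - a * mx             # intercept
    r = sxy / math.sqrt(sxx * syy)  # Pearson correlation
    return a, b, r

# Hypothetical dew point (°C) vs PWV (mm) pairs:
td = [-10.0, -5.0, 0.0, 5.0]
pwv = [1.0, 1.5, 2.0, 2.5]
a, b, r = linear_fit(td, pwv)
print(a, b, r)  # slope ≈ 0.1 mm/°C, intercept ≈ 2.0 mm, r ≈ 1.0
```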

  17. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. 
It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
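
The construction described above can be illustrated for a scalar weighted least-squares problem: the theoretical variance is rescaled by the average weighted squared residual, so unmodeled error sources inflate the reported uncertainty (a deliberately simplified one-parameter sketch, not the paper's full orbit-determination formulation):

```python
def wls_scalar(x, y, w):
    """Weighted least squares for y ≈ a*x: returns the estimate, its
    theoretical variance, and an empirical variance scaled by the
    average weighted squared residual."""
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    a = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) / sxx
    var_theory = 1.0 / sxx  # assumes weights are 1/sigma^2
    residuals = [yi - a * xi for xi, yi in zip(x, y)]
    mean_wr2 = sum(wi * ri * ri for wi, ri in zip(w, residuals)) / len(x)
    # Residuals larger than the assumed noise inflate the covariance;
    # smaller residuals shrink it.
    var_empirical = mean_wr2 * var_theory
    return a, var_theory, var_empirical

a, vt, ve = wls_scalar([1.0, 2.0, 3.0], [1.1, 1.9, 3.2], [1.0, 1.0, 1.0])
print(a, vt, ve)
```

With a perfect model the residuals vanish and the empirical variance collapses to zero, while the theoretical variance stays fixed by the assumed measurement noise alone.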

  18. Empirical estimates to reduce modeling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges

    International Nuclear Information System (INIS)

    Mishra, U; Jastrow, J D; Matamala, R; Fan, Z; Miller, R M; Hugelius, G; Kuhry, P; Koven, C D; Riley, W J; Harden, J W; Ping, C L; Michaelson, G J; McGuire, A D; Tarnocai, C; Schaefer, K; Schuur, E A G; Jorgenson, M T; Hinzman, L D

    2013-01-01

    The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges. (letter)

  19. Energy and the future of human settlement patterns: theory, models and empirical considerations

    Energy Technology Data Exchange (ETDEWEB)

    Zucchetto, J

    1983-11-01

    A review of the diverse literature pertaining to the organization of human settlements is presented, with special emphasis on the influence that energy may have on the concentration vs. dispersal of human populations. A simple, abstract energy-based model of urban growth is presented in order to capture some of the qualitative behavior of competition between urban core and peripheral regions. Empirical difficulties associated with the determination of energy consumption and population density are illustrated with an analysis of counties in Florida. There is no hard evidence that large urban systems are inherently more energy efficient than small ones, so a future world of energy scarcity cannot be said to imply a selection for urban agglomeration.

  20. Empirical Models of Demand for Out-Patient Physician Services and Their Relevance to the Assessment of Patient Payment Policies: A Critical Review of the Literature

    Directory of Open Access Journals (Sweden)

    Olga Skriabikova

    2010-06-01

    This paper reviews the existing empirical micro-level models of demand for out-patient physician services where the size of patient payment is included either directly as an independent variable (when a flat-rate co-payment fee) or indirectly as a level of deductibles and/or co-insurance defined by the insurance coverage. The paper also discusses the relevance of these models for the assessment of patient payment policies. For this purpose, a systematic literature review is carried out. In total, 46 relevant publications were identified. These publications are classified into categories based on their general approach to demand modeling, specifications of data collection, data analysis, and main empirical findings. The analysis indicates a rising research interest in the empirical micro-level models of demand for out-patient physician services that incorporate the size of patient payment. Overall, the size of patient payments, consumer socio-economic and demographic features, and quality of services provided emerge as important determinants of demand for out-patient physician services. However, there is a great variety in the modeling approaches and inconsistencies in the findings regarding the impact of price on demand for out-patient physician services. Hitherto, the empirical research fails to offer policy-makers a clear strategy on how to develop a country-specific model of demand for out-patient physician services suitable for the assessment of patient payment policies in their countries. In particular, theoretically important factors, such as provider behavior, consumer attitudes, experience and culture, and informal patient payments, are not considered. Although we recognize that it is difficult to measure these factors and to incorporate them in the demand models, it is apparent that there is a gap in research for the construction of effective patient payment schemes.

  1. Empirical models of demand for out-patient physician services and their relevance to the assessment of patient payment policies: a critical review of the literature.

    Science.gov (United States)

    Skriabikova, Olga; Pavlova, Milena; Groot, Wim

    2010-06-01

    This paper reviews the existing empirical micro-level models of demand for out-patient physician services where the size of patient payment is included either directly as an independent variable (when a flat-rate co-payment fee) or indirectly as a level of deductibles and/or co-insurance defined by the insurance coverage. The paper also discusses the relevance of these models for the assessment of patient payment policies. For this purpose, a systematic literature review is carried out. In total, 46 relevant publications were identified. These publications are classified into categories based on their general approach to demand modeling, specifications of data collection, data analysis, and main empirical findings. The analysis indicates a rising research interest in the empirical micro-level models of demand for out-patient physician services that incorporate the size of patient payment. Overall, the size of patient payments, consumer socio-economic and demographic features, and quality of services provided emerge as important determinants of demand for out-patient physician services. However, there is a great variety in the modeling approaches and inconsistencies in the findings regarding the impact of price on demand for out-patient physician services. Hitherto, the empirical research fails to offer policy-makers a clear strategy on how to develop a country-specific model of demand for out-patient physician services suitable for the assessment of patient payment policies in their countries. In particular, theoretically important factors, such as provider behavior, consumer attitudes, experience and culture, and informal patient payments, are not considered. Although we recognize that it is difficult to measure these factors and to incorporate them in the demand models, it is apparent that there is a gap in research for the construction of effective patient payment schemes.

  2. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  3. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material, which may be used to identify new promising semiconductors, and is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  4. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79%. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87%. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were significantly overestimated compared with those from the random-parameters negative binomial model.

  5. Aspectos teórico-metodológicos da história e sua aplicabilidade na prática de ensino

    Directory of Open Access Journals (Sweden)

    Leandro Mendonça Barbosa

    2012-10-01

    It is noticeable that the directions of study in the sciences of education have been changing with the new perspectives that intellectuals and educators themselves are bringing to this scientific theory. Yet when this theory is put into practice, in the classroom, it is often still not very dynamic, and is far less discussed and analyzed than its theoretical sphere. What we intend with this article is to establish the importance of the theoretical-methodological aspects discussed in historical science, and the equal importance of applying this methodology in teaching practice with undergraduate students, so that they, in turn, apply it in regular-school classrooms. In the first part of this reflection we will elucidate how theory and the treatment of sources were viewed by the traditional field of history (the so-called positivists) and the applicability of these theoretical-methodological concepts in practical research; in the second part we will briefly analyze theoretical-methodological practice in historical research. Finally, in the third part of this article we will analyze two classroom realities proposed by two professors of Teaching Practice in History at two different federal universities, and we will try to understand the methods used in class for the formulation of historical concepts.

  6. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Natalia MOROIANU-DUMITRESCU

    2015-06-01

    During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the New Member States from Central and Eastern Europe. The most usual empirical approaches use β- and σ-convergence, originally developed in a series of neo-classical models. To date, the EU integration process has proven to be accompanied by an increase of regional inequalities. In order to determine the existence of a similar increase of the inequalities between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides an empirical modelling of economic convergence allowing evaluation of the level and evolution of the inter-regional inequalities over a period of more than a decade, from 1995 to 2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representation, including non-linear regression, and the associated tables summarizing numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasised convergence in the middle time subinterval can be correlated with the drastic pre-accession changes on the economic, political and social level, and with the opening of the Schengen borders for the Romanian labor force in 2002.
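
The weighted coefficient of variation used in such σ-convergence studies is straightforward to compute; a sketch with hypothetical county GDP-per-capita figures and population weights (the numbers are invented for illustration):

```python
import math

def weighted_cv(values, weights):
    """Population-weighted coefficient of variation: the weighted standard
    deviation around the weighted mean, divided by that mean."""
    total = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / total
    var = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / total
    return math.sqrt(var) / mean

# Hypothetical GDP per capita by county, with county populations (millions):
gdp = [10_000, 12_000, 8_000]
pop = [2.0, 1.0, 1.0]
print(round(weighted_cv(gdp, pop), 4))  # → 0.1414
```

A falling value of this index over successive years is read as σ-convergence: the dispersion of regional incomes around the (population-weighted) national mean is shrinking.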

  7. EVOLVING AN EMPIRICAL METHODOLOGY FOR DETERMINING ...

    African Journals Online (AJOL)

    The uniqueness of this approach, is that it can be applied to any forest or dynamic feature on the earth, and can enjoy universal application as well. KEY WORDS: Evolving empirical methodology, innovative mathematical model, appropriate interval, remote sensing, forest environment planning and management. Global Jnl ...

  8. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

    An interface between semi-empirical methods and the polarizable continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  9. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class and streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of ensemble forecasts.
The comparison of the empirical and dynamical approaches shows the limits of the empirical approach which is not able to take into account hydrological
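
The dressing idea described above can be sketched simply: stratify past forecast errors by the quantile class of the forecast magnitude, then apply the matching error sample to a new deterministic forecast to produce an ensemble (an illustrative sketch under strong simplifying assumptions, not EDF's operational implementation):

```python
def dress_forecast(past_forecasts, past_obs, new_forecast, n_classes=2):
    """Dress a deterministic forecast with empirical multiplicative errors
    drawn from past cases in the same forecast-magnitude quantile class."""
    # Pair past forecasts with observations, sorted by forecast magnitude.
    pairs = sorted(zip(past_forecasts, past_obs))
    # Split into equal-count quantile classes (assumes divisibility).
    per_class = len(pairs) // n_classes
    classes = [pairs[i * per_class:(i + 1) * per_class]
               for i in range(n_classes)]
    # Pick the class whose forecast range covers the new forecast.
    chosen = classes[-1]
    for cls in classes:
        if new_forecast <= cls[-1][0]:
            chosen = cls
            break
    # One ensemble member per past case: rescale by its obs/forecast ratio.
    return [new_forecast * obs / fcst for fcst, obs in chosen]

# Hypothetical past streamflow forecasts and matching observations:
ensemble = dress_forecast([10.0, 20.0, 100.0, 200.0],
                          [12.0, 18.0, 90.0, 240.0], 15.0)
print(ensemble)  # → [18.0, 13.5]
```

Conditioning the error sample on the quantile class is what distinguishes this from naively dressing every forecast with the same error distribution: small and large flows get their own error climatology.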

  10. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
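
The traditional semilogarithmic interpolation that serves as the baseline above is easy to sketch: given two measured (thickness, transmission) points bracketing 50% transmission, assume locally exponential attenuation and solve for the half-value thickness (the measurement values are illustrative, not from the study):

```python
import math

def hvl_semilog(x1, t1, x2, t2):
    """Half-value layer by semilogarithmic (exponential) interpolation
    between two (thickness, transmission) points bracketing T = 0.5."""
    # Assume T(x) = exp(-mu * x) locally; fit mu from the two points.
    mu = (math.log(t1) - math.log(t2)) / (x2 - x1)
    # Thickness at which transmission falls to one half:
    return x1 + (math.log(t1) - math.log(0.5)) / mu

# Illustrative measurements: 60% transmission at 2 mm Al, 40% at 4 mm Al.
print(round(hvl_semilog(2.0, 0.60, 4.0, 0.40), 3))  # → 2.899 (mm Al)
```

For a truly monoenergetic beam this interpolation is exact; for polyenergetic beams the effective attenuation coefficient changes with depth (beam hardening), which is the regime where the paper's Lambert W model is reported to do better.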

  11. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    Science.gov (United States)

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  12. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  13. Empirical P-L-C relations for delta Scuti stars

    International Nuclear Information System (INIS)

    Gupta, S.K.

    1978-01-01

    Separate P-L-C relations have been empirically derived by sampling the delta Scuti stars according to their pulsation modes. The results based on these relations have been compared with those estimated from the model based P-L-C relations and the other existing empirical P-L-C relations. It is found that a separate P-L-C relation for each pulsation mode provides a better correspondence with observations. (Auth.)
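An empirical P-L-C relation of the common linear form M = a*log10(P) + b*(B-V) + c can be derived by ordinary least squares. The sketch below, with made-up coefficients and synthetic stars rather than the paper's data, shows the fitting step via the normal equations:

```python
import math

def fit_plc(periods, colors, mags):
    """Least-squares fit of the assumed form M = a*log10(P) + b*(B-V) + c."""
    rows = [[math.log10(p), bv, 1.0] for p, bv in zip(periods, colors)]
    # normal equations: (A^T A) x = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, mags)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    aug = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, 3):
            f = aug[r][col] / aug[col][col]
            for c in range(col, 4):
                aug[r][c] -= f * aug[col][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (aug[i][3] - sum(aug[i][j] * x[j] for j in range(i + 1, 3))) / aug[i][i]
    return x  # (a, b, c)

# synthetic check with invented coefficients (a, b, c) = (-3.0, 2.0, 1.5)
periods = [0.05, 0.10, 0.20, 0.08, 0.15]
colors = [0.20, 0.30, 0.25, 0.35, 0.28]
mags = [-3.0 * math.log10(p) + 2.0 * bv + 1.5 for p, bv in zip(periods, colors)]
a, b, c = fit_plc(periods, colors, mags)
```

Fitting each pulsation mode separately, as the paper recommends, simply means calling `fit_plc` once per mode-sorted subsample.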

  14. Development of ANC-type empirical two-phase pump model for full size CANDU primary heat transport pump

    International Nuclear Information System (INIS)

    Chan, A.M.C.; Huynh, H.M.

    2004-01-01

    The development of an ANC-type empirical two-phase pump model for CANDU (Canadian Deuterium) reactor primary heat transport pumps is described in the present paper. The model was developed based on Ontario Hydro Technologies' full scale Darlington pump first quadrant test data. The functional form of the ANC model which is widely used was chosen to facilitate the implementation of the model into existing computer codes. The work is part of a bigger test program with the aims: (1) to produce high quality pump performance data under off-normal operating conditions using both full-size and model scale pumps; (2) to advance our basic understanding of the dominant mechanisms affecting pump performance based on more detailed local measurements; and (3) to develop a 'best-estimate' or improved pump model for use in reactor licensing and safety analyses. (author)

  15. GIS-based analysis and modelling with empirical and remotely-sensed data on coastline advance and retreat

    Science.gov (United States)

    Ahmad, Sajid Rashid

    With the understanding that far more research remains to be done on the development and use of innovative and functional geospatial techniques and procedures to investigate coastline changes this thesis focussed on the integration of remote sensing, geographical information systems (GIS) and modelling techniques to provide meaningful insights on the spatial and temporal dynamics of coastline changes. One of the unique strengths of this research was the parameterization of the GIS with long-term empirical and remote sensing data. Annual empirical data from 1941--2007 were analyzed by the GIS, and then modelled with statistical techniques. Data were also extracted from Landsat TM and ETM+ images. The band ratio method was used to extract the coastlines. Topographic maps were also used to extract digital map data. All data incorporated into ArcGIS 9.2 were analyzed with various modules, including Spatial Analyst, 3D Analyst, and Triangulated Irregular Networks. The Digital Shoreline Analysis System was used to analyze and predict rates of coastline change. GIS results showed the spatial locations along the coast that will either advance or retreat over time. The linear regression results highlighted temporal changes which are likely to occur along the coastline. Box-Jenkins modelling procedures were utilized to determine statistical models which best described the time series (1941--2007) of coastline change data. After several iterations and goodness-of-fit tests, second-order spatial cyclic autoregressive models, first-order autoregressive models and autoregressive moving average models were identified as being appropriate for describing the deterministic and random processes operating in Guyana's coastal system. The models highlighted not only cyclical patterns in advance and retreat of the coastline, but also the existence of short and long-term memory processes. 
Long-term memory processes could be associated with mudshoal propagation and stabilization while short
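A minimal example of the Box-Jenkins style fitting described above: estimating a first-order autoregressive model with the Yule-Walker equations. The series here is synthetic with a known coefficient, standing in for the 1941-2007 coastline-change data, which are not reproduced in the abstract:

```python
import random

def fit_ar1(series):
    """Yule-Walker estimate for x_t = c + phi * x_(t-1) + e_t."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    var = sum(d * d for d in dev) / n
    lag1 = sum(dev[t] * dev[t - 1] for t in range(1, n)) / n
    phi = lag1 / var
    return mean * (1.0 - phi), phi  # (intercept c, AR coefficient phi)

# synthetic AR(1) series with phi = 0.6 (illustrative, not the Guyana record)
random.seed(42)
x, series = 0.0, []
for _ in range(5000):
    x = 0.6 * x + random.gauss(0.0, 1.0)
    series.append(x)
c, phi = fit_ar1(series)
```

A phi estimate close to the true 0.6 indicates short-term memory; the long-memory (ARFIMA-like) behaviour mentioned in the abstract would show up as autocorrelations decaying much more slowly than phi**k.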

  16. Empirical method to calculate Clinch River Breeder Reactor (CRBR) inlet plenum transient temperatures

    International Nuclear Information System (INIS)

    Howarth, W.L.

    1976-01-01

Sodium flow enters the CRBR inlet plenum via three loops or inlets. An empirical equation was developed to calculate transient temperatures in the CRBR inlet plenum from known loop flows and temperatures. The constants in the empirical equation were derived from 1/4-scale Inlet Plenum Model tests using water as the test fluid. The sodium temperature distribution was simulated by an electrolyte. Step electrolyte transients at 100 percent model flow were used to calculate the equation constants. Step electrolyte runs at 50 percent and 10 percent flow confirmed that the constants were independent of flow. A transient that simultaneously varied flow rate and electrolyte was also tested. Agreement of the test results with the empirical equation results was good, which verifies the empirical equation.
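A perfectly mixed plenum driven by the flow-weighted inlet temperature is the simplest stand-in for an empirical equation of this kind. The sketch below is illustrative only; the actual CRBR constants came from the 1/4-scale tests and are not given in the abstract:

```python
def mixed_mean_temp(flows, temps):
    """Flow-weighted mean of the loop inlet temperatures."""
    return sum(f * t for f, t in zip(flows, temps)) / sum(flows)

def step_plenum_temp(t_plenum, flows, temps, volume, dt):
    """One explicit-Euler step of a perfectly mixed plenum:
    dT/dt = (T_mix - T) * Q_total / V.  Stability requires dt < V / Q_total."""
    tau = volume / sum(flows)  # perfect-mixing time constant
    return t_plenum + dt * (mixed_mean_temp(flows, temps) - t_plenum) / tau

# hypothetical transient: plenum starts at 400, all three loops step to 500
t = 400.0
for _ in range(500):
    t = step_plenum_temp(t, [1.0, 1.0, 1.0], [500.0, 500.0, 500.0], 10.0, 0.1)
```

The plenum temperature relaxes exponentially toward the mixed inlet temperature with time constant V/Q, which is the qualitative behaviour the step-transient tests were designed to capture.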

  17. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

    The present investigation evaluates the influence of three different step chute geometry when skimming flow was allowed over them with the aim of determining the aerated flow length which is a significant factor when developing empirical equations for estimating aeration efficiency of flow. Overall, forty experiments were ...

  18. Radiosensitivity of grapevines. Empirical modelling of the radiosensitivity of some clones to x-ray irradiation. Pt. 1

    International Nuclear Information System (INIS)

    Koeroesi, F.; Jezierska-Szabo, E.

    1999-01-01

    Empirical and formal (Poisson) models were utilized, applying experimental growth data to characterize the radiosensitivity of six grapevine clones to X-ray irradiation. According to the radiosensitivity constants (k), target numbers (n) and volumes, GR 37 doses and energy deposition, the following radiosensitivity order has been found for various vine brands: Chardonnay clone type < Harslevelue K. 9 < Koevidinka K. 8 < Muscat Ottonel clone type < Irsai Oliver K. 11 < Cabernet Sauvignon E. 153. The model can be expanded to describe the radiosensitivity of other plant species and varieties, and also the efficiency of various radioprotecting agents and conditions. (author)
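The Poisson multi-target model referred to above has a standard closed form, S(D) = 1 - (1 - exp(-k*D))**n, from which a "37%" dose can be found numerically. The k value used in the example is arbitrary, not one of the fitted grapevine constants:

```python
import math

def survival(dose, k, n):
    """Multi-target single-hit (Poisson) model: all n targets must be hit."""
    return 1.0 - (1.0 - math.exp(-k * dose)) ** n

def d37(k, n, lo=0.0, hi=1000.0):
    """Dose reducing survival/growth to 1/e (a GR-37-style dose), by bisection."""
    target = 1.0 / math.e
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if survival(mid, k, n) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For n = 1 the model is purely exponential and d37 equals 1/k; a larger target number n produces the characteristic shoulder, pushing the 37% dose higher, which is how k and n together rank the clones' radiosensitivity.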

  19. Antecedents and Consequences of Individual Performance Analysis of Turnover Intention Model (Empirical Study of Public Accountants in Indonesia)

    OpenAIRE

    Raza, Hendra; Maksum, Azhar; Erlina; Lumban Raja, Prihatin

    2014-01-01

This study aims to examine empirically the antecedents of individual performance and its consequence, turnover intention, in public accounting firms. Eight variables are measured: auditors' empowerment, innovation professionalism, role ambiguity, role conflict, organizational commitment, individual performance, and turnover intention. Data analysis is based on 163 public accountants using Structural Equation Modeling assisted with an appli...

  20. Empirical model for calculating vapor-liquid equilibrium and associated phase enthalpy for the CO2--O2--Kr--Xe system for application to the KALC process

    International Nuclear Information System (INIS)

    Glass, R.W.; Gilliam, T.M.; Fowler, V.L.

    1976-01-01

An empirical model is presented for vapor-liquid equilibria and enthalpy for the CO2-O2 system. In the model, krypton and xenon in very low concentrations are combined with the CO2-O2 system, thereby representing the total system of primary interest in the High-Temperature Gas-Cooled Reactor program for removing krypton from off-gas generated during the reprocessing of spent fuel. Selected properties of the individual and combined components being considered are presented in the form of tables and empirical equations.

  1. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics

  2. Construction and utilization of linear empirical core models for PWR in-core fuel management

    International Nuclear Information System (INIS)

    Okafor, K.C.

    1988-01-01

An empirical core-model construction procedure for pressurized water reactor (PWR) in-core fuel management is developed that allows determining the optimal BOC k∞ profiles in PWRs as a single linear-programming problem and thus facilitates the overall optimization process for in-core fuel management due to algorithmic simplification and reduction in computation time. The optimal profile is defined as one that maximizes cycle burnup. The model construction scheme treats the fuel-assembly power fractions, burnup, and leakage as state variables and BOC zone enrichments as control variables. The core model consists of linear correlations between the state and control variables that describe fuel-assembly behavior in time and space. These correlations are obtained through time-dependent two-dimensional core simulations. The core model incorporates the effects of composition changes in all the enrichment control zones on a given fuel assembly and is valid at all times during the cycle for a given range of control variables. No assumption is made on the geometry of the control zones. A scatter-composition distribution, as well as annular, can be considered for model construction. The application of the methodology to a typical PWR core indicates good agreement between the model and exact simulation results

  3. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

Empirical melt (EM) models are often preferred to surface energy balance (SEB) models to calculate melt amounts of snow and ice in hydrological modelling of high-elevation catchments. The most common reasons to support this decision are that, in comparison to SEB models, EM models require lower levels of meteorological data, complexity and computational costs. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot be easily reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model on dry high-elevation sites where sublimation amounts - which are not explicitly accounted for in the EM model - represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWS), collected during the ablation season 2013-14, at elevations between 3466 and 4775 m asl, on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied to each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; nevertheless, the results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation.
In fact, we
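The ETI model itself is compact enough to state directly: melt is a linear function of air temperature and absorbed shortwave radiation above a temperature threshold, and zero otherwise, with no sublimation term at all. The factor values below are placeholders, not the calibrated site parameters:

```python
def eti_melt(temp_c, sw_in, albedo, tf=0.05, srf=0.0094, t_threshold=1.0):
    """Enhanced Temperature Index melt rate (mm w.e. per hour):
    M = TF*T + SRF*(1 - albedo)*G  when T > threshold, else 0.
    TF and SRF here are illustrative and must be calibrated per site."""
    if temp_c <= t_threshold:
        return 0.0
    return tf * temp_c + srf * (1.0 - albedo) * sw_in
```

The absence of any sublimation (latent heat) term is exactly why, as the abstract argues, such a model degrades on dry high-elevation sites where sublimation consumes a sizeable share of the ablation energy.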

  4. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing of the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons; 2. Weakening of the bow shock that implies less efficient reflection; 3. The SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream with the Alfvén speed; and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through

  5. Precision comparison of the erosion rates derived from 137Cs measurements models with predictions based on empirical relationship

    International Nuclear Information System (INIS)

    Yang Mingyi; Liu Puling; Li Liqing

    2004-01-01

The soil samples were collected from six cultivated runoff plots with a grid sampling method, and the soil erosion rates derived from 137Cs measurements were calculated. The precision of the models of Zhang Xinbao, Zhou Weizhi, Yang Hao and Walling was compared with predictions based on an empirical relationship. The data showed that the precision of the four models is high within a 50 m slope length, except for slopes with a low slope angle and short length. The precision of Walling's model is better than that of the models of Zhang Xinbao, Zhou Weizhi and Yang Hao. In addition, the relationship between the parameter Γ in Walling's improved model and slope angle was analyzed; the relation is Y = 0.0109X^1.0072. (authors)
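For orientation, the simplest member of this family of 137Cs conversion models, the proportional model, converts a percentage 137Cs depletion directly into an erosion rate. It is shown here only to illustrate the conversion step and is not one of the four models compared in the study:

```python
def proportional_erosion(cs_ref, cs_sample, plough_depth_m, bulk_density, years):
    """Proportional 137Cs conversion model:
        Y = 10 * B * d * X / (100 * T)
    with X the percentage 137Cs depletion relative to the reference inventory,
    B the bulk density (kg/m^3), d the plough depth (m), T the time since
    fallout (yr); Y is the soil loss in t ha^-1 yr^-1."""
    x = 100.0 * (cs_ref - cs_sample) / cs_ref
    return 10.0 * bulk_density * plough_depth_m * x / (100.0 * years)
```

The factor 10*B*d is just the plough-layer mass per hectare (1 kg/m^2 = 10 t/ha), so the model assumes soil loss is strictly proportional to tracer loss, the assumption the more refined mass-balance models relax.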

  6. Comparing cycling world hour records, 1967-1996: modeling with empirical data.

    Science.gov (United States)

    Bassett, D R; Kyle, C R; Passfield, L; Broker, J P; Burke, E R

    1999-11-01

    The world hour record in cycling has increased dramatically in recent years. The present study was designed to compare the performances of former/current record holders, after adjusting for differences in aerodynamic equipment and altitude. Additionally, we sought to determine the ideal elevation for future hour record attempts. The first step was constructing a mathematical model to predict power requirements of track cycling. The model was based on empirical data from wind-tunnel tests, the relationship of body size to frontal surface area, and field power measurements using a crank dynamometer (SRM). The model agreed reasonably well with actual measurements of power output on elite cyclists. Subsequently, the effects of altitude on maximal aerobic power were estimated from published research studies of elite athletes. This information was combined with the power requirement equation to predict what each cyclist's power output would have been at sea level. This allowed us to estimate the distance that each rider could have covered using state-of-the-art equipment at sea level. According to these calculations, when racing under equivalent conditions, Rominger would be first, Boardman second, Merckx third, and Indurain fourth. In addition, about 60% of the increase in hour record distances since Bracke's record (1967) have come from advances in technology and 40% from physiological improvements. To break the current world hour record, field measurements and the model indicate that a cyclist would have to deliver over 440 W for 1 h at sea level, or correspondingly less at altitude. The optimal elevation for future hour record attempts is predicted to be about 2500 m for acclimatized riders and 2000 m for unacclimatized riders.
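A power-requirement model of the kind described combines an aerodynamic term scaling with the cube of speed and a rolling-resistance term linear in speed, with air density falling off with altitude. All coefficients below (CdA, Crr, rider mass, density scale height) are generic placeholder values, not the wind-tunnel or SRM figures used by the authors:

```python
import math

def air_density(altitude_m, rho0=1.225, scale_h=8500.0):
    """Approximate exponential decay of air density with altitude (kg/m^3)."""
    return rho0 * math.exp(-altitude_m / scale_h)

def track_power(speed_ms, altitude_m, cda=0.22, crr=0.0025, mass_kg=80.0, g=9.81):
    """Power (W) to hold a constant speed on a level track:
    aerodynamic drag + rolling resistance; drivetrain losses ignored."""
    rho = air_density(altitude_m)
    return 0.5 * rho * cda * speed_ms ** 3 + crr * mass_kg * g * speed_ms
```

Because the dominant term is proportional to air density, moving the same ~55 km/h effort from sea level to 2500 m cuts the aerodynamic power by roughly a quarter; the trade-off against reduced maximal aerobic power at altitude is what sets the optimal elevation discussed in the abstract.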

  7. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Mathematical method to build an empirical model for inhaled anesthetic agent wash-in

    Directory of Open Access Journals (Sweden)

    Grouls René EJ

    2011-06-01

    Full Text Available Abstract Background The wide range of fresh gas flow - vaporizer setting (FGF - FD combinations used by different anesthesiologists during the wash-in period of inhaled anesthetics indicates that the selection of FGF and FD is based on habit and personal experience. An empirical model could rationalize FGF - FD selection during wash-in. Methods During model derivation, 50 ASA PS I-II patients received desflurane in O2 with an ADU® anesthesia machine with a random combination of a fixed FGF - FD setting. The resulting course of the end-expired desflurane concentration (FA was modeled with Excel Solver, with patient age, height, and weight as covariates; NONMEM was used to check for parsimony. The resulting equation was solved for FD, and prospectively tested by having the formula calculate FD to be used by the anesthesiologist after randomly selecting a FGF, a target FA (FAt, and a specified time interval (1 - 5 min after turning on the vaporizer after which FAt had to be reached. The following targets were tested: desflurane FAt 3.5% after 3.5 min (n = 40, 5% after 5 min (n = 37, and 6% after 4.5 min (n = 37. Results Solving the equation derived during model development for FD yields FD=-(e(-FGF*-0.23+FGF*0.24*(e(FGF*-0.23*FAt*Ht*0.1-e(FGF*-0.23*FGF*2.55+40.46-e(FGF*-0.23*40.46+e(FGF*-0.23+Time/-4.08*40.46-e(Time/-4.08*40.46/((-1+e(FGF*0.24*(-1+e(Time/-4.08*39.29. Only height (Ht could be withheld as a significant covariate. Median performance error and median absolute performance error were -2.9 and 7.0% in the 3.5% after 3.5 min group, -3.4 and 11.4% in the 5% after 5 min group, and -16.2 and 16.2% in the 6% after 4.5 min groups, respectively. Conclusions An empirical model can be used to predict the FGF - FD combinations that attain a target end-expired anesthetic agent concentration with clinically acceptable accuracy within the first 5 min of the start of administration. The sequences are easily calculated in an Excel file and simple to

  9. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
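The asymmetry between the two metrics is easy to demonstrate: two hypothetical error distributions with identical ABS error can have very different SQ error, because squared error additionally penalizes error variance:

```python
import math

def mae(errors):
    """Mean absolute (ABS) error."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared (SQ) error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# two hypothetical models with identical ABS error
steady = [0.5 if i % 2 == 0 else -0.5 for i in range(1000)]  # uniformly small errors
spiky = [5.0 if i % 10 == 0 else 0.0 for i in range(1000)]   # rare but large errors
```

Both series have MAE 0.5, yet the spiky one has RMSE sqrt(2.5) versus 0.5, so ranking models by SQ error penalizes high-variance error distributions that ABS error treats as equivalent, consistent with the abstract's claim that substituting one metric for the other changes the apparent model performance.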

  10. Pluvials, Droughts, Energetics, and the Mongol Empire

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    consistent with a well-documented volcanic eruption that caused massive crop damage and famine throughout much of Europe. In Mongol history, this abrupt cooling also coincides with the move of the capital from Central Mongolia (Karakorum) to China (Beijing). In combination, the tree-ring records of water and temperature suggest that 1) the rise of the Mongol Empire occurred during an unusually consistent warm and wet climate and 2) the disintegration of the Empire occurred following a plunge into cold and dry conditions in Central Mongolia. These results represent the first step of a larger project integrating physical science and history to understand the role of energy in the evolution of the Mongol Empire. Using data from historic documents, ecological modeling, tree rings, and sediment cores, we will investigate whether the expansion and contraction of the empire was related to moisture and temperature availability and thus grassland productivity associated with climate change in the Orkhon Valley.

  11. FARIMA MODELING OF SOLAR FLARE ACTIVITY FROM EMPIRICAL TIME SERIES OF SOFT X-RAY SOLAR EMISSION

    International Nuclear Information System (INIS)

    Stanislavsky, A. A.; Burnecki, K.; Magdziarz, M.; Weron, A.; Weron, K.

    2009-01-01

    A time series of soft X-ray emission observed by the Geostationary Operational Environment Satellites from 1974 to 2007 is analyzed. We show that in the solar-maximum periods the energy distribution of soft X-ray solar flares for C, M, and X classes is well described by a fractional autoregressive integrated moving average model with Pareto noise. The model incorporates two effects detected in our empirical studies. One effect is a long-term dependence (long-term memory), and another corresponds to heavy-tailed distributions. The parameters of the model: self-similarity exponent H, tail index α, and memory parameter d are statistically stable enough during the periods 1977-1981, 1988-1992, 1999-2003. However, when the solar activity tends to minimum, the parameters vary. We discuss the possible causes of this evolution and suggest a statistically justified model for predicting the solar flare activity.

  12. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular 'feeling with'...

  13. Empirical Storm-Time Correction to the International Reference Ionosphere Model E-Region Electron and Ion Density Parameterizations Using Observations from TIMED/SABER

    Science.gov (United States)

    Mertens, Christoper J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing

    2007-01-01

The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap index is a viable candidate to use as a magnetic driver for our model.

  14. The hydrodynamic basis of the vacuum cleaner effect in continuous-flow PCNL instruments: an empiric approach and mathematical model.

    Science.gov (United States)

    Mager, R; Balzereit, C; Gust, K; Hüsch, T; Herrmann, T; Nagele, U; Haferkamp, A; Schilling, D

    2016-05-01

Passive removal of stone fragments in the irrigation stream is one of the characteristics of continuous-flow PCNL instruments. The physical principle behind this so-called vacuum cleaner effect has not yet been fully understood. The aim of the study was to empirically prove the existence of the vacuum cleaner effect, develop a physical hypothesis, and generate a mathematical model for this phenomenon. In an empirical approach, common low-pressure PCNL instruments and conventional PCNL sheaths were tested using an in vitro model. Flow characteristics were visualized by coloring of the irrigation fluid. The influence of irrigation pressure, sheath diameter, sheath design, nephroscope design and position of the nephroscope was assessed. Experiments were digitally recorded for further slow-motion analysis to deduce a physical model. In each tested nephroscope design, we could observe the vacuum cleaner effect. An increase in irrigation pressure and a reduction in the cross-section of the sheath sustained the effect. Slow-motion analysis of the colored flow revealed a synergism of two effects causing suction and transportation of the stone. For the first time, our model showed a flow reversal in the sheath as an integral part of the origin of stone transportation during the vacuum cleaner effect. The application of Bernoulli's equation explained these effects and confirmed our experimental results. We widen the understanding of PCNL with a conclusive physical model, which explains the fluid mechanics of the vacuum cleaner effect.
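The two ingredients of such an explanation, continuity (the irrigation flow speeds up in the narrow annular gap) and Bernoulli's equation (faster flow means lower static pressure), can be sketched numerically. The sheath and scope dimensions below are hypothetical, not those of the tested instruments:

```python
import math

def annulus_velocity(flow_lpm, sheath_id_mm, scope_od_mm):
    """Mean irrigation velocity (m/s) in the annular gap between sheath and scope,
    from continuity: v = Q / A (dimensions here are illustrative only)."""
    area = math.pi / 4.0 * ((sheath_id_mm / 1000.0) ** 2 - (scope_od_mm / 1000.0) ** 2)
    return (flow_lpm / 1000.0 / 60.0) / area

def static_pressure_drop(v_slow, v_fast, rho=1000.0):
    """Bernoulli along a streamline (incompressible, level flow):
    p_slow - p_fast = 0.5 * rho * (v_fast^2 - v_slow^2).
    Where the flow accelerates, static pressure falls, producing suction."""
    return 0.5 * rho * (v_fast ** 2 - v_slow ** 2)

# hypothetical numbers: 0.5 L/min through an 8.5 mm sheath around a 7.0 mm scope
v = annulus_velocity(0.5, 8.5, 7.0)
dp = static_pressure_drop(0.1, v)
```

A positive `dp` means the static pressure in the fast-moving annular stream is below that of the nearly stagnant fluid around the fragment, which is the suction component of the vacuum cleaner effect described above.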

  15. What 'empirical turn in bioethics'?

    Science.gov (United States)

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  16. An empirical formula for scattered neutron components in fast neutron radiography

    International Nuclear Information System (INIS)

    Dou Haifeng; Tang Bin

    2011-01-01

    Scattered neutrons are one of the key factors that may affect the images of fast neutron radiography. In this paper, a mathematical model for scattered neutrons is developed for a cylindrical sample, and an empirical formula for scattered neutrons is obtained. According to the results given by Monte Carlo methods, the parameters in the empirical formula are obtained by curve fitting, which confirms the validity of the empirical formula. The curve-fitted parameters of common materials such as 6LiD are given. (authors)
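    The workflow described (fit the free parameters of an empirical formula to Monte Carlo results) can be sketched as below. The paper's actual formula is not given in the abstract, so the exponential form here is a placeholder, and the "Monte Carlo" data are synthetic.

```python
# Hedged sketch: fit an assumed empirical scattered-neutron formula
# S(r) = A * exp(-r / L) to synthetic Monte Carlo-like data by curve fitting.
import numpy as np
from scipy.optimize import curve_fit

def scatter_model(r, amplitude, length):
    return amplitude * np.exp(-r / length)

# Synthetic data generated from known parameters plus small noise.
rng = np.random.default_rng(0)
r = np.linspace(0.5, 10.0, 40)
true_a, true_l = 3.0, 2.5
s = scatter_model(r, true_a, true_l) * (1 + 0.02 * rng.standard_normal(r.size))

popt, _ = curve_fit(scatter_model, r, s, p0=(1.0, 1.0))
fit_a, fit_l = popt  # recovered parameters, close to (3.0, 2.5)
```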

  17. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.
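    The final step the abstract describes, calling tests significant by thresholding estimated local false discovery rates, can be sketched simply. The estimation itself (predictive recursion) is beyond this sketch; the local fdr values and cutoff below are assumed.

```python
# Minimal sketch of local-fdr thresholding. The fdr estimates here are
# assumed inputs, not output of the PR marginal likelihood procedure.

def threshold_local_fdr(local_fdr, cutoff=0.2):
    """Return indices flagged non-null: local fdr below the cutoff."""
    return [i for i, f in enumerate(local_fdr) if f < cutoff]

# Assumed local fdr estimates for six hypotheses.
fdr = [0.9, 0.05, 0.5, 0.01, 0.15, 0.8]
flagged = threshold_local_fdr(fdr)  # hypotheses 1, 3 and 4 are flagged
```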

  18. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    Science.gov (United States)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off among accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of spatial resolution in topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that semi-empirical models tend to underestimate the inundation extent by around 40% relative to the physical models in flat, urbanized areas with controlled/managed river channels, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.
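    One common comparability metric for two binary inundation maps is the fit statistic F = |A ∩ B| / |A ∪ B| (intersection over union of wet cells). The study's exact overlapping statistics are not specified in the abstract, so this choice and the toy maps below are illustrative.

```python
# Sketch of an inundation-overlap fit statistic between two binary maps.
import numpy as np

def inundation_fit(map_a, map_b):
    """F = wet cells shared by both maps / wet cells in either map."""
    a = np.asarray(map_a, dtype=bool)
    b = np.asarray(map_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both maps dry everywhere: perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy 2x3 rasters: 1 = inundated cell, 0 = dry cell.
physical = [[1, 1, 0], [1, 0, 0]]
semi_empirical = [[1, 0, 0], [1, 0, 0]]
f = inundation_fit(physical, semi_empirical)  # 2 shared wet cells / 3 wet overall
```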

  19. Galveston Orientation and Amnesia Test: applicability and relation with the Glasgow Coma Scale Galveston Orientation and Amnesia Test: aplicabilidad y relación con la Escala de Coma de Glasgow Galveston Orientation and Amnesia Test: aplicabilidade e relação com a Escala de Coma de Glasgow

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Fürbringer e Silva

    2007-08-01

    Full Text Available Restrictions in the application of the Galveston Orientation and Amnesia Test and questions about the relationship between consciousness and post-traumatic amnesia motivated this study, which aims to identify, through Glasgow Coma Scale scores, when to initiate the application of this amnesia test, as well as to verify the relationship between the results of these two indicators. This longitudinal prospective study was carried out at a referral center for trauma care in São Paulo, Brazil. The sample consisted of 73 victims of blunt traumatic brain injury admitted to this institution between January 3rd and May 3rd, 2001. Regarding applicability, the test could be applied in patients with a Glasgow Coma Scale score > 12; however, the end of post-traumatic amnesia was verified in patients who scored > 14 on the scale. A significant relationship (rs = 0.65) was verified between these measures, although different kinds of relationship between the end of the amnesia and changes in consciousness were observed.

  20. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    Science.gov (United States)

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-10-01

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse, and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and parameters that define their behavior and attributes are defined from survey data. Community-level parameters, including social groups, relationships, and communication variables (also from survey data), are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.

  1. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    Science.gov (United States)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach combines an empirical FAS (Fourier Amplitude Spectrum) model and a ground-motion duration model within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model is derived that is tuned, at each oscillator frequency, to optimize the fit between RVT-based and observed response spectral ordinates. Although the main motive of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Beyond that, a significantly lower aleatory variability (σ) makes it a potentially viable alternative to the classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.

  2. Empirical model of TEC response to geomagnetic and solar forcing over Balkan Peninsula

    Science.gov (United States)

    Mukhtarov, P.; Andonov, B.; Pancheva, D.

    2018-01-01

    An empirical total electron content (TEC) model of the response to external forcing over the Balkan Peninsula (35°N-50°N; 15°E-30°E) is built by using the Center for Orbit Determination in Europe (CODE) TEC data for 17 full years, January 1999 - December 2015. The external forcing includes geomagnetic activity described by the Kp-index and solar activity described by the solar radio flux F10.7. The model describes the most probable spatial distribution and temporal variability of the externally forced TEC anomalies, assuming that they depend mainly on latitude, Kp-index, F10.7 and LT. The anomalies are expressed by the relative deviation of the TEC from its 15-day mean, rTEC, where the mean value is calculated from the 15 preceding days. The approach for building this regional model is similar to that of the global TEC model reported by Mukhtarov et al. (2013a); however, it includes two important improvements related to short-term variability of the solar activity and an amended geomagnetic forcing that uses a "modified" Kp index. The quality assessment of the new model construction procedure, in terms of the modeling error calculated for the period 1999-2015, indicates a significant improvement over the global TEC model (Mukhtarov et al., 2013a). The short-term prediction capabilities of the model, based on error calculations for 2016, are improved as well. To demonstrate how the model reproduces the rTEC response to external forcing, three geomagnetic storms, accompanied also by short-term solar activity variations, which occurred in different seasons and solar activity conditions, are presented.
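    The anomaly definition used above can be sketched directly: rTEC for a given day is the relative deviation of TEC from the mean of the 15 preceding days. The TEC values below are assumed for illustration.

```python
# Sketch of the rTEC anomaly: (TEC - 15-day preceding mean) / 15-day mean.

def relative_tec(series, window=15):
    """rTEC per day; days without a full preceding window yield None."""
    out = []
    for i, tec in enumerate(series):
        if i < window:
            out.append(None)
            continue
        mean15 = sum(series[i - window:i]) / window
        out.append((tec - mean15) / mean15)
    return out

# 15 quiet days at 20 TECU followed by a storm-time value of 30 TECU.
tec = [20.0] * 15 + [30.0]
rtec = relative_tec(tec)  # last element: (30 - 20) / 20 = 0.5
```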

  3. Reference Evapotranspiration Variation Analysis and Its Approaches Evaluation of 13 Empirical Models in Sub-Humid and Humid Regions: A Case Study of the Huai River Basin, Eastern China

    Directory of Open Access Journals (Sweden)

    Meng Li

    2018-04-01

    Full Text Available Accurate and reliable estimation of reference evapotranspiration (ET0) is imperative in irrigation scheduling and water resource planning. This study aims to analyze the spatiotemporal trends of the monthly ET0 calculated by the Penman–Monteith FAO-56 (PMF-56) model in the Huai River Basin (HRB), eastern China. However, the use of the PMF-56 model is limited by the insufficiency of climatic input parameters at various sites, and the alternative is to employ simple empirical models. In this study, the performances of 13 empirical models were evaluated against the PMF-56 model by using three common statistical approaches: relative root-mean-square error (RRMSE), mean absolute error (MAE), and the Nash–Sutcliffe coefficient (NS). Additionally, a linear regression model was adopted to calibrate and validate the performances of the empirical models during the 1961–2000 and 2001–2014 time periods, respectively. The results showed that the ETPMF increased initially and then decreased on a monthly timescale. On a daily timescale, Valiantzas3 (VA3) was the best alternative model for estimating ET0, while the Penman (PEN), WMO, Trabert (TRA), and Jensen–Haise (JH) models showed poor results with large errors. Before calibration, the determination coefficients of the temperature-based, radiation-based, and combined models showed the opposite trends compared to the mass transfer-based models. After calibration, the performance of each empirical model in each month improved greatly except for the PEN model. If comprehensive climatic datasets were available, VA3 would be the recommended model because it has a simple computation procedure and also correlates very well linearly with the PMF-56 model. Given the data availability, the temperature-based, radiation-based, Valiantzas1 (VA1) and Valiantzas2 (VA2) models were recommended during April–October in the HRB and other similar regions, and also, the mass transfer-based models were
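    The three evaluation statistics named above can be sketched in their commonly used forms (the paper's exact definitions may differ slightly): RRMSE is the RMSE normalized by the observed mean, MAE the mean absolute error, and NS the Nash–Sutcliffe efficiency. The sample values below are assumed.

```python
# Sketch of RRMSE, MAE and Nash-Sutcliffe (NS) for model-vs-reference ET0.
import math

def rrmse(obs, est):
    n = len(obs)
    rmse = math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / n)
    return rmse / (sum(obs) / n)

def mae(obs, est):
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def nash_sutcliffe(obs, est):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - e) ** 2 for o, e in zip(obs, est))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

et0_pmf = [3.0, 4.0, 5.0, 6.0]   # "reference" PMF-56 values (assumed)
et0_emp = [2.8, 4.1, 5.2, 5.7]   # an empirical model's estimates (assumed)
score = nash_sutcliffe(et0_pmf, et0_emp)  # close to 1 means a good fit
```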

  4. An empirical method to estimate bulk particulate refractive index for ocean satellite applications

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Mascarenhas, A.A.M.Q.; Matondkar, S.G.P.; Naik, P.; Nayak, S.R.

    An empirical method is presented here to estimate the bulk particulate refractive index using measured inherent and apparent optical properties from the various water types of the Arabian Sea. The empirical model, where the bulk refractive index...

  5. Benchmarking FeCr empirical potentials against density functional theory data

    International Nuclear Information System (INIS)

    Klaver, T P C; Bonny, G; Terentyev, D; Olsson, P

    2010-01-01

    Three semi-empirical force field FeCr potentials, two within the formalism of the two-band model and one within the formalism of the concentration dependent model, have been benchmarked against a wide variety of density functional theory (DFT) structures. The benchmarking allows an assessment of how reliable empirical potential results are in different areas relevant to radiation damage modelling. The DFT data consist of defect-free structures, structures with single interstitials and structures with small di- and tri-interstitial clusters. All three potentials reproduce the general trend of the heat of formation (h.o.f.) quite well. The most important shortcomings of the original two-band model potential are the low or even negative h.o.f. for Cr-rich structures and the lack of a strong repulsion when moving two solute Cr atoms from being second-nearest neighbours to nearest neighbours. The newer two-band model potential partly solves the first problem. The most important shortcoming in the concentration dependent model potential is the magnitude of the Cr–Cr repulsion, being too strong at short distances and mostly absent at longer distances. Both two-band model potentials do reproduce long-range Cr–Cr repulsion. For interstitials the two-band model potentials reproduce a number of Cr–interstitial binding energies surprisingly well, in contrast to the concentration dependent model potential. For Cr interacting with clusters, the result can sometimes be directly extrapolated from Cr interacting with single interstitials, both according to DFT and the three empirical potentials

  6. Semi-empirical fragmentation model of meteoroid motion and radiation during atmospheric penetration

    Science.gov (United States)

    Revelle, D. O.; Ceplecha, Z.

    2002-11-01

    A semi-empirical fragmentation model (FM) of meteoroid motion, ablation, and radiation including two types of fragmentation is outlined. The FM was applied to observational data (height as function of time and the light curve) of Lost City, Innisfree and Benešov bolides. For the Lost City bolide we were able to fit the FM to the observed height as function of time with ±13 m and to the observed light curve with ±0.17 magnitude. Corresponding numbers for Innisfree are ±25 m and ±0.14 magnitude, and for Benešov ±46 m and ±0.19 magnitude. We also define apparent and intrinsic values of σ, K, and τ. Using older results and our fit of FM to the Lost City bolide we derived corrections to intrinsic luminous efficiencies expressed as functions of velocity, mass, and normalized air density.

  7. Theoretical Insight Into the Empirical Tortuosity-Connectivity Factor in the Burdine-Brooks-Corey Water Relative Permeability Model

    Science.gov (United States)

    Ghanbarian, Behzad; Ioannidis, Marios A.; Hunt, Allen G.

    2017-12-01

    A model commonly applied to the estimation of water relative permeability krw in porous media is the Burdine-Brooks-Corey model, which relies on a simplified picture of pores as a bundle of noninterconnected capillary tubes. In this model, the empirical tortuosity-connectivity factor is assumed to be a power law function of effective saturation with an exponent (μ) commonly set equal to 2 in the literature. Invoking critical path analysis and using percolation theory, we relate the tortuosity-connectivity exponent μ to the critical scaling exponent t of percolation that characterizes the power law behavior of the saturation-dependent electrical conductivity of porous media. We also discuss the cause of the nonuniversality of μ in terms of the nonuniversality of t and compare model estimations with water relative permeability from experiments. The comparison supports determining μ from the electrical conductivity scaling exponent t, but also highlights limitations of the model.
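    The model discussed above has a closed form. With the Brooks-Corey capillary pressure Pc ∝ Se^(-1/λ), Burdine's integrals give krw = Se^(μ + (2+λ)/λ), where μ is the tortuosity-connectivity exponent; μ = 2 recovers the familiar krw = Se^(3 + 2/λ). A minimal sketch, assuming this standard form:

```python
# Sketch of Burdine-Brooks-Corey water relative permeability with an
# adjustable tortuosity-connectivity exponent mu (mu = 2 is the classic choice).

def krw_burdine_brooks_corey(se, lam, mu=2.0):
    """Water relative permeability at effective saturation se in [0, 1],
    for Brooks-Corey pore-size distribution index lam."""
    return se ** (mu + (2.0 + lam) / lam)

# mu = 2 and lam = 2 give the classic exponent 3 + 2/lam = 4.
k = krw_burdine_brooks_corey(0.5, lam=2.0)  # 0.5**4 = 0.0625
```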

  8. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. An update on the "empirical turn" in bioethics: analysis of empirical research in nine bioethics journals.

    Science.gov (United States)

    Wangmo, Tenzin; Hauri, Sirin; Gennet, Eloise; Anane-Sarpong, Evelyn; Provoost, Veerle; Elger, Bernice S

    2018-02-07

    A review of literature published a decade ago noted a significant increase in empirical papers across nine bioethics journals. This study provides an update on the presence of empirical papers in the same nine journals. It first evaluates whether the empirical trend is continuing as noted in the previous study, and second, how it is changing, that is, what are the characteristics of the empirical works published in these nine bioethics journals. A review of the same nine journals (Bioethics; Journal of Medical Ethics; Journal of Clinical Ethics; Nursing Ethics; Cambridge Quarterly of Healthcare Ethics; Hastings Center Report; Theoretical Medicine and Bioethics; Christian Bioethics; and Kennedy Institute of Ethics Journal) was conducted for a 12-year period from 2004 to 2015. The data obtained were analysed descriptively and using a non-parametric Chi-square test. Of the total number of original papers (N = 5567) published in the nine bioethics journals, 18.1% (n = 1007) collected and analysed empirical data. Journal of Medical Ethics and Nursing Ethics led the empirical publications, accounting for 89.4% of all empirical papers. The former published significantly more quantitative papers than qualitative, whereas the latter published more qualitative papers. Our analysis reveals no significant difference (χ2 = 2.857; p = 0.091) between the proportion of empirical papers published in 2004-2009 and 2010-2015. However, the increasing empirical trend has continued in these journals, with the proportion of empirical papers increasing from 14.9% in 2004 to 17.8% in 2015. This study presents the current state of affairs regarding empirical research published in the nine bioethics journals. In the quarter century of data that is available about the nine bioethics journals studied in the two reviews, the proportion of empirical publications continues to increase, signifying a trend towards empirical research in bioethics. The growing volume is mainly attributable to two

  10. EMPIRICAL TESTING OF MODIFIED BLACK-SCHOLES OPTION PRICING MODEL FORMULA ON NSE DERIVATIVE MARKET IN INDIA

    Directory of Open Access Journals (Sweden)

    Ambrish Gupta

    2013-01-01

    Full Text Available The main objectives of this paper are to incorporate a modification into the Black-Scholes option pricing formula by adding new variables on the basis of a given assumption related to the risk-free interest rate, and to show the calculation process for the new risk-free interest rate on the basis of the modified variable. This paper also identifies the various situations that arise in empirical testing of the modified and original Black-Scholes formulas with respect to the market value, on the basis of the assumed and calculated risk-free interest rates.
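    For reference, the original (unmodified) Black-Scholes call price that serves as the baseline for such testing can be sketched as below; the paper's modification itself is not reproduced here, and the input values are assumed.

```python
# Reference sketch of the original Black-Scholes European call price.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    """European call price with risk-free rate `rate`, volatility `vol`,
    time to maturity `t` (years)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

# Assumed inputs: at-the-money call, 5% rate, 20% volatility, 1 year.
price = bs_call(spot=100.0, strike=100.0, rate=0.05, vol=0.2, t=1.0)  # ~10.45
```

A modified formula would swap `rate` for the recalculated risk-free rate and compare the resulting prices against observed market values.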

  11. Differences in Dynamic Brand Competition Across Markets: An Empirical Analysis

    OpenAIRE

    Jean-Pierre Dubé; Puneet Manchanda

    2005-01-01

    We investigate differences in the dynamics of marketing decisions across geographic markets empirically. We begin with a linear-quadratic game involving forward-looking firms competing on prices and advertising. Based on the corresponding Markov perfect equilibrium, we propose estimable econometric equations for demand and marketing policy. Our model allows us to measure empirically the strategic response of competitors along with economic measures such as firm profitability. We use a rich da...

  12. Reply : Collective Action and the Empirical Content of Stochastic Learning Models

    NARCIS (Netherlands)

    Macy, M.W.; Flache, A.

    2007-01-01

    We are grateful for the opportunity that Bendor, Diermeier, and Ting (hereafter BDT) have provided to address important questions about the empirical content of learning theoretic solutions to the collective action problem. They discuss two well-known classes of adaptive models— stochastic learning

  13. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    Full Text Available To highlight the shortcomings of conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) for bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power and drying time. Kinetics was analyzed through empirical modeling of the drying data against 10 conventional thin-layer drying equations that were further improved through the incorporation of Arrhenius-, exponential- and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica gave similar yields despite a drastic reduction in extraction time (210 s as against the average time of 10 h in the Soxhlet apparatus). The extract yield for MAE of C. citratus was higher than for the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
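    The nonlinear-regression step for a semi-empirical thin-layer drying equation can be sketched with the base Midilli form, MR = a·exp(-k·tⁿ) + b·t (the study's 81 derived variants are not reproduced here). The drying data below are synthetic, generated from assumed parameters.

```python
# Sketch: fit the Midilli thin-layer drying equation to (synthetic) data
# by nonlinear regression. Parameter values are assumed for illustration.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli et al. moisture ratio model: a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.1, 10.0, 30)               # drying time (assumed units)
mr = midilli(t, 1.0, 0.3, 1.1, -0.005)       # synthetic moisture ratio data

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.1, 1.0, 0.0))
a_fit, k_fit, n_fit, b_fit = popt            # recovered parameters
```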

  14. Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model

    Directory of Open Access Journals (Sweden)

    Andreja Križman

    2010-05-01

    Full Text Available The purpose of this paper is to present the results of a study, conducted in the Slovene logistics market, of conflicts and opportunism as disturbing factors and of their impact on cooperation in logistics outsourcing performance. Relationship variables are proposed that directly or indirectly affect logistics performance, and hypotheses are conceptualized based on causal linkages among the constructs. On the basis of the extant literature and new argumentation derived from in-depth interviews with logistics experts, including providers and customers, the measurement and structural models are empirically analyzed. Existing measurement scales for the constructs are slightly modified for this analysis. Purification testing and measurement for validity and reliability are performed. Multivariate statistical methods are utilized and the hypotheses are tested. The results show that conflicts have a significantly negative impact on cooperation between customers and logistics service providers (LSPs), while opportunism does not play an important role in these relationships. The observed antecedents of logistics outsourcing performance in the model account for 58.4% of the variance of goal achievement and 36.5% of the variance of the exceeded goal. KEYWORDS: logistics outsourcing performance; logistics customer–provider relationships; conflicts and cooperation in logistics outsourcing; PLS path modelling

  15. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to

  16. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    Science.gov (United States)

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, current density 6.0 mA/cm(2), initial boron concentration 100 mg/L and solution temperature 293 K. The current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, since higher temperatures lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically; experimentally obtained values fitted well with the values predicted from the empirical model, as follows: [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not the conditions obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  17. Empirical modeling of a dewaxing system of lubricant oil using Artificial Neural Network (ANN); Modelagem empirica de um sistema de desparafinacao de oleo lubrificante usando redes neurais artificiais

    Energy Technology Data Exchange (ETDEWEB)

    Fontes, Cristiano Hora de Oliveira; Medeiros, Ana Claudia Gondim de; Silva, Marcone Lopes; Neves, Sergio Bello; Carvalho, Luciene Santos de; Guimaraes, Paulo Roberto Britto; Pereira, Magnus; Vianna, Regina Ferreira [Universidade Salvador (UNIFACS), Salvador, BA (Brazil). Dept. de Engenharia e Arquitetura]. E-mail: paulorbg@unifacs.br; Santos, Nilza Maria Querino dos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)]. E-mail: nilzaq@petrobras.com.br

    2003-07-01

    The MIBK (methyl isobutyl ketone) dewaxing unit, located at the Landulpho Alves refinery, allows two different operating modes: dewaxing and oil removal. The former comprises an oil-wax separation process, which generates a wax stream with 2 - 5% oil. The latter involves reprocessing the wax stream to reduce its oil content. Both involve a two-stage filtration process (primary and secondary) with rotary filters. The general aim of this research is to develop empirical models that predict variables, for both unit operating modes, for use in control algorithms, since many data are not available during normal plant operation and therefore need to be estimated. Studies have suggested that the oil content is an essential variable for developing reliable empirical models, and this work is concerned with the development of an empirical model for the prediction of the oil content in the wax stream leaving the primary filters. The model is based on a feedforward Artificial Neural Network (ANN), and tests with one and two hidden layers indicate very good agreement between experimental and predicted values. (author)
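A feedforward network of the kind described can be sketched in a few lines of numpy. The following is only an illustration, not the authors' model: the three process inputs, the target function standing in for oil content, the network size, and the training settings are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for filtration process data: 3 process inputs
# (hypothetical, e.g. feed rate, temperature, solvent ratio) mapped
# to one output (oil content in the wax stream).
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.4 * X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.3 * X[:, 2] ** 2)[:, None]

# One hidden layer with tanh activation and a linear output, trained
# by plain batch gradient descent on mean squared error.
n_hidden = 8
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # linear output
    err = pred - y
    # Backpropagate the MSE gradient through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```

A real application would of course split the data into training and validation sets, as the authors' one- and two-hidden-layer comparison implies.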

  18. Development of nonlinear empirical models to forecast daily PM2.5 and ozone levels in three large Chinese cities

    Science.gov (United States)

    Lv, Baolei; Cobourn, W. Geoffrey; Bai, Yuqi

    2016-12-01

    Empirical regression models for next-day forecasting of PM2.5 and O3 air pollution concentrations have been developed and evaluated for three large Chinese cities, Beijing, Nanjing and Guangzhou. The forecast models are empirical nonlinear regression models designed for use in an automated data retrieval and forecasting platform. The PM2.5 model includes an upwind air quality variable, PM24, to account for regional transport of PM2.5, and a persistence variable (previous day PM2.5 concentration). The models were evaluated in the hindcast mode with a two-year air quality and meteorological data set using a leave-one-month-out cross-validation method, and in the forecast mode with a one-year air quality and forecasted weather dataset that included forecasted air trajectories. The PM2.5 models performed well in the hindcast mode, with coefficient of determination (R2) values of 0.54, 0.65 and 0.64, and normalized mean error (NME) values of 0.40, 0.26 and 0.23 respectively, for the three cities. The O3 models also performed well in the hindcast mode, with R2 values of 0.75, 0.55 and 0.73, and NME values of 0.29, 0.26 and 0.24 in the three cities. The O3 models performed better in summertime than in winter in Beijing and Guangzhou, and captured the O3 variations well all the year round in Nanjing. The overall forecast performance of the PM2.5 and O3 models during the test year varied from fair to good, depending on location. The forecasts were somewhat degraded compared with hindcasts from the same year, depending on the accuracy of the forecasted meteorological input data. For the O3 models, the model forecast accuracy was strongly dependent on the maximum temperature forecasts. For the critical forecasts, involving air quality standard exceedances, the PM2.5 model forecasts were fair to good, and the O3 model forecasts were poor to fair.
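The two hindcast skill metrics quoted, R2 and NME, are easy to compute. In the sketch below R2 is taken as the squared Pearson correlation and NME as the mean absolute error divided by the mean observation; both are common conventions, assumed here since the abstract does not define them, and the data are synthetic.

```python
import numpy as np

def r_squared(obs, pred):
    """Coefficient of determination, taken here as the squared
    Pearson correlation between observations and predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    return r ** 2

def nme(obs, pred):
    """Normalized mean error: mean absolute error divided by the
    mean observed value (assumed convention)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean(np.abs(pred - obs)) / np.mean(obs)

# Synthetic example: daily PM2.5 observations and a noisy forecast.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=4.0, scale=20.0, size=365)    # ~80 ug/m3 mean
pred = obs + rng.normal(0.0, 20.0, size=365)        # imperfect model

print(f"R2  = {r_squared(obs, pred):.2f}")
print(f"NME = {nme(obs, pred):.2f}")
```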

  19. Empirical models for end-use properties prediction of LDPE: application in the flexible plastic packaging industry

    Directory of Open Access Journals (Sweden)

    Maria Carolina Burgos Costa

    2008-03-01

    Full Text Available The objective of this work is to develop empirical models to predict end-use properties of low density polyethylene (LDPE) resins as functions of two intrinsic properties easily measured in the polymer industry. The most important properties for application in the flexible plastic packaging industry were evaluated experimentally for seven commercial polymer grades. Statistical correlation analysis was performed for all variables and used as the basis for the proper choice of inputs to each model output. The intrinsic properties selected for resin characterization are the fluidity index (FI), which is essentially an indirect measurement of viscosity and of weight-average molecular weight (MW), and density. In general, the models developed are able to reproduce and predict experimental data within experimental accuracy and show that a significant number of end-use properties improve as the MW and density increase. Optical properties are mainly determined by the polymer morphology.

  20. Empirical Investigation of External Debt-Growth Nexus in Sub ...

    African Journals Online (AJOL)

    Empirical Investigation of External Debt-Growth Nexus in Sub-Saharan Africa. ... distributed lag (PARDL) model and panel non-linear autoregressive distributed lag (PNARDL) model to examine the relationship between external debt and economic growth using a panel dataset of 22 countries from 1985 to 2015. Its results ...

  1. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.; Lopez, A.; Huntingford, C.; Allen, M. R.

    2013-01-01

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
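The two ends of the internal-variability spectrum mentioned, a short-memory AR(1) process and a long-memory fractionally differenced process, can be simulated directly. This sketch uses illustrative parameter values (phi = 0.5, d = 0.3), not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

def ar1(phi, n):
    """Short-memory AR(1) process: x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def frac_diff_noise(d, n):
    """Long-memory fractionally differenced noise (1-B)^{-d} e_t,
    built from the binomial-series MA(inf) coefficients."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (d + k - 1) / k
    e = rng.normal(size=n)
    return np.convolve(e, psi)[:n]

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

short = ar1(0.5, n)
long_ = frac_diff_noise(0.3, n)
# AR(1) autocorrelation decays geometrically; the fractionally
# differenced process keeps correlation at much larger lags.
print("lag-1 ACF, AR(1):       ", round(acf(short, 1), 3))
print("lag-1 ACF, frac. diff.: ", round(acf(long_, 1), 3))
print("lag-50 ACF, frac. diff.:", round(acf(long_, 50), 3))
```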

  2. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.

    2013-04-27

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.

  3. Collective animal navigation and migratory culture: from theoretical models to empirical evidence

    Science.gov (United States)

    Dell, Anthony I.

    2018-01-01

    Animals often travel in groups, and their navigational decisions can be influenced by social interactions. Both theory and empirical observations suggest that such collective navigation can result in individuals improving their ability to find their way and could be one of the key benefits of sociality for these species. Here, we provide an overview of the potential mechanisms underlying collective navigation, review the known, and supposed, empirical evidence for such behaviour and highlight interesting directions for future research. We further explore how both social and collective learning during group navigation could lead to the accumulation of knowledge at the population level, resulting in the emergence of migratory culture. This article is part of the theme issue ‘Collective movement ecology’. PMID:29581394

  4. Statistical microeconomics and commodity prices: theory and empirical results.

    Science.gov (United States)

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated to exist in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices using a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90, using only six parameters. © 2015 The Author(s).

  5. An empirical model of L-band scintillation S4 index constructed by using FORMOSAT-3/COSMIC data

    Science.gov (United States)

    Chen, Shih-Ping; Bilitza, Dieter; Liu, Jann-Yenq; Caton, Ronald; Chang, Loren C.; Yeh, Wen-Hao

    2017-09-01

    Modern society relies heavily on the Global Navigation Satellite System (GNSS) technology for applications such as satellite communication, navigation, and positioning on the ground and/or aviation in the troposphere/stratosphere. However, ionospheric scintillations can severely impact GNSS systems and their related applications. In this study, a global empirical ionospheric scintillation model is constructed with S4-index data obtained by the FORMOSAT-3/COSMIC (F3/C) satellites during 2007-2014 (hereafter referred to as the F3CGS4 model). This model describes the S4-index as a function of local time, day of year, dip-latitude, and solar activity using the index PF10.7. The model reproduces the F3/C S4-index observations well, and yields good agreement with ground-based reception of satellite signals. This confirms that the constructed model can be used to forecast global L-band scintillations on the ground and in the near surface atmosphere.

  6. Semi-empirical atom-atom interaction models and X-ray crystallography

    International Nuclear Information System (INIS)

    Braam, A.W.M.

    1981-01-01

    Several aspects of semi-empirical energy calculations in crystallography are considered. Solid modifications of ethane have been studied using energy calculations and a fast summation technique has been evaluated. The structure of tetramethylpyrazine has been determined at room temperature and at 100K and accurate structure factors have been derived from measured Bragg intensities. Finally electrostatic properties have been deduced from X-ray structure factors. (C.F.)

  7. Construction of an empirical model of the dependence of the value of drained agricultural land on the cost of the reclamation system

    OpenAIRE

    VELESIK T.A.

    2011-01-01

    An empirical model is offered that represents the dependence between the value of drained agricultural land and the cost of the reclamation systems. The correlation and determination coefficients are calculated; their values testify that there is a close connection between the factors.

  8. Empirical model for chlorophyll-a determination in inland waters from the forthcoming Sentinel-2 and 3. Validation from HICO images

    Directory of Open Access Journals (Sweden)

    J. Delegido

    2014-06-01

    Full Text Available Chlorophyll-a concentration is one of the main indicators of inland water quality. Using CHRIS/PROBA images and in situ data obtained in four lakes in Colombia and Spain, we obtained empirical models for the estimation of chlorophyll-a concentration, which can be directly applied to future images from the MSI (Sentinel-2) and OLCI (Sentinel-3) sensors. The models, based on spectral band indices, were validated with data from the hyperspectral sensor HICO, onboard the International Space Station.
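A band-index model of this type can be illustrated with a normalized-difference chlorophyll index (NDCI) built from red and red-edge reflectances, for which Sentinel-2 MSI provides the B4/B5 analogues. The index response, noise level, and quadratic calibration below are assumptions for illustration, not the models of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic lake data: chlorophyll-a (mg/m3) and a red-edge/red
# normalized-difference index responding to it (assumed response).
chl = rng.uniform(1, 80, 150)                        # mg/m3
ndci_true = (chl - 40) / 120                         # assumed index response
ndci = ndci_true + rng.normal(0, 0.02, chl.size)     # measured index + noise

# Empirical band-index model: quadratic calibration
# chl = c0 + c1*NDCI + c2*NDCI^2, a common form for such indices.
c2, c1, c0 = np.polyfit(ndci, chl, 2)
pred = c0 + c1 * ndci + c2 * ndci ** 2
rmse = float(np.sqrt(np.mean((pred - chl) ** 2)))
print(f"chl = {c0:.1f} + {c1:.1f}*NDCI + {c2:.1f}*NDCI^2, RMSE = {rmse:.2f} mg/m3")
```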

  9. Economic differences among regional public service broadcasters in Spain according to their management model. An empirical analysis for period 2010-2013

    Directory of Open Access Journals (Sweden)

    Víctor Orive Serrano

    2016-03-01

    Full Text Available Purpose: This research quantifies and empirically analyses the economic differences among regional public service television broadcasters in Spain according to the adopted management model (classic or outsourced). Design/methodology/approach: Mean-comparison tests are conducted for several economic variables studied in the literature (audience share, total assets, public subsidies, cost of personnel, supplier spending and profit after taxes). In addition, these variables are related in order to calculate the productivity obtained by each of the two groups of television operators. The analysis covers the period 2010-2013, marked by the crisis in the Spanish economy. Findings: The management model adopted by each regional broadcaster affects the economic variables considered: audience share, total assets, public subsidies, cost of personnel, supplier spending and profit after taxes. Moreover, public corporations adopting an outsourced management model show better productivity values. Research limitations/implications: Only one country has been analysed, over a four-year period. Practical implications: Regional public service broadcasters with an outsourced model post smaller economic losses and require fewer public subsidies from their corresponding regional governments. Social implications: Outsourcing part of the value chain can be useful to guarantee the sustainability of regional public service television. Originality/value: It has been proven empirically that the management model of a regional public service television broadcaster affects its economic results.

  10. Developing Inventory Projection Models Using Empirical Net Forest Growth and Growing-Stock Density Relationships Across U.S. Regions and Species Group

    Science.gov (United States)

    Prakash Nepal; Peter J. Ince; Kenneth E. Skog; Sun J. Chang

    2012-01-01

    This paper describes a set of empirical net forest growth models based on forest growing-stock density relationships for three U.S. regions (North, South, and West) and two species groups (softwoods and hardwoods) at the regional aggregate level. The growth models accurately predict historical U.S. timber inventory trends when we incorporate historical timber harvests...

  11. Empirical Support for Perceptual Conceptualism

    Directory of Open Access Journals (Sweden)

    Nicolás Alejandro Serrano

    2018-03-01

    Full Text Available The main objective of this paper is to show that perceptual conceptualism can be understood as an empirically meaningful position and, furthermore, that there is some degree of empirical support for its main theses. In order to do this, I will start by offering an empirical reading of the conceptualist position, and making three predictions from it. Then, I will consider recent experimental results from cognitive sciences that seem to point towards those predictions. I will conclude that, while the evidence offered by those experiments is far from decisive, it is enough not only to show that conceptualism is an empirically meaningful position but also that there is empirical support for it.

  12. Empirical forecast of the quiet time Ionosphere over Europe: a comparative model investigation

    Science.gov (United States)

    Badeke, R.; Borries, C.; Hoque, M. M.; Minkwitz, D.

    2016-12-01

    The purpose of this work is to find the best empirical model for a reliable 24-hour forecast of the ionospheric Total Electron Content (TEC) over Europe under geomagnetically quiet conditions. It will be used as an improved reference for the description of storm-induced perturbations in the ionosphere. The observational TEC data were obtained from the International GNSS Service (IGS). Four different forecast model approaches were validated with observational IGS TEC data: a 27-day median model (27d), a Fourier Analysis (FA) approach, the Neustrelitz TEC global model (NTCM-GL) and NeQuick 2. Two years were investigated depending on the solar activity: 2015 (high activity) and 2008 (low activity). The time periods of magnetic storms, which were identified with the Dst index, were excluded from the validation. For both years the two models 27d and FA show better results than NTCM-GL and NeQuick 2. For example, for the year 2015 and 15° E / 50° N, the difference between the IGS data and the 27d model prediction shows a mean value of 0.413 TEC units (TECU), a standard deviation of 3.307 TECU and a correlation coefficient of 0.921, while NTCM-GL and NeQuick 2 have mean differences of around 2-3 TECU, standard deviations of 4.5-5 TECU and correlation coefficients below 0.85. Since the 27d and FA predictions strongly depend on observational data, the results confirm that data-driven forecasts perform better than the climatological models NTCM-GL and NeQuick 2. However, the benefit of NTCM-GL and NeQuick 2 is their lower data dependency: they do not lose precision when observational IGS TEC data are unavailable. Hence a combination of the different models is recommended, reacting to the different data availabilities.
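The 27-day median model (27d) lends itself to a compact sketch. The definition assumed below, forecasting each hour of the next day as the median of the same hour over the preceding 27 days, is one plausible reading of the abstract, and the TEC series is synthetic; the validation statistics (mean difference, standard deviation, correlation) mirror those quoted above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly TEC series (TECU): diurnal cycle + slow trend + noise.
hours = np.arange(24 * 120)                      # 120 days of hourly data
tec = (10 + 8 * np.sin(2 * np.pi * (hours % 24) / 24)
       + 0.01 * hours / 24 + rng.normal(0, 1.0, hours.size))
tec = tec.reshape(120, 24)                       # rows: days, cols: hours

def median_27d_forecast(series_2d, day):
    """Forecast each hour of `day` as the median of the same hour
    over the preceding 27 days (assumed definition of '27d')."""
    return np.median(series_2d[day - 27:day], axis=0)

# Validate over the last 90 days with the statistics used in the study.
diffs, obs_all, pred_all = [], [], []
for day in range(30, 120):
    pred = median_27d_forecast(tec, day)
    diffs.append(tec[day] - pred)
    obs_all.append(tec[day]); pred_all.append(pred)

diffs = np.concatenate(diffs)
corr = np.corrcoef(np.concatenate(obs_all), np.concatenate(pred_all))[0, 1]
print(f"mean diff: {diffs.mean():.3f} TECU, "
      f"std: {diffs.std():.3f} TECU, r = {corr:.3f}")
```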

  13. An empirical model of ionospheric total electron content (TEC) near the crest of the equatorial ionization anomaly (EIA)

    Science.gov (United States)

    Hajra, Rajkumar; Chakraborty, Shyamal Kumar; Tsurutani, Bruce T.; DasGupta, Ashish; Echer, Ezequiel; Brum, Christiano G. M.; Gonzalez, Walter D.; Sobral, José Humberto Andrade

    2016-07-01

    We present a geomagnetic quiet time (Dst > -50 nT) empirical model of ionospheric total electron content (TEC) for the northern equatorial ionization anomaly (EIA) crest over Calcutta, India. The model is based on the 1980-1990 TEC measurements from the geostationary Engineering Test Satellite-2 (ETS-2) at the Haringhata (University of Calcutta, India: 22.58° N, 88.38° E geographic; 12.09° N, 160.46° E geomagnetic) ionospheric field station using the technique of Faraday rotation of plane polarized VHF (136.11 MHz) signals. The ground station is situated virtually underneath the northern EIA crest. The monthly mean TEC increases linearly with F10.7 solar ionizing flux, with a significantly high correlation coefficient (r = 0.89-0.99) between the two. For the same solar flux level, the TEC values are found to be significantly different between the descending and ascending phases of the solar cycle. This ionospheric hysteresis effect depends on the local time as well as on the solar flux level. On an annual scale, TEC exhibits semiannual variations with maximum TEC values occurring during the two equinoxes and minimum at summer solstice. The semiannual variation is strongest during local noon with a summer-to-equinox variability of ~50-100 TEC units. The diurnal pattern of TEC is characterized by a pre-sunrise (0400-0500 LT) minimum and near-noon (1300-1400 LT) maximum. Equatorial electrodynamics is dominated by the equatorial electrojet which in turn controls the daytime TEC variation and its maximum. We combine these long-term analyses to develop an empirical model of monthly mean TEC. The model is validated using both ETS-2 measurements and recent GNSS measurements. It is found that the present model efficiently estimates the TEC values within a 1-σ range from the observed mean values.
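The reported linear dependence of monthly mean TEC on F10.7 can be illustrated with an ordinary least-squares fit; the slope, intercept, and noise level below are invented for the sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly means: TEC rising linearly with F10.7 solar flux
# (slope and intercept are illustrative, not from the paper).
f107 = rng.uniform(70, 220, size=120)                       # solar flux units
tec = 5.0 + 0.45 * f107 + rng.normal(0, 8.0, f107.size)     # TECU

# Ordinary least-squares fit: TEC = a + b * F10.7
b, a = np.polyfit(f107, tec, 1)
r = np.corrcoef(f107, tec)[0, 1]
print(f"TEC = {a:.2f} + {b:.3f} * F10.7   (r = {r:.3f})")
```

A hysteresis analysis like the one described would fit the ascending and descending phases of the solar cycle separately and compare the two regression lines.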

  14. An empirical model of ionospheric total electron content (TEC) near the crest of the equatorial ionization anomaly (EIA)

    Directory of Open Access Journals (Sweden)

    Hajra Rajkumar

    2016-01-01

    Full Text Available We present a geomagnetic quiet time (Dst > −50 nT) empirical model of ionospheric total electron content (TEC) for the northern equatorial ionization anomaly (EIA) crest over Calcutta, India. The model is based on the 1980–1990 TEC measurements from the geostationary Engineering Test Satellite-2 (ETS-2) at the Haringhata (University of Calcutta, India: 22.58° N, 88.38° E geographic; 12.09° N, 160.46° E geomagnetic) ionospheric field station using the technique of Faraday rotation of plane polarized VHF (136.11 MHz) signals. The ground station is situated virtually underneath the northern EIA crest. The monthly mean TEC increases linearly with F10.7 solar ionizing flux, with a significantly high correlation coefficient (r = 0.89–0.99) between the two. For the same solar flux level, the TEC values are found to be significantly different between the descending and ascending phases of the solar cycle. This ionospheric hysteresis effect depends on the local time as well as on the solar flux level. On an annual scale, TEC exhibits semiannual variations with maximum TEC values occurring during the two equinoxes and minimum at summer solstice. The semiannual variation is strongest during local noon with a summer-to-equinox variability of ~50–100 TEC units. The diurnal pattern of TEC is characterized by a pre-sunrise (0400–0500 LT) minimum and near-noon (1300–1400 LT) maximum. Equatorial electrodynamics is dominated by the equatorial electrojet which in turn controls the daytime TEC variation and its maximum. We combine these long-term analyses to develop an empirical model of monthly mean TEC. The model is validated using both ETS-2 measurements and recent GNSS measurements. It is found that the present model efficiently estimates the TEC values within a 1-σ range from the observed mean values.

  15. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or "QCP") which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the
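SBM itself is a proprietary method, but its general idea, estimating the expected "healthy" state as a similarity-weighted blend of reference observations and flagging large residuals, can be sketched with a generic Gaussian kernel. Everything below (the vitals chosen, the kernel width, the reference data) is an assumption for illustration, not the SBM implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def similarity_estimate(memory, query, h=1.0):
    """Generic kernel similarity estimator in the spirit of SBM: the
    expected state is a similarity-weighted blend of reference
    observations. The Gaussian kernel width h is illustrative."""
    d2 = ((memory - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return (w[:, None] * memory).sum(axis=0) / w.sum()

# Reference matrix of multivariate vitals from a 'healthy' period
# (synthetic: heart rate, mean arterial pressure, SpO2).
memory = rng.normal([70, 90, 97], [5, 6, 1], size=(500, 3))

normal_obs = np.array([72.0, 88.0, 97.5])
deteriorating = np.array([95.0, 70.0, 93.0])   # tamponade-like drift

# A growing residual between observed and estimated state signals
# deterioration earlier than fixed single-variable thresholds.
for obs in (normal_obs, deteriorating):
    est = similarity_estimate(memory, obs, h=5.0)
    resid = np.linalg.norm(obs - est)
    print(f"obs {obs} -> residual {resid:.2f}")
```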

  16. Trade costs in empirical New Economic Geography

    NARCIS (Netherlands)

    Bosker, E.M.; Garretsen, J.H.

    Trade costs are a crucial element of New Economic Geography (NEG) models. Without trade costs there is no role for geography. In empirical NEG studies the unavailability of direct trade cost data calls for the need to approximate these trade costs by introducing a trade cost function. In doing so,

  17. Empirically Exploring Higher Education Cultures of Assessment

    Science.gov (United States)

    Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.

    2016-01-01

    Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…

  18. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

    There are linear accelerators (Sagittaire and Saturne accelerators produced by Compagnie Generale de Radiologie (CGR/MeV) Corporation) which produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the scanning electron beams. These were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside of the 50% isodose curve, but less than 8% outside of the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially-available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries

  19. Cycling empirical antibiotic therapy in hospitals: meta-analysis and models.

    Directory of Open Access Journals (Sweden)

    Pia Abel zur Wiesch

    2014-06-01

    Full Text Available The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies came to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and find that cycling reduced the incidence rate/1000 patient days of both total infections by 4.95 [9.43-0.48] and resistant infections by 7.2 [14.00-0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate metaregressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first line treatment is ineffective, which we call "adjustable cycling/mixing". In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that "adjustable cycling" is especially useful to suppress emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that too long cycling periods are detrimental. Our results suggest that
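The endpoint used in the meta-analysis, infections per 1000 patient-days, is a simple rate. A toy two-arm comparison (the numbers are invented, not taken from the included studies):

```python
def incidence_rate(events, patient_days):
    """Infections per 1000 patient-days."""
    return 1000.0 * events / patient_days

# Hypothetical two-arm comparison: cycling ward vs standard practice.
cycling = incidence_rate(events=48, patient_days=12000)
standard = incidence_rate(events=90, patient_days=11500)
reduction = standard - cycling
print(f"cycling: {cycling:.2f}, standard: {standard:.2f}, "
      f"reduction: {reduction:.2f} per 1000 patient-days")
```

The meta-analysis pools such rate differences across studies, weighting each by its precision; the sketch shows only the endpoint itself.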

  20. An empirical system for probabilistic seasonal climate prediction

    Science.gov (United States)

    Eden, Jonathan; van Oldenborgh, Geert Jan; Hawkins, Ed; Suckling, Emma

    2016-04-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
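The regression approach described, CO2-equivalent concentration as the primary predictor plus climate-mode indices, with the residual spread supplying the width of the probabilistic forecast, can be sketched as follows. The predictors, coefficients, and the Gaussian forecast assumption are all illustrative, not the operational system.

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1961, 2014)
n = years.size

# Synthetic predictors: CO2-equivalent (trend) and an ENSO-like index.
co2 = 320 + 1.6 * (years - 1961) + 0.01 * (years - 1961) ** 2
enso = rng.normal(0, 1, n)
# Synthetic seasonal-mean temperature anomaly driven by both.
temp = 0.008 * (co2 - co2.mean()) + 0.25 * enso + rng.normal(0, 0.15, n)

# Multiple linear regression by least squares.
X = np.column_stack([np.ones(n), co2, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta
sigma = resid.std(ddof=X.shape[1])   # residual spread -> forecast pdf width

# Probabilistic forecast for a new season: a Gaussian centred on the
# regression prediction with standard deviation sigma (simplification).
x_new = np.array([1.0, co2[-1] + 2.0, 0.5])
mean_fc = float(x_new @ beta)
print(f"forecast: {mean_fc:.2f} +/- {sigma:.2f} (1-sigma)")
```

In the real system the hindcast Gaussians would be scored against observations with the continuous rank probability skill score rather than inspected directly.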

  1. Empirical solar/stellar cycle simulations

    Directory of Open Access Journals (Sweden)

    Santos Ângela R. G.

    2015-01-01

    Full Text Available As a result of the magnetic cycle, the properties of the solar oscillations vary periodically. With the recent discovery of manifestations of activity cycles in the seismic data of other stars, the understanding of the different contributions to such variations becomes even more important. With this in mind, we built an empirical parameterised model able to reproduce the properties of the sunspot cycle. The resulting simulations can be used to estimate the magnetic-induced frequency shifts.

  2. Essays on empirical industrial organization : Entry and innovation

    NARCIS (Netherlands)

    Fernandez Machado, Roxana

    2017-01-01

    The dissertation contains three essays on empirical industrial organization devoted to studying firms' strategic interaction in different settings. The first essay develops an entry model to address an important matter in the area of urban economics: the development of cities. In particular, it

  3. Precision and accuracy of mechanistic-empirical pavement design

    CSIR Research Space (South Africa)

    Theyse, HL

    2006-09-01

Full Text Available are discussed in general. The effects of variability and error on design accuracy and design risk are finally illustrated by means of a simple mechanistic-empirical design problem, showing that the engineering models alone determine the accuracy...

  4. Intercomparison of Meteorological Forcing Data from Empirical and Mesoscale Model Sources in the N.F. American River Basin in northern California

    Science.gov (United States)

    Wayand, N. E.; Hamlet, A. F.; Hughes, M. R.; Feld, S.; Lundquist, J. D.

    2012-12-01

The data required to drive distributed hydrological models is significantly limited within mountainous terrain due to a scarcity of observations. This study evaluated three common configurations of forcing data: a) one low-elevation station, combined with empirical techniques, b) gridded output from the Weather Research and Forecasting (WRF) model, and c) a combination of the two. Each configuration was evaluated within the heavily-instrumented North Fork American River Basin in northern California, during October-June 2000-2010. Simulations of streamflow and snowpack using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted precipitation and radiation as variables whose sources resulted in significant differences. The best source of precipitation data varied between years. On average, the performance of WRF and that of the single station distributed using the Parameter-elevation Regressions on Independent Slopes Model (PRISM) were not significantly different. The average percent biases in simulated streamflow were 3.4% and 0.9% for configurations a) and b) respectively, even though precipitation compared directly with gauge measurements was biased high by 6% and 17%, suggesting that gauge undercatch may explain part of the bias. Simulations of snowpack using empirically-estimated long-wave irradiance resulted in melt rates lower than those observed at high-elevation sites, while at lower elevations the same forcing caused significant mid-winter melt that was not observed (Figure 1). These results highlight the complexity of how forcing data sources impact hydrology over different areas (high vs. low elevation snow) and different time periods. Overall, results support the use of output from the WRF model over empirical techniques in regions with limited station data. FIG. 1. (a,b) Simulated SWE from DHSVM compared to observations at the Sierra Snow Lab (2100m) and Blue Canyon (1609m) during 2008 - 2009. Modeled (c,d) internal pack temperature, (e,f) downward

  5. Aplicabilidade científica do método dos elementos finitos Scientific application of the finite element method

    Directory of Open Access Journals (Sweden)

    Raquel S. Lotti

    2006-04-01

Full Text Available The Finite Element Method (FEM) is a mathematical analysis that consists of discretizing a continuous medium into small elements while preserving the properties of the original medium. These elements are described by differential equations and solved with mathematical models so that the desired results can be obtained. The method originated at the end of the eighteenth century; however, it became viable only with the advent of computers, which made it practical to solve the enormous systems of algebraic equations involved. The FEM can be used in several areas of the exact and biological sciences and, owing to its great applicability and efficiency, studies using this methodology exist across the dental specialties, including Orthodontics, whenever loads, stresses or displacements are to be analyzed. Given the method's continued use in research and its advantages over other available approaches, knowledge of the technique is of great importance if its use is to bring scientific benefits to Orthodontics. It is essential that clinical orthodontists understand the basic concepts of the FEM so that study results can be better interpreted. The Finite Element Method consists of a series of mathematical equations that, used with the proper software, allow different kinds of computer simulations. The method was first reported at the end of the eighteenth century; however, its practical use became feasible only after the advent of computers. Scientific research that employs this analysis enjoys many advantages. In the field of Orthodontics in particular, this approach can be very useful for evaluating the forces delivered by different orthodontic appliances, tooth movement and the structures of the face. Due to its widespread use in orthodontic research, it is of paramount importance to know the method well and its

  6. Going Global: A Model for Evaluating Empirically Supported Family-Based Interventions in New Contexts.

    Science.gov (United States)

    Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W

    2014-06-01

The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions, including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it comprises both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third explanation is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that may alter the effects of ESIs when they are transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts. © The Author(s) 2012.

  7. Life Writing After Empire

    DEFF Research Database (Denmark)

    A watershed moment of the twentieth century, the end of empire saw upheavals to global power structures and national identities. However, decolonisation profoundly affected individual subjectivities too. Life Writing After Empire examines how people around the globe have made sense of the post...... in order to understand how individual life writing reflects broader societal changes. From far-flung corners of the former British Empire, people have turned to life writing to manage painful or nostalgic memories, as well as to think about the past and future of the nation anew through the personal...

  8. A semi-empirical model for the prediction of fouling in railway ballast using GPR

    Science.gov (United States)

    Bianchini Ciampoli, Luca; Tosti, Fabio; Benedetto, Andrea; Alani, Amir M.; Loizos, Andreas; D'Amico, Fabrizio; Calvi, Alessandro

    2016-04-01

The first step in planning for the renewal of a railway network consists in gathering information, as effectively as possible, about the state of the railway tracks. Nowadays, this activity is mostly carried out by digging trenches at regular intervals along the whole network, to evaluate both the geometrical and geotechnical properties of the railway track bed. This involves issues mainly concerning the invasiveness of the operations, the impact on rail traffic, the high costs, and the low significance of such a discrete data set. Ground-penetrating radar (GPR) can be a useful technique for overcoming these issues, as it can be mounted directly onto a train crossing the railway and collect continuous information along the network. This study is aimed at defining an empirical model for the prediction of fouling in railway ballast using GPR. With this purpose, a thorough laboratory campaign was implemented within the facilities of Roma Tre University. In more detail, a 1.47 m long × 1.47 m wide × 0.48 m high plexiglass framework, accounting for the domain of investigation, was laid over a perfect electric conductor and filled with several configurations of railway ballast and fouling material (clayey sand), thereby representing different levels of fouling. The set of fouling configurations was then surveyed with several GPR systems. In particular, a ground-coupled multi-channel radar (600 MHz and 1600 MHz center frequency antennas) and three air-launched radar systems (1000 MHz and 2000 MHz center frequency antennas) were employed for surveying the materials. By observing the results in both the time and frequency domains, interesting insights are highlighted, and an empirical model relating the shape of the frequency spectrum of the signal to the percentage of fouling characterizing the surveyed material is finally proposed. Acknowledgement The authors thank COST for funding the Action TU1208 "Civil

  9. Probabilistic empirical prediction of seasonal climate: evaluation and potential applications

    Science.gov (United States)

    Dieppois, B.; Eden, J.; van Oldenborgh, G. J.

    2017-12-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a new evaluation of an established empirical system used to predict seasonal climate across the globe. Forecasts for surface air temperature, precipitation and sea level pressure are produced by the KNMI Probabilistic Empirical Prediction (K-PREP) system every month and disseminated via the KNMI Climate Explorer (climexp.knmi.nl). K-PREP is based on multiple linear regression and built on physical principles to the fullest extent with predictive information taken from the global CO2-equivalent concentration, large-scale modes of variability in the climate system and regional-scale information. K-PREP seasonal forecasts for the period 1981-2016 will be compared with corresponding dynamically generated forecasts produced by operational forecast systems. While there are many regions of the world where empirical forecast skill is extremely limited, several areas are identified where K-PREP offers comparable skill to dynamical systems. We discuss two key points in the future development and application of the K-PREP system: (a) the potential for K-PREP to provide a more useful basis for reference forecasts than those based on persistence or climatology, and (b) the added value of including K-PREP forecast information in multi-model forecast products, at least for known regions of good skill. We also discuss the potential development of

  10. Energy levies and endogenous technology in an empirical simulation model for the Netherlands

    International Nuclear Information System (INIS)

    Den Butter, F.A.G.; Dellink, R.B.; Hofkes, M.W.

    1995-01-01

    The belief in beneficial green tax swaps has been particularly prevalent in Europe, where high levels of unemployment and strong preferences for a large public sector (and hence high tax levels) accentuate the desire for revenue-neutral, growth-enhancing reductions in labor income taxes. In this context an empirical simulation model is developed for the Netherlands, especially designed to reckon with the effects of changes in prices on the level and direction of technological progress. It appears that the so-called employment double dividend, i.e. increasing employment and decreasing energy use at the same time, can occur. A general levy yields stronger effects than a levy on household use only. However, the stronger effects of a general levy on employment and energy use are accompanied by shrinking production and, in the longer run, by decreasing disposable income of workers or non-workers. 1 fig., 4 tabs., 1 appendix, 20 refs

  11. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran, over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
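The four comparison indices named in the abstract have standard definitions, sketched below; the observation and estimate values are invented purely for illustration.

```python
import numpy as np

def rmse(obs, est):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - est) ** 2))

def mae(obs, est):
    """Mean absolute error."""
    return np.mean(np.abs(obs - est))

def mare(obs, est):
    """Mean absolute relative error, as a percentage."""
    return 100.0 * np.mean(np.abs((obs - est) / obs))

def r2(obs, est):
    """Coefficient of determination against the observed mean."""
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([20.1, 18.4, 22.7, 25.0, 23.3])   # MJ m-2 day-1, illustrative
est = np.array([19.5, 18.9, 21.8, 25.6, 22.7])
scores = (rmse(obs, est), mae(obs, est), mare(obs, est), r2(obs, est))
```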

  12. Comparison of precipitating electron energy flux on March 22, 1979 with an empirical model: CDAW-6

    International Nuclear Information System (INIS)

    Simons, S.L. Jr.; Reiff, P.H.; Spiro, R.W.; Hardy, D.A.; Kroehl, H.W.

    1985-01-01

Data recorded by Defense Meteorological Satellite Program, TIROS and P-78-1 satellites for the CDAW 6 event on March 22, 1979, have been compared with a statistical model of precipitating electron fluxes. Comparisons have been made both on an orbit-by-orbit basis and on a global basis by sorting and binning the data by AE index, invariant latitude and magnetic local time, in a manner similar to that in which the model was generated. We conclude that the model flux agrees with the data to within a factor of two, although small features and the exact locations of features are not consistently reproduced. In addition, the latitude of highest electron precipitation usually occurs about 3° more poleward in the model than in the data. We attribute this discrepancy to ring current inflation of the storm-time magnetosphere (as evidenced by negative Dst values). We suggest that a similar empirical model based on AL instead of AE, and including some indicator of the history of the event, would provide an even better comparison. Alternatively, in situ data such as electrojet location should be used routinely to normalize the latitude of the auroral precipitation
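The sort-and-bin procedure used to build such an empirical model can be sketched as follows; the bin edges, coordinates and flux values are hypothetical, and the result is simply the per-bin mean flux.

```python
import numpy as np

def bin_flux(ae, mlat, mlt, flux, ae_edges, lat_edges, mlt_edges):
    """Average flux in (AE index, invariant latitude, MLT) bins.
    Bins with no samples come back as NaN."""
    sums = np.zeros((len(ae_edges) - 1, len(lat_edges) - 1, len(mlt_edges) - 1))
    counts = np.zeros_like(sums)
    i = np.digitize(ae, ae_edges) - 1
    j = np.digitize(mlat, lat_edges) - 1
    k = np.digitize(mlt, mlt_edges) - 1
    ok = ((i >= 0) & (i < sums.shape[0]) & (j >= 0) & (j < sums.shape[1])
          & (k >= 0) & (k < sums.shape[2]))
    np.add.at(sums, (i[ok], j[ok], k[ok]), flux[ok])
    np.add.at(counts, (i[ok], j[ok], k[ok]), 1)
    with np.errstate(invalid="ignore"):
        return sums / counts

# hypothetical samples: two quiet-time points in one bin, one active-time point
grid = bin_flux(np.array([100.0, 100.0, 500.0]),   # AE index (nT)
                np.array([65.0, 65.0, 70.0]),      # invariant latitude (deg)
                np.array([3.0, 3.0, 3.0]),         # magnetic local time (h)
                np.array([1.0, 3.0, 10.0]),        # flux, arbitrary units
                [0, 300, 600], [60, 68, 75], [0, 12, 24])
```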

  13. An empirical fit to estimated neutron emission cross sections from ...

    Indian Academy of Sciences (India)

    calculated using the hybrid model code ALICE-91 for proton induced ... By replacing time consuming nuclear model calculations with any simple ex- ..... the empirical relation can be explained satisfactorily within the conceptual framework of ... M Blann, International centre for theoretical physics workshop on applied nuclear ...

  14. The investor behavior and futures market volatility: a theory and empirical study based on the OLG model and high-frequency data

    Institute of Scientific and Technical Information of China (English)

    Yun Wang; Renhai Hua; Zongcheng Zhang

    2011-01-01

Purpose - The purpose of this paper is to examine whether futures volatility affects investor behavior and what trading strategies different investors adopt under different information conditions. Design/methodology/approach - This study introduces a two-period overlapping generations (OLG) model into the futures market and builds an investor behavior model based on the futures contract price, which can be extended to both complete and incomplete information. It provides the equilibrium solution and uses cuprum tick data from the SHFE to conduct the empirical analysis. Findings - First, the two-period OLG model based on the futures market is consistent with the practical situation; second, well-informed investors such as institutions generally adopt reversal trading patterns; last, poorly informed investors such as individual investors generally adopt momentum trading patterns. Research limitations/implications - Investor trading behavior has always been an important issue in behavioral finance and market supervision, but the related research is scarce. Practical implications - The conclusions show that investor behavior in the Chinese futures market differs from that in the Chinese stock market. Originality/value - This study empirically analyzes and verifies the different types of trading strategies investors adopt: institutional investors generally follow reversal trading patterns, while individual investors generally follow momentum trading patterns.

  15. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    Science.gov (United States)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

This paper deals with the theme of contagion in financial markets. To this end, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two countries. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ a probabilistic invariance result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is long, ranging from 1970 to 2014.
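A mixed Poisson process, the building block named above, is a Poisson process whose rate is itself a random variable; the sketch below uses a Gamma mixing distribution with illustrative parameters (not those of the paper) to show the resulting overdispersion of event counts.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixed_poisson_counts(n, t, shape, scale):
    """n realizations of N(t), where N | Lambda ~ Poisson(Lambda * t)
    and Lambda ~ Gamma(shape, scale). The random rate inflates the
    variance above the mean, unlike a plain Poisson process."""
    lam = rng.gamma(shape, scale, size=n)
    return rng.poisson(lam * t)

counts = mixed_poisson_counts(200_000, t=1.0, shape=2.0, scale=3.0)
mean, var = counts.mean(), counts.var()
```

With these parameters the mean count is about shape × scale = 6 while the variance is about mean + shape × scale² = 24, the overdispersion signature that distinguishes mixed from plain Poisson counts.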

  16. Science and the British Empire.

    Science.gov (United States)

    Harrison, Mark

    2005-03-01

    The last few decades have witnessed a flowering of interest in the history of science in the British Empire. This essay aims to provide an overview of some of the most important work in this area, identifying interpretative shifts and emerging themes. In so doing, it raises some questions about the analytical framework in which colonial science has traditionally been viewed, highlighting interactions with indigenous scientific traditions and the use of network-based models to understand scientific relations within and beyond colonial contexts.

  17. Connecting theoretical and empirical studies of trait-mediated interactions

    Czech Academy of Sciences Publication Activity Database

    Bolker, B.; Holyoak, M.; Křivan, Vlastimil; Rowe, L.; Schmitz, O.

    2003-01-01

    Roč. 84, č. 5 (2003), s. 1101-1114 ISSN 0012-9658 Institutional research plan: CEZ:AV0Z5007907 Keywords : community models * competition * empirical study Subject RIV: EH - Ecology, Behaviour Impact factor: 3.701, year: 2003

  18. Empirical ethics, context-sensitivity, and contextualism.

    Science.gov (United States)

    Musschenga, Albert W

    2005-10-01

    In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.

  19. An empirical investigation of compliance and enforcement problems

    DEFF Research Database (Denmark)

    Kronbak, Lone Grønbæk; Jensen, Frank

    2011-01-01

contributes to the literature by investigating compliance and enforcement in the empirical case of a mixed trawl fishery targeting Norway lobster in the Kattegat and Skagerrak, north of Denmark, with the help of a simulation model. The paper presents results from two simulation models of the case study: one....... Another conclusion from the case study is that only small welfare effects are obtained by increasing enforcement efforts to reduce non-compliance....

  20. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmark models. Empirical results show that the proposed model produces better forecasts than the benchmark models.
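The decompose-forecast-aggregate structure of such a hybrid can be sketched as below. This is only a structural illustration: a moving-average trend split stands in for EMD, and linear extrapolation stands in for the GMDH component models.

```python
import numpy as np

def decompose(y, window=12):
    """Split a series into a smooth component and a residual.
    (Stand-in for EMD's intrinsic mode functions.)"""
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")
    return trend, y - trend

def extrapolate(component, horizon):
    """Forecast one component by fitting and extending a straight line.
    (Stand-in for a per-component GMDH model.)"""
    t = np.arange(component.size)
    slope, intercept = np.polyfit(t, component, 1)
    t_future = np.arange(component.size, component.size + horizon)
    return slope * t_future + intercept

y = 100.0 + 2.0 * np.arange(60, dtype=float)    # synthetic arrivals series
trend, resid = decompose(y)
# aggregate the per-component forecasts into the ensemble forecast
forecast = sum(extrapolate(c, horizon=3) for c in (trend, resid))
```

Because least-squares fitting is linear in the data, the per-component forecasts sum back to the forecast of the original series for this perfectly linear example; on real data the point of the decomposition is that each component is easier to model than the raw series.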

  1. Mass Balance Modelling of Saskatchewan Glacier, Canada Using Empirically Downscaled Reanalysis Data

    Science.gov (United States)

    Larouche, O.; Kinnard, C.; Demuth, M. N.

    2017-12-01

Observations show that glaciers around the world are retreating. As sites with long-term mass balance observations are scarce, models are needed to reconstruct glacier mass balance and assess its sensitivity to climate. In regions with discontinuous and/or sparse meteorological data, high-resolution climate reanalysis data provide a convenient alternative to in situ weather observations, but can also suffer from strong biases due to the mismatch in spatial and temporal scales. In this study we used data from the North American Regional Reanalysis (NARR) project, with a 30 x 30 km spatial resolution and 3-hour temporal resolution, to produce the meteorological forcings needed to drive a physically-based, distributed glacier mass balance model (DEBAM, Hock and Holmgren 2005) for the historical period 1979-2016. A two-year record from an automatic weather station (AWS) operated on Saskatchewan Glacier (2014-2016) was used to downscale air temperature, relative humidity, wind speed and incoming solar radiation from the nearest NARR grid point to the glacier AWS site. A homogenized historical precipitation record was produced using data from two nearby, low-elevation weather stations and used to downscale the NARR precipitation data. Three bias correction methods were applied (scaling, delta and empirical quantile mapping - EQM) and evaluated using split-sample cross-validation. The EQM method gave better results for precipitation and for air temperature. Only a slight improvement in relative humidity was obtained using the scaling method, while none of the methods improved the wind speed. The latter correlates poorly with AWS observations, probably because the local glacier wind is decoupled from the larger-scale NARR wind field. The downscaled data were used to drive the DEBAM model in order to reconstruct the mass balance of Saskatchewan Glacier over the past 30 years. The model was validated using recent snow thickness measurements and previously published geodetic mass
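Empirical quantile mapping, one of the bias-correction methods evaluated above, maps each model value through its quantile in the model climatology onto the same quantile of the observations. A minimal sketch, with Gaussian "station" and "reanalysis" samples invented for illustration:

```python
import numpy as np

def eqm_correct(model_train, obs_train, model_new):
    """Empirical quantile mapping: build a piecewise-linear transfer
    function from matched quantiles of the training distributions,
    then push new model values through it."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_train, quantiles)
    obs_q = np.quantile(obs_train, quantiles)
    return np.interp(model_new, model_q, obs_q)

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 2.0, 5000)      # "station" temperatures
model = rng.normal(3.0, 1.0, 5000)    # warm-biased, under-dispersed "reanalysis"
corrected = eqm_correct(model, obs, model)
```

After correction both the mean bias and the under-dispersion are removed, which is why EQM can outperform the simpler scaling and delta methods when the error is distribution-dependent.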

  2. Longitudinal hopping in intervehicle communication: Theory and simulations on modeled and empirical trajectory data

    Science.gov (United States)

    Thiemann, Christian; Treiber, Martin; Kesting, Arne

    2008-09-01

Intervehicle communication enables vehicles to exchange messages within a limited broadcast range and thus self-organize into dynamic, geographically embedded wireless ad hoc networks. We study the longitudinal hopping mode, in which messages are transported using equipped vehicles driving in the same direction as relays. Given a finite communication range, we investigate the conditions under which messages can percolate through the network, i.e., a linked chain of relay vehicles exists between the sender and receiver. We simulate message propagation in different traffic scenarios and for different fractions of equipped vehicles. Simulations are done with both modeled and empirical traffic data. These results are used to test the limits of applicability of an analytical model assuming a Poissonian distance distribution between the relays. We found good agreement for homogeneous traffic scenarios and sufficiently low percentages of equipped vehicles. For higher percentages, the observed connectivity was higher than that of the model, while in stop-and-go traffic situations it was lower. We explain these results in terms of correlations of the distances between the relay vehicles. Finally, we introduce variable transmission ranges and find that this additional stochastic component generally increases connectivity compared to deterministic transmission with the same mean.
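The Poissonian-distance assumption behind the analytical model can be illustrated with a short simulation; the vehicle density, broadcast range and corridor length below are hypothetical. Equipped vehicles placed as a Poisson process have exponential gaps, and a message percolates over the corridor only if every relay gap is within the broadcast range.

```python
import numpy as np

rng = np.random.default_rng(7)

def connectivity(rho, R, L, trials=5000):
    """Monte-Carlo estimate of the probability that a chain of relays
    connects positions 0 and L, for equipped-vehicle density rho (1/m)
    and broadcast range R (m)."""
    hits = 0
    for _ in range(trials):
        # Poisson placement: cumulative sums of exponential gaps
        gaps_raw = rng.exponential(1.0 / rho, size=int(3 * rho * L))
        positions = np.cumsum(gaps_raw)
        positions = positions[positions < L]
        # gaps between sender (0), relays, and receiver (L)
        gaps = np.diff(np.concatenate([[0.0], positions, [L]]))
        hits += gaps.max() <= R
    return hits / trials
```

Raising the density of equipped vehicles sharply increases the percolation probability, which is the qualitative behavior the abstract's model-data comparison is probing.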

  3. Empirical study of long-range connections in a road network offers new ingredient for navigation optimization models

    Science.gov (United States)

    Wang, Pu; Liu, Like; Li, Xiamiao; Li, Guanliang; González, Marta C.

    2014-01-01

The navigation problem in lattices with long-range connections has been widely studied to understand the design principles of optimal transport networks; however, the travel cost of long-range connections was not considered in previous models. We define a long-range connection in a road network as the shortest path between a pair of nodes through highways, and empirically analyze the travel cost properties of long-range connections. Based on the maximum speed allowed in each road segment, we observe that the time needed to travel through a long-range connection has a characteristic time Th ˜ 29 min, while the time required when using the alternative arterial road path has two different characteristic times Ta ˜ 13 and 41 min and follows a power law for times larger than 50 min. Using daily commuting origin-destination matrix data, we additionally find that the use of long-range connections helps people save about half of the travel time in their daily commute. Based on the empirical results, we assign a more realistic travel cost to long-range connections in two-dimensional square lattices, observing dramatically different minimum average shortest paths but similar optimal navigation conditions.

  4. Does size matter? : An empirical study modifying Fama & French's three factor model to detect size-effect based on turnover in the Swedish markets

    OpenAIRE

    Boros, Daniel; Eriksson, Claes

    2014-01-01

This thesis investigates whether the estimation of the cost of equity (or the expected return) in the Swedish market should incorporate an adjustment for a company's size. This is what is commonly known as the size effect, first presented by Banz (1980), which has since been incorporated into models for estimating the cost of equity, such as Fama & French's three-factor model (1992). The Fama & French model was developed on the basis of empirical research. Since the model was developed, the research on the...

  5. Coupling hydrodynamic modeling and empirical measures of bed mobility to assess the risk of redd scour on a large regulated river

    Science.gov (United States)

    Christine L. May; Bonnie S. Pryor; Thomas E. Lisle; Margaret M. Lang

    2009-01-01

In order to assess the risk of scour and fill of spawning redds during floods, an understanding of the relations among river discharge, bed mobility, and scour and fill depths in areas of the streambed heavily utilized by spawning salmon is needed. Our approach coupled numerical flow modeling and empirical data from the Trinity River, California, to quantify spatially...

  6. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

As proton accelerators get larger and include more magnets, the conventional tracking programs which simulate them run more slowly. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases
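The idea of fitting an empirical one-turn map from tracking data can be sketched on a toy lattice; the "tracking program" here is an invented rotation-plus-sextupole-kick in one transverse dimension, not a real code, and the quadratic feature basis is an illustrative choice.

```python
import numpy as np

def one_turn(x, xp, mu=0.3, k=0.8):
    """Toy stand-in for a tracking program: a sextupole-like kick
    followed by a phase-space rotation with tune mu."""
    xp = xp + k * x**2
    c, s = np.cos(2 * np.pi * mu), np.sin(2 * np.pi * mu)
    return c * x + s * xp, -s * x + c * xp

# "tracking data": initial coordinates and their images after one turn
rng = np.random.default_rng(3)
x0 = rng.uniform(-0.1, 0.1, 400)
xp0 = rng.uniform(-0.1, 0.1, 400)
x1, xp1 = one_turn(x0, xp0)

# fit the map for x by least squares in a quadratic feature basis:
# [1, x, x', x^2, x*x', x'^2]
X = np.column_stack([np.ones_like(x0), x0, xp0, x0**2, x0 * xp0, xp0**2])
coef_x, *_ = np.linalg.lstsq(X, x1, rcond=None)
fit_error = np.abs(X @ coef_x - x1).max()
```

Since the toy map is exactly quadratic, the fitted coefficients reproduce it to machine precision; applying the fitted map is then far cheaper than element-by-element tracking, which is the point of the empirical-map method.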

  7. A new global empirical model of the electron temperature with the inclusion of the solar activity variations for IRI

    Czech Academy of Sciences Publication Activity Database

    Truhlík, Vladimír; Bilitza, D.; Třísková, Ludmila

    2012-01-01

    Roč. 64, č. 6 (2012), s. 531-543 ISSN 1343-8832 R&D Projects: GA AV ČR IAA300420603; GA ČR GAP209/10/2086 Grant - others: NASA (US) NNH06CD17C. Institutional support: RVO:68378289 Keywords : Electron temperature * ionosphere * plasmasphere * empirical models * International Reference Ionosphere Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.921, year: 2012 http://www.terrapub.co.jp/journals/EPS/abstract/6406/64060531.html

  8. Empire as a Geopolitical Figure

    DEFF Research Database (Denmark)

    Parker, Noel

    2010-01-01

    This article analyses the ingredients of empire as a pattern of order with geopolitical effects. Noting the imperial form's proclivity for expansion from a critical reading of historical sociology, the article argues that the principal manifestation of earlier geopolitics lay not in the nation... but in empire. That in turn has been driven by a view of the world as disorderly and open to the ordering will of empires (emanating, at the time of geopolitics' inception, from Europe). One implication is that empires are likely to figure in the geopolitical ordering of the globe at all times, in particular... after all that has happened in the late twentieth century to undermine nationalism and the national state. Empire is indeed a probable, even for some an attractive, form of regime for extending order over the disorder produced by globalisation. Geopolitics articulated in imperial expansion is likely...

  9. Using a tag team of undergraduate researchers to construct an empirical model of auroral Poynting flux, from satellite data

    Science.gov (United States)

    Cosgrove, R. B.; Bahcivan, H.; Klein, A.; Ortega, J.; Alhassan, M.; Xu, Y.; Chen, S.; Van Welie, M.; Rehberger, J.; Musielak, S.; Cahill, N.

    2012-12-01

    Empirical models of the incident Poynting flux and particle kinetic energy flux, associated with auroral processes, have been constructed using data from the FAST satellite. The models were constructed over a three-year period by a tag-team of three groups of undergraduate researchers from Worcester Polytechnic Institute (WPI), working under the supervision of researchers at SRI International, a nonprofit research institute. Each group spent one academic quarter in residence at SRI, in fulfillment of WPI's Major Qualifying Project (MQP), required for graduation from the Department of Electrical and Computer Engineering. The MQP requires a written group report, which was used to transition from one group to the next. The students' research involved accessing and processing a data set of 20,000 satellite orbits, replete with flaws associated with instrument failures, which had to be removed. The data had to be transformed from the satellite reference frame into solar coordinates, projected to a reference altitude, sorted according to geophysical conditions, etc. The group visits were chaperoned by WPI, and were jointly funded. Researchers at SRI were supported by a grant from the National Science Foundation, which was tailored to accommodate the undergraduate tag-team approach. The NSF grant extended one year beyond the student visits, with increased funding in the final year, permitting the researchers at SRI to exercise quality control and to produce publications. It is expected that the empirical models will be used as inputs to large-scale general circulation models (GCMs), to specify the atmospheric heating rate at high altitudes. (Figure panels: Poynting flux with northward IMF; Poynting flux with southward IMF.)

  10. Empirical seasonal forecasts of the NAO

    Science.gov (United States)

    Sanchezgomez, E.; Ortizbevia, M.

    2003-04-01

    We present here seasonal forecasts of the North Atlantic Oscillation (NAO) issued from ocean predictors with an empirical procedure. The Singular Value Decomposition (SVD) of the cross-correlation matrix between predictor and predictand fields, at the lag used for the forecast lead, is at the core of the empirical model. The main predictor field is sea surface temperature anomalies, although sea ice cover anomalies are also used. Forecasts are issued in probabilistic form. The model is an improvement over a previous version (1), where sea level pressure anomalies were first forecast and the NAO index was built from this forecast field. Both the correlation skill between forecast and observed fields and the number of forecasts that hit the correct NAO sign are used to assess the forecast performance, which is usually above that of forecasts issued assuming persistence. For certain seasons and/or leads, values of the skill are above the 0.7 usefulness threshold. References: (1) SanchezGomez, E. and Ortiz Bevia, M., 2002, Estimacion de la evolucion pluviometrica de la Espana Seca atendiendo a diversos pronosticos empiricos de la NAO, in 'El Agua y el Clima', Publicaciones de la AEC, Serie A, N 3, pp 63-73, Palma de Mallorca, Spain.

  11. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, A. Erdem [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey)]. E-mail: aerdemy@atauni.edu.tr; Boncukcuoglu, Recep [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey); Kocakerim, M. Muhtar [Atatuerk University, Faculty of Engineering, Department of Chemical Engineering, 25240 Erzurum (Turkey)

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which solution pH was 8.0, current density 6.0 mA/cm^2, initial boron concentration 100 mg/L and solution temperature 293 K. Current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing solution temperature decreased energy consumption, since a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thereby decreased energy consumption. As a result, energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was fitted statistically; experimentally obtained values agreed with values predicted from the empirical model: [ECB] = 7.6x10^6 x [OH]^0.11 x [CD]^0.62 x [IBC]^-0.57 x [DSE]^-0.04 x [T]^-2.98 x [t]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.
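The fitted power law above can be evaluated directly. The sketch below is an illustration only: the constant and exponents are those reported in the abstract, while the interpretation of the symbols ([OH] as hydroxide concentration derived from pH, [CD] current density, [IBC] initial boron concentration, [DSE] dose of supporting electrolyte, [T] temperature in kelvin, [t] time) and the example values are assumptions.

```python
def energy_consumption(oh, cd, ibc, dse, t_kelvin, time):
    """Evaluate the reported empirical fit
       [ECB] = 7.6e6 * [OH]^0.11 * [CD]^0.62 * [IBC]^-0.57
             * [DSE]^-0.04 * [T]^-2.98 * [t]
    Symbol meanings and units are assumptions based on the abstract."""
    return (7.6e6 * oh**0.11 * cd**0.62 * ibc**-0.57
            * dse**-0.04 * t_kelvin**-2.98 * time)

# The fit reproduces the trends described in the abstract: raising the
# current density raises energy consumption; raising the temperature lowers it.
base = energy_consumption(1e-6, 3.0, 100.0, 0.1, 293.0, 30.0)
assert energy_consumption(1e-6, 6.0, 100.0, 0.1, 293.0, 30.0) > base
assert energy_consumption(1e-6, 3.0, 100.0, 0.1, 313.0, 30.0) < base
```

The assertions mirror the qualitative statements in the abstract rather than any particular measured value.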

  12. Developing an Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of such short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance that leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  13. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    International Nuclear Information System (INIS)

    Yilmaz, A. Erdem; Boncukcuoglu, Recep; Kocakerim, M. Muhtar

    2007-01-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which solution pH was 8.0, current density 6.0 mA/cm^2, initial boron concentration 100 mg/L and solution temperature 293 K. Current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing solution temperature decreased energy consumption, since a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thereby decreased energy consumption. As a result, energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was fitted statistically; experimentally obtained values agreed with values predicted from the empirical model: [ECB] = 7.6x10^6 x [OH]^0.11 x [CD]^0.62 x [IBC]^-0.57 x [DSE]^-0.04 x [T]^-2.98 x [t]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  14. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    Full Text Available The wind power industry plays an important role in promoting low-carbon economic development and energy transformation around the world. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing stable wind power integration and guaranteeing the safe operation of the power system. In this paper, combining Empirical Mode Decomposition (EMD), the Radial Basis Function neural network (RBF) and the Least Squares Support Vector Machine (LS-SVM), an improved wind speed prediction model based on Empirical Mode Decomposition (EMD-RBF-LS-SVM) is proposed. The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken random fluctuations to a certain extent and significantly improve the short-term accuracy of wind speed prediction. In a word, this research will significantly reduce the impact of wind power instability on the power grid, ensure the power grid supply and demand balance, reduce operating costs in grid-connected systems, and enhance the market competitiveness of wind power.

  15. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
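A rank-frequency check of the kind used in such studies can be sketched in a few lines. The data below are synthetic (sizes exactly proportional to 1/rank), standing in for the Linux package data of the paper; a fitted exponent near 1 indicates Zipf's law.

```python
import math

def zipf_exponent(sizes):
    """Fit log(size) ≈ c - alpha * log(rank) by least squares;
    alpha near 1 indicates Zipf's law."""
    ranked = sorted(sizes, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(ranked) + 1)]
    ys = [math.log(s) for s in ranked]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic data with size proportional to 1/rank gives alpha ≈ 1 exactly
data = [1000.0 / r for r in range(1, 200)]
print(round(zipf_exponent(data), 2))  # -> 1.0
```

On real data one would also inspect the fit over several orders of magnitude ("decades") of rank, as the abstract emphasizes.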

  16. Theological reflections on empire

    Directory of Open Access Journals (Sweden)

    Allan A. Boesak

    2009-11-01

    Full Text Available Since the meeting of the World Alliance of Reformed Churches in Accra, Ghana (2004), and the adoption of the Accra Declaration, a debate has been raging in the churches about globalisation, socio-economic justice, ecological responsibility, political and cultural domination and globalised war. Central to this debate is the concept of empire and the way the United States is increasingly becoming its embodiment. Is the United States a global empire? This article argues that the United States has indeed become the expression of a modern empire and that this reality has considerable consequences, not just for global economics and politics but for theological reflection as well.

  17. Teste de identificação de sentenças sintéticas com mensagem competitiva ipsilateral pediátrico: revisão narrativa sobre a sua aplicabilidade

    Directory of Open Access Journals (Sweden)

    Fernanda Freitas Vellozo

    2015-10-01

    Full Text Available Abstract: Auditory processing is the capacity of the nervous system to use the information that arrives through hearing. Auditory skills are necessary for this information to be processed. Behavioral tests are used to evaluate auditory processing disorders, such as the Pediatric Synthetic Sentence Identification test with Ipsilateral Competing Message (PSI), which assesses the figure-ground ability for verbal sounds. This is a narrative review aiming to identify the applicability of the PSI test over the last ten years. A search was conducted in the Lilacs, PubMed, Medline, IBCS and SciELO databases, using the descriptors auditory perception, hearing tests, auditory perception disorders, hearing and comprehension, combined with the word PSI. Of the 52 articles found, only eight were selected, read in full and analyzed. Great variability in its application could be observed, demonstrating that the PSI is an effective tool for evaluating auditory processing in different populations and age groups.

  18. Testing the performance of empirical remote sensing algorithms in the Baltic Sea waters with modelled and in situ reflectance data

    Directory of Open Access Journals (Sweden)

    Martin Ligi

    2017-01-01

    Full Text Available Remote sensing studies published up to now show that the performance of empirical (band-ratio type) algorithms in different parts of the Baltic Sea is highly variable. The best performing algorithms differ between regions of the Baltic Sea. Moreover, there is indication that the algorithms have to be seasonal, as the optical properties of the phytoplankton assemblages dominating in spring and summer are different. We modelled 15,600 reflectance spectra using the HydroLight radiative transfer model to test 58 previously published empirical algorithms. 7200 of the spectra were modelled using specific inherent optical properties (SIOPs) of the open parts of the Baltic Sea in summer and 8400 with SIOPs of the spring season. The concentration ranges of chlorophyll-a, coloured dissolved organic matter (CDOM) and suspended matter used in the model simulations were based on measured values available in the literature. For each optically active constituent we added one concentration below the measured minimum and one concentration above the measured maximum in order to test the performance of the algorithms over a wider range. 77 in situ reflectance spectra from rocky (Sweden) and sandy (Estonia, Latvia) coastal areas were used to evaluate the performance of the algorithms also in coastal waters. Seasonal differences in algorithm performance were confirmed, but we also found algorithms that can be used in both spring and summer conditions. The algorithms that use bands available on OLCI, launched in February 2016, are highlighted, as this sensor will be available for Baltic Sea monitoring for coming decades.
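Band-ratio algorithms of the type tested in such studies share a simple functional form: a power law in the ratio of two reflectance bands. The sketch below uses hypothetical coefficients for illustration, not a published Baltic Sea tuning.

```python
def band_ratio_chl(r_blue, r_green, a=0.3, b=-2.5):
    """Empirical band-ratio estimate: chl-a = a * (R_blue / R_green)^b.
    a and b are hypothetical placeholder coefficients; b < 0 because
    chlorophyll absorbs strongly in the blue."""
    return a * (r_blue / r_green) ** b

# A lower blue/green reflectance ratio (more blue absorption)
# yields a higher chlorophyll-a estimate.
clear_water = band_ratio_chl(0.010, 0.005)
green_water = band_ratio_chl(0.004, 0.005)
assert green_water > clear_water
```

In practice the coefficients are fitted per region and season against in situ chlorophyll-a measurements, which is exactly the calibration the study evaluates.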

  19. The Influence of Quality on E-Commerce Success: An Empirical Application of the Delone and Mclean IS Success Model

    OpenAIRE

    Ultan Sharkey; Murray Scott; Thomas Acton

    2010-01-01

    This research addresses difficulties in measuring e-commerce success by implementing the DeLone and McLean (D&M) model of IS success (1992, 2003) in an e-commerce environment. This research considers the influence of quality on e-commerce success by measuring the information quality and system quality attributes of an e-commerce system and the intention to use, user satisfaction and intention to transact from a sample of respondents. This research provides an empirical e-commerce application ...

  20. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  1. Tracking the sleep onset process: an empirical model of behavioral and physiological dynamics.

    Directory of Open Access Journals (Sweden)

    Michael J Prerau

    2014-10-01

    Full Text Available The sleep onset process (SOP is a dynamic process correlated with a multitude of behavioral and physiological markers. A principled analysis of the SOP can serve as a foundation for answering questions of fundamental importance in basic neuroscience and sleep medicine. Unfortunately, current methods for analyzing the SOP fail to account for the overwhelming evidence that the wake/sleep transition is governed by continuous, dynamic physiological processes. Instead, current practices coarsely discretize sleep both in terms of state, where it is viewed as a binary (wake or sleep process, and in time, where it is viewed as a single time point derived from subjectively scored stages in 30-second epochs, effectively eliminating SOP dynamics from the analysis. These methods also fail to integrate information from both behavioral and physiological data. It is thus imperative to resolve the mismatch between the physiological evidence and analysis methodologies. In this paper, we develop a statistically and physiologically principled dynamic framework and empirical SOP model, combining simultaneously-recorded physiological measurements with behavioral data from a novel breathing task requiring no arousing external sensory stimuli. We fit the model using data from healthy subjects, and estimate the instantaneous probability that a subject is awake during the SOP. The model successfully tracked physiological and behavioral dynamics for individual nights, and significantly outperformed the instantaneous transition models implicit in clinical definitions of sleep onset. Our framework also provides a principled means for cross-subject data alignment as a function of wake probability, allowing us to characterize and compare SOP dynamics across different populations. This analysis enabled us to quantitatively compare the EEG of subjects showing reduced alpha power with the remaining subjects at identical response probabilities. Thus, by incorporating both

  2. A global empirical system for probabilistic seasonal climate prediction

    Science.gov (United States)

    Eden, J. M.; van Oldenborgh, G. J.; Hawkins, E.; Suckling, E. B.

    2015-12-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies, and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches, built on statistical models to represent physical processes, offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
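The core of such an empirical system, multiple linear regression of a seasonal predictand on a trend predictor plus a mode of variability, can be sketched with synthetic data. The predictor names below are stand-ins chosen for illustration (a linear trend in place of the CO2-equivalent concentration, white noise in place of an ENSO index), not the operational setup.

```python
import random
import math

random.seed(0)
years = list(range(1961, 2014))
co2 = [(y - 1961) * 0.02 for y in years]        # stand-in for CO2-eq forcing
enso = [random.gauss(0, 1) for _ in years]      # stand-in for an ENSO index
temp = [0.5 * c + 0.3 * e + random.gauss(0, 0.1) for c, e in zip(co2, enso)]

def ols(X, y):
    """Least squares via the normal equations, solved by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(row[p] * yi for row, yi in zip(X, y)) for p in range(k)]
    for p in range(k):                          # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            A[q] = [aq - f * ap for aq, ap in zip(A[q], A[p])]
            b[q] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):                # back substitution
        beta[p] = (b[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    return beta

X = [[1.0, c, e] for c, e in zip(co2, enso)]
beta = ols(X, temp)
pred = [sum(bb * xx for bb, xx in zip(beta, row)) for row in X]

# Deterministic skill: correlation between hindcast and "observations"
mp, mt = sum(pred) / len(pred), sum(temp) / len(temp)
corr = (sum((p - mp) * (t - mt) for p, t in zip(pred, temp))
        / math.sqrt(sum((p - mp) ** 2 for p in pred)
                    * sum((t - mt) ** 2 for t in temp)))
```

A real hindcast evaluation would use cross-validation and probabilistic scores as well; the correlation here corresponds only to the deterministic metric named in the abstract.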

  3. Empirical studies on the pricing of bonds and interest rate derivatives

    NARCIS (Netherlands)

    Driessen, J.J.A.G.

    2001-01-01

    Nowadays, both large financial and non-financial institutions use models for the term structure of interest rates for risk management and pricing purposes. This thesis focuses on these two important applications of term structure models. In the first part, the empirical performance of several term

  4. The role of confidence in the evolution of the Spanish economy: empirical evidence from an ARDL model

    Directory of Open Access Journals (Sweden)

    Pablo Castellanos García

    2014-12-01

    Full Text Available The aim of this paper is to verify the existence and determine the nature of long-term relationships between economic agents' confidence, measured by the Economic Sentiment Index (ESI), and some of the "fundamentals" of the Spanish economy. In particular, by modeling this type of relation, we try to determine whether confidence is a dependent (explained) or independent (explanatory) variable. Along with confidence, our model incorporates variables such as the risk premium on sovereign debt, financial market volatility, unemployment, inflation, public and private debt and the net lending/net borrowing of the economy. For the purpose of obtaining empirical evidence on the exogenous or endogenous character of the above-mentioned variables, an ARDL (Autoregressive Distributed Lag) model is formulated. The model is estimated with quarterly data for the Spanish economy over the period 1990-2012. Our findings suggest that: (a) unemployment is the dependent variable; (b) there is an inverse relationship between the ESI in Spain and unemployment; and (c) the Granger causality runs from confidence to unemployment.
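A minimal ARDL-style check of the kind the paper performs can be sketched with synthetic data: regress unemployment on its own lag and on lagged confidence, and look at the sign of the confidence coefficient. The data-generating process and all numbers below are assumptions for illustration; the real study uses Spanish quarterly series and the ESI.

```python
import random

random.seed(1)
n = 120                                   # ~30 years of quarterly data
conf = [random.gauss(0, 1) for _ in range(n)]
unemp = [0.0] * n
for t in range(1, n):
    # confidence "Granger-causes" unemployment in this toy process
    unemp[t] = 0.8 * unemp[t - 1] - 0.5 * conf[t - 1] + random.gauss(0, 0.1)

def simple_ols(x, y):
    """Return (intercept, slope) of the regression of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

def residuals(x, y):
    a, b = simple_ols(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Frisch-Waugh step: partial the lagged dependent variable out of both
# lagged confidence and current unemployment, then regress the residuals.
y, ylag, clag = unemp[1:], unemp[:-1], conf[:-1]
_, gamma = simple_ols(residuals(ylag, clag), residuals(ylag, y))
# gamma estimates the ARDL coefficient on lagged confidence (true value -0.5);
# its negative sign mirrors the inverse ESI-unemployment relation in the paper.
```

The Frisch-Waugh step recovers the same coefficient a full multiple regression would give, which keeps the sketch short.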

  5. Empirical component model to predict the overall performance of heating coils: Calibrations and tests based on manufacturer catalogue data

    International Nuclear Information System (INIS)

    Ruivo, Celestino R.; Angrisani, Giovanni

    2015-01-01

    Highlights: • An empirical model for predicting the performance of heating coils is presented. • Low and high heating capacity cases are used for calibration. • Versions based on several effectiveness correlations are tested. • Catalogue data are considered in approach testing. • The approach is a suitable component model to be used in dynamic simulation tools. - Abstract: A simplified methodology for predicting the overall behaviour of heating coils is presented in this paper. The coil performance is predicted by the ε-NTU method. Usually manufacturers do not provide information about the overall thermal resistance or the geometric details that are required either for the device selection or to apply known empirical correlations for the estimation of the involved thermal resistances. In the present work, heating capacity tables from the manufacturer catalogue are used to calibrate simplified approaches based on the classical theory of heat exchangers, namely the effectiveness method. Only two reference operating cases are required to calibrate each approach. The validity of the simplified approaches is investigated for a relatively high number of operating cases, listed in the technical catalogue of a manufacturer. Four types of coils of three sizes of air handling units are considered. A comparison is conducted between the heating coil capacities provided by the methodology and the values given by the manufacturer catalogue. The results show that several of the proposed approaches are suitable component models to be integrated in dynamic simulation tools of air conditioning systems such as TRNSYS or EnergyPlus
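The ε-NTU calculation at the core of such a component model can be sketched as follows. The cross-flow correlation with both fluids unmixed is a common textbook choice and an assumption here, not necessarily the effectiveness correlation calibrated in the paper; the example operating point is likewise hypothetical.

```python
import math

def effectiveness_crossflow_unmixed(ntu, c_ratio):
    """ε for cross-flow with both fluids unmixed (standard approximation)."""
    if c_ratio == 0.0:   # constant-temperature side (e.g. condensing steam)
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp((math.exp(-c_ratio * ntu ** 0.78) - 1.0)
                          * ntu ** 0.22 / c_ratio)

def coil_capacity(ntu, c_min, c_max, t_hot_in, t_cold_in):
    """Heating capacity in W from the ε-NTU method: Q = ε · C_min · ΔT_max."""
    eps = effectiveness_crossflow_unmixed(ntu, c_min / c_max)
    return eps * c_min * (t_hot_in - t_cold_in)

# Hypothetical example: air (C_min = 1000 W/K) heated from 5 °C
# by 70 °C water (C_max = 4000 W/K)
q = coil_capacity(ntu=2.0, c_min=1000.0, c_max=4000.0,
                  t_hot_in=70.0, t_cold_in=5.0)
```

Calibrating such a model from two catalogue operating points, as the paper does, amounts to solving for the NTU (i.e. the overall thermal resistance) that reproduces the listed capacities.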

  6. Innovation Forms and Firm Export Performance: Empirical Evidence from ECA Countries

    Directory of Open Access Journals (Sweden)

    Andrzej Cieślik

    2017-06-01

    Full Text Available Objective: The main objective of this paper is to verify empirically the relationship between various forms of innovation and export performance of firms from European and Central Asian (ECA countries. Research Design & Methods: In our empirical approach we refer to the self-selection hypothesis derived from the Melitz (2003 model which proposed the existence of a positive relationship between firm productivity and the probability of exporting. We argue that innovation activities should be regarded as a key element that can increase the level of firm productivity. We focus our analysis on four forms of innovation activities: product, process, marketing, organizational and managerial innovation. The empirical implementation of our analytical framework is based on the probit model, applied to the fifth edition of the BEEPS firm level dataset covering 2011-2014. Findings: Our empirical results indicate that the probability of exporting is positively related to both product and process innovations. The marketing and managerial innovations do not seem to affect positively export performance of firms from ECA countries. Implications & Recommendations: It is recommended to develop innovation supporting mechanisms that would target both product and process innovations rather than other forms of innovation in the ECA countries. Contribution & Value Added: The originality of this work lies in the use of the multi-country firm level dataset that allows distinguishing between various forms of innovations in the ECA countries.

  7. Exogenous empirical-evidence equilibria in perfect-monitoring repeated games yield correlated equilibria

    KAUST Repository

    Dudebout, Nicolas; Shamma, Jeff S.

    2014-01-01

    This paper proves that exogenous empirical-evidence equilibria (xEEEs) in perfect-monitoring repeated games induce correlated equilibria of the associated one-shot game. An empirical-evidence equilibrium (EEE) is a solution concept for stochastic games. At equilibrium, agents' strategies are optimal with respect to models of their opponents. These models satisfy a consistency condition with respect to the actual behavior of the opponents. As such, EEEs replace the full-rationality requirement of Nash equilibria by a consistency-based bounded-rationality one. In this paper, the framework of empirical evidence is summarized, with an emphasis on perfect-monitoring repeated games. A less constraining notion of consistency is introduced. The fact that an xEEE in a perfect-monitoring repeated game induces a correlated equilibrium on the underlying one-shot game is proven. This result and the new notion of consistency are illustrated on the hawk-dove game. Finally, a method to build specific correlated equilibria from xEEEs is derived.
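The induced-correlated-equilibrium claim can be illustrated by checking the deviation inequalities of a correlated equilibrium directly on a hawk-dove game, the paper's own example. The payoff numbers below are a standard hawk-dove parameterization chosen for illustration, not the paper's exact values.

```python
A = ["H", "D"]                 # hawk, dove
payoff = {                     # (row action, col action) -> (row, col) payoffs
    ("H", "H"): (-1, -1), ("H", "D"): (2, 0),
    ("D", "H"): (0, 2),   ("D", "D"): (1, 1),
}

def is_correlated_eq(dist, tol=1e-9):
    """dist maps joint actions to probabilities; check every deviation inequality."""
    for player in (0, 1):
        for rec in A:          # action recommended to this player
            for dev in A:      # candidate deviation
                gain = 0.0
                for joint, p in dist.items():
                    if joint[player] != rec:
                        continue
                    deviated = (dev, joint[1]) if player == 0 else (joint[0], dev)
                    gain += p * (payoff[deviated][player] - payoff[joint][player])
                if gain > tol:  # a profitable deviation exists
                    return False
    return True

# A "traffic light" device that never recommends (H, H) or (D, D):
assert is_correlated_eq({("H", "D"): 0.5, ("D", "H"): 0.5})
# Always recommending mutual aggression is not an equilibrium:
assert not is_correlated_eq({("H", "H"): 1.0})
```

The "traffic light" distribution is a correlated equilibrium that no mixed Nash profile can replicate, which is what makes correlated equilibria the natural target of the paper's result.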

  8. Exogenous empirical-evidence equilibria in perfect-monitoring repeated games yield correlated equilibria

    KAUST Repository

    Dudebout, Nicolas

    2014-12-15

    This paper proves that exogenous empirical-evidence equilibria (xEEEs) in perfect-monitoring repeated games induce correlated equilibria of the associated one-shot game. An empirical-evidence equilibrium (EEE) is a solution concept for stochastic games. At equilibrium, agents' strategies are optimal with respect to models of their opponents. These models satisfy a consistency condition with respect to the actual behavior of the opponents. As such, EEEs replace the full-rationality requirement of Nash equilibria by a consistency-based bounded-rationality one. In this paper, the framework of empirical evidence is summarized, with an emphasis on perfect-monitoring repeated games. A less constraining notion of consistency is introduced. The fact that an xEEE in a perfect-monitoring repeated game induces a correlated equilibrium on the underlying one-shot game is proven. This result and the new notion of consistency are illustrated on the hawk-dove game. Finally, a method to build specific correlated equilibria from xEEEs is derived.

  9. Empirical tight-binding modeling of ordered and disordered semiconductor structures

    International Nuclear Information System (INIS)

    Mourad, Daniel

    2010-01-01

    In this thesis, we investigate the electronic and optical properties of pure as well as of substitutionally alloyed II-VI and III-V bulk semiconductors and corresponding semiconductor quantum dots by means of an empirical tight-binding (TB) model. In the case of the alloyed systems of the type A_xB_(1-x), where A and B are the pure compound semiconductor materials, we study the influence of the disorder by means of several extensions of the TB model with different levels of sophistication. Our methods range from rather simple mean-field approaches (virtual crystal approximation, VCA) over a dynamical mean-field approach (coherent potential approximation, CPA) up to calculations where substitutional disorder is incorporated on a finite ensemble of microscopically distinct configurations. In the first part of this thesis, we cover the necessary fundamentals in order to properly introduce the TB model of our choice, the effective bond-orbital model (EBOM). In this model, one s- and three p-orbitals per spin direction are localized on the sites of the underlying Bravais lattice. The matrix elements between these orbitals are treated as free parameters in order to reproduce the properties of one conduction and three valence bands per spin direction and can then be used in supercell calculations in order to model mixed bulk materials or pure as well as mixed quantum dots. Part II of this thesis deals with unalloyed systems. Here, we use the EBOM in combination with configuration interaction calculations for the investigation of the electronic and optical properties of truncated pyramidal GaN quantum dots embedded in AlN with an underlying zincblende structure. Furthermore, we develop a parametrization of the EBOM for materials with a wurtzite structure, which allows for a fit of one conduction and three valence bands per spin direction throughout the whole Brillouin zone of the hexagonal system. In Part III, we focus on the influence of alloying on the electronic and
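As a much simpler illustration of the tight-binding machinery such models build on (a single s-orbital per site, rather than the four-orbital EBOM), the sketch below diagonalizes a one-band nearest-neighbor chain with periodic boundary conditions and checks the eigenvalues against the analytic band. All parameter values are illustrative assumptions.

```python
import numpy as np

# One-band tight-binding ring: onsite energy eps, nearest-neighbor hopping t.
eps, t, N = 0.0, -1.0, 8

H = np.zeros((N, N))
for i in range(N):
    H[i, i] = eps
    H[i, (i + 1) % N] = t          # hop to the right neighbor (periodic wrap)
    H[(i + 1) % N, i] = t          # Hermitian partner

levels = np.sort(np.linalg.eigvalsh(H))
# Analytic dispersion of the ring: E(k) = eps + 2*t*cos(2*pi*k/N), k = 0..N-1
analytic = np.sort(eps + 2 * t * np.cos(2 * np.pi * np.arange(N) / N))
print(np.allclose(levels, analytic))  # True
```

In the EBOM the same idea is applied with four orbitals per spin and site, and the hopping matrix elements are fitted to reproduce the bulk band structure instead of being chosen by hand.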

  10. Modeling gallic acid production rate by empirical and statistical analysis

    Directory of Open Access Journals (Sweden)

    Bratati Kar

    2000-01-01

    Full Text Available For predicting the rate of the enzymatic reaction, empirical correlations based on experimental results obtained under various operating conditions have been developed. The models represent both the activation and the deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). Tannase activity was found to be maximal at an incubation time of 5 min, reaction temperature of 40ºC, pH 4.0, initial enzyme concentration of 0.12 v/v, initial substrate concentration of 0.42 mg/ml, and ionic strength of 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 µmoles/ml/min.
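As an illustration of how such an empirical correlation is built (with synthetic numbers, not the paper's measurements), the sketch below fits a quadratic rate-versus-temperature model by least squares and reads off the predicted optimum temperature:

```python
import numpy as np

# Hypothetical sketch: fit an empirical correlation rate = c0 + c1*T + c2*T**2
# to synthetic activity data by least squares, mimicking how empirical rate
# models are derived from experimental runs at several operating conditions.
T = np.array([25.0, 30.0, 35.0, 40.0, 45.0, 50.0])      # temperature, deg C
rate = np.array([18.1, 24.5, 29.8, 33.5, 31.9, 26.7])   # synthetic rates

c2, c1, c0 = np.polyfit(T, rate, 2)                     # quadratic fit
T_opt = -c1 / (2 * c2)                                  # vertex = predicted optimum
print(round(T_opt, 1))                                  # near the observed 40 C peak
```

A real analysis would add replicates and an ANOVA to judge which terms of the correlation are statistically significant.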

  11. Empirical complexities in the genetic foundations of lethal mutagenesis.

    Science.gov (United States)

    Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J

    2013-10-01

    From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias (mutagenesis of virions) but find that it is hampered by other problems. Finally, results that reveal difficulties in the mere interpretation of mutations assayed from double-stranded genomes are derived. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than from a mismatch between the empirical setup and model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.
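The deterministic core of the genetic models in question can be stated in one line: at mutation-selection balance, mean fitness is reduced by a factor exp(-U), where U is the genomic deleterious mutation rate, so a population with per-capita fecundity b declines once b*exp(-U) < 1. A minimal sketch with illustrative parameter values:

```python
import math

# One standard formulation of the lethal-mutagenesis threshold:
# extinction is predicted (deterministically) when b * exp(-U) < 1,
# with b the fecundity and U the genomic deleterious mutation rate.
def goes_extinct(b, U):
    """True if the deterministic model predicts eventual extinction."""
    return b * math.exp(-U) < 1.0

print(goes_extinct(b=10.0, U=1.0))  # False: 10*e^-1 ~ 3.68 > 1
print(goes_extinct(b=10.0, U=3.0))  # True:  10*e^-3 ~ 0.50 < 1
```

The abstract's point is precisely that real systems violate the clean assumptions behind this threshold, for example through beneficial mutations and estimation bias in U.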

  12. Empirical tests of pre-main-sequence stellar evolution models with eclipsing binaries

    Science.gov (United States)

    Stassun, Keivan G.; Feiden, Gregory A.; Torres, Guillermo

    2014-06-01

    We examine the performance of standard pre-main-sequence (PMS) stellar evolution models against the accurately measured properties of a benchmark sample of 26 PMS stars in 13 eclipsing binary (EB) systems having masses 0.04-4.0 M⊙ and nominal ages ≈1-20 Myr. We provide a definitive compilation of all fundamental properties for the EBs, with a careful and consistent reassessment of observational uncertainties. We also provide a definitive compilation of the various PMS model sets, including physical ingredients and limits of applicability. No set of model isochrones is able to successfully reproduce all of the measured properties of all of the EBs. In the H-R diagram, the masses inferred for the individual stars by the models are accurate to better than 10% at ≳1 M⊙, but below 1 M⊙ they are discrepant by 50-100%. Adjusting the observed radii and temperatures using empirical relations for the effects of magnetic activity helps to resolve the discrepancies in a few cases, but fails as a general solution. We find evidence that the failure of the models to match the data is linked to the triples in the EB sample; at least half of the EBs possess tertiary companions. Excluding the triples, the models reproduce the stellar masses to better than ∼10% in the H-R diagram, down to 0.5 M⊙, below which the current sample is fully contaminated by tertiaries. We consider several mechanisms by which a tertiary might cause changes in the EB properties and thus corrupt the agreement with stellar model predictions. We show that the energies of the tertiary orbits are comparable to that needed to potentially explain the scatter in the EB properties through injection of heat, perhaps involving tidal interaction. It seems from the evidence at hand that this mechanism, however it operates in detail, has more influence on the surface properties of the stars than on their internal structure, as the lithium abundances are broadly in good agreement with model predictions. The

  13. Testing isotherm models and recovering empirical relationships for adsorption in microporous carbons using virtual carbon models and grand canonical Monte Carlo simulations

    International Nuclear Information System (INIS)

    Terzyk, Artur P; Furmaniak, Sylwester; Gauden, Piotr A; Harris, Peter J F; Wloch, Jerzy

    2008-01-01

    Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods for describing adsorption data on microporous carbons widely used in adsorption science. Two carbon structures are studied, one with a small distribution of micropores in the range up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that using simulated data some empirical relationships obtained from experiment can be successfully recovered. Next we test the applicability of Dubinin's related models including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the models studied in the field of carbon porosity characterization
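The Dubinin-Radushkevich analysis described above can be sketched numerically. The fragment below (illustrative parameters, not the paper's simulation data) generates a synthetic isotherm obeying the DR equation and recovers the limiting micropore volume W0 and characteristic energy from the linearized plot of ln(W) against A^2; for simplicity the affinity coefficient is folded into a single energy E.

```python
import numpy as np

# Dubinin-Radushkevich (DR) sketch: ln(W) is linear in A^2, where
# A = R*T*ln(p0/p) is the adsorption potential. Generate synthetic data
# obeying DR, then recover W0 and E by a linear fit (a "DR plot").
R, T = 8.314, 298.0               # J/(mol K), K
W0, E = 0.40, 15000.0             # assumed limiting volume (cm3/g), energy (J/mol)

rel_p = np.linspace(0.01, 0.5, 20)       # relative pressure p/p0
A = R * T * np.log(1.0 / rel_p)          # adsorption potential
W = W0 * np.exp(-(A / E) ** 2)           # DR isotherm

slope, intercept = np.polyfit(A**2, np.log(W), 1)
W0_fit = np.exp(intercept)               # recovered limiting volume
E_fit = np.sqrt(-1.0 / slope)            # recovered characteristic energy
print(round(W0_fit, 2), round(E_fit))    # 0.4 15000
```

With simulated GCMC isotherms in place of the synthetic `W`, deviations from this straight line are exactly what reveal the limits of the DR description that the paper probes.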

  14. Comparison of the SASSYS/SAS4A radial core expansion reactivity feedback model and the empirical correlation for FFTF

    International Nuclear Information System (INIS)

    Wigeland, R.A.

    1987-01-01

    The present emphasis on inherent safety for LMR designs has resulted in a need to represent the various reactivity feedback mechanisms as accurately as possible. The dominant negative reactivity feedback has been found to result from radial expansion of the core for most postulated ATWS events. For this reason, a more detailed model for calculating the reactivity feedback from radial core expansion has recently been developed for use with the SASSYS/SAS4A Code System. The purpose of this summary is to present an extension of the model that makes it more suitable for handling a core restraint design such as that used in FFTF, and to compare the SASSYS/SAS4A results using this model with the empirical correlation presently used to account for radial core expansion reactivity feedback in FFTF.

  15. Empirical study of long-range connections in a road network offers new ingredient for navigation optimization models

    International Nuclear Information System (INIS)

    Wang, Pu; Liu, Like; Li, Xiamiao; Li, Guanliang; González, Marta C

    2014-01-01

    The navigation problem in lattices with long-range connections has been widely studied to understand the design principles for optimal transport networks; however, the travel cost of long-range connections was not considered in previous models. We define a long-range connection in a road network as the shortest path between a pair of nodes through highways and empirically analyze the travel cost properties of long-range connections. Based on the maximum speed allowed in each road segment, we observe that the time needed to travel through a long-range connection has a characteristic time T_h ∼ 29 min, while the time required when using the alternative arterial road path has two different characteristic times T_a ∼ 13 and 41 min and follows a power law for times larger than 50 min. Using daily commuting origin–destination matrix data, we additionally find that the use of long-range connections helps people to save about half of the travel time in their daily commute. Based on the empirical results, we assign a more realistic travel cost to long-range connections in two-dimensional square lattices, observing dramatically different minimum average shortest path 〈l〉 but similar optimal navigation conditions. (paper)
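A toy version of this setup, with an explicit travel cost attached to the long-range connection as the study advocates, can be written with a standard Dijkstra search. The lattice size, shortcut endpoints, and costs below are assumptions for illustration only.

```python
import heapq

# Shortest travel time on a small square lattice where a long-range "highway"
# link carries its own travel cost (earlier models treated shortcuts as free
# or unit-cost). Arterial steps between grid neighbors cost 1 time unit.
N = 5

def neighbors(node, shortcut_cost):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < N and 0 <= ny < N:
            yield (nx, ny), 1.0                  # arterial step
    # one assumed highway between opposite corners
    if node == (0, 0):
        yield (N - 1, N - 1), shortcut_cost
    if node == (N - 1, N - 1):
        yield (0, 0), shortcut_cost

def travel_time(src, dst, shortcut_cost):
    """Dijkstra's algorithm on the lattice plus the weighted shortcut."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in neighbors(u, shortcut_cost):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

print(travel_time((0, 0), (4, 4), shortcut_cost=3.0))   # 3.0: highway beats the 8-step grid
print(travel_time((0, 0), (4, 4), shortcut_cost=10.0))  # 8.0: grid path wins
```

Varying `shortcut_cost` in such a model is the lattice analogue of the empirical T_h versus T_a comparison in the abstract.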

  16. Birds of the Mongol Empire

    OpenAIRE

    Eugene N. Anderson

    2016-01-01

    The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Ma...

  17. Computation and empirical modeling of UV flux reaching Arabian Sea due to O3 hole

    International Nuclear Information System (INIS)

    Yousufzai, M. Ayub Khan

    2008-01-01

    Scientific organizations the world over, such as the European Space Agency, the North Atlantic Treaty Organization, the National Aeronautics and Space Administration, and the United Nations Organization, are deeply concerned about the imbalances caused, to a significant extent, by human interference in the natural make-up of the earth's ecosystem. In particular, ozone layer depletion (OLD) over the South Pole is already a serious hazard. The long-term effect of ozone layer depletion appears to be an increase in the ultraviolet radiation reaching the earth. In order to understand the effects of ozone layer depletion, investigations have been initiated by various research groups. However, to the best of our knowledge, no work is available that treats the problem of computing and constructing an empirical model for the UV flux reaching the Arabian Sea surface due to the O3 hole. This communication presents the results of quantifying the UV flux and of modeling future estimates using time-series analysis in a local context to understand the nature of the depletion. (author)
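The kind of time-series extrapolation described can be sketched as a simple trend fit; the flux values below are synthetic placeholders, not the measured Arabian Sea data:

```python
import numpy as np

# Hypothetical sketch: fit a linear trend to synthetic annual UV-flux values
# and project it forward, the simplest form of the time-series modeling the
# abstract refers to. All numbers are illustrative.
years = np.arange(1990, 2006)
flux = 100.0 + 0.8 * (years - 1990) + np.array(
    [0.3, -0.2, 0.1, 0.4, -0.5, 0.2, -0.1, 0.3,
     -0.4, 0.1, 0.2, -0.3, 0.5, -0.2, 0.1, -0.1])  # injected trend + noise

slope, intercept = np.polyfit(years - 1990, flux, 1)
forecast_2010 = intercept + slope * (2010 - 1990)
print(round(slope, 2))  # recovered trend, close to the injected 0.8 per year
```

A production analysis would use a proper time-series model (e.g., ARIMA) with uncertainty bands rather than a bare linear extrapolation.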

  18. Wireless and empire geopolitics radio industry and ionosphere in the British Empire 1918-1939

    CERN Document Server

    Anduaga, Aitor

    2009-01-01

    Although the product of consensus politics, the British Empire was based on communications supremacy and the knowledge of the atmosphere. Focusing on science, industry, government, the military, and education, this book studies the relationship between wireless and Empire throughout the interwar period.

  19. Empirical model with independent variable moments of inertia for triaxial nuclei applied to 76Ge and 192Os

    Science.gov (United States)

    Sugawara, M.

    2018-05-01

    An empirical model with independent variable moments of inertia for triaxial nuclei is devised and applied to 76Ge and 192Os. Three intrinsic moments of inertia, J1, J2, and J3, are varied independently as a particular function of spin I within a revised version of the triaxial rotor model so as to reproduce the energy levels of the ground-state, γ, and (in the case of 192Os) K^π = 4^+ bands. The staggering in the γ band is well reproduced in both phase and amplitude. Effective γ values are extracted as a function of spin I from the ratios of the three moments of inertia. The eigenfunctions and the effective γ values are subsequently used to calculate the ratios of B(E2) values associated with these bands. Good agreement between the model calculation and the experimental data is obtained for both 76Ge and 192Os.
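The relation between an effective γ and the three moments of inertia can be illustrated with the standard irrotational-flow formula J_k ∝ sin²(γ − 2πk/3). Note that the paper's model instead lets J1, J2, and J3 vary independently with spin; the sketch below is only the fixed-γ baseline, with an arbitrary overall scale J0.

```python
import math

# Irrotational-flow moments of inertia of a triaxial rotor:
# J_k = (4/3) * J0 * sin^2(gamma - 2*pi*k/3), k = 1, 2, 3.
def moments(gamma_deg, J0=1.0):
    g = math.radians(gamma_deg)
    return [(4.0 / 3.0) * J0 * math.sin(g - 2.0 * math.pi * k / 3.0) ** 2
            for k in (1, 2, 3)]

# Maximal triaxiality gamma = 30 deg: one large and two equal small moments.
J1, J2, J3 = moments(30.0)
print(round(J1, 3), round(J2, 3), round(J3, 3))  # 1.333 0.333 0.333
```

Inverting ratios such as J2/J1 for a spin-dependent γ is, in spirit, how the effective γ values quoted in the abstract are extracted.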

  20. An empirical model for the study of employee participation and its influence on job satisfaction

    Directory of Open Access Journals (Sweden)

    Lucas Joan Pujol Cols

    2015-12-01

    Full Text Available This article analyzes the factors that influence employees' perceived possibilities of triggering meaningful participation at three levels: the intra-group level, the institutional level, and directly within the leadership team of the organization. Twelve (12) interviews were conducted with teachers from the Social and Economic Sciences School of the University of Mar del Plata (Argentina), holding different positions, areas, and working hours. Based on the qualitative evidence, an empirical model was constructed that connects the different factors behind each manifestation of participation, establishing hypothetical relations between subgroups. Additionally, the article discusses the implications of participation, its relationship with job satisfaction, and the role of individual expectations regarding the participation opportunities each employee receives. Keywords: Participation, Job satisfaction, University, Expectations, Qualitative analysis.