WorldWideScience

Sample records for modeling studies empirical

  1. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    …been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicates that the model of Van der Burgh (1972) is the most suitable empirical model...

  2. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rate of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). Then the companies play games in order to obtain more possession rate in the market under certain rules. Numerical simulation based on the model displays a competition developing process, which agrees qualitatively and quantitatively with our empirical investigation results.

  3. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

    Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of ways to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. I have focused on the latter method (sometimes referred to as the empirical model based on statistical learning) because of practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not yet seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can accept not only statistical correlation but also some extent of physical knowledge of a system. Detailed examples of the problems are as follows: (1) omission of important signals from a group caused by the lack of observations, (2) problems with time-delayed signals, and (3) problems in choosing the optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy is presented, together with case studies illustrating the performance of the method.

  4. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  5. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect on the performance prediction curves at high mass flow-rates. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element moves the performance curves up and down as it increases or decreases. It is also found that the slip factor affects the pressure ratio but has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  6. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…
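
    The scale-reliability step mentioned above can be illustrated with a short, hedged sketch: Cronbach's alpha computed from an item-response matrix. The Likert-style responses below are simulated and purely hypothetical; only the standard alpha formula is assumed, not the paper's 67-item instrument.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)        # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        # Hypothetical 5-point Likert responses: 200 students x 10 items
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(200, 1))
        responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 10))), 1, 5)
        print(f"alpha = {cronbach_alpha(responses):.3f}")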

  7. Model and Empirical Study on Several Urban Public Transport Networks in China

    Science.gov (United States)

    Ding, Yimin; Ding, Zhuo

    2012-07-01

    In this paper, we present the results of an empirical investigation of urban public transport networks (PTNs) and propose a model to understand the results obtained. We investigate several urban public transport networks in China, namely those of Beijing, Guangzhou, Wuhan, and other cities. The empirical results for the big cities show that the cumulative act-degree distributions of PTNs take neither power-function nor exponential-function forms, but are described by a shifted power function; the cumulative act-degree distributions of PTNs in medium-sized or small cities follow the same law. Finally, we propose a model that shows a possible evolutionary mechanism for the emergence of such networks. The analytic results obtained from this model are in good agreement with the empirical results.

  8. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  9. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate prediction of the acid dew point is therefore essential. By investigating previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed and validated against both field-test and experimental data, and compared with the previous models.

  10. Assessing and improving the quality of modeling : a series of empirical studies about the UML

    NARCIS (Netherlands)

    Lange, C.F.J.

    2007-01-01

    Assessing and Improving the Quality of Modeling A Series of Empirical Studies about the UML This thesis addresses the assessment and improvement of the quality of modeling in software engineering. In particular, we focus on the Unified Modeling Language (UML), which is the de facto standard in

  11. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

    The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature, and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and the size of intermetallic particles have been found to contribute significantly to the magnitude of the oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion as a function of an acceleration factor for tin content and accumulative annealing. The developed model has been incorporated into a fuel performance computer code. The cladding oxide thickness obtained from the semi-empirical corrosion model has been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Key words: corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)

  12. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  13. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  14. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  15. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selecting genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  16. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

    The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) to explain individuals' continued use of mobile data services. After reviewing the ECM and technology readiness, an integrated model was demonstrated via empirical data. Compared with the original ECM, the findings of this study show that the integrated model may offer a better way to clarify which factors influence the continuance intention toward mobile services and how. Finally, the major findings are summarized, and future research directions are suggested.

  17. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  18. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influence of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets, which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed a substantial reduction in the fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden changes in volatility performed better in the estimation and specification evaluations.

  19. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, ranging from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurements with those estimated from empirical models.

  20. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa prediction methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...

  1. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Few studies have investigated the influence of “information capital,” through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in and benefits for corporate performance.

  2. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. The kinetic prediction models recently used for FBC boilers are overly complex and require large computing capacity, and even so there are many uncertainties in the case of FBC boilers. An empirical modeling approach to NOx prediction has so far been used exclusively for PCC boilers, and no reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical-model-based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed and applied to operation data from FBC boilers combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  3. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR).

  4. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    Science.gov (United States)

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  5. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical, or multistage, empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example in which the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability
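
    The record above describes a two-stage Bayesian treatment of a Poisson intensity with a contaminated-gamma prior. As a hedged illustration of the single-stage building block only, the sketch below applies a plain gamma-Poisson empirical Bayes update with hyperparameters set by moment matching; the counts and exposure times are invented, and the contaminated-gamma mixture and posterior model probabilities of the paper are not reproduced.

        import numpy as np

        def eb_gamma_poisson(counts, exposures):
            """Empirical-Bayes posterior mean intensities for Poisson data.

            counts[i] events observed over exposures[i] time units; a gamma(a, b)
            prior is fitted by matching the moments of the raw rates (a crude but
            common empirical-Bayes choice).
            """
            rates = counts / exposures
            m, v = rates.mean(), rates.var(ddof=1)
            b = m / v                 # gamma rate hyperparameter (moment matching)
            a = m * b                 # gamma shape hyperparameter
            # Conjugate update: posterior for unit i is gamma(a + counts[i], b + exposures[i])
            return (a + counts) / (b + exposures)

        counts = np.array([0, 2, 1, 5, 0, 3])                    # hypothetical event counts
        exposures = np.array([1.0, 2.5, 1.2, 4.0, 0.8, 3.1])     # hypothetical observation times
        print(eb_gamma_poisson(counts, exposures))               # shrunken intensity estimates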

  6. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    Science.gov (United States)

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled, as well as for humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling from observations of real behaviors. Differences in the dynamic behaviors of individual PAM actuators are due not only to the structures of the PAM actuators themselves, but also to the variations in their material properties introduced in manufacturing processes. To overcome these difficulties, the proposed empirical models are derived experimentally from the real physical behaviors of the PAM actuators being implemented. In case studies, the simulated results agree well with experimental results, showing that the proposed methodology can be applied to describe the dynamic behaviors of real PAM actuators.

  7. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.

  8. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.

  9. The gravity model specification for modeling international trade flows and free trade agreement effects: a 10-year review of empirical studies

    OpenAIRE

    Kepaptsoglou, Konstantinos; Karlaftis, Matthew G.; Tsamboulas, Dimitrios

    2010-01-01

    The gravity model has been extensively used in international trade research for the last 40 years because of its considerable empirical robustness and explanatory power. Since their introduction in the 1960s, gravity models have been used for assessing trade policy implications and, particularly recently, for analyzing the effects of Free Trade Agreements on international trade. The objective of this paper is to review the recent empirical literature on gravity models, highlight best practic...

  10. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries, and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, little research effort has been devoted to studying what drives employee energy saving behavior in organizations. To fill this gap, drawing on the norm activation model (NAM), we built a research model to study antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior, which is understudied in the current literature. Based on the empirical results, implications for how to promote employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on the norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employees' electricity saving behavior. • Electricity saving climate negatively moderates personal norm's effect. • This research enhances our understanding of employee electricity saving behavior
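
    As a hedged illustration of the moderation effect reported above (organizational electricity saving climate weakening the effect of personal norm on saving behaviour), the sketch below fits an ordinary least squares model with an interaction term on simulated survey scores. The variable names and data are invented, and the study's structural-equation approach is not reproduced.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "personal_norm": rng.normal(size=n),
            "saving_climate": rng.normal(size=n),
        })
        # Simulated outcome: positive main effect, negative interaction (the direction reported above)
        df["saving_behavior"] = (0.5 * df.personal_norm
                                 - 0.2 * df.personal_norm * df.saving_climate
                                 + rng.normal(scale=1.0, size=n))

        model = smf.ols("saving_behavior ~ personal_norm * saving_climate", data=df).fit()
        print(model.summary().tables[1])   # coefficients, incl. the moderation (interaction) term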

  11. A DISTANCE EDUCATION MODEL FOR JORDANIAN STUDENTS BASED ON AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ahmad SHAHER MASHHOUR

    2007-04-01

    Distance education is expanding worldwide. The number of students enrolled in distance education is increasing at very high rates. Distance education is said to be the future of education because it addresses the educational needs of the new millennium. This paper presents the findings of an empirical study on a sample of Jordanian distance education students, organized into a requirements model that addresses the need for such education at the national level. The responses of the sample show that distance education offers a viable and satisfactory alternative for those who cannot enroll in regular residential education. The study also shows that the shortcomings of regular education and of the current form of distance education in Jordan can be overcome by the use of modern information technology.

  12. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets, but the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  13. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  14. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as large a dataset as the one used in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well for the different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.

  15. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. The linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as a tool for hydrological analysis by probabilistic approaches. In many regions in the world, watersheds are not gauged. This is true even in developed countries where the gauging network has continued to decline as a result of the lack of human and financial resources. Indeed, the obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecast. So we had to find a combination of rainfall-runoff models in which it would be possible to create our own data and use them to estimate the flow. The estimated design floods would be a good choice to illustrate the difficulties facing the hydrologist for the construction of a standard empirical model in basins where hydrological information is rare. The construction of the climate-hydrological model, which is based on frequency analysis, was established to estimate the design flood in the Anseghmir catchments, Morocco. The choice of using this complex model returns to its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed and also other hydrological elements (runoff, volumes of water...).The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.

  16. A Comprehensive Comparison Study of Empirical Cutting Transport Models in Inclined and Horizontal Wells

    Directory of Open Access Journals (Sweden)

    Asep Mohamad Ishaq Shiddiq

    2017-07-01

    In deviated and horizontal drilling, hole-cleaning issues are a common and complex problem. This study explored the effect of various parameters in drilling operations and how they affect the flow rate required for effective cutting transport. Three models developed following an empirical approach were employed: Rudi-Shindu's model, Hopkins' model, and Tobenna's model. Rudi-Shindu's model requires iteration in the calculation. First, the three models were compared using a sensitivity analysis of the drilling parameters affecting cutting transport. The result shows that the models have similar trends but different values for minimum flow velocity. An analysis was conducted to examine the feasibility of using Rudi-Shindu's, Hopkins', and Tobenna's models. The results showed that Hopkins' model is limited by cutting size and revolutions per minute (RPM). The minimum flow rate from Tobenna's model is affected only by well inclination, drilling fluid weight, and drilling fluid rheological properties, while Rudi-Shindu's model is limited by inclinations above 45°. The study showed that the investigated models are not suitable for horizontal wells because they do not include the effect of the lateral section.

  17. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and an empirical study of a competency model for rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire on rural general practitioner competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed with the questionnaire. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on work performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions were basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits, and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ²/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, suggesting good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-directed cultivation of rural medical students, and competency-based performance management of rural general practitioners.

  18. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

    Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic in free markets, since the customer is presumed to gravitate to a facility according to both the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it had not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
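
    One common formalization of the gravity p-median objective sketched above, written under the assumption of a Huff-type allocation of demand (notation ours, not the authors'): each customer node i with demand w_i patronizes an open facility j in the chosen set S with a probability that grows with the facility's attractiveness A_j and decays with the distance d_ij, and the demand-weighted expected travelled distance is minimized.

        \min_{S \subseteq J,\; |S| = p} \;
        \sum_{i \in I} w_i \sum_{j \in S}
        \frac{A_j \, d_{ij}^{-\lambda}}{\sum_{k \in S} A_k \, d_{ik}^{-\lambda}} \, d_{ij}

    Here λ is a distance-decay parameter; the inner fraction is the Huff probability that customer i chooses facility j, which is what makes the objective non-concave in the facility locations.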

  19. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    OpenAIRE

    Zee, van der, F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economies. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy developmen...

  20. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model)

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    This paper considers tests of parameter instability and structural change with known, unknown, or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation by strong rules for detecting the number of breaks in a time series. To that end, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio, and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (an ARMA model and a simple linear regression model).
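
    A hedged sketch of the simplest procedure in the list above, the Chow test for a single known breakpoint in a linear regression; the series and break date are simulated, and the CUSUM, Wald, LR, and LM variants are not shown.

        import numpy as np
        import statsmodels.api as sm

        def chow_test(y, X, break_idx):
            """F statistic of the classical Chow test for a break at a known index."""
            X = sm.add_constant(X)
            k = X.shape[1]
            rss_pooled = sm.OLS(y, X).fit().ssr
            rss_1 = sm.OLS(y[:break_idx], X[:break_idx]).fit().ssr
            rss_2 = sm.OLS(y[break_idx:], X[break_idx:]).fit().ssr
            n = len(y)
            return ((rss_pooled - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (n - 2 * k))

        rng = np.random.default_rng(2)
        x = rng.normal(size=200)
        # Simulated regression whose intercept and slope shift at observation 120
        y = np.where(np.arange(200) < 120, 1.0 + 0.5 * x, 2.0 + 1.5 * x) + rng.normal(scale=0.3, size=200)
        print(f"Chow F = {chow_test(y, x.reshape(-1, 1), 120):.2f}")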

  1. An empirically based model for knowledge management in health care organizations.

    Science.gov (United States)

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of

  2. Empirical study of supervised gene screening

    Directory of Open Access Journals (Sweden)

    Ma Shuangge

    2006-12-01

    Background: Microarray studies provide a way of linking variations in phenotypes with their genetic causes. Constructing predictive models using high-dimensional microarray measurements usually consists of three steps: (1) unsupervised gene screening; (2) supervised gene screening; and (3) statistical model building. Supervised gene screening based on marginal gene ranking is commonly used to reduce the number of genes in the model building. Various simple statistics, such as the t-statistic or the signal-to-noise ratio, have been used to rank genes in supervised screening. Despite its extensive usage, statistical study of supervised gene screening remains scarce. Our study is partly motivated by the differences in gene discovery results caused by using different supervised gene screening methods. Results: We investigate the concordance and reproducibility of supervised gene screening based on eight commonly used marginal statistics. Concordance is assessed by the relative fractions of overlaps between top-ranked genes screened using different marginal statistics. We propose a Bootstrap Reproducibility Index, which measures the reproducibility of individual genes under supervised screening. Empirical studies are based on four public microarray datasets. We consider the cases where the top 20%, 40%, and 60% of genes are screened. Conclusion: From a gene discovery point of view, the effect of supervised gene screening based on different marginal statistics cannot be ignored. Empirical studies show that (1) genes passing different supervised screenings may be considerably different; (2) concordance may vary, depending on the underlying data structure and the percentage of selected genes; (3) evaluated with the Bootstrap Reproducibility Index, genes passing supervised screenings are only moderately reproducible; and (4) concordance cannot be improved by supervised screening based on reproducibility.
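
    A small illustration, on simulated expression data, of the concordance measure discussed above: rank genes by two marginal statistics (a Welch-type t-statistic and a signal-to-noise ratio) and report the overlap of the top 20% lists. The eight statistics and the Bootstrap Reproducibility Index of the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)
        n_genes, n1, n2 = 2000, 20, 20
        expr = rng.normal(size=(n_genes, n1 + n2))
        expr[:100, :n1] += 1.0                        # 100 genes are truly differential
        grp1, grp2 = expr[:, :n1], expr[:, n1:]

        m1, m2 = grp1.mean(1), grp2.mean(1)
        s1, s2 = grp1.std(1, ddof=1), grp2.std(1, ddof=1)
        t_stat = (m1 - m2) / np.sqrt(s1**2 / n1 + s2**2 / n2)   # Welch-type t-statistic
        snr = (m1 - m2) / (s1 + s2)                              # signal-to-noise ratio

        top = int(0.2 * n_genes)                                 # screen the top 20% of genes
        top_t = set(np.argsort(-np.abs(t_stat))[:top])
        top_snr = set(np.argsort(-np.abs(snr))[:top])
        print(f"concordance of top 20% lists: {len(top_t & top_snr) / top:.2f}")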

  3. Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure

    International Nuclear Information System (INIS)

    Sorin Zaharia; Cheng, C.Z.

    2002-01-01

    In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-squares fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models
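
    Written out explicitly, the two relations used above are the isotropic magnetohydrostatic force balance and the Poisson-type equation obtained by taking its divergence (standard notation, not a quotation from the paper):

        \nabla P = \mathbf{J} \times \mathbf{B},
        \qquad
        \nabla^{2} P = \nabla \cdot \left( \mathbf{J} \times \mathbf{B} \right)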

  4. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network that is very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order-of-magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of noise in the training and testing sets, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural network based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted
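
    A minimal sketch in the same spirit: a small recurrent network trained on simulated input-output transients. A standard PyTorch RNN with ordinary gradient-based training is used here as a stand-in for the paper's recurrent multilayer perceptron and dynamic gradient descent algorithm; the plant data, layer sizes, and training settings are all invented.

        import torch
        import torch.nn as nn

        class RecurrentPerceptron(nn.Module):
            """A small recurrent network used as a stand-in empirical process model."""
            def __init__(self, n_inputs, n_hidden, n_outputs):
                super().__init__()
                self.rnn = nn.RNN(n_inputs, n_hidden, batch_first=True, nonlinearity="tanh")
                self.out = nn.Linear(n_hidden, n_outputs)

            def forward(self, u):                      # u: (batch, time, n_inputs)
                h, _ = self.rnn(u)
                return self.out(h)                     # predicted outputs at every time step

        # Hypothetical plant data: 3 actuator inputs -> 2 measured outputs, 64 transients of 100 steps
        u = torch.randn(64, 100, 3)
        y = torch.cumsum(0.1 * u[:, :, :2], dim=1) + 0.01 * torch.randn(64, 100, 2)

        model = RecurrentPerceptron(3, 16, 2)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(200):                       # gradient-based training on the transients
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(u), y)
            loss.backward()
            opt.step()
        print(f"final training MSE: {loss.item():.4f}")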

  5. Connecting theoretical and empirical studies of trait-mediated interactions

    Czech Academy of Sciences Publication Activity Database

    Bolker, B.; Holyoak, M.; Křivan, Vlastimil; Rowe, L.; Schmitz, O.

    2003-01-01

    Vol. 84, No. 5 (2003), pp. 1101-1114. ISSN 0012-9658. Institutional research plan: CEZ:AV0Z5007907. Keywords: community models; competition; empirical study. Subject RIV: EH - Ecology, Behaviour. Impact factor: 3.701, year: 2003

  6. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    …and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises, and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us to explore the process model in its entirety, including different drivers of subsidiary absorptive capacity (organizational mechanisms and contextual drivers), the three original dimensions of absorptive capacity (recognition, assimilation, application), and related outcomes (implementation and internalization of the best practice). The study's findings reveal that managers have discretion in promoting absorptive capacity through the application of specific organizational mechanisms and that the impact of contextual drivers on subsidiary absorptive capacity is not direct, but mediated...

  7. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, the performance of the Baranyi model was compared with those of the empirical modified Gompertz, logistic, and Huang models. Higher values of R² and modeling efficiency, and lower absolute values of mean bias error, root mean square error, mean percentage error, and chi-square, were obtained with ...

  8. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

    While investigations into both theories and models have remained a major strand of engineering design research, the current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; current models of design, from a function-behaviour-structure model to an integrated model; important empirical research findings from studies into design; and philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  9. On the empirical relevance of the transient in opinion models

    International Nuclear Information System (INIS)

    Banisch, Sven; Araujo, Tanya

    2010-01-01

    While the number and variety of models to explain opinion exchange dynamics are huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows transient opinion configurations to be related to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.
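
    A minimal sketch of the interaction principle described above ("similarity leads to interaction and interaction leads to still more similarity") for agents holding k-bit opinion vectors; the population size, bit length, and update count are arbitrary, and the Letter's artificial election procedure is not reproduced.

        import numpy as np

        rng = np.random.default_rng(4)
        n_agents, k, n_steps = 200, 8, 50_000
        opinions = rng.integers(0, 2, size=(n_agents, k))   # each agent holds a k-bit opinion vector

        for _ in range(n_steps):
            i, j = rng.choice(n_agents, size=2, replace=False)
            similarity = np.mean(opinions[i] == opinions[j])
            if rng.random() < similarity:                    # more similar pairs interact more often
                differing = np.flatnonzero(opinions[i] != opinions[j])
                if differing.size:                           # interaction increases similarity further
                    bit = rng.choice(differing)
                    opinions[i, bit] = opinions[j, bit]

        profiles, counts = np.unique(opinions, axis=0, return_counts=True)
        print(f"{len(profiles)} distinct opinion profiles remain; largest share {counts.max() / n_agents:.2f}")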

  10. On the empirical relevance of the transient in opinion models

    Energy Technology Data Exchange (ETDEWEB)

    Banisch, Sven, E-mail: sven.banisch@universecity.d [Mathematical Physics, Physics Department, Bielefeld University, 33501 Bielefeld (Germany); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal); Araujo, Tanya, E-mail: tanya@iseg.utl.p [Research Unit on Complexity in Economics (UECE), ISEG, TULisbon, 1249-078 Lisbon (Portugal); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal)

    2010-07-12

    While the number and variety of models to explain opinion exchange dynamics are huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows transient opinion configurations to be related to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.

  11. Application of GIS to Empirical Windthrow Risk Model in Mountain Forested Landscapes

    Directory of Open Access Journals (Sweden)

    Lukas Krejci

    2018-02-01

    Norway spruce dominates mountain forests in Europe. Natural variations in these mountainous coniferous forests are strongly influenced by all the main components of forest and landscape dynamics: species diversity, the structure of forest stands, nutrient cycling, carbon storage, and other ecosystem services. This paper deals with an empirical windthrow risk model based on the integration of logistic regression into GIS to assess forest vulnerability to wind disturbance in the mountain spruce forests of Šumava National Park (Czech Republic), an area where forest management has been the focus of international discussions among conservationists, forest managers, and stakeholders. The authors developed the empirical windthrow risk model by designing an optimized data structure containing the dependent and independent variables entering the logistic regression. The results from the model, visualized in the form of map outputs, outline the probability of wind damage to forest stands in the examined territory of the national park. Such an application of the empirical windthrow risk model could be used as a decision support tool for the mountain spruce forests in the study area. Future development of these models could be useful for other protected European mountain forests dominated by Norway spruce.
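
    A hedged sketch of the statistical core of such a model: a logistic regression of windthrow occurrence on stand and terrain predictors, whose fitted probabilities would then be mapped back onto the GIS layer. The predictor names, coefficients, and data below are invented, not those of the Šumava study.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 1000
        stands = pd.DataFrame({
            "stand_height_m": rng.normal(28, 6, n),   # hypothetical predictors per forest stand
            "soil_wetness":   rng.uniform(0, 1, n),
            "exposure_index": rng.uniform(0, 1, n),
        })
        # Simulated windthrow occurrence with an assumed logistic relationship
        logit = (-6 + 0.15 * stands.stand_height_m
                 + 1.2 * stands.soil_wetness + 1.5 * stands.exposure_index)
        stands["windthrow"] = rng.random(n) < 1 / (1 + np.exp(-logit))

        features = ["stand_height_m", "soil_wetness", "exposure_index"]
        model = LogisticRegression().fit(stands[features], stands["windthrow"])
        stands["risk"] = model.predict_proba(stands[features])[:, 1]
        print(stands["risk"].describe())              # per-stand probabilities to be rasterized in GIS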

  12. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

    Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (K{sub d}) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach, compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
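
    To illustrate the idea of an empirical partition coefficient driving speciation, the sketch below parameterises K_d as a simple function of salinity and SPM and splits a total concentration into dissolved and particulate fractions. The functional form and all coefficients are placeholder assumptions, not the fits reported for the Scheldt.

```python
import numpy as np

def kd_empirical(salinity, spm_mg_l, a=5.2, b=-0.05, c=-0.4):
    """Hypothetical log-linear partition coefficient K_d [L/kg]."""
    return 10 ** (a + b * salinity + c * np.log10(spm_mg_l))

def speciate(total, salinity, spm_mg_l):
    """Split a total metal concentration into dissolved and particulate parts."""
    kd = kd_empirical(salinity, spm_mg_l)        # L/kg
    spm_kg_l = spm_mg_l * 1e-6                   # mg/L -> kg/L
    f_diss = 1.0 / (1.0 + kd * spm_kg_l)         # dissolved fraction
    return total * f_diss, total * (1.0 - f_diss)

# Example: 40 nM total Cu along an estuarine salinity gradient at 50 mg/L SPM.
sal = np.linspace(0, 32, 5)
dissolved, particulate = speciate(40.0, sal, 50.0)
print(dissolved, particulate)
```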

  13. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated from neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative to train an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model.
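
    For orientation, a minimal numpy sketch of a recurrent multilayer perceptron used as a nonlinear state-space model (hidden state fed back as an input) is given below. The architecture, sizes and inputs are illustrative assumptions; training with the gradient-descent algorithms discussed above is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

class RecurrentMLP:
    """Minimal recurrent perceptron: the hidden state is fed back at every step."""
    def __init__(self, n_in, n_hidden, n_out):
        self.Wx = rng.normal(0, 0.1, (n_hidden, n_in))
        self.Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.Wo = rng.normal(0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, u):
        self.h = np.tanh(self.Wx @ u + self.Wh @ self.h)  # recurrent hidden layer
        return self.Wo @ self.h                            # e.g. predicted SG level or pressure

# One-step-ahead predictions over a synthetic input sequence (e.g. feedwater flow, power).
net = RecurrentMLP(n_in=2, n_hidden=8, n_out=1)
inputs = rng.normal(size=(100, 2))
predictions = np.array([net.step(u) for u in inputs])
print(predictions.shape)
```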

  14. Empirical model of subdaily variations in the Earth rotation from GPS and its stability

    Science.gov (United States)

    Panafidina, N.; Kurdubov, S.; Rothacher, M.

    2012-12-01

    The model recommended by the IERS for these variations at diurnal and semidiurnal periods has been computed from an ocean tide model and comprises 71 terms in polar motion and Universal Time. In the present study we compute an empirical model of variations in the Earth rotation on tidal frequencies from homogeneously re-processed GPS-observations over 1994-2007 available as free daily normal equations. We discuss the reliability of the obtained amplitudes of the ERP variations and compare results from GPS and VLBI data to identify technique-specific problems and instabilities of the empirical tidal models.

  15. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process

  16. Antecedents and Consequences of Individual Performance Analysis of Turnover Intention Model (Empirical Study of Public Accountants in Indonesia)

    OpenAIRE

    Raza, Hendra; Maksum, Azhar; Erlina; Lumban Raja, Prihatin

    2014-01-01

    Azhar Maksum This study aims to examine empirically the antecedents of individual performance on its consequences of turnover intention in public accounting firms. There are eight variables measured which consists of auditors' empowerment, innovation professionalism, role ambiguity, role conflict, organizational commitment, individual performance and turnover intention. Data analysis is based on 163 public accountant using the Structural Equation Modeling assisted with an appli...

  17. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed the IT infrastructure capability in a holistic way and then presented a concept model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  18. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    Full Text Available This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. In addition, the analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.
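
    A hedged sketch of the model structure: a two-compartment model with linear protein binding whose clearance is scaled by a gestation-dependent creatinine clearance. The gestation function, volumes and all numerical values are illustrative assumptions, not the fitted estimates from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's estimates).
CL_BASE = 0.142 * 60          # baseline clearance [L/h]
V1, V2, Q = 8.0, 5.0, 3.0     # central/peripheral volumes [L], inter-compartmental clearance [L/h]
FU = 0.8                      # free (unbound) fraction, linear binding

def crcl_gestation(ga_weeks, crcl0=100.0, slope=0.8):
    """Hypothetical gestation-induced rise in creatinine clearance [mL/min]."""
    return crcl0 + slope * ga_weeks

def cl_semiphys(ga_weeks):
    """Clearance scaled by CrCL relative to a non-pregnant reference."""
    return CL_BASE * crcl_gestation(ga_weeks) / 100.0

def two_cmt(t, a, cl):
    a1, a2 = a                          # amounts in central and peripheral compartments [mg]
    c1, c2 = a1 / V1, a2 / V2
    return [-cl * FU * c1 - Q * (c1 - c2), Q * (c1 - c2)]

# 1 g IV bolus at 30 weeks of gestation; total and free plasma concentration over 8 h.
sol = solve_ivp(two_cmt, (0, 8), [1000.0, 0.0], args=(cl_semiphys(30),), dense_output=True)
t = np.linspace(0, 8, 9)
ctotal = sol.sol(t)[0] / V1
print(ctotal, FU * ctotal)
```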

  19. Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples

    Directory of Open Access Journals (Sweden)

    Tanongkankit Yardfon

    2016-01-01

    Full Text Available This research was aimed at studying the drying kinetics and determining empirical models of fresh pineapple and of pineapple pre-treated with sucrose solutions at different concentrations during drying. 3 mm thick samples were immersed in 30, 40 and 50 Brix sucrose solution before hot air drying at temperatures of 60, 70 and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time, and an increase in sucrose concentration led to longer drying times. According to the statistical criteria of the highest coefficient of determination (R2), the lowest chi-square (χ2) and the lowest root mean square error (RMSE), the Logarithmic model was the best model for describing the drying behavior of samples soaked in 30, 40 and 50 Brix sucrose solution.
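
    A short sketch of fitting the logarithmic thin-layer drying model, MR = a·exp(-k·t) + c, to moisture-ratio data and scoring it with R2 and RMSE as in the abstract. The data below are synthetic placeholders, not the measured pineapple runs.

```python
import numpy as np
from scipy.optimize import curve_fit

def logarithmic(t, a, k, c):
    """Logarithmic thin-layer drying model: MR = a*exp(-k*t) + c."""
    return a * np.exp(-k * t) + c

# Synthetic moisture-ratio data (placeholder for one drying run).
t = np.linspace(0, 300, 16)                              # drying time [min]
mr_obs = 0.95 * np.exp(-0.012 * t) + 0.05 + np.random.default_rng(3).normal(0, 0.01, t.size)

popt, _ = curve_fit(logarithmic, t, mr_obs, p0=(1.0, 0.01, 0.0))
resid = mr_obs - logarithmic(t, *popt)
ss_res, ss_tot = np.sum(resid**2), np.sum((mr_obs - mr_obs.mean())**2)
print("a, k, c =", popt)
print("R2 =", 1 - ss_res / ss_tot, "RMSE =", np.sqrt(np.mean(resid**2)))
```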

  20. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  1. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  2. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    The use of semi-empirical equations for modeling a proton exchange membrane fuel cell is proposed as a tool for the design and analysis of complete fuel cell systems. The focus of this study is to derive an empirical model, including process variations, to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations such as gas pressure, temperature, humidity, and utilization, which are important factors in determining the real performance of a fuel cell. The modelling results compare well with known experimental results; the comparison shows good agreement between the modeling results and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)
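
    The general shape of such semi-empirical models is a polarization curve of the form V = E_nernst - eta_act - eta_ohm - eta_conc. The sketch below uses that structure with illustrative coefficients only; they are not the fitted values of this paper.

```python
import numpy as np

def cell_voltage(i, T=343.0, p_h2=1.5, p_o2=1.0):
    """Semi-empirical PEM cell voltage [V] vs current density i [A/cm2] (illustrative coefficients)."""
    e_nernst = 1.229 - 8.5e-4 * (T - 298.15) + 4.31e-5 * T * np.log(p_h2 * np.sqrt(p_o2))
    eta_act = 0.05 + 0.06 * np.log(np.maximum(i, 1e-3) / 1e-3)   # Tafel-like activation loss
    eta_ohm = 0.18 * i                                           # area-specific ohmic loss
    eta_conc = -0.016 * np.log(np.maximum(1 - i / 1.4, 1e-6))    # concentration loss, i_lim ~ 1.4 A/cm2
    return e_nernst - eta_act - eta_ohm - eta_conc

i = np.linspace(0.01, 1.3, 7)
print(np.round(cell_voltage(i), 3))
```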

  3. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    Science.gov (United States)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

  4. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  5. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as evaluating the present value of payoffs or cash flows discounted for risk and time lags. The difficulty in the discounting process is that the relevant factors affecting the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  6. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice such a model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  7. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
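
    A simplified sketch of the comparison idea: compute the Shannon entropy of a centrality distribution for the empirical graph and for realizations of candidate generators, then rank generators by the entropy gap. It uses degree centrality only and stands in a synthetic graph for the empirical data; it is not the paper's full classification procedure.

```python
import numpy as np
import networkx as nx

def degree_entropy(g, bins=20):
    """Shannon entropy of the (binned) degree-centrality distribution."""
    values = np.fromiter(nx.degree_centrality(g).values(), dtype=float)
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# The "empirical" graph is synthetic here; replace with e.g. nx.read_edgelist(...).
empirical = nx.barabasi_albert_graph(1000, 3, seed=42)
n, m = empirical.number_of_nodes(), empirical.number_of_edges()

candidates = {
    "Erdos-Renyi": nx.gnm_random_graph(n, m, seed=1),
    "Barabasi-Albert": nx.barabasi_albert_graph(n, m // n, seed=1),
    "Watts-Strogatz": nx.watts_strogatz_graph(n, 2 * (m // n), 0.1, seed=1),
}

h_emp = degree_entropy(empirical)
for name, g in candidates.items():
    print(name, abs(degree_entropy(g) - h_emp))   # smaller gap = more similar model
```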

  8. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between the bond strength and the reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. Analytical model proposed by Cairns.and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement-a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure and later modified by Coronelli [Coronelli, D. 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider the corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model has been proposed by the present authors. These parameters include corrosion pressure due to expansive action of corrosion products, modeling of tensile behaviour of cracked concrete and adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are proposed by considering a wide range of published experimental investigations related to the bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models are capable of providing the estimates of predicted bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with those of the other reported published data on analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  9. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  10. Educational Inequality and Income Inequality: An Empirical Study on China

    Science.gov (United States)

    Yang, Jun; Huang, Xiao; Li, Xiaoyu

    2009-01-01

    Based on the endogenous growth theory, this paper uses the Gini coefficient to measure educational inequality and studies the empirical relationship between educational inequality and income inequality through a simultaneous equation model. The results show that: (1) Income inequality leads to educational inequality while the reduction of…
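
    Since both inequalities in the study are measured with the Gini coefficient, a short sketch of computing it from micro data (income or years of schooling) may help; the data below are synthetic placeholders.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Equivalent to the mean-absolute-difference formulation for sorted data.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(4)
income = rng.lognormal(mean=9.5, sigma=0.8, size=10_000)        # synthetic incomes
schooling = rng.choice([0, 6, 9, 12, 16], size=10_000,
                       p=[0.05, 0.2, 0.3, 0.3, 0.15])           # synthetic years of schooling
print("income Gini:", round(gini(income), 3))
print("education Gini:", round(gini(schooling), 3))
```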

  11. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    Full Text Available The diagnostics of the operation of energy conversion systems is realised by collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state, which requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The mathematical simulation model worked out contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  12. Space evolution model and empirical analysis of an urban public transport network

    Science.gov (United States)

    Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing

    2012-07-01

    This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on that data. Public transport patterns primarily depend on traffic spatial-distribution, demands of passengers and expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors based on a given traffic spatial-distribution. The temporal change of urban public transport network is evaluated both using topological measures and spatial ones. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses on topological and spatial attributes suggest that an evolution network with traffic demands characterized by power-law numerical values which distribute in a mode of concentric circles tallies well with these nine cities.

  13. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination Rc^2 ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
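
    A hedged sketch of the general approach: a low-order polynomial basis (the kind of form a Taylor expansion in the operating variables suggests) fitted to measured efficiency points by linear least squares and scored with R2 and mean absolute percentage deviation. The basis variables and data are placeholders, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "measured" efficiency map: eta(pressure ratio, corrected speed).
pr = rng.uniform(1.5, 4.0, 200)
n_c = rng.uniform(0.5, 1.1, 200)
eta = 0.82 - 0.05 * (pr - 2.5) ** 2 - 0.3 * (n_c - 0.9) ** 2 + rng.normal(0, 0.005, 200)

# Second-order Taylor-like basis in (pr, n_c), fitted by linear least squares.
X = np.column_stack([np.ones_like(pr), pr, n_c, pr**2, n_c**2, pr * n_c])
coef, *_ = np.linalg.lstsq(X, eta, rcond=None)

pred = X @ coef
ss_res = np.sum((eta - pred) ** 2)
ss_tot = np.sum((eta - eta.mean()) ** 2)
print("R2 =", 1 - ss_res / ss_tot)
print("MAPE [%] =", 100 * np.mean(np.abs((eta - pred) / eta)))
```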

  14. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs based on the apparent heat capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based both on numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model has been identified for optimization. The use of the generic optimization program called GenOpt® coupled to the building simulation code made it possible to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
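
    For readers unfamiliar with the apparent heat capacity method, the idea is to fold the PCM's latent heat into an effective c_p(T) peak around the melting range so that an ordinary conduction solver handles melting. The peak shape and all values below are illustrative assumptions.

```python
import numpy as np

def apparent_cp(temp_c, cp_solid=2000.0, cp_liquid=2200.0,
                latent=180e3, t_melt=26.0, width=2.0):
    """Apparent heat capacity [J/(kg.K)]: sensible part + Gaussian latent-heat peak."""
    sensible = np.where(temp_c < t_melt, cp_solid, cp_liquid)
    peak = latent / (width * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((temp_c - t_melt) / width) ** 2)
    return sensible + peak

temps = np.linspace(20, 32, 7)
print(np.round(apparent_cp(temps)))
# The integral of the peak over temperature recovers the latent heat, so swapping
# cp -> apparent_cp(T) in the conduction equation accounts for melting/solidification.
```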

  15. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electron-hole pairs along the trajectory, and in a space charge zone the collected charge is considerably increased. The chronology of this charge funneling is described with a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to evaluate directly the transient current, the collected charge and the funneling length, with good agreement. The model can be extrapolated to more complex structures.

  16. Development and empirical exploration of an extended model of intragroup conflict

    OpenAIRE

    Hjertø, Kjell B.; Kuvaas, Bård

    2009-01-01

    This is the post-print of the article published in the International Journal of Conflict Management. Purpose - The purpose of this study was to develop and empirically explore a model of four intragroup conflict types (the 4IC model), consisting of an emotional person conflict, a cognitive task conflict, an emotional task conflict, and a cognitive person conflict. The first two conflict types are similar to existing conceptualizations, whereas the latter two represent new dimensions of group conflict. Design/m...

  17. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  19. An Evaluation Model for Sustainable Development of China’s Textile Industry: An Empirical Study

    Science.gov (United States)

    Zhao, Hong; Lu, Xiaodong; Yu, Ting; Yin, Yanbin

    2018-04-01

    With the economy’s continued rapid growth, the textile industry is required to search for new rules and adjust strategies in order to optimize its industrial structure and rationalize social spending. The sustainable development of China’s textile industry is a comprehensive research subject. This study analyzed the status of China’s textile industry and constructed an evaluation model based on economic, ecological, and social benefits. The Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) were used for an empirical study of the textile industry. The results of the evaluation model suggest that the current status of the industry is the major problem for the sustainable development of China’s textile industry, and that integration into the global economy will be nearly impossible if no measures are taken. The enterprises concerned should be reformed in terms of product design, raw material selection, technological reform, technological progress, and management, in accordance with the ideas and requirements of sustainable development. The results of this study help to 1) identify the main elements restricting the industry’s sustainable development; 2) seek corresponding solutions for policy formulation and implementation in the textile industry; and 3) provide references for enterprises’ development transformation in strategic deployment, fund allocation, and personnel assignment.

  20. Empirical Modeling of Lithium-ion Batteries Based on Electrochemical Impedance Spectroscopy Tests

    International Nuclear Information System (INIS)

    Samadani, Ehsan; Farhad, Siamak; Scott, William; Mastali, Mehrdad; Gimenez, Leonardo E.; Fowler, Michael; Fraser, Roydon A.

    2015-01-01

    Highlights: • Two commercial Lithium-ion batteries are studied through HPPC and EIS tests. • An equivalent circuit model is developed for a range of operating conditions. • This model improves the current battery empirical models for vehicle applications • This model is proved to be efficient in terms of predicting HPPC test resistances. - ABSTRACT: An empirical model for commercial lithium-ion batteries is developed based on electrochemical impedance spectroscopy (EIS) tests. An equivalent circuit is established according to EIS test observations at various battery states of charge and temperatures. A Laplace transfer time based model is developed based on the circuit which can predict the battery operating output potential difference in battery electric and plug-in hybrid vehicles at various operating conditions. This model demonstrates up to 6% improvement compared to simple resistance and Thevenin models and is suitable for modeling and on-board controller purposes. Results also show that this model can be used to predict the battery internal resistance obtained from hybrid pulse power characterization (HPPC) tests to within 20 percent, making it suitable for low to medium fidelity powertrain design purposes. In total, this simple battery model can be employed as a real-time model in electrified vehicle battery management systems
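
    EIS-derived empirical battery models typically reduce to an equivalent circuit; the sketch below discretizes a first-order Thevenin-type circuit (open-circuit voltage, series resistance and one RC pair) and applies an HPPC-like current pulse. Parameter values are illustrative, not those identified in the paper.

```python
import numpy as np

def terminal_voltage(current, dt, ocv=3.7, r0=0.012, r1=0.018, c1=1800.0):
    """Terminal voltage of an OCV + R0 + (R1 || C1) equivalent circuit.

    current > 0 means discharge; v_rc is the polarization voltage across the RC pair.
    """
    v_rc, out = 0.0, []
    alpha = np.exp(-dt / (r1 * c1))
    for i in current:
        v_rc = alpha * v_rc + r1 * (1 - alpha) * i   # exact discretization of the RC branch
        out.append(ocv - r0 * i - v_rc)
    return np.array(out)

# HPPC-like pulse: 10 s rest, 10 s discharge at 50 A, then 20 s rest.
dt = 0.1
current = np.concatenate([np.zeros(100), 50 * np.ones(100), np.zeros(200)])
v = terminal_voltage(current, dt)
print(v[99], v[100], v[199], v[-1])
```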

  1. EVOLUTION OF THEORIES AND EMPIRICAL MODELS OF A RELATIONSHIP BETWEEN ECONOMIC GROWTH, SCIENCE AND INNOVATIONS (PART I

    Directory of Open Access Journals (Sweden)

    Kaneva M. A.

    2017-12-01

    Full Text Available This article is the first chapter of an analytical review of existing theoretical models of the relationship between economic growth/GRP and indicators of scientific development and innovation activity, as well as of empirical approaches to testing this relationship. The aim of the paper is a systematization of existing approaches to modeling economic growth driven by science and innovation. The novelty of the review lies in the authors’ criterion of interconnectedness between theoretical and empirical studies, used to systematize a wide range of publications presented in a final summary table. In the first part of the article the authors discuss the evolution of theoretical approaches, while the second part presents the time gap between theories and their empirical verification caused by the level of development of quantitative instruments such as econometric models. The results of this study can be used by researchers and graduate students to become familiar with current scientific approaches that trace the progress from theory to empirical verification of the «economic growth-innovations» relationship and to improve different types of models in spatial econometrics. To apply these models to management practice, the presented review could be supplemented with new criteria for the classification of knowledge production functions and other theories about the effect of science on economic growth.

  2. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  3. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measure in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.

  4. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of the Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypothesis in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  5. An Empirical Study of Audit Expectation Gap in Hungary

    OpenAIRE

    Judit Füredi-Fülöp

    2015-01-01

    The audit expectation gap has preoccupied the finance and accounting profession for a long time. Considerable research has been conducted into this issue and attempts have been made to provide an accurate definition of the audit expectation gap, model this concept and assess the possibilities of its narrowing. Also, a number of studies investigate whether there is an audit expectation gap in several researched regions. The objectives of empirical studies on the structure and nature of the aud...

  6. Empirical modeling of single-wake advection and expansion using full-scale pulsed lidar-based measurements

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels

    2015-01-01

    In the present paper, single-wake dynamics have been studied both experimentally and numerically. The use of pulsed lidar measurements allows for validation of basic dynamic wake meandering modeling assumptions. Wake center tracking is used to estimate the wake advection velocity experimentally...... fairly well in the far wake but lacks accuracy in the outer region of the near wake. An empirical relationship, relating maximum wake induction and wake advection velocity, is derived and linked to the characteristics of a spherical vortex structure. Furthermore, a new empirical model for single...

  7. Combining empirical and theory-based land-use modelling approaches to assess economic potential of biofuel production avoiding iLUC: Argentina as a case study

    NARCIS (Netherlands)

    Diogo, V.; van der Hilst, F.; van Eijck, J.; Verstegen, J.A.; Hilbert, J.; Carballo, S.; Volante, J.; Faaij, A.

    2014-01-01

    In this paper, a land-use modelling framework is presented combining empirical and theory-based modelling approaches to determine economic potential of biofuel production avoiding indirect land-use changes (iLUC) resulting from land competition with other functions. The empirical approach explores

  8. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.

  9. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both the agency cost and the road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model was described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and to identify the most effective design using cumulative Equivalent Single Axle Loads (ESALs), below-grade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index.

  10. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating...... their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling...

  11. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  12. A semi-empirical two phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented. A comparison is made of the computed peak stress and velocity versus scaled range and that measured on several recent tuff events

  13. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
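
    To make the modelling idea concrete, the sketch below multiplies a standard empirical stomatal conductance form (a Medlyn-type model) by a water-potential-dependent downregulation factor. Both functional forms and all parameter values are illustrative assumptions and are not the specific model tested in the paper.

```python
import numpy as np

def gs_medlyn(a_net, vpd_kpa, ca=400.0, g0=0.01, g1=4.0):
    """Medlyn-type empirical stomatal conductance [mol m-2 s-1]."""
    return g0 + 1.6 * (1.0 + g1 / np.sqrt(vpd_kpa)) * a_net / ca

def psi_factor(psi_leaf, psi_50=-2.0, shape=3.0):
    """Hypothetical sigmoidal downregulation with declining leaf water potential [MPa]."""
    return 1.0 / (1.0 + (psi_leaf / psi_50) ** shape)

a_net, vpd = 12.0, 1.5          # net photosynthesis [umol m-2 s-1], VPD [kPa]
for psi in [-0.5, -1.0, -2.0, -3.0]:
    print(psi, round(gs_medlyn(a_net, vpd) * psi_factor(psi), 4))
```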

  14. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature...... of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests....

  15. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    Science.gov (United States)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk free forward interest rates were discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk free forward interest rates is extended to the case of risky forward interest rates. The Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as driven by the US forward interest rates plus a spread - having its own term structure - above the US forward interest rates. Both the US forward interest rates and the term structure of the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option on the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  16. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological processes given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
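
    The convergence criterion mentioned is the Gelman-Rubin potential scale reduction factor; a compact sketch of computing it from several MCMC chains is given below, with synthetic chains standing in for real posterior samples.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m chains x n samples) array."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)            # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * w + b / n          # pooled posterior variance estimate
    return np.sqrt(var_hat / w)

rng = np.random.default_rng(6)
good = rng.normal(0, 1, size=(4, 2000))                        # well-mixed chains
bad = good + np.array([[0.0], [0.0], [1.5], [3.0]])            # chains stuck at different offsets
print("R-hat (converged):", round(gelman_rubin(good), 3))      # ~1.0
print("R-hat (not converged):", round(gelman_rubin(bad), 3))   # well above 1.1
```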

  17. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models,

  18. The investor behavior and futures market volatility A theory and empirical study based on the OLG model and high-frequency data

    Institute of Scientific and Technical Information of China (English)

    Yun Wang; Renhai Hua; Zongcheng Zhang

    2011-01-01

    Purpose - The purpose of this paper is to examine whether futures volatility affects investor behavior and what trading strategies different investors adopt under different information conditions. Design/methodology/approach - This study introduces a two-period overlapping generations (OLG) model into the futures market and sets up an investor behavior model based on the futures contract price, which can also be extended to complete and incomplete information. It provides the equilibrium solution and uses copper tick data from the Shanghai Futures Exchange (SHFE) to conduct the empirical analysis. Findings - First, the two-period OLG model based on the futures market is consistent with the practical situation; second, investors with sufficient information, such as institutional investors, generally adopt reversal trading patterns; last, investors with insufficient information, such as individual investors, generally adopt momentum trading patterns. Research limitations/implications - Investor trading behavior is always an important issue in behavioral finance and market supervision, but related research is scarce. Practical implications - The conclusion shows that investor behavior in the Chinese futures market differs from that in the Chinese stock market. Originality/value - This study empirically analyzes and verifies the different types of trading strategies investors adopt: investors such as institutional ones generally adopt reversal trading patterns, while individual investors generally adopt momentum trading patterns.

  19. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

    An empirical sediment transport model has been developed based on longshore energy flux equation. Study indicates that annual gross sediment transport rate is high (1.5 × 10⁶ cubic meters to 2.0 × 10⁶ cubic meters) along the coasts...

  20. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credits. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like Gini, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles. One constraint is required to be met in this case: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using some numerical method for integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k, where k is a positive integer, observations of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k. If we choose too small a value, we get an overestimated value of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
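
    For orientation, the sketch below computes the classical decile-based empirical estimate of the Information value described above from binned proportions of good and bad clients. It is a generic illustration, not the paper's supervised-interval-selection estimator, and skipping bins that miss one class is only one of the practical workarounds the record alludes to.

```python
import numpy as np

def information_value(scores, bad, n_bins=10):
    """Decile-based empirical estimate of the Information value.

    scores: model scores; bad: 1 for bad clients, 0 for good clients.
    The classical estimate requires every bin to contain both classes,
    otherwise the logarithm below is undefined.
    """
    scores, bad = np.asarray(scores, float), np.asarray(bad, int)
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    bins = np.digitize(scores, edges[1:-1])  # bin index 0 .. n_bins-1
    iv = 0.0
    for b in range(n_bins):
        in_bin = bins == b
        p_good = (in_bin & (bad == 0)).sum() / max((bad == 0).sum(), 1)
        p_bad = (in_bin & (bad == 1)).sum() / max((bad == 1).sum(), 1)
        if p_good > 0 and p_bad > 0:         # skip bins missing one class
            iv += (p_good - p_bad) * np.log(p_good / p_bad)
    return iv
```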

  1. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  2. The effects of performance measurement and compensation on motivation: An empirical study

    NARCIS (Netherlands)

    van Herpen, M.; van Praag, C.M.; Cools, K.

    2003-01-01

    The design and implementation of a performance measurement and compensation system can strongly affect the motivation of employees. Building on economic and psychological theory this study develops a conceptual model that is used to empirically test this effect. Our survey results demonstrate a

  3. Equifinality in empirical studies of cultural transmission.

    Science.gov (United States)

    Barrett, Brendan J

    2018-01-31

    Cultural systems exhibit equifinal behavior - a single final state may be arrived at via different mechanisms and/or from different initial states. Potential for equifinality exists in all empirical studies of cultural transmission including controlled experiments, observational field research, and computational simulations. Acknowledging and anticipating the existence of equifinality is important in empirical studies of social learning and cultural evolution; it helps us understand the limitations of analytical approaches and can improve our ability to predict the dynamics of cultural transmission. Here, I illustrate and discuss examples of equifinality in studies of social learning, and how certain experimental designs might be prone to it. I then review examples of equifinality discussed in the social learning literature, namely the use of s-shaped diffusion curves to discern individual from social learning and operational definitions and analytical approaches used in studies of conformist transmission. While equifinality exists to some extent in all studies of social learning, I make suggestions for how to address instances of it, with an emphasis on using data simulation and methodological verification alongside modern statistical approaches that emphasize prediction and model comparison. In cases where evaluated learning mechanisms are equifinal due to non-methodological factors, I suggest that this is not always a problem if it helps us predict cultural change. In some cases, equifinal learning mechanisms might offer insight into how individual learning, social learning strategies and other endogenous social factors might be important in structuring cultural dynamics and within- and between-group heterogeneity. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as a computational term in an embodied conversational agent system by employing empirical and theoretical approach. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures...... in different social relationship. So, first, we introduce the theories of social and cultural characteristics. Then, we did corpus analysis of human interaction of two cultures in two different social situations and extracted empirical data and finally, by integrating socio-cultural characteristics...... with empirical data, we establish a parameterized network model that generates culture specific non-verbal expressions in different social relationships....

  5. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    Science.gov (United States)

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  6. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity in the system dynamics, the model parameters are first obtained based on experimental data. In the study, the dynamic characteristics of indoor temperature and superheat are assumed to follow a first-order model with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model has been proposed for each loop to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available.
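
    The model class assumed above is first order plus time delay for each input-output pair, with the cross terms capturing the interference that the decoupler removes. A generic sketch follows; the symbols K, τ and L are illustrative placeholders, not the paper's identified values.

```latex
% Inputs: compressor frequency f, expansion-valve opening a;
% outputs: indoor temperature T, superheat SH.
\begin{pmatrix} T(s) \\ \mathrm{SH}(s) \end{pmatrix}
=
\begin{pmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{pmatrix}
\begin{pmatrix} f(s) \\ a(s) \end{pmatrix},
\qquad
G_{ij}(s) = \frac{K_{ij}\, e^{-L_{ij} s}}{\tau_{ij} s + 1}
% The off-diagonal terms G_{12}, G_{21} are the cross-couplings that the
% decoupling scheme is designed to cancel.
```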

  7. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at seed shattering and seed shattered maturity stage. Therefore, drying switchgrass in wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during seed development stage whereas, vapor pressure deficit was weakly correlated during seed shattering and seed shattered stage. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on the forecasted weather conditions so that the appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on maturity stage of switchgrass. • Designed environmental chamber can be used to evaluate the effect of other parameters that affect drying of crops

  8. Empirical study on the feasibility of measures for public self-protection capability enhancement

    International Nuclear Information System (INIS)

    Goersch, Henning G.; Werner, Ute

    2011-01-01

    The empirical study on the feasibility of measures for public self-protection capability enhancement covers the following issues with several sections: (1) Introduction: scope of the study; structure of the study. (2) Issue coherence: self-protection; reduction and prevention of damage by personal emergency preparedness, personal emergency preparedness in Germany. (3) Solution coherence: scientific approaches, development of practical problem solution approaches, proposal of a promotion system. (4) Empirical studies: Promotion system evaluation by experts; questioning of the public; Delphi-study on minimum standards in emergency preparedness; local networks in emergency preparedness. (5) Evaluation of models for personal emergency preparedness (M3P). (6) Integration of all research results into the approach of emergency preparedness: scope; recommendations, conclusions.

  9. Using ERP and WfM Systems for Implementing Business Processes: An Empirical Study

    Science.gov (United States)

    Aversano, Lerina; Tortorella, Maria

    Software systems mainly considered by enterprises for business process automation belong to the following two categories: Workflow Management Systems (WfMS) and Enterprise Resource Planning (ERP) systems. The wider diffusion of ERP systems tends to favour this solution, but most ERP systems have several limitations for automating business processes. This paper reports an empirical study aiming to compare the ability of ERP systems and WfMSs to implement business processes. Two different case studies have been considered in the empirical study. It evaluates and analyses the correctness and completeness of the process models implemented by using ERP and WfM systems.

  10. Empirical study of long-range connections in a road network offers new ingredient for navigation optimization models

    Science.gov (United States)

    Wang, Pu; Liu, Like; Li, Xiamiao; Li, Guanliang; González, Marta C.

    2014-01-01

    Navigation problem in lattices with long-range connections has been widely studied to understand the design principles for optimal transport networks; however, the travel cost of long-range connections was not considered in previous models. We define long-range connection in a road network as the shortest path between a pair of nodes through highways and empirically analyze the travel cost properties of long-range connections. Based on the maximum speed allowed in each road segment, we observe that the time needed to travel through a long-range connection has a characteristic time Th ˜ 29 min, while the time required when using the alternative arterial road path has two different characteristic times Ta ˜ 13 and 41 min and follows a power law for times larger than 50 min. Using daily commuting origin-destination matrix data, we additionally find that the use of long-range connections helps people to save about half of the travel time in their daily commute. Based on the empirical results, we assign a more realistic travel cost to long-range connections in two-dimensional square lattices, observing dramatically different minimum average shortest path but similar optimal navigation conditions.

  11. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

  12. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and protection of investors.

  13. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enables the high-throughput survey of mRNA expression profiles at the genomic level; however, the data presents a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene by gene models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure which can be regarded as an 'evidence-based' weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, fβ(yt|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between the robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10⁻⁵) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model
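
    The weighting idea described above can be stated compactly. The line below is a generic rendering of the power-of-likelihood weight, shown for orientation rather than as the paper's full β-likelihood derivation.

```latex
% Weight of transcript t in the beta-empirical Bayes inference:
w_t \;\propto\; f^{\beta}\!\left(y_t \mid \theta\right), \qquad \beta > 0
% Transcripts with low likelihood (outlying expression patterns) receive low
% weight, which makes the inference robust; beta -> 0 recovers the ordinary
% unweighted likelihood.
```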

  14. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  15. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  16. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  17. Reference Evapotranspiration Variation Analysis and Its Approaches Evaluation of 13 Empirical Models in Sub-Humid and Humid Regions: A Case Study of the Huai River Basin, Eastern China

    Directory of Open Access Journals (Sweden)

    Meng Li

    2018-04-01

    Full Text Available Accurate and reliable estimations of reference evapotranspiration (ET0) are imperative in irrigation scheduling and water resource planning. This study aims to analyze the spatiotemporal trends of the monthly ET0 calculated by the Penman–Monteith FAO-56 (PMF-56) model in the Huai River Basin (HRB), eastern China. However, the use of the PMF-56 model is limited by the insufficiency of climatic input parameters in various sites, and the alternative is to employ simple empirical models. In this study, the performances of 13 empirical models were evaluated against the PMF-56 model by using three common statistical approaches: relative root-mean-square error (RRMSE), mean absolute error (MAE), and the Nash–Sutcliffe coefficient (NS). Additionally, a linear regression model was adopted to calibrate and validate the performances of the empirical models during the 1961–2000 and 2001–2014 time periods, respectively. The results showed that the ETPMF increased initially and then decreased on a monthly timescale. On a daily timescale, the Valiantzas3 (VA3) was the best alternative model for estimating the ET0, while the Penman (PEN), WMO, Trabert (TRA), and Jensen-Haise (JH) models showed poor results with large errors. Before calibration, the determination coefficients of the temperature-based, radiation-based, and combined models showed the opposite changing trends compared to the mass transfer-based models. After calibration, the performance of each empirical model in each month improved greatly except for the PEN model. If the comprehensive climatic datasets were available, the VA3 would be the recommended model because it had a simple computation procedure and was also very well correlated linearly to the PMF-56 model. Given the data availability, the temperature-based, radiation-based, Valiantzas1 (VA1) and Valiantzas2 (VA2) models were recommended during April–October in the HRB and other similar regions, and also, the mass transfer-based models were
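
    As a minimal sketch of the three evaluation statistics named above, the code below scores an empirical ET0 series against the PMF-56 benchmark; normalising the RMSE by the mean of the benchmark series to obtain the RRMSE is an assumption about the exact definition used.

```python
import numpy as np

def evaluate_et0(et0_empirical, et0_pmf56):
    """Score an empirical ET0 series against the Penman-Monteith FAO-56 benchmark."""
    y = np.asarray(et0_pmf56, float)        # benchmark, treated as 'observed'
    yhat = np.asarray(et0_empirical, float)
    rmse = np.sqrt(np.mean((yhat - y) ** 2))
    rrmse = rmse / y.mean()                                           # relative RMSE
    mae = np.mean(np.abs(yhat - y))                                   # mean absolute error
    ns = 1.0 - np.sum((yhat - y) ** 2) / np.sum((y - y.mean()) ** 2)  # Nash-Sutcliffe
    return rrmse, mae, ns
```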

  18. Guidelines for using empirical studies in software engineering education

    Directory of Open Access Journals (Sweden)

    Fabian Fagerholm

    2017-09-01

    Full Text Available Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.

  19. Understanding Functional Reuse of ERP Requirements in the Telecommunication Sector: an Empirical Study

    NARCIS (Netherlands)

    Daneva, Maia

    2014-01-01

    This paper is an empirical study on the application of Function Points (FP) and a FP-based reuse measurement model in Enterprise Resource Planning (ERP) projects in three organizations in the telecommunication sector. The findings of the study are used to compare the requirements reuse for one

  20. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Full Text Available Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithms) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study is a sensitivity analysis. The last empirical study is our main contribution in this paper. Performance of the software development defect prediction models (using AUC, Area Under the Curve) was validated using NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed running an N×M-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results reported similar performance between frameworks. Our framework reported better runtime than the exhaustive framework. Finally, we reported the best configuration according to the sensitivity analysis.
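
    The evaluation step described above (AUC under repeated N×M-fold cross-validation of a candidate learning scheme) can be sketched as follows. This shows only the scoring of one hypothetical scheme; the genetic search over schemes, the synthetic data and the chosen classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One candidate "learning scheme" (preprocessing + learning algorithm);
# a genetic algorithm would search over many such combinations.
scheme = make_pipeline(StandardScaler(), GaussianNB())

# Synthetic, imbalanced defect data standing in for a NASA-MDP/PROMISE set.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.85], random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)  # N x M-fold CV
auc = cross_val_score(scheme, X, y, scoring="roc_auc", cv=cv)
print(f"mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```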

  1. Empirically Based Composite Fracture Prediction Model From the Global Longitudinal Study of Osteoporosis in Postmenopausal Women (GLOW)

    Science.gov (United States)

    Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.

    2014-01-01

    Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345

  2. The problem analysis for empirical studies

    NARCIS (Netherlands)

    Groenland, E.A.G.

    2014-01-01

    This article proposes a systematic methodology for the development of a problem analysis for cross-sectional, empirical research. This methodology is referred to as the 'Annabel approach'. It is suitable both for academic studies and applied (business) studies. In addition it can be used for both

  3. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership

    Directory of Open Access Journals (Sweden)

    Kerry Liu

    2017-06-01

    Full Text Available Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis includes hand-collected data from a wide range of data sources. OLS and 2SLS regression analysis are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of large shareholder ownership. The empirical evidence from government ownership is consistent with theoretical analysis. Research limitations/implications: The empirical evidence is based on a small number of hand-collected data sets of government ownership. Further studies can be expanded to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits. In general, this study significantly contributes to the understanding of the effect of large shareholder and corporate governance.

  4. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wide project for the determination of a methodology for the determination of the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: the empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKBs rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass such as the deformation modulus, the friction angle and cohesion for a certain stress interval and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR) but also the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved

  5. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wide project for the determination of a methodology for the determination of the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: the empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKBs rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass such as the deformation modulus, the friction angle and cohesion for a certain stress interval and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR) but also the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved by the

  6. Empirical study of long-range connections in a road network offers new ingredient for navigation optimization models

    International Nuclear Information System (INIS)

    Wang, Pu; Liu, Like; Li, Xiamiao; Li, Guanliang; González, Marta C

    2014-01-01

    Navigation problem in lattices with long-range connections has been widely studied to understand the design principles for optimal transport networks; however, the travel cost of long-range connections was not considered in previous models. We define long-range connection in a road network as the shortest path between a pair of nodes through highways and empirically analyze the travel cost properties of long-range connections. Based on the maximum speed allowed in each road segment, we observe that the time needed to travel through a long-range connection has a characteristic time T h  ∼ 29 min, while the time required when using the alternative arterial road path has two different characteristic times T a  ∼ 13 and 41 min and follows a power law for times larger than 50 min. Using daily commuting origin–destination matrix data, we additionally find that the use of long-range connections helps people to save about half of the travel time in their daily commute. Based on the empirical results, we assign a more realistic travel cost to long-range connections in two-dimensional square lattices, observing dramatically different minimum average shortest path 〈l〉 but similar optimal navigation conditions. (paper)

  7. Empirical Model for Predicting Rate of Biogas Production | Adamu ...

    African Journals Online (AJOL)

    Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...
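
    As an illustration of the non-linear regression step mentioned above, the sketch below fits the modified Gompertz equation to a hypothetical cumulative biogas series; the data values and starting guesses are invented for the example and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas yield: P = production potential, Rm = maximum rate, lam = lag time."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Hypothetical cumulative biogas measurements (volume units vs. day).
t = np.array([0, 2, 4, 6, 8, 10, 14, 18, 22, 26, 30], dtype=float)
y = np.array([0.0, 0.3, 1.1, 2.6, 4.5, 6.2, 8.6, 9.8, 10.4, 10.7, 10.8])

(P, Rm, lam), _ = curve_fit(modified_gompertz, t, y, p0=[y.max(), 1.0, 1.0])
print(P, Rm, lam)  # fitted potential, maximum rate and lag phase
```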

  8. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small scale neighbourhood conditions are hard to find. If there are important explanatory variables missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki in Finland, we find the motivation for a spatial Durbin model, we estimate the model and interpret the estimates for the summary measures of impacts. By the analysis we show that the model structure makes it possible to model and find small scale neighbourhood effects, when we know that they exist, but we are lacking proper variables to measure them.
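
    For reference, a generic spatial Durbin specification of the kind motivated above is shown below; the notation is standard and is not taken from the paper's estimated model.

```latex
% Spatial Durbin model for (log) house prices y with spatial weight matrix W:
y = \rho W y + X\beta + W X \theta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2} I_n)
% W y : spatial lag of the dependent variable (prices of neighbouring sales)
% W X : spatially lagged explanatory variables, proxying the omitted
%       small-scale neighbourhood conditions discussed in the abstract
```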

  9. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atom radii of component elements will change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B so that the atom with larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B respectively are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for signs of deviations according to the electron density at Wigner–Seitz cell from Thomas–Fermi–Dirac–Cheng model
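
    For orientation, Vegard's linear rule and the common way of writing deviations from it are given below in standard notation; this is background, not the empirical atom model proposed in the record.

```latex
% Vegard's law for the lattice parameter of a binary solid solution A_{1-x}B_x:
a_{A_{1-x}B_x} = (1 - x)\, a_A + x\, a_B
% Observed deviations are often summarised with a bowing parameter b:
a_{A_{1-x}B_x} = (1 - x)\, a_A + x\, a_B - b\, x(1 - x)
% The sign of b indicates whether the measured lattice parameter lies below
% or above the linear interpolation.
```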

  10. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    Science.gov (United States)

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycled ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.

  11. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    Science.gov (United States)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  12. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  13. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data on 237 Jiangsu logistics firms, this paper empirically studies the relationship among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has positive impacts on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning and exploitative learning plays a significant role in both radical and incremental business model innovation.

  14. Applicability of special quasi-random structure models in thermodynamic calculations using semi-empirical Debye–Grüneisen theory

    International Nuclear Information System (INIS)

    Kim, Jiwoong

    2015-01-01

    In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model has not been applied to semi-empirical thermodynamic calculations; however, Debye–Grüneisen theory (DGT), a semi-empirical method, was used here for that purpose. The model reliability was obtained by comparing supercell models of various sizes. The results for chemical bonds, pair correlation, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models. DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments to obtain the reliable SQS models and to calculate the thermodynamic properties of numerous materials with a random atomic distribution. - Highlights: • Various material properties are used to examine reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.

  15. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  16. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S.L.

    2017-01-01

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  17. Empirical analysis of uranium spot prices

    International Nuclear Information System (INIS)

    Morman, M.R.

    1988-01-01

    The objective is to empirically test a market model of the uranium industry that incorporates the notion that, if the resource is viewed as an asset by economic agents, then its own rate of return along with the own rate of return of a competing asset would be a major factor in formulating the price of the resource. The model tested is based on a market model of supply and demand. The supply model incorporates the notion that the decision criteria used by uranium mine owners is to select that extraction rate that maximizes the net present value of their extraction receipts. The demand model uses a concept that allows for explicit recognition of the prospect of arbitrage between a natural-resource market and the market for other capital goods. The empirical approach used for estimation was a recursive or causal model. The empirical results were consistent with the theoretical models. The coefficients of the demand and supply equations had the appropriate signs. Tests for causality were conducted to validate the use of the causal model. The results obtained were favorable. The implication of the findings as related to future studies of exhaustible resources are: (1) in some cases causal models are the appropriate specification for empirical analysis; (2) supply models should incorporate a measure to capture depletion effects

  18. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone

    International Nuclear Information System (INIS)

    Lee, Kang ll; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

    The Biot and the modified Biot-Attenborough (MBA) models have been found useful to understand ultrasonic wave propagation in cancellous bone. However, neither of the models, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone, by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical way of using the parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of phase velocity of the fast wave with direction. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s⁻¹ (angle-dependent Biot model) and 36.1 m s⁻¹ (angle-dependent MBA model). They also predicted the fact that the slow wave is nearly independent of propagation angle for angles up to about 50°, but consistently underestimated the slow wave velocity with the root-mean-square errors of 187.2 m s⁻¹ (angle-dependent Biot model) and 240.8 m s⁻¹ (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone.

  19. An accuracy assessment of an empirical sine model, a novel sine model and an artificial neural network model for forecasting illuminance/irradiance on horizontal plane of all sky types at Mahasarakham, Thailand

    International Nuclear Information System (INIS)

    Pattanasethanon, Singthong; Lertsatitthanakorn, Charoenporn; Atthajariyakul, Surat; Soponronnarit, Somchart

    2008-01-01

    The results of a study on all sky modeling and forecasting daylight availability for the tropical climate found in the central region of the northeastern part of Thailand (16 deg. 14' N, 103 deg. 15' E) are presented. The required components of sky quantities, namely, global and diffuse horizontal irradiance and global horizontal illuminance for saving energy used in buildings are estimated. The empirical sinusoidal models are validated. A and B values of the empirical sinusoidal model for all sky conditions are determined and developed to become a form of the sky conditions. In addition, a novel sinusoidal model, which consists of polynomial or exponential functions, is validated. A and B values of the empirical sinusoidal model for all sky conditions are determined and developed to become a new function in the polynomial or exponential form of the sky conditions. Furthermore, an artificial intelligence agent, namely an artificial neural network (ANN) model, is also identified. Back propagation learning algorithms were used in the networks. Moreover, a one year data set and a next half year data set were used in order to train and test the neural network, respectively. Observation results from one year's round data indicate that luminosity and energy from the sky on horizontal in the area around Mahasarakham are frequently brighter than those of Bangkok. The accuracy of the validated model is determined in terms of the mean bias deviation (MBD), the root-mean-square-deviation (RMSD) and the coefficient of correlation (R²) values. A comparison of the estimated solar irradiation values and the observed values revealed a small error slide in the empirical sinusoidal model as well. In addition, some results of the sky quantity forecast by the ANN model indicate that the ANN model is more accurate than the empirical models and the novel sinusoidal models. This study confirms the ability of the ANN to predict highly accurate solar radiance/illuminance values. We believe

  20. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements...... of N uptake in spring barley and ryegrass catch crops, supplemented with data from the literature on the temporal release of organic residues in soil. The model estimates a faster mineralisation rate for organic N in pig slurry compared with cattle slurry, and the description includes an initial N...

  1. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
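
    The simplest case mentioned above, a confidence region for a univariate mean under IID sampling, can be sketched as follows; the profiling over the Lagrange multiplier is the standard construction, while the data set and the search grid are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 * log empirical likelihood ratio for the mean mu (univariate, IID)."""
    x = np.asarray(x, dtype=float)
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    # Lagrange multiplier solves sum(d / (1 + lam * d)) = 0 (monotone in lam)
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=60)   # skewed data, no normality assumed

# 95% EL confidence interval for the mean: {mu : -2 log R(mu) <= chi2_1(0.95)}
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
cutoff = chi2.ppf(0.95, df=1)
inside = [m for m in grid if neg2_log_elr(x, m) <= cutoff]
print(f"95% EL interval for the mean: [{min(inside):.3f}, {max(inside):.3f}]")
```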

  2. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    Science.gov (United States)

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  3. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we are choosing a suitable indoor-outdoor propagation model out of the existing models by considering path loss and distance as parameters. A path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC that receives beacon messages from indoor nodes. As per our analysis, the International Telecommunication Union (ITU model, Stanford University Interim (SUI model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33 models are chosen and evaluated using empirical data collected in a real environment. The aim is to determine if the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
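
    Of the listed candidates, the Log-Distance Path Loss model is the easiest to fit directly to empirical path-loss measurements; the sketch below estimates its path loss exponent by least squares, with distances and losses that are illustrative, not the paper's measurements.

```python
import numpy as np

# Log-distance path loss model: PL(d) = PL(d0) + 10 * n * log10(d / d0)
# The path loss exponent n is fitted to empirical measurements (values below are illustrative).
d0 = 1.0                                               # reference distance, m
dist = np.array([2, 5, 10, 20, 40, 60, 80, 100], dtype=float)     # emitter-receiver distance, m
pl_meas = np.array([52, 61, 68, 76, 84, 89, 93, 96], dtype=float)  # measured path loss, dB

X = 10.0 * np.log10(dist / d0)
A = np.column_stack([np.ones_like(X), X])              # design matrix for [PL(d0), n]
(pl_d0, n), *_ = np.linalg.lstsq(A, pl_meas, rcond=None)
resid = pl_meas - (pl_d0 + n * X)
print(f"PL(d0)={pl_d0:.1f} dB, path loss exponent n={n:.2f}, "
      f"residual std={resid.std(ddof=2):.2f} dB")
```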

  4. Empirical Study on Total Factor Productive Energy Efficiency in Beijing-Tianjin-Hebei Region-Analysis based on Malmquist Index and Window Model

    Science.gov (United States)

    Xu, Qiang; Ding, Shuai; An, Jingwen

    2017-12-01

    This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and identifies its trend in order to improve the quality of economic development in the region. Based on the Malmquist index and a window analysis model, the paper empirically estimates total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data from 1991 to 2014 and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the region increased from 1991 to 2014, relying mainly on advances in energy technology and innovation, and that obvious regional differences in energy efficiency exist. Over the 24-year window period, the regional differences in energy efficiency in the Beijing-Tianjin-Hebei region shrank, and there has been a significant convergent trend in energy efficiency after 2000, depending mainly on the diffusion and spillover of energy technologies.

  5. Evaluation of empirical atmospheric diffusion data

    International Nuclear Information System (INIS)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.
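
    A common way to implement such dispersion estimates is a Gaussian plume with empirical power-law sigma curves; the sketch below is a generic illustration with assumed coefficients, not necessarily the specific technique recommended in the report.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a_y, b_y, a_z, b_z):
    """Ground-reflecting Gaussian plume concentration (g/m^3) downwind of a point source.

    x, y, z : downwind, crosswind, vertical coordinates (m)
    Q       : emission rate (g/s); u: wind speed (m/s); H: release height (m)
    sigma_y and sigma_z follow power laws in x; coefficients here are illustrative.
    """
    sig_y = a_y * x ** b_y
    sig_z = a_z * x ** b_z
    lateral = np.exp(-y**2 / (2 * sig_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sig_z**2)) +
                np.exp(-(z + H)**2 / (2 * sig_z**2)))   # ground reflection term
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# Illustrative near-neutral coefficients (assumed, not the report's values)
x = np.array([100.0, 300.0, 1000.0])
c = gaussian_plume(x, y=0.0, z=0.0, Q=1.0, u=4.0, H=50.0,
                   a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85)
print(c)   # centerline ground-level concentrations, g/m^3
```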

  6. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  7. Data envelopment analysis in service quality evaluation: an empirical study

    Science.gov (United States)

    Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid

    2015-09-01

    Service quality is often conceptualized as the comparison between service expectations and the actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the method proposed in this study. A large number of studies have used DEA as a benchmarking tool to measure service quality. These models do not propose a coherent performance evaluation construct and consequently fail to deliver improvement guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.

  8. Using an Empirical Binomial Hierarchical Bayesian Model as an Alternative to Analyzing Data from Multisite Studies

    Science.gov (United States)

    Hardin, J. Michael; Anderson, Billie S.; Woodby, Lesa L.; Crawford, Myra A.; Russell, Toya V.

    2008-01-01

    This article explores the statistical methodologies used in demonstration and effectiveness studies when the treatments are applied across multiple settings. The importance of evaluating and how to evaluate these types of studies are discussed. As an alternative to standard methodology, the authors of this article offer an empirical binomial…

  9. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology, considering two steps – time-series and cross-sectional regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly because of the originality of the methodology in the local market and because this subject is still incipient and polemic in the Brazilian academic environment.
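
    A compact sketch of the two-step Fama-MacBeth (1973) procedure referred to above is given below, using synthetic factor and portfolio returns; the data, dimensions and factor count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 120, 25, 3          # months, portfolios, factors (e.g. market, size, value)
factors = rng.normal(0.005, 0.03, size=(T, K))          # synthetic factor returns
true_beta = rng.normal(1.0, 0.3, size=(N, K))
returns = factors @ true_beta.T + rng.normal(0, 0.02, size=(T, N))

# Step 1: time-series regressions -> factor loadings (betas) for each portfolio
X = np.column_stack([np.ones(T), factors])
betas = np.array([np.linalg.lstsq(X, returns[:, i], rcond=None)[0][1:] for i in range(N)])

# Step 2: for each month, a cross-sectional regression of returns on the betas
Z = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0] for t in range(T)])

# Fama-MacBeth estimates: time-series means of the cross-sectional coefficients,
# with standard errors taken from their time-series variation
lam_hat = lambdas.mean(axis=0)
lam_se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
print("risk premia:", np.round(lam_hat, 4))
print("t-stats    :", np.round(lam_hat / lam_se, 2))
```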

  10. Inter-firm Networks, Organizational Learning and Knowledge Updating: An Empirical Study

    Science.gov (United States)

    Zhang, Su-rong; Wang, Wen-ping

    In the era of the knowledge-based economy, in which information technology develops rapidly, the rate of knowledge updating has become a critical factor for enterprises in gaining competitive advantage. We build an interactional theoretical model linking inter-firm networks, organizational learning and knowledge updating, and demonstrate it with an empirical study. The results show that inter-firm networks and organizational learning are the sources of knowledge updating.

  11. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  13. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for the interval of three years (2010 – 2012) measured using various instruments for Yola of recorded data collected from the Center for Atmospheric Research (CAR), Anyigba are presented and analyzed.

  14. The emotional involvement in the workplace: An empirical study

    Directory of Open Access Journals (Sweden)

    Ana María Lucia-Casademunt

    2012-06-01

    Full Text Available Purpose: Many studies have verified that the generation of positive employee attitudes such as job satisfaction or job involvement has a positive influence on company productivity. The current investigation focuses on identifying the profile of employees who are emotionally involved with their work activity, through a set of individual, job-related and attitudinal factors. Design/methodology: A review of the literature on the main factors that affect job involvement, particularly its emotional dimension, has been completed. For its measurement at the empirical level, various items related to the psychological well-being of employees included in the IV European Working Conditions Survey-2010 are used; these items are also identified in the Job Involvement Questionnaire (Lodahl & Kejner, 1965). An empirical, multidimensional study is then carried out by applying a logistic regression model to the sample of 11,149 employees obtained from the European survey cited previously. Findings: The logistic regression model identifies the factors that are directly related to emotional involvement at the workplace. Ultimately, a definitive model is proposed that defines the profile of the European employee who is emotionally involved at the workplace: a rather older person who has been working at his/her present place of employment for several years in a medium-sized company where a good working relationship between workers and their superiors (social support) probably exists. These employees are “white-collar” workers and have career advancement opportunities in the organizational hierarchy. They perform varied, flexible and complex tasks, which leads to satisfaction in terms of pay and working conditions. Research limitations/implications: Emotional involvement has been measured through self-awareness and, therefore, the corresponding bias in the key variable must be assumed. In addition, the causal

  15. Empirical Scientific Research and Legal Studies Research--A Missing Link

    Science.gov (United States)

    Landry, Robert J., III

    2016-01-01

    This article begins with an overview of what is meant by empirical scientific research in the context of legal studies. With that backdrop, the argument is presented that without engaging in normative, theoretical, and doctrinal research in tandem with empirical scientific research, the role of legal studies scholarship in making meaningful…

  16. Development of efficient air-cooling strategies for lithium-ion battery module based on empirical heat source model

    International Nuclear Information System (INIS)

    Wang, Tao; Tseng, K.J.; Zhao, Jiyun

    2015-01-01

    Thermal modeling is the key issue in the thermal management of lithium-ion battery systems, and cooling strategies need to be carefully investigated to guarantee that the temperature of batteries in operation stays within a narrow optimal range, as well as to provide cost-effective and energy-saving solutions for the cooling system. This article reviews and summarizes past cooling methods, especially forced air cooling, and introduces an empirical heat source model which can be widely applied in battery module/pack thermal modeling. In the development of the empirical heat source model, a three-dimensional computational fluid dynamics (CFD) method is employed, and thermal insulation experiments are conducted to provide the key parameters. A transient thermal model of a 5 × 5 battery module with forced air cooling is then developed based on the empirical heat source model. Thermal behaviors of the battery module under different air cooling conditions, discharge rates and ambient temperatures are characterized and summarized. Various cooling strategies are simulated and compared in order to obtain an optimal cooling method. In addition, battery fault conditions are predicted from transient simulation scenarios. The temperature distributions and variations during the discharge process are quantitatively described, and it is found that the upper limit of ambient temperature for forced air cooling is 35 °C, and that when the ambient temperature is lower than 20 °C, forced air cooling is not necessary. - Highlights: • An empirical heat source model is developed for battery thermal modeling. • Different air-cooling strategies on module thermal characteristics are investigated. • Impact of different discharge rates on module thermal responses are investigated. • Impact of ambient temperatures on module thermal behaviors are investigated. • Locations of maximum temperatures under different operation conditions are studied.
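
    The sketch below is a heavily simplified, lumped-capacitance stand-in for the module-level CFD model described above: an assumed ohmic heat source drives a single-cell energy balance under forced-air cooling. All parameter values are assumptions, not the article's fitted empirical heat source.

```python
import numpy as np

# Lumped-capacitance thermal model of a single cell under forced air cooling.
# The heat source Q would come from an empirical model (e.g. fitted from
# thermal-insulation experiments); here a constant I^2*R term is assumed.
def simulate_cell(I=20.0, R=0.010, m=0.45, cp=1000.0, h=25.0, A=0.012,
                  T_amb=308.15, t_end=3600.0, dt=1.0):
    """Forward-Euler integration of m*cp*dT/dt = Q - h*A*(T - T_amb)."""
    n = int(t_end / dt)
    T = np.empty(n)
    T[0] = T_amb
    Q = I**2 * R                      # assumed ohmic heat generation, W
    for k in range(1, n):
        T[k] = T[k-1] + dt * (Q - h * A * (T[k-1] - T_amb)) / (m * cp)
    return T

T = simulate_cell()
print(f"temperature rise after 1 h of discharge: {T[-1] - 308.15:.1f} K")
```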

  17. Integration of least angle regression with empirical Bayes for multi-locus genome-wide association studies

    Science.gov (United States)

    Multi-locus genome-wide association studies have become the state-of-the-art procedure to identify quantitative trait loci (QTL) associated with traits simultaneously. However, implementation of multi-locus models is still difficult. In this study, we integrated least angle regression with empirical B...

  18. A REVIEW of WEBERIAN STUDIES ON THE OTTOMAN EMPIRE

    OpenAIRE

    MAZMAN, İbrahim

    2018-01-01

    This study examines the secondary literature on Max Weber’s (1864-1920) writings on Islam and the Ottoman Empire. It demarcates approaches prevalent in the secondary literature. Three basic themes are apparent: Section a) concentrates on authors who applied Weber’s concepts of patrimonialism and bureaucracy to non-Ottoman countries, such as Maslovski (on the Soviet bureaucracy) and Eisenberg (on China). Section b) focuses on authors who studied the Ottoman Empire utilizing non-Weberian, above all ...

  19. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes

  20. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, so different models that use climatological parameters of the locations must be used for the estimation of global solar radiation. Long-period solar radiation data are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar). These locations almost encompass the different geographical features of Pakistan. For this reason, in this study the mean monthly global solar radiation has been estimated using the empirical models of Angstrom, FAO, Glover-McCulloch, and Sangeeta & Tiwari, chosen for their diversity of approach and use of climatic and geographical parameters. Empirical constants for these models have been estimated and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation for locations having similar conditions.
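
    The Angstrom-type regression named above can be sketched as a simple linear fit of the clearness ratio on the relative sunshine duration; the monthly values below are illustrative, not the measured Pakistani data.

```python
import numpy as np

# Angstrom-Prescott type regression: H/H0 = a + b * (S/S0)
# H: measured monthly-mean global radiation, H0: extraterrestrial radiation,
# S: sunshine hours, S0: day length. All values below are illustrative.
s_ratio = np.array([0.45, 0.52, 0.60, 0.66, 0.71, 0.74,
                    0.70, 0.63, 0.58, 0.55, 0.50, 0.47])   # S/S0 by month
h_ratio = np.array([0.48, 0.52, 0.57, 0.61, 0.64, 0.66,
                    0.63, 0.59, 0.56, 0.54, 0.51, 0.49])   # H/H0 by month

b, a = np.polyfit(s_ratio, h_ratio, 1)       # slope b, intercept a
pred = a + b * s_ratio
rmse = np.sqrt(np.mean((h_ratio - pred) ** 2))
r = np.corrcoef(h_ratio, pred)[0, 1]
print(f"a={a:.3f}, b={b:.3f}, RMSE={rmse:.4f}, R^2={r**2:.3f}")
```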

  1. Qualitative Case Study Research as Empirical Inquiry

    Science.gov (United States)

    Ellinger, Andrea D.; McWhorter, Rochell

    2016-01-01

    This article introduces the concept of qualitative case study research as empirical inquiry. It defines and distinguishes what a case study is, the purposes, intentions, and types of case studies. It then describes how to determine if a qualitative case study is the preferred approach for conducting research. It overviews the essential steps in…

  2. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2018-03-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.

  3. A behavioural approach to financial portfolio selection problem: an empirical study using heuristics

    OpenAIRE

    Grishina, Nina

    2014-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The behaviourally based portfolio selection problem, with investor's loss aversion and risk aversion biases in portfolio choice under uncertainty, is studied. The main results of this work are developed heuristic approaches for the prospect theory and cumulative prospect theory models proposed by Kahneman and Tversky in 1979 and 1992, as well as an empirical comparative analysis of these models ...

  4. Evaluation of empirical atmospheric diffusion data

    Energy Technology Data Exchange (ETDEWEB)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.

  5. A stochastic empirical model for heavy-metal balances in Agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,
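
    A minimal stochastic balance sketch in the spirit described above is given below: uncertain annual input and output fluxes are sampled and the distribution of net accumulation is summarized. All flux distributions are illustrative assumptions, not PROTERRA-S parameters.

```python
import numpy as np

# Monte Carlo sketch of an annual heavy-metal (e.g. Cd) surface balance for one field:
# change = atmospheric deposition + fertilizer + manure - crop offtake - leaching
# All flux distributions (g/ha/yr) are illustrative assumptions.
rng = np.random.default_rng(2)
n = 10000
deposition = rng.lognormal(np.log(1.5), 0.3, n)
fertilizer = rng.lognormal(np.log(2.0), 0.5, n)
manure     = rng.lognormal(np.log(1.0), 0.6, n)
offtake    = rng.lognormal(np.log(1.2), 0.4, n)
leaching   = rng.lognormal(np.log(0.8), 0.5, n)

net = deposition + fertilizer + manure - offtake - leaching
print(f"median net accumulation: {np.median(net):.2f} g/ha/yr")
print(f"P(net accumulation > 0): {np.mean(net > 0):.2f}")
```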

  6. Empirical studies on the pricing of bonds and interest rate derivatives

    NARCIS (Netherlands)

    Driessen, J.J.A.G.

    2001-01-01

    Nowadays, both large financial and non-financial institutions use models for the term structure of interest rates for risk management and pricing purposes. This thesis focuses on these two important applications of term structure models. In the first part, the empirical performance of several term

  7. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    Science.gov (United States)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.
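
    As an example of fitting one of the commonly used isotherm equations to sorption data (here the GAB model, chosen purely for illustration; the study evaluates nine models), consider the sketch below with assumed water activities and water contents.

```python
import numpy as np
from scipy.optimize import curve_fit

# GAB sorption isotherm: w(aw) = wm*C*K*aw / ((1 - K*aw) * (1 - K*aw + C*K*aw))
# wm: monolayer water content, C and K: energy constants. Data below are illustrative.
def gab(aw, wm, C, K):
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.array([0.03, 0.10, 0.20, 0.33, 0.45, 0.60, 0.75, 0.85, 0.93])
w = np.array([0.004, 0.009, 0.014, 0.019, 0.024, 0.031, 0.042, 0.055, 0.075])  # g water / g soil

popt, _ = curve_fit(gab, aw, w, p0=[0.02, 10.0, 0.8], maxfev=10000)
rmse = np.sqrt(np.mean((w - gab(aw, *popt)) ** 2))
print(f"wm={popt[0]:.4f} g/g, C={popt[1]:.2f}, K={popt[2]:.3f}, RMSE={rmse:.2e}")
```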

  8. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Full Text Available Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated a positive and significant linear correspondence (R² = 0.75) between the two sets of estimates, with EEMT expressed in MJ m⁻² yr⁻¹. Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of the energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was a significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  9. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered in a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel's shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent the off-frequency looking, (2) white noise satisfactorily prevents the off-frequency looking independently of the shape and bandwidth of the visual channel, and interestingly we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise

  10. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    Science.gov (United States)

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer only used for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  11. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    Science.gov (United States)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

    We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.

  12. Empirical high-latitude electric field models

    International Nuclear Information System (INIS)

    Heppner, J.P.; Maynard, N.C.

    1987-01-01

    Electric field measurements from the Dynamics Explorer 2 satellite have been analyzed to extend the empirical models previously developed from dawn-dusk OGO 6 measurements (J.P. Heppner, 1977). The analysis embraces large quantities of data from polar crossings entering and exiting the high latitudes in all magnetic local time zones. Paralleling the previous analysis, the modeling is based on the distinctly different polar cap and dayside convective patterns that occur as a function of the sign of the Y component of the interplanetary magnetic field. The objective, which is to represent the typical distributions of convective electric fields with a minimum number of characteristic patterns, is met by deriving one pattern (model BC) for the northern hemisphere with a +Y interplanetary magnetic field (IMF) and southern hemisphere with a -Y IMF and two patterns (models A and DE) for the northern hemisphere with a -Y IMF and southern hemisphere with a +Y IMF. The most significant large-scale revisions of the OGO 6 models are (1) on the dayside where the latitudinal overlap of morning and evening convection cells reverses with the sign of the IMF Y component, (2) on the nightside where a westward flow region poleward from the Harang discontinuity appears under model BC conditions, and (3) magnetic local time shifts in the positions of the convection cell foci. The modeling above was followed by a detailed examination of cases where the IMF Z component was clearly positive (northward). Neglecting the seasonally dependent cases where irregularities obscure pattern recognition, the observations range from reasonable agreement with the new BC and DE models, to cases where different characteristics appeared primarily at dayside high latitudes

  13. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
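
    A toy version of such a space-for-time model and its non-random transfer test is sketched below: April 1 SWE is regressed on mean winter temperature and cumulative winter precipitation at increasing model complexity, then validated on climatically distinct sites. The data and functional forms are synthetic assumptions, not the SNOTEL analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites = 200
t_winter = rng.uniform(-10, 4, n_sites)       # mean winter temperature, deg C (synthetic)
p_winter = rng.uniform(200, 1500, n_sites)    # cumulative winter precipitation, mm
# Synthetic "true" April 1 SWE: precipitation scaled by a temperature-dependent retention fraction
swe = p_winter * np.clip(0.9 - 0.08 * (t_winter + 2), 0, 1) + rng.normal(0, 40, n_sites)

def fit_predict(Xtr, ytr, Xte, degree):
    """Polynomial-in-inputs regression of chosen complexity (degree 1 = linear)."""
    def expand(X):
        cols = [np.ones(len(X))]
        for d in range(1, degree + 1):
            cols += [X[:, 0] ** d, X[:, 1] ** d]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(expand(Xtr), ytr, rcond=None)
    return expand(Xte) @ coef

# Non-random split: transfer from the colder sites to the warmer sites
X = np.column_stack([t_winter, p_winter])
order = np.argsort(t_winter)
train, test = order[:120], order[120:]
for degree in (1, 2, 3):
    pred = fit_predict(X[train], swe[train], X[test], degree)
    rmse = np.sqrt(np.mean((swe[test] - pred) ** 2))
    print(f"degree {degree}: transfer RMSE = {rmse:.1f} mm")
```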

  14. High-frequency volatility combine forecast evaluations: An empirical study for DAX

    Directory of Open Access Journals (Sweden)

    Wen Cheong Chin

    2017-01-01

    Full Text Available This study aims to examine the benefits of combining realized volatility, higher-power variation volatility and nearest-neighbour truncation volatility in forecasting the DAX financial stock market. A structural-break, heavy-tailed heterogeneous autoregressive model under the heterogeneous market hypothesis specification is employed to capture the stylized facts of the high-frequency empirical data. Using selected averaging forecast methods, the forecast weights are assigned based on the simple average, simple median, least squares and mean square error. The empirical results indicated that the combination of forecasts in general showed superiority under four evaluation criteria, regardless of which proxy is set as the actual volatility. In conclusion, forecast performance is influenced by three factors, namely the type of volatility proxy, the forecast method (individual or averaging forecast) and, lastly, the type of actual forecast value used in the evaluation criteria.
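
    A stripped-down sketch of a HAR-type volatility regression and a simple-average forecast combination is shown below; it omits the structural-break and heavy-tailed components of the study's specification, and the realized-volatility series is simulated rather than taken from DAX data.

```python
import numpy as np

# Simulate a persistent realized-volatility (RV) proxy series (synthetic data)
rng = np.random.default_rng(7)
T = 1000
log_rv = np.zeros(T)
for t in range(1, T):
    log_rv[t] = -0.5 + 0.95 * log_rv[t - 1] + rng.normal(0, 0.3)
rv = np.exp(log_rv - 9.0)

# HAR-RV regressors: previous day's RV, previous week's and previous month's averages
def har_design(rv):
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    y = rv[22:]
    X = np.column_stack([np.ones_like(d), d, w, m])
    return X, y

X, y = har_design(rv)
split = len(y) - 200
beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
har_fc = X[split:] @ beta                 # HAR forecasts
rw_fc = X[split:, 1]                      # naive forecast: yesterday's RV
combo = 0.5 * (har_fc + rw_fc)            # simple-average forecast combination

for name, fc in [("HAR", har_fc), ("random walk", rw_fc), ("combined", combo)]:
    print(f"{name:12s} RMSE = {np.sqrt(np.mean((y[split:] - fc) ** 2)):.3e}")
```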

  15. Quantitative analyses of empirical fitness landscapes

    International Nuclear Information System (INIS)

    Szendro, Ivan G; Franke, Jasper; Krug, Joachim; Schenk, Martijn F; De Visser, J Arjan G M

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well. (paper)

  16. An improved empirical model for diversity gain on Earth-space propagation paths

    Science.gov (United States)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.

  17. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    Science.gov (United States)

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Mission Operations Planning with Preferences: An Empirical Study

    Science.gov (United States)

    Bresina, John L.; Khatib, Lina; McGann, Conor

    2006-01-01

    This paper presents an empirical study of some non-exhaustive approaches to optimizing preferences within the context of constraint-based, mixed-initiative planning for mission operations. This work is motivated by the experience of deploying and operating the MAPGEN (Mixed-initiative Activity Plan GENerator) system for the Mars Exploration Rover Mission. Responsiveness to the user is one of the important requirements for MAPGEN; hence, the additional computation time needed to optimize preferences must be kept within reasonable bounds. This was the primary motivation for studying non-exhaustive optimization approaches. The specific goals of the empirical study are to assess the impact on solution quality of two greedy heuristics used in MAPGEN and to assess the improvement gained by applying a linear programming optimization technique to the final solution.

  19. Evolution of viral virulence: empirical studies

    Science.gov (United States)

    Kurath, Gael; Wargo, Andrew R.

    2016-01-01

    The concept of virulence as a pathogen trait that can evolve in response to selection has led to a large body of virulence evolution theory developed in the 1980-1990s. Various aspects of this theory predict increased or decreased virulence in response to a complex array of selection pressures including mode of transmission, changes in host, mixed infection, vector-borne transmission, environmental changes, host vaccination, host resistance, and co-evolution of virus and host. A fundamental concept is prediction of trade-offs between the costs and benefits associated with higher virulence, leading to selection of optimal virulence levels. Through a combination of observational and experimental studies, including experimental evolution of viruses during serial passage, many of these predictions have now been explored in systems ranging from bacteriophage to viruses of plants, invertebrates, and vertebrate hosts. This chapter summarizes empirical studies of viral virulence evolution in numerous diverse systems, including the classic models myxomavirus in rabbits, Marek's disease virus in chickens, and HIV in humans. Collectively these studies support some aspects of virulence evolution theory, suggest modifications for other aspects, and show that predictions may apply in some virus:host interactions but not in others. Finally, we consider how virulence evolution theory applies to disease management in the field.

  20. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

    Full Text Available Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and on the total economic and insured catastrophe losses in seven regions of the world over the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
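
    For the simpler of the two models named above (Bühlmann, with equal observation periods and no volume weights), the credibility premium can be sketched as follows; the regional loss data are synthetic stand-ins for the Sigma Swiss Re figures.

```python
import numpy as np

# losses[j, t]: total catastrophe losses for region j in year t (synthetic, USD bn)
rng = np.random.default_rng(3)
region_means = np.array([5.0, 12.0, 30.0, 8.0, 20.0, 3.0, 15.0])  # 7 world regions
losses = rng.gamma(shape=2.0, scale=region_means[:, None] / 2.0, size=(7, 7))

n = losses.shape[1]                               # years observed per region
xbar_j = losses.mean(axis=1)                      # region means
m_hat = xbar_j.mean()                             # collective (overall) mean
s2 = losses.var(axis=1, ddof=1).mean()            # expected within-region variance
a_hat = max(xbar_j.var(ddof=1) - s2 / n, 1e-12)   # between-region variance estimate

Z = n / (n + s2 / a_hat)                          # Buhlmann credibility factor
premiums = Z * xbar_j + (1 - Z) * m_hat           # credibility premiums per region
print(f"Z = {Z:.3f}")
print(np.round(premiums, 2))
```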

  1. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  2. A three-model comparison of the relationship between quality, satisfaction and loyalty: an empirical study of the Chinese healthcare system

    Directory of Open Access Journals (Sweden)

    Lei Ping

    2012-11-01

    Full Text Available Abstract Background Previous research has addressed the relationship between customer satisfaction, perceived quality and customer loyalty intentions in consumer markets. In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Methods This research focuses on hospital patients as participants in the process of healthcare procurement. Empirical data were obtained from six Chinese public hospitals in Shanghai. A total of 630 questionnaires were collected in two studies. Study 1 tested the research instruments, and Study 2 tested the three models. Confirmatory factor analysis was used to assess the scales’ construct validity by testing convergent and discriminant validity. A structural equation model (SEM specified the distinctions between each construct. A comparison of the three theoretical models was conducted via AMOS analysis. Results The results of the SEM demonstrate that quality and satisfaction are distinct concepts and that the first model (satisfaction mediates quality and loyalty is the most appropriate one in the context of the Chinese healthcare environment. Conclusions In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Findings show that perceived quality improvement does not lead directly to customer loyalty. The strategy of using quality improvement to maintain patient loyalty depends on the level of patient satisfaction. This implies that the measurement of patient experiences should include topics of importance for patients’ satisfaction with health care services.

  3. A three-model comparison of the relationship between quality, satisfaction and loyalty: an empirical study of the Chinese healthcare system.

    Science.gov (United States)

    Lei, Ping; Jolibert, Alain

    2012-11-30

    Previous research has addressed the relationship between customer satisfaction, perceived quality and customer loyalty intentions in consumer markets. In this study, we test and compare three theoretical models of the quality-satisfaction-loyalty relationship in the Chinese healthcare system. This research focuses on hospital patients as participants in the process of healthcare procurement. Empirical data were obtained from six Chinese public hospitals in Shanghai. A total of 630 questionnaires were collected in two studies. Study 1 tested the research instruments, and Study 2 tested the three models. Confirmatory factor analysis was used to assess the scales' construct validity by testing convergent and discriminant validity. A structural equation model (SEM) specified the distinctions between each construct. A comparison of the three theoretical models was conducted via AMOS analysis. The results of the SEM demonstrate that quality and satisfaction are distinct concepts and that the first model (satisfaction mediates quality and loyalty) is the most appropriate one in the context of the Chinese healthcare environment. In this study, we test and compare three theoretical models of the quality-satisfaction-loyalty relationship in the Chinese healthcare system. Findings show that perceived quality improvement does not lead directly to customer loyalty. The strategy of using quality improvement to maintain patient loyalty depends on the level of patient satisfaction. This implies that the measurement of patient experiences should include topics of importance for patients' satisfaction with health care services.

  4. A three-model comparison of the relationship between quality, satisfaction and loyalty: an empirical study of the Chinese healthcare system

    Science.gov (United States)

    2012-01-01

    Background Previous research has addressed the relationship between customer satisfaction, perceived quality and customer loyalty intentions in consumer markets. In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Methods This research focuses on hospital patients as participants in the process of healthcare procurement. Empirical data were obtained from six Chinese public hospitals in Shanghai. A total of 630 questionnaires were collected in two studies. Study 1 tested the research instruments, and Study 2 tested the three models. Confirmatory factor analysis was used to assess the scales’ construct validity by testing convergent and discriminant validity. A structural equation model (SEM) specified the distinctions between each construct. A comparison of the three theoretical models was conducted via AMOS analysis. Results The results of the SEM demonstrate that quality and satisfaction are distinct concepts and that the first model (satisfaction mediates quality and loyalty) is the most appropriate one in the context of the Chinese healthcare environment. Conclusions In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Findings show that perceived quality improvement does not lead directly to customer loyalty. The strategy of using quality improvement to maintain patient loyalty depends on the level of patient satisfaction. This implies that the measurement of patient experiences should include topics of importance for patients’ satisfaction with health care services. PMID:23198824

  5. Business models of micro businesses: Empirical evidence from creative industries

    Directory of Open Access Journals (Sweden)

    Pfeifer Sanja

    2017-01-01

    Full Text Available Business model describes how a business identifies and creates value for customers and how it organizes itself to capture some of this value in a profitable manner. Previous studies of business models in creative industries have only recently identified the unresolved issues in this field of research. The main objective of this article is to analyse the structure and diversity of business models and to deduce how these components interact or change in the context of micro and small businesses in creative services such as advertising, architecture and design. The article uses a qualitative approach. Case studies and semi-structured, in-depth interviews with six owners/managers of micro businesses in Croatia provide rich data. Structural coding in data analysis has been performed manually. The qualitative analysis has indicative relevance for the assessment and comparison of business models, however, it provides insights into which components of business models seem to be consolidated and which seem to contribute to the diversity of business models in creative industries. The article contributes to the advancement of empirical evidence and conceptual constructs that might lead to more advanced methodological approaches and proposition of the core typologies or classifications of business models in creative industries. In addition, a more detailed mapping of different choices available in managing value creation, value capturing or value networking might be a valuable help for owners/managers who want to change or cross-fertilize their business models.

  6. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
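
    As a rough illustration of the correction idea described above (rescaling T_ML so its empirical mean matches the nominal degrees of freedom), the following Python sketch applies a Bartlett-style scaling to simulated statistics; the inflation factor, sample sizes and variable names are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 119            # hypothetical model degrees of freedom for a many-variable SEM
n_rep, N = 2000, 100

# Stand-in for replicated T_ML values under a correctly specified model: chi-square
# draws inflated to mimic the over-rejection seen when p is large relative to N.
t_ml = rng.chisquare(df, size=n_rep) * (1 + 2 * df / N)

# Bartlett-style empirical correction: rescale so the mean matches the nominal df.
c = df / t_ml.mean()
t_corrected = c * t_ml

crit = stats.chi2.ppf(0.95, df)
print("empirical type I error, uncorrected:", np.mean(t_ml > crit))
print("empirical type I error, corrected:  ", np.mean(t_corrected > crit))
```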

  7. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    Land Evaluation (LE) comprises the procedures used to assess the suitability of land for a generic or specific use (e.g., biomass production). From the local to the regional and national scale, land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. In classical approaches, the assessment of suitability results from a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models whose knowledge structure is built for a specific landscape and a specific object of evaluation (e.g., a crop). The outcome is great difficulty in spatially extrapolating the LE results and rigidity of the system. Modern techniques, instead, rely on mechanistic and quantitative simulation modelling, which allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, inserting physically based rules into the LE procedure may make it easier both to extend the results spatially and to change the object of the evaluation (e.g., crop species, nitrate dynamics, etc.). On the other hand, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario, the LE expert must choose the best LE methodology considering the costs, the complexity of the procedure and the benefits for a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 methods ranged from standard LE approaches to

  8. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Modeling forest fire behavior is an important task that can be used to assist in fire prevention and suppression operations. However, according to previous studies, the existing fire behavior models in common use worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were done, where the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured in each experiment. Dependent and independent variables were fitted through multiple regression analysis. The proposed rate-of-spread model is based on wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r2 = 0.86); the flame length model is based on fuel bed depth, 1-h dead fuel moisture content and wind speed (r2 = 0.72); the proposed fuel consumption model has 1-h dead fuel moisture, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r2 = 0.80). These models were used to develop a new fire behavior software, the "Eucalyptus Fire Safety System".
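
    A minimal sketch of the kind of multiple regression fit described above (rate of spread regressed on wind speed, fuel bed bulk density and 1-h dead fuel moisture). The data and coefficients below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 105  # number of experimental burns in the study; the data here are synthetic

# Hypothetical predictors: wind speed (m/s), fuel bed bulk density (kg/m^3),
# 1-h dead fuel moisture content (%).
X = np.column_stack([
    rng.uniform(0, 4, n),
    rng.uniform(10, 60, n),
    rng.uniform(5, 20, n),
])
# Synthetic rate of spread (m/min) with noise -- NOT the paper's coefficients.
ros = 0.5 + 0.8 * X[:, 0] - 0.01 * X[:, 1] - 0.03 * X[:, 2] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, ros)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("r^2:", model.score(X, ros))
```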

  9. Integrating social science into empirical models of coupled human and natural systems

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Kline

    2017-09-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social scientists: (1) how to represent human behavior as influenced by biophysical factors and integrate this into CHANS empirical models; (2) how to organize and function as a multidisciplinary social science team to accomplish that task. We reflect on these challenges regarding our CHANS research that investigated human adaptation to fire-prone landscapes. Our project sought to characterize the forest management activities of land managers and landowners (or "actors") and their influence on wildfire behavior and landscape outcomes by focusing on biophysical and socioeconomic feedbacks in central Oregon (USA). We used an agent-based model (ABM) to compile biophysical and social information pertaining to actor behavior, and to project future landscape conditions under alternative management scenarios. Project social scientists were tasked with identifying actors' forest management activities and the biophysical and socioeconomic factors that influence them, and with developing decision rules for incorporation into the ABM to represent actor behavior. We (1) briefly summarize what we learned about actor behavior on this fire-prone landscape and how we represented it in an ABM, and (2) more significantly, report our observations about how we organized and functioned as a diverse team of social scientists to fulfill these CHANS research tasks. We highlight several challenges we experienced, involving quantitative versus qualitative data and methods, distilling complex behavior into empirical models, varying sensitivity of biophysical models to social factors, synchronization of research tasks, and the need to

  10. Prediction of early summer rainfall over South China by a physical-empirical model

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2014-10-01

    In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and closely represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34-year observations (1979-2012) reveals three physically consequential predictors. Plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical high and Okhotsk high in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than the four dynamical models' ensemble prediction for the 1979-2010 period (0.15). The results suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.
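
    The cross-validated correlation skill reported above can be illustrated with a small leave-one-out sketch; the three predictor series and the rainfall index below are synthetic stand-ins for the SST and land-warming tendencies, not the observed data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
years = 34                                 # 1979-2012 in the study
X = rng.standard_normal((years, 3))        # stand-ins for the three winter tendencies
y = X @ np.array([0.6, 0.4, 0.3]) + 0.7 * rng.standard_normal(years)  # synthetic rainfall index

pred = np.empty(years)
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])
    pred[test] = model.predict(X[test])

skill = np.corrcoef(y, pred)[0, 1]         # cross-validated correlation skill
print(f"leave-one-out cross-validated skill: {skill:.2f}")
```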

  11. An Empirical Study of Relationships between Student Self-Concept and Science Achievement in Hong Kong

    Science.gov (United States)

    Wang, Jianjun; Oliver, Steve; Garcia, Augustine

    2004-01-01

    Positive self-concept and good understanding of science are important indicators of scientific literacy endorsed by professional organizations. The existing research literature suggests that these two indicators are reciprocally related and mutually reinforcing. Generalization of the reciprocal model demands empirical studies in different…

  12. Linking customisation of ERP systems to support effort: an empirical study

    Science.gov (United States)

    Koch, Stefan; Mitteregger, Kurt

    2016-01-01

    The amount of customisation to an enterprise resource planning (ERP) system has always been a major concern in the context of implementation. This article focuses on the maintenance phase and presents an empirical study of the relationship between the amount of customising and the resulting support effort. We estimate a structural equation model that explains support effort using customisation effort, organisational characteristics and scope of implementation. The findings, using data from an ERP provider, show that there is a statistically significant effect: with an increasing amount of customisation, the quantity of telephone calls to support increases, as well as the duration of each call.

  13. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
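
    A minimal sketch of the modelling idea described above: a feed-forward neural network regressed on solar wind and geomagnetic drivers plus location. The inputs, network size and synthetic target below are assumptions for illustration, not the NURD database or the published network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 5000
# Hypothetical inputs: a geomagnetic index, solar wind speed, L-shell and local time;
# the target is a synthetic log10 plasma density.
X = np.column_stack([
    rng.uniform(0, 9, n),        # Kp-like index
    rng.uniform(300, 700, n),    # solar wind speed (km/s)
    rng.uniform(2, 6, n),        # L-shell
    rng.uniform(0, 24, n),       # magnetic local time (h)
])
y = 3.0 - 0.4 * X[:, 2] - 0.05 * X[:, 0] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)
print("held-out R^2:", mlp.score(scaler.transform(X_te), y_te))
```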

  14. Empirical Model for Mobile Learning and their Factors. Case Study: Universities Located in the Urban City of Guadalajara, México

    Directory of Open Access Journals (Sweden)

    Juan Mejía Trejo

    2015-10-01

    Information and communication technologies (ICT) are producing new and innovative teaching-learning processes. The research question we focused on is: What are the empirical model and the factors for mobile learning at universities located within the Metropolitan Zone of Guadalajara, in Jalisco, México? Our research is grounded in a documentary study that chose variables used by specialists in m-learning, using the Analytic Hierarchy Process (AHP). Three factors were identified: Technology (TECH); Contents Teaching-Learning Management and Styles (CTLMS); and Professor and Student Role (PSR). We used 13 dimensions and 60 variables. 20 professors and 800 students in social sciences courses participated in the study; they came from 7 universities located in the Urban City of Guadalajara, during the 2013-2014 school cycles (24 months). We applied questionnaires and the data were analyzed by structural equation modeling (SEM), using EQS 6.1 software. The results suggest that 9 of the 60 variables have the most influence on improving interaction with the m-learning model within the universities.
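
    Since the factor weighting relies on the Analytic Hierarchy Process, a small sketch of the standard AHP computation (a priority vector from a pairwise comparison matrix) may help; the comparison judgements below are hypothetical, not those elicited in the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector from a pairwise comparison matrix (principal eigenvector)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical 3x3 comparison of the three factors (TECH, CTLMS, PSR);
# the judgements below are illustrative only.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])
w = ahp_weights(A)
lam_max = np.max(np.linalg.eigvals(A).real)
ci = (lam_max - len(A)) / (len(A) - 1)      # consistency index
print("weights:", np.round(w, 3), "CI:", round(ci, 3))
```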

  15. An empirical study of business effect and industry effect in Galicia

    Directory of Open Access Journals (Sweden)

    Susana Iglesias

    2007-07-01

    This work contributes to the analysis of the influence that industry and business factors have on the variability of organizational performance. A linear hierarchical model with fixed effects is applied to a sample of Galician firms. The results show that the portion of this variability explained by the business factor is clearly greater than that explained by the industry factor. These results, in favour of the business effect, are similar to those obtained in previous empirical studies.

  16. An empirical study of factors affecting inflation in Republic of Tajikistan

    OpenAIRE

    Qurbanalieva, Nigina

    2013-01-01

    This paper investigates the core factors affecting the price level in republic of Tajikistan by using ‘auto regressive distributed lags’ and Johansen-Juselius cointegration models. The empirical analysis is based on a dataset of demand pull and cost push inflation indicators. We used the monthly data for a period of 2005 to 2012. The findings of this study reveal that in the long run exchange rate, world wheat prices, world oil prices and labor supply Granger cause the price level. Neverthele...

  17. Characterizing Student Expectations: A Small Empirical Study

    Science.gov (United States)

    Warwick, Jonathan

    2016-01-01

    This paper describes the results of a small empirical study (n = 130), in which undergraduate students in the Business Faculty of a UK university were asked to express views and expectations relating to the study of a mathematics. Factor analysis is used to identify latent variables emerging from clusters of the measured variables and these are…

  18. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., transport model based on empirical confinement scaling, dimensionless scaling technique, and theory-based transport models are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  19. Molecular models of zinc phthalocyanines: semi-empirical molecular orbital computations and physicochemical properties studied by molecular mechanics simulations

    International Nuclear Information System (INIS)

    Gantchev, Tsvetan G.; van Lier, Johan E.; Hunting, Darel J.

    2005-01-01

    To build 3D-molecular models of Zinc-phthalocyanines (ZnPc) and to study their diverse chemical and photosensitization properties, we performed quantum mechanical molecular orbital (MO) semi-empirical (AM1) computations of the ground, excited singlet and triplet states as well as free radical (ionic) species. RHF and UHF (open shell) geometry optimizations led to near-perfect symmetrical ZnPc. Predicted ionization potentials (IP), electron affinities (EA) and lowest electronic transitions of ZnPc are in good agreement with the published experimental and theoretical data. The computation-derived D_4h/D_2h-symmetry 3D-structures of ground and excited states and free radicals of ZnPc, together with the frontier orbital energies and Mulliken electron population analysis enabled us to build robust molecular models. These models were used to predict important chemical-reactivity entities such as global electronegativity (χ), hardness (η) and local softness based on Fukui-functions analysis. Examples of molecular mechanics (MM) applications of the 3D-molecular models are presented as approaches to evaluate the solvation free energy (ΔG^0)_solv and to estimate ground- and excited-state oxidation/reduction potentials as well as intermolecular interactions and stability of ground and excited state dimers (exciplexes) and radical ion-pairs

  20. Service delivery innovation architecture: An empirical study of antecedents and outcomes

    Directory of Open Access Journals (Sweden)

    Rajeev Verma

    2014-06-01

    The research examines service delivery innovation architecture and its role in achieving sustainable competitive advantage for firms. The study develops and empirically examines an antecedent-based model of service delivery innovation. We collected data from 203 service sector professionals working in Mexican financial and information technology firms, and tested the proposed relationships. Further, the study investigates the moderating role of customer orientation on innovation-driven performance outcomes. Results show that customer orientation strengthens the service delivery–performance relationship. This paper aims to contribute to the strategic planning of service firms by guiding their resource allocation to ensure sustainable growth.

  1. The role of production and teamwork practices in construction safety: a cognitive model and an empirical case study.

    Science.gov (United States)

    Mitropoulos, Panagiotis Takis; Cupido, Gerardo

    2009-01-01

    In construction, the challenge for researchers and practitioners is to develop work systems (production processes and teams) that can achieve high productivity and high safety at the same time. However, construction accident causation models ignore the role of work practices and teamwork. This study investigates the mechanisms by which production and teamwork practices affect the likelihood of accidents. The paper synthesizes a new model for construction safety based on the cognitive perspective (Fuller's Task-Demand-Capability Interface model, 2005) and then presents an exploratory case study. The case study investigates and compares the work practices of two residential framing crews: a 'High Reliability Crew' (HRC)--that is, a crew with exceptional productivity and safety over several years, and an average performing crew from the same company. The model explains how the production and teamwork practices generate the work situations that workers face (the task demands) and affect the workers ability to cope (capabilities). The case study indicates that the work practices of the HRC directly influence the task demands and match them with the applied capabilities. These practices were guided by the 'principle' of avoiding errors and rework and included work planning and preparation, work distribution, managing the production pressures, and quality and behavior monitoring. The Task Demand-Capability model links construction research to a cognitive model of accident causation and provides a new way to conceptualize safety as an emergent property of the production practices and teamwork processes. The empirical evidence indicates that the crews' work practices and team processes strongly affect the task demands, the applied capabilities, and the match between demands and capabilities. The proposed model and the exploratory case study will guide further discovery of work practices and teamwork processes that can increase both productivity and safety in construction

  2. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Empirically Derived Dehydration Scoring and Decision Tree Models for Children With Diarrhea: Assessment and Internal Validation in a Prospective Cohort Study in Dhaka, Bangladesh

    OpenAIRE

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Rege, Soham; Chu, Chieh; Schmid, Christopher H; Alam, Nur H

    2015-01-01

    Introduction: Diarrhea remains one of the most common and most deadly conditions affecting children worldwide. Accurately assessing dehydration status is critical to determining treatment course, yet no clinical diagnostic models for dehydration have been empirically derived and validated for use in resource-limited settings. Methods: In the Dehydration: Assessing Kids Accurately (DHAKA) prospective cohort study, a random sample of children under 5 with acute diarrhea was enrolled between Feb...

  4. Organisational Learning and Performance--An Empirical Study

    Science.gov (United States)

    Jyothibabu, C.; Pradhan, Bibhuti Bhusan; Farooq, Ayesha

    2011-01-01

    This paper explores the important question "how the learning entities--individual, group or organisation--are affecting organisational performance". The answer is important for promoting learning and improving performance. This empirical study in the leading power utility in India found that there is a positive relation between…

  5. Assessment of empirical antibiotic therapy optimisation in six hospitals: an observational cohort study.

    Science.gov (United States)

    Braykov, Nikolay P; Morgan, Daniel J; Schweizer, Marin L; Uslan, Daniel Z; Kelesidis, Theodoros; Weisenberg, Scott A; Johannsson, Birgir; Young, Heather; Cantey, Joseph; Srinivasan, Arjun; Perencevich, Eli; Septimus, Edward; Laxminarayan, Ramanan

    2014-12-01

    Modification of empirical antimicrobials when warranted by culture results or clinical signs is recommended to control antimicrobial overuse and resistance. We aimed to assess the frequency with which patients were started on empirical antimicrobials, characteristics of the empirical regimen and the clinical characteristics of patients at the time of starting antimicrobials, patterns of changes to empirical therapy at different timepoints, and modifiable factors associated with changes to the initial empirical regimen in the first 5 days of therapy. We did a chart review of adult inpatients receiving one or more antimicrobials in six US hospitals on 4 days during 2009 and 2010. Our primary outcome was the modification of antimicrobial regimen on or before the 5th day of empirical therapy, analysed as a three-category variable. Bivariate analyses were used to establish demographic and clinical variables associated with the outcome. Variables with p values below 0·1 were included in a multivariable generalised linear latent and mixed model with multinomial logit link to adjust for clustering within hospitals and accommodate a non-binary outcome variable. Across the six study sites, 4119 (60%) of 6812 inpatients received antimicrobials. Of 1200 randomly selected patients with active antimicrobials, 730 (61%) met inclusion criteria. At the start of therapy, 220 (30%) patients were afebrile and had normal white blood cell counts. Appropriate cultures were collected from 432 (59%) patients, and 250 (58%) were negative. By the 5th day of therapy, 12·5% of empirical antimicrobials were escalated, 21·5% were narrowed or discontinued, and 66·4% were unchanged. Narrowing or discontinuation was more likely when cultures were collected at the start of therapy (adjusted OR 1·68, 95% CI 1·05-2·70) and no infection was noted on an initial radiological study (1·76, 1·11-2·79). Escalation was associated with multiple infection sites (2·54, 1·34-4·83) and a positive

  6. An extended technology acceptance model for detecting influencing factors: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Mohamd Hakkak

    2013-11-01

    The rapid diffusion of the Internet has radically changed the delivery channels applied by the financial services industry. The aim of this study is to identify the influencing factors that encourage customers to adopt online banking in Khorramabad. The research constructs are developed based on the technology acceptance model (TAM) and incorporate some additional important control variables. The model is empirically verified to study the factors influencing the online banking adoption behavior of 210 customers of Tejarat Bank in Khorramabad. The findings of the study suggest that the quality of the internet connection, the awareness of online banking and its benefits, the social influence and computer self-efficacy have significant impacts on the perceived usefulness (PU) and perceived ease of use (PEOU) of online banking acceptance. Trust and resistance to change also have a significant impact on the attitude towards the likelihood of adopting online banking.

  7. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  8. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data ... established at the Laboratory of Structural Engineering at Aalborg University, the AUC-data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by a discrete-space Markov theory, is applicable to constant as well as random loading. It is shown...
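
    A minimal sketch of the underlying idea, assuming a discrete-space Markov chain over crack-length bins with a constant transition probability; the state count and probability are illustrative and not calibrated to the Virkler or AUC data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete-space Markov chain for crack growth: states are crack-length bins; in each
# load-cycle block the crack either stays or advances one bin with probability p.
n_states, p, n_paths = 50, 0.15, 200
lives = []
for _ in range(n_paths):
    state, cycles = 0, 0
    while state < n_states - 1:          # last bin is the absorbing "failure" state
        cycles += 1
        if rng.random() < p:
            state += 1
    lives.append(cycles)

lives = np.array(lives)
print("mean life (cycle blocks):", lives.mean(), "cov:", lives.std() / lives.mean())
```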

  9. A new model of Social Support in Bereavement (SSB): An empirical investigation with a Chinese sample.

    Science.gov (United States)

    Li, Jie; Chen, Sheying

    2016-01-01

    Bereavement can be an extremely stressful experience, while the protective effect of social support is expected to facilitate the adjustment after loss. The ingredients or elements of social support as illustrated by a new model of Social Support in Bereavement (SSB), however, require empirical evidence. Who might be the most effective providers of social support in bereavement has also been understudied, particularly within specific cultural contexts. The present study uses both qualitative and quantitative analyses to explore these two important issues among bereaved Chinese families and individuals. The results show that three major types of social support described by the SSB model were frequently acknowledged by the participants in this study. Aside from relevant books, family and friends were the primary sources of social support who in turn received support from their workplaces. Helping professionals turned out to be the least significant source of social support in the Chinese cultural context. Differences by gender, age, and bereavement time were also found. The findings provide empirical evidence for the conceptual model of Social Support in Bereavement and also offer culturally relevant guidance for providing effective support to the bereaved.

  10. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on the Web portals where the user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. Through an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's action. The simulations are performed for the case of constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, that are comparable with the ones in the empirical system of popular posts. In view of pure emotion-driven agents actions, this type of comparisons provide a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative
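
    A highly simplified sketch of an emotion-driven agent model on an agent-post structure, in the spirit of the description above; the update rules, rates and parameter values are assumptions for illustration, not the calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_posts, n_steps = 200, 50, 500

# Agent states: arousal in [0, 1] drives activity; valence in [-1, 1] is the emotion sign.
arousal = rng.uniform(0.1, 0.5, n_agents)
valence = rng.uniform(-0.2, 0.2, n_agents)
post_valence = np.zeros(n_posts)
comments = []                                   # (step, agent, post) actions

for t in range(n_steps):
    active = np.where(rng.random(n_agents) < arousal)[0]
    for a in active:
        post = rng.integers(n_posts)
        # Exposure to the post's current emotional content shifts the agent ...
        valence[a] = np.clip(0.9 * valence[a] + 0.1 * post_valence[post], -1, 1)
        # ... and the agent's action transfers its emotion back to the post.
        post_valence[post] = 0.8 * post_valence[post] + 0.2 * valence[a]
        comments.append((t, a, post))
        arousal[a] *= 0.7                       # relax after acting
    arousal = np.clip(arousal + 0.02, 0, 1)     # slow build-up between actions

print("comments:", len(comments), "mean post valence:", post_valence.mean())
```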

  11. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies, we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  12. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signal from SNR data and at the same time extracting frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically through approximating the direct and reflected signal by a second-order and fourth-order polynomial, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and can allow soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed how noise affects the receiver SNR estimation and thus the model performance through simulations under the bare soil assumption. Results showed that the reconstructed signals with a grazing angle of 5°–15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information. However, we found that the improvement in phase estimation could be neglected. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model could ensure good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground truth measurements. A deeper insight into the Larson model and the proposed model was given at this stage, which formed a possible explanation of this fact. Furthermore, frequency and phase information
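
    One plausible reading of the curve-fitting idea described above is a low-order polynomial trend for the direct signal plus an oscillatory multipath term with a polynomial amplitude; the sketch below fits such a form with scipy, using illustrative constants rather than the published model or real receiver data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic SNR series over low elevation angles (degrees); constants are illustrative.
elev = np.linspace(5, 15, 300)
x = np.sin(np.radians(elev))
wavelength, height = 0.244, 1.8                  # GPS L2 wavelength (m), antenna height (m)
freq = 4 * np.pi * height / wavelength
snr_obs = (30 + 40 * x - 5 * x**2                # direct-signal trend
           + 2 * np.cos(freq * x + 0.8)          # reflected (multipath) oscillation
           + np.random.default_rng(0).normal(0, 0.3, x.size))

def snr_model(x, a0, a1, a2, b0, b1, b2, b3, b4, f, phi):
    direct = a0 + a1 * x + a2 * x**2                                # 2nd-order polynomial
    envelope = b0 + b1 * x + b2 * x**2 + b3 * x**3 + b4 * x**4      # 4th-order polynomial
    return direct + envelope * np.cos(f * x + phi)

p0 = [30, 40, -5, 1, 0, 0, 0, 0, freq, 0.0]
popt, _ = curve_fit(snr_model, x, snr_obs, p0=p0, maxfev=20000)
print("estimated multipath frequency and phase:", popt[-2], popt[-1])
```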

  13. An Empirical Comparison of Different Models of Active Aging in Canada: The International Mobility in Aging Study.

    Science.gov (United States)

    Bélanger, Emmanuelle; Ahmed, Tamer; Filiatrault, Johanne; Yu, Hsiu-Ting; Zunzunegui, Maria Victoria

    2017-04-01

    Active aging is a concept that lacks consensus. The WHO defines it as a holistic concept that encompasses the overall health, participation, and security of older adults. Fernández-Ballesteros and colleagues propose a similar concept but omit security and include mood and cognitive function. To date, researchers attempting to validate conceptual models of active aging have obtained mixed results. The goal of this study was to examine the validity of existing models of active aging with epidemiological data from Canada. The WHO model of active aging and the psychological model of active aging developed by Fernández-Ballesteros and colleagues were tested with confirmatory factor analysis. The data used included 799 community-dwelling older adults between 65 and 74 years old, recruited from the patient lists of family physicians in Saint-Hyacinthe, Quebec and Kingston, Ontario. Neither model could be validated in the sample of Canadian older adults. Although a concept of healthy aging can be modeled adequately, social participation and security did not fit a latent factor model. A simple binary index indicated that 27% of older adults in the sample did not meet the active aging criteria proposed by the WHO. Our results suggest that active aging might represent a human rights policy orientation rather than an empirical measurement tool to guide research among older adult populations. Binary indexes of active aging may serve to highlight what remains to be improved about the health, participation, and security of growing populations of older adults. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. An empirical analysis of Diaspora bonds

    OpenAIRE

    AKKOYUNLU, Şule; STERN, Max

    2018-01-01

    Abstract. This study is the first to investigate theoretically and empirically the determinants of Diaspora Bonds for eight developing countries (Bangladesh, Ethiopia, Ghana, India, Lebanon, Pakistan, the Philippines, and Sri-Lanka) and one developed country - Israel for the period 1951 and 2008. Empirical results are consistent with the predictions of the theoretical model. The most robust variables are the closeness indicator and the sovereign rating, both on the demand-side. The spread is ...

  15. Environmental ethics and wilderness management: an empirical study

    Science.gov (United States)

    William A. Valliere; Robert E. Manning

    1995-01-01

    The underlying hypothesis of this study is that environmental ethics influence public attitudes toward wilderness management. To study this hypothesis, environmental ethics were defined, categorized, and measured empirically. Additionally, attitudes toward selected wilderness management issues were measured. Associations were found between beliefs in selected...

  16. Lessons from empirical studies in product and service variety management.

    OpenAIRE

    Lyons, Andrew C.L.

    2013-01-01

    For many years, a trend for businesses has been to increase market segmentation and extend product and service-variety offerings in order to provide more choice for customers and gain a competitive advantage. However, relatively few variety-related empirical studies have been undertaken. In this research, two empirical studies are presented that address the impact of product and service variety on business and business function performance. In the first (service-vari...

  17. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: the comparative approach and the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  18. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revise the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)
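
    For orientation, a minimal sketch of classic one-delayed-group point kinetics integrated by an explicit finite-difference scheme; the kinetic parameters and reactivity step are illustrative, and the empirical adjustment factor itself is not reproduced here.

```python
import numpy as np

# One-delayed-group point kinetics, explicit finite-difference integration.
# Parameter values are illustrative, not those of the paper.
beta, lam, Lambda = 0.0065, 0.08, 2.0e-5   # delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                                 # step reactivity insertion
dt, t_end = 1.0e-6, 0.1

n = 1.0                                     # relative power
c = beta / (Lambda * lam)                   # equilibrium precursor concentration
for _ in range(int(t_end / dt)):
    dn = ((rho - beta) / Lambda * n + lam * c) * dt
    dc = (beta / Lambda * n - lam * c) * dt
    n, c = n + dn, c + dc

print("relative power after", t_end, "s:", n)
```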

  19. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revise the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)

  20. Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band

    Science.gov (United States)

    Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.

    2013-12-01

    In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In past decades, several remote sensing techniques have been developed to estimate the surface soil moisture. In most studies associated with radar sensing of soil moisture, the proposed algorithms are focused on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5cm. However for densely covered vegetated surfaces such as forests, the problem becomes more challenging because the vegetation canopy is a complex scattering environment. For this reason there have been only few studies focusing on retrieving soil moisture under vegetation canopy in the literature. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their studied area, double-bounce between trunks and ground appear to be the most important scattering mechanism. Thereby, they implemented parametric models of radar backscatter for double-bounce using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating the soil moisture under agricultural vegetation using L-band polarimetric SAR data and using polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on physical formulation of dominant scattering mechanisms and three parameters that integrates the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of forest canopy based on the Distorted Born Approximation (DBA). The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface for three polarized backscattering

  1. Empirical pseudo-potential studies on electronic structure

    Indian Academy of Sciences (India)

    Theoretical investigations of the electronic structure of quantum dots are of current interest in nanophase materials. Empirical theories such as the effective mass approximation, tight-binding methods and the empirical pseudo-potential method are capable of explaining the experimentally observed optical properties. We employ the ...

  2. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    Science.gov (United States)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are usually designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances. The experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.

  3. A semi-empirical molecular orbital model of silica, application to radiation compaction

    International Nuclear Information System (INIS)

    Tasker, P.W.

    1978-11-01

    Semi-empirical molecular-orbital theory is used to calculate the bonding in a cluster of two SiO_4 tetrahedra, with the outer bonds saturated with pseudo-hydrogen atoms. The basic properties of the cluster, bond energies and band gap are calculated using a very simple parameterisation scheme. The resulting cluster is used to study the rebonding that occurs when an oxygen vacancy is created. It is suggested that a vacancy model is capable of producing the observed differences between quartz and vitreous silica, and the calculations show that the compaction effect observed in the glass is of a magnitude compatible with the relaxations around the vacancy. More detailed lattice models will be needed to examine this mechanism further. (author)

  4. On Integrating Student Empirical Software Engineering Studies with Research and Teaching Goals

    NARCIS (Netherlands)

    Galster, Matthias; Tofan, Dan; Avgeriou, Paris

    2012-01-01

    Background: Many empirical software engineering studies use students as subjects and are conducted as part of university courses. Aim: We aim at reporting our experiences with using guidelines for integrating empirical studies with our research and teaching goals. Method: We document our experience

  5. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of machining parameters on energy consumption was determined using analysis of variance. The developed empirical model was validated through confirmation experiments. The results indicate that the developed model is effective and has potential to be adopted by the industry for minimum power consumption of machine tools.
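
    A minimal sketch of the response-surface idea described above: fit a second-order polynomial model of power consumption to (synthetic) turning experiments, then search it for the parameter set with minimum predicted power. The grid search stands in for the desirability-function optimisation, and all values are placeholders.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
# Hypothetical turning experiments: cutting speed (m/min), feed (mm/rev), depth of cut (mm).
X = np.column_stack([
    rng.uniform(90, 270, 30),
    rng.uniform(0.1, 0.3, 30),
    rng.uniform(0.5, 1.5, 30),
])
power = (0.4 + 0.004 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]
         + 0.01 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 30))   # synthetic power (kW)

poly = PolynomialFeatures(degree=2, include_bias=False)
rsm = LinearRegression().fit(poly.fit_transform(X), power)       # second-order response surface

# Grid search for the machining parameters minimising predicted power.
grid = np.array(np.meshgrid(np.linspace(90, 270, 25),
                            np.linspace(0.1, 0.3, 25),
                            np.linspace(0.5, 1.5, 25))).reshape(3, -1).T
pred = rsm.predict(poly.transform(grid))
best = grid[np.argmin(pred)]
print("minimum predicted power:", pred.min(), "at v, f, d =", best)
```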

  6. Empirical Study towards the Drivers of Sustainable Economic Growth in EU-28 Countries

    Directory of Open Access Journals (Sweden)

    Daniel Ştefan Armeanu

    2017-12-01

    This study aims at empirically investigating the drivers of sustainable economic growth in EU-28 countries. By means of panel data regression models, in the form of fixed and random effects models, alongside the system generalized method of moments, we examine several drivers of the real gross domestic product (GDP) growth rate, as follows: higher education, business environment, infrastructure, technology, communications, and media, population lifestyle, and demographic changes. As regards higher education, the empirical results show that expenditure per student in higher education and traditional 18–22 year-old students are positively linked with sustainable economic growth, whereas science and technology graduates negatively influence real GDP growth. In terms of business environment, total expenditure on research and development and employment rates of recent graduates contribute to sustainable development, but the corruption perceptions index revealed a negative association with economic growth. The results also provide support for a negative influence of infrastructure, alongside technological measures, on economic growth. In addition, we found a negative connection between the old-age dependency ratio and sustainable economic growth.

  7. Can Environmental Regulations Promote Corporate Environmental Responsibility? Evidence from the Moderated Mediating Effect Model and an Empirical Study in China

    Directory of Open Access Journals (Sweden)

    Benhong Peng

    2018-02-01

    Based on the Stakeholder theory, a moderated mediating effect model is developed to reach the study objective, revealing an important connection that suggests environmental regulations (ERs) influence corporate environmental responsibility (CER) (the Porter Hypothesis). In building the model, the validity of the questionnaire data was analyzed with factor analysis. By employing a two-step approach, a regression analysis is utilized to discuss the mediating effect of altruistic motivation and the moderating effect of green innovation, and a structural equation model is used to explore the interactive mechanism of different variables. It is found that altruistic motivation plays a mediating role in the relationship between ERs and CER, and green innovation positively moderates that relationship. The empirical study identifies factors affecting enterprises' willingness to undertake environmental responsibility, including environmental policies, corporate culture, and personal characteristics, among others. It is also revealed that altruistic motivation is conducive to forming a community of interests among enterprises and enhancing their resistance to market risks, which explains and corroborates the Stakeholder theory; and the higher the level of green innovation, the more willing enterprises are to implement environmentally friendly operations.
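
    The two-step regression logic (mediation by altruistic motivation, moderation by green innovation) can be sketched as follows on synthetic data; variable names and effect sizes are assumptions, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 300
er = rng.normal(size=n)                       # environmental regulation (standardized)
green = rng.normal(size=n)                    # green innovation level (moderator)
altruism = 0.5 * er + rng.normal(scale=0.8, size=n)                       # mediator
cer = 0.3 * er + 0.4 * altruism + 0.2 * er * green + rng.normal(scale=0.8, size=n)

# Step 1: regulation -> altruistic motivation (the "a" path)
m1 = sm.OLS(altruism, sm.add_constant(er)).fit()
# Step 2: CER on regulation, mediator and the interaction term ("b", "c'" and moderation)
X2 = sm.add_constant(np.column_stack([er, altruism, er * green]))
m2 = sm.OLS(cer, X2).fit()

indirect = m1.params[1] * m2.params[2]        # a * b, the mediated (indirect) effect
print("indirect (mediated) effect:", round(indirect, 3))
print("interaction coefficient (moderation):", round(m2.params[3], 3))
```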

  8. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, the HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable, power-law and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.
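
    A simplified sketch of the two-stage idea, assuming a rolling-volatility split as a stand-in for the HMM regime classification and a generalized Pareto fit to tail losses for the EVT stage; the data and thresholds are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Synthetic daily returns with a calm and a turbulent regime.
returns = np.concatenate([rng.normal(0, 0.01, 1500), rng.normal(0, 0.03, 500)])

# Stand-in for the HMM step: label days as "crisis" when rolling volatility is high.
window = 20
vol = np.array([returns[max(0, i - window):i + 1].std() for i in range(len(returns))])
crisis = vol > np.median(vol)

def var_evt(losses, u_quantile=0.9, alpha=0.99):
    """99% Value-at-Risk from a GPD fit to losses above a high threshold."""
    u = np.quantile(losses, u_quantile)
    excess = losses[losses > u] - u
    xi, loc, sigma = genpareto.fit(excess, floc=0.0)
    p_exceed = excess.size / losses.size
    return u + sigma / xi * (((1 - alpha) / p_exceed) ** (-xi) - 1)

losses = -returns
print("VaR 99%, steady regime:", var_evt(losses[~crisis]))
print("VaR 99%, crisis regime:", var_evt(losses[crisis]))
```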

  9. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5)χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles

  10. Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming

    2015-06-01

Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important in leading to significant fluctuations of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at lead times of 0 (end of April), 1, and 2 months, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) a tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow enhanced Philippine Sea anticyclonic anomalies and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increased Taiwan Meiyu rainfall. Note that the identified precursors at the various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability with significant cross-validated temporal correlation coefficient skills of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead times, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.
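
The cross-validated skill quoted above can be illustrated with a small sketch: a linear regression of seasonal rainfall on a few precursor indices, scored by leave-one-out cross-validated correlation. The predictors are placeholders rather than the authors' SST-tendency indices; scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def cross_validated_skill(X, y):
    """X: (years, predictors) precursor indices; y: (years,) Meiyu rainfall anomaly."""
    preds = np.empty_like(y, dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return np.corrcoef(preds, y)[0, 1]      # temporal correlation coefficient skill

# Usage with synthetic data standing in for 34 years of predictors and rainfall:
# rng = np.random.default_rng(0)
# X = rng.normal(size=(34, 3))
# y = X @ np.array([0.6, -0.4, 0.3]) + rng.normal(0, 0.5, 34)
# print(cross_validated_skill(X, y))
```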

  11. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

Threshold models try to explain the consequences of social influence, like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework for social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only the contacts but also the times of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past, i.e., the individuals are affected by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
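
A simplified sketch of the windowed threshold rule described above (influence counted only over contacts within a trailing time window, either as a fraction or as a number) might look as follows; it is illustrative and omits the randomization procedures used in the paper.

```python
from collections import deque, defaultdict

def run_threshold_model(contacts, seeds, threshold=0.3, window=100.0, by_fraction=True):
    """contacts: iterable of (t, i, j) sorted by time; seeds: initially active nodes."""
    active = set(seeds)
    recent = defaultdict(deque)              # node -> deque of (time, neighbor) in window
    for t, i, j in contacts:
        for node, nbr in ((i, j), (j, i)):
            recent[node].append((t, nbr))
            while recent[node] and t - recent[node][0][0] > window:
                recent[node].popleft()       # drop contacts older than the time window
        for node in (i, j):
            if node in active:
                continue
            nbrs = [n for _, n in recent[node]]
            influence = sum(n in active for n in nbrs)
            need = threshold * len(nbrs) if by_fraction else threshold
            if nbrs and influence >= need:
                active.add(node)
    return active

# Example on a tiny contact stream:
# contacts = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'c'), (4, 'c', 'd')]
# print(run_threshold_model(contacts, seeds={'a'}, threshold=0.5))
```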

  12. An empirical model for the study of employee participation and its influence on job satisfaction

    Directory of Open Access Journals (Sweden)

    Lucas Joan Pujol Cols

    2015-12-01

Full Text Available This article analyzes the factors that influence the possibilities employees perceive for triggering meaningful participation at three levels: the intra-group level, the institutional level, and directly within the leadership team of the organization. Twelve (12) interviews were conducted with teachers from the Social and Economic Sciences School of the University of Mar del Plata (Argentina), holding different positions, areas and working hours. Based on qualitative evidence, an empirical model was constructed that seeks to connect the different factors behind each manifestation of participation, establishing hypothetical relations between subgroups. Additionally, this article discusses the implications of participation, its relationship with job satisfaction, and the role of individual expectations regarding the participation opportunities each employee receives. Keywords: Participation, Job satisfaction, University, Expectations, Qualitative Analysis.

  13. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2013-01-01

The stochastic volatility model is one of several volatility models that infer the latent volatility of asset returns. Bayesian inference of the stochastic volatility (SV) model is performed with the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling the volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that the SV model empirically performs better than the GARCH model.

  14. Systematic risk and liquidity : an empirical study comparing Norwegian equity certificates before and after the regulation in 2009

    OpenAIRE

    Hatlevik, Håkon; Einvik, Christian

    2014-01-01

    In 2009, the Norwegian savings banks industry was subject to a regulation change, which resulted in a modification of the instrument issued by these banks. Thus, in this empirical study we compare the systematic risk and liquidity of equity certificates issued by Norwegian savings banks before and after the regulation change. We go about estimating systematic risk and liquidity using regression analysis. In order to estimate systematic risk we use the empirical model of the CAPM often referre...

  15. Integrating social science into empirical models of coupled human and natural systems

    Science.gov (United States)

    Jeffrey D. Kline; Eric M. White; A Paige Fischer; Michelle M. Steen-Adams; Susan Charnley; Christine S. Olsen; Thomas A. Spies; John D. Bailey

    2017-01-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social...

  16. An Improved Semi-Empirical Model for Radar Backscattering from Rough Sea Surfaces at X-Band

    Directory of Open Access Journals (Sweden)

    Taekyeong Jin

    2018-04-01

Full Text Available We propose an improved semi-empirical scattering model for X-band radar backscattering from rough sea surfaces. This new model has a wider validity range of wind speeds than the existing semi-empirical sea spectrum (SESS) model. First, we retrieved the small-roughness parameters from sea surfaces that were numerically generated using the Pierson-Moskowitz spectrum and measurement datasets for various wind speeds. Then, we computed the backscattering coefficients of the small-roughness surfaces for various wind speeds using the integral equation method model. Finally, the large-roughness characteristics were taken into account by integrating the small-roughness backscattering coefficients, weighted by the surface slope probability density function, over all possible surface slopes. The new model covers wind speeds below 3.46 m/s, a range not covered by the existing SESS model. The accuracy of the new model was verified with two measurement datasets for various wind speeds from 0.5 m/s to 14 m/s.
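
The final composite step, averaging the small-roughness backscatter over a distribution of surface slopes, can be sketched numerically as below; the small-roughness function and the Gaussian slope statistics are placeholders, not the paper's IEM results or measured sea-surface slopes.

```python
import numpy as np

def composite_backscatter(theta_deg, sigma0_small, slope_std=0.1, n=2001):
    """Average sigma0_small (linear units) over a Gaussian distribution of surface slopes."""
    slopes = np.linspace(-5 * slope_std, 5 * slope_std, n)            # slope = tan(tilt)
    pdf = np.exp(-slopes**2 / (2 * slope_std**2)) / (np.sqrt(2 * np.pi) * slope_std)
    local_theta = np.deg2rad(theta_deg) - np.arctan(slopes)           # tilted local incidence
    vals = sigma0_small(np.clip(local_theta, 0.0, np.pi / 2))
    return np.trapz(vals * pdf, slopes)                               # slope-weighted average

# Usage with a toy small-roughness model that decays with incidence angle:
# toy = lambda th: 0.2 * np.cos(th) ** 4
# print(10 * np.log10(composite_backscatter(40.0, toy, slope_std=0.15)))  # result in dB
```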

  17. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiq...

  18. Observation and empirical shell-model study of new yrast excited states in the nucleus ^142Ce

    CERN Document Server

Liu Zhong; Guo Ying Xiang; Zhou Xiao Hong; Lei Xiang Guo; Liu Min Liang; Luo Wan Ju; He Jian Jun; Zheng Yong; Pan Qiang Yan; Gan Zai Guo; Luo Yi Xiao; Hayakawa, T; Oshima, M; Toh, Y; Shizima, T; Hatsukawa, Y; Osa, A; Ishii, T; Sugawara, M

    2002-01-01

Excited states of ^142Ce, populated in deep inelastic reactions of ^82Se projectiles bombarding a ^139La target, have been studied to medium spins using in-beam gamma spectroscopy techniques. Three new levels have been identified at 2625, 2995 and 3834 keV, and assigned as 8^+, 9^(-) and 11^(-), respectively, based on the analysis of the properties of the gamma transitions. These new yrast states follow well the level systematics of the N = 84 isotones. Their structures have been discussed with the help of empirical shell-model calculations

  19. Labour flexibility in China's companies: An Empirical Study

    NARCIS (Netherlands)

    Y. Chen (Yongping)

    2001-01-01

Labour flexibility in China's Companies: An Empirical Study explores labour flexibility at the workplace in ten manufacturing companies in China. It addresses how HRM contributes to and facilitates management in coping with increasing market competition. Flexible labour practices are

  20. A Hybrid Forecasting Model Based on Empirical Mode Decomposition and the Cuckoo Search Algorithm: A Case Study for Power Load

    Directory of Open Access Journals (Sweden)

    Jiani Heng

    2016-01-01

Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach comprising Empirical Mode Decomposition (EMD), the CSA (Cuckoo Search Algorithm), and a WNN (Wavelet Neural Network) is proposed. This approach constructs a more valid forecasting structure and yields more stable results than traditional ANN (Artificial Neural Network) models such as the BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load series from New South Wales, Australia is used as a case study in this paper. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load, and that it can be an effective tool in planning and dispatch for smart grids.
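
The decompose-forecast-recombine pattern used by such hybrid models can be sketched as follows, assuming the PyEMD package for the decomposition and substituting a plain scikit-learn MLP for the paper's CSA-optimized wavelet neural network.

```python
import numpy as np
from PyEMD import EMD                          # assumed dependency (EMD-signal package)
from sklearn.neural_network import MLPRegressor

def lagged(series, n_lags=48):
    """Build a lagged design matrix: each row of X holds n_lags past values of y."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def emd_hybrid_forecast(load, n_lags=48):
    components = EMD().emd(np.asarray(load, dtype=float))   # IMFs plus residue
    forecast = 0.0
    for comp in components:
        X, y = lagged(comp, n_lags)
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0).fit(X, y)
        forecast += model.predict(comp[-n_lags:].reshape(1, -1))[0]  # one step ahead
    return forecast                                          # sum of component forecasts

# Usage on a synthetic half-hourly load curve with a daily cycle:
# t = np.arange(3000)
# load = 100 + 20 * np.sin(2 * np.pi * t / 48) + np.random.default_rng(0).normal(0, 2, t.size)
# print(emd_hybrid_forecast(load))
```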

  1. The empirical study of norms of justice - an overview

    OpenAIRE

    Jacquemain, Marc

    2003-01-01

The paper discusses what the empirical study of feelings of justice is, drawing the line between this and normative study, while nevertheless defending the view that there are important links between both stances. It gives an overview of the main theories within the normative study of feelings of justice.

  2. Empirical models of wind conditions on Upper Klamath Lake, Oregon

    Science.gov (United States)

    Buccola, Norman L.; Wood, Tamara M.

    2010-01-01

    Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate-Adaptive Regressive Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded as compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds. The simulated tracer concentration was similar between model results when
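
The gap-filling idea, training a network to map shore-station winds to the wind at a mid-lake raft site and applying it at the gap timestamps, can be sketched as below; the station layout, data and network size are illustrative, not the USGS configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fill_raft_gaps(shore_wind, raft_wind):
    """shore_wind: (time, stations) array; raft_wind: (time,) with np.nan in the gaps."""
    observed = ~np.isnan(raft_wind)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20),
                                       max_iter=3000, random_state=0))
    model.fit(shore_wind[observed], raft_wind[observed])
    filled = raft_wind.copy()
    filled[~observed] = model.predict(shore_wind[~observed])   # intended for short gaps only
    return filled

# Usage on synthetic 10-minute data with a gap of a few hours:
# rng = np.random.default_rng(0)
# shore = rng.gamma(2.0, 2.0, size=(5000, 3))
# raft = shore.mean(axis=1) + rng.normal(0, 0.5, 5000)
# raft[1000:1050] = np.nan
# print(fill_raft_gaps(shore, raft)[1000:1005])
```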

  3. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  4. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.

  5. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of) the production function, we reject elasticities equal to one in favour of considerably smaller values. Finally, we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
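
The nesting structure the paper finds to fit best, a 2-level CES function in which capital and labour are combined first and the composite is then combined with energy, can be written out as a short sketch; all parameter values here are illustrative.

```python
import numpy as np

def ces(x1, x2, share, sigma):
    """Constant-elasticity-of-substitution aggregate of two inputs (sigma != 1)."""
    rho = (sigma - 1.0) / sigma                      # substitution parameter
    return (share * x1**rho + (1.0 - share) * x2**rho) ** (1.0 / rho)

def two_level_ces_kl_e(K, L, E, share_kl=0.7, sigma_kl=0.5,
                       share_y=0.9, sigma_y=0.4, scale=1.0):
    KL = ces(K, L, share_kl, sigma_kl)               # inner nest: capital-labour composite
    return scale * ces(KL, E, share_y, sigma_y)      # outer nest: composite with energy

# Example: output response to a 10% energy cut under low substitution elasticities
# print(two_level_ces_kl_e(100, 200, 50), two_level_ces_kl_e(100, 200, 45))
```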

  6. Ensemble empirical mode decomposition and neuro-fuzzy conjunction model for middle and long-term runoff forecast

    Science.gov (United States)

    Tan, Q.

    2017-12-01

Forecasting runoff over longer periods, such as months and years, is one of the important tasks for hydrologists and water resource managers seeking to maximize the potential of limited water. However, due to the nonlinear and nonstationary characteristics of natural runoff, it is hard to forecast middle- and long-term runoff with satisfactory accuracy. It has been proven that forecast performance can be improved by using signal decomposition techniques to produce cleaner signals as model inputs. In this study, a new conjunction model (EEMD-neuro-fuzzy) with adaptive ability is proposed. Ensemble empirical mode decomposition (EEMD) is used to decompose the runoff time series into several components, which have different frequencies and are cleaner than the original time series. Then a neuro-fuzzy model is developed for each component. The final forecast results can be obtained by summing the outputs of all the neuro-fuzzy models. Unlike a conventional forecast model, the decomposition and forecast models in this study are adjusted adaptively whenever new runoff information is added. The proposed models are applied to forecast the monthly runoff of Yichang station, located on the Yangtze River in China. The results show that the adaptive forecast model we propose outperforms the conventional forecast model; the Nash-Sutcliffe efficiency coefficient can reach 0.9392. Due to its ability to process nonstationary data, the forecast accuracy, especially in the flood season, is improved significantly.

  7. Empirical and theoretical challenges in aboveground-belowground ecology

    DEFF Research Database (Denmark)

W.H. van der Putten; R.D. Bardgett; P.C. de Ruiter

    2009-01-01

    of the current conceptual succession models into more predictive models can help targeting empirical studies and generalising their results. Then, we discuss how understanding succession may help to enhance managing arable crops, grasslands and invasive plants, as well as provide insights into the effects...... and environmental settings, we explore where and how they can be supported by theoretical approaches to develop testable predictions and to generalise empirical results. We review four key areas where a combined aboveground-belowground approach offers perspectives for enhancing ecological understanding, namely...

  8. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
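
The first reduction step mentioned above, extracting POD modes from a matrix of flow snapshots, can be sketched with a plain SVD; the snapshot data and rank are illustrative, and the paper follows this step with a Galerkin projection, ERA identification and balanced truncation.

```python
import numpy as np

def pod_basis(snapshots, rank):
    """snapshots: (n_state, n_snapshots) matrix of flow fields; returns a (n_state, rank) basis."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)          # captured fluctuation energy
    return U[:, :rank], mean, energy[rank - 1]

# Usage: reduce a 10000-dimensional field to 8 POD coefficients
# X = np.random.default_rng(0).normal(size=(10000, 200))
# Phi, xbar, captured = pod_basis(X, rank=8)
# a = Phi.T @ (X[:, [0]] - xbar)                     # reduced coordinates of one snapshot
```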

  9. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...

  10. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    Science.gov (United States)

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217

  11. A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.

    Science.gov (United States)

    Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E

    2017-12-01

    The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.

  12. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between the experimental data and the Integral Equation Model (IEM) simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).

  13. An Empirical Outdoor-to-Indoor Path Loss Model from below 6 GHz to cm-Wave Frequency Bands

    DEFF Research Database (Denmark)

    Rodriguez Larrad, Ignacio; Nguyen, Huan Cong; Kovács, István Z.

    2017-01-01

This letter presents an empirical multi-frequency outdoor-to-indoor path loss model. The model is based on measurements performed on the exact same set of scenarios for different frequency bands ranging from traditional cellular allocations below 6 GHz (0.8, 2, 3.5 and 5.2 GHz), up to cm-wave fre...

  14. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates...... and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported together with cell bandwidth, screen and CPU power consumption. The transmit power level and to some extent the receive data rate constitute the overall power consumption, while DRX proves...

  15. Empiric Study about the Mix Fiscal Policy – Economic Development

    Directory of Open Access Journals (Sweden)

    Alexandru Sergiu Ocnean

    2006-11-01

Full Text Available Economic development is one of the primary objectives of any government. Fiscal policy represents one of the most effective tools that government authorities can use in order to influence the economy. Having this in mind, this paper focuses on the connection between economic development and fiscal policy and proposes an empirical study based on a sample of 21 European countries. Using a simple pooled data model, we tried to identify the relations between the evolution of GDP per capita, as a proxy for economic development, and the evolution of three fiscal policy variables, namely the tax burden, the public expenditure to GDP ratio and the budget deficit to GDP ratio.

  16. Empiric Study about the Mix Fiscal Policy – Economic Development

    Directory of Open Access Journals (Sweden)

    Alexandru Sergiu Ocnean

    2006-09-01

Full Text Available Economic development is one of the primary objectives of any government. Fiscal policy represents one of the most effective tools that government authorities can use in order to influence the economy. Having this in mind, this paper focuses on the connection between economic development and fiscal policy and proposes an empirical study based on a sample of 21 European countries. Using a simple pooled data model, we tried to identify the relations between the evolution of GDP per capita, as a proxy for economic development, and the evolution of three fiscal policy variables, namely the tax burden, the public expenditure to GDP ratio and the budget deficit to GDP ratio.

  17. Empirical atom model of Vegard's law

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lei, E-mail: zhleile2002@163.com [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China); School of Electromechanical Automobile Engineering, Yantai University, Yantai 264005 (China); Li, Shichun [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China)

    2014-02-01

Vegard's law seldom holds true for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements will change to satisfy the continuity requirement of the electron density at the interface between component atom A and atom B, so that the atom with the larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B, respectively, are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of the two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of the corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for the signs of the deviations according to the electron density at the Wigner–Seitz cell obtained from the Thomas–Fermi–Dirac–Cheng model.

  18. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Roč. 17, č. 3 (2008), s. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  19. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

Full Text Available A simulation approach is discussed for modeling maneuverable aircraft motion as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and of its environmental exposure. The suggested approach is based on merging theoretical knowledge of the plant with training tools from the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and the identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  20. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  1. An empirical model for estimating solar radiation in the Algerian Sahara

    Science.gov (United States)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

The present work aims to evaluate the empirical model R.sun, which allows us to estimate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar (27°18 N and 0°11 W), Algeria, and to compare the results with measurements made at the site. The expected results of this comparison are of importance for the investment study of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), the root mean square error (RMSE) and the coefficient of determination. The results show that, for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44%, the daily mean bias error is -7.94%, and the daily root mean square error is 12.31%.
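
The validation statistics quoted above (relative MBE and RMSE, the mean absolute percentage error and the correlation coefficient) can be computed from paired measured and modelled irradiance values as in the following sketch.

```python
import numpy as np

def validation_stats(measured, modelled):
    measured, modelled = np.asarray(measured, float), np.asarray(modelled, float)
    diff = modelled - measured
    mbe = 100.0 * diff.mean() / measured.mean()                  # mean bias error, %
    rmse = 100.0 * np.sqrt((diff**2).mean()) / measured.mean()   # root mean square error, %
    mape = 100.0 * np.mean(np.abs(diff) / measured)              # mean absolute % error
    r = np.corrcoef(measured, modelled)[0, 1]                    # correlation coefficient
    return {"MBE_%": mbe, "RMSE_%": rmse, "MAPE_%": mape, "r": r}

# Example with placeholder irradiance values in W/m^2:
# print(validation_stats([500, 700, 900], [480, 720, 910]))
```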

  2. A New Empirical Model for Short-Term Forecasting of the Broadband Penetration: A Short Research in Greece

    Directory of Open Access Journals (Sweden)

    Salpasaranis Konstantinos

    2011-01-01

Full Text Available The objective of this paper is to present a short study of overall broadband penetration in Greece. In this research, a new empirical deterministic model is proposed for the short-term forecasting of cumulative broadband adoption. The fitting performance of the model is compared with that of some widely used diffusion models for the cumulative adoption of new telecommunication products, namely the Logistic, Gompertz, Flexible Logistic (FLOG), Box-Cox, Richards, and Bass models. The fitting process is carried out on official broadband penetration data for Greece. In conclusion, comparing these models with the empirical model, it can be argued that the latter yields good enough statistical indicators of fitting and forecasting performance. The study also stresses the need for further research and performance analysis of the model in other, more mature broadband markets.
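
One of the benchmark diffusion curves mentioned above, the logistic model, can be fitted to a cumulative penetration series with non-linear least squares as sketched below; the Gompertz, FLOG, Box-Cox, Richards and Bass curves can be fitted in the same way. The data points are illustrative, not the Greek penetration figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, saturation, rate, midpoint):
    """S-shaped cumulative adoption curve."""
    return saturation / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.arange(12)                                  # observation index (placeholder periods)
penetration = np.array([0.1, 0.2, 0.4, 0.9, 1.8, 3.2, 5.0,
                        7.8, 10.9, 13.8, 16.0, 17.5])   # illustrative cumulative %
params, _ = curve_fit(logistic, t, penetration, p0=[20.0, 0.8, 6.0], maxfev=10000)
forecast_next = logistic(t[-1] + 1, *params)       # short-term (one-step-ahead) forecast
print(params, forecast_next)
```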

  3. An empirical model for the melt viscosity of polymer blends

    International Nuclear Information System (INIS)

    Dobrescu, V.

    1981-01-01

On the basis of experimental data for blends of polyethylene with different polymers, an empirical equation is proposed to describe the dependence of the melt viscosity of blends on the component viscosities and the composition. The model ensures the continuity of the viscosity vs. composition curves throughout the whole composition range, allows for extremum values higher or lower than the viscosities of the components, and permits the calculation of the flow curves of blends from the flow curves of the components and their volume fractions. (orig.)

  4. Empirical studies on changes in oil governance

    Science.gov (United States)

    Kemal, Mohammad

Regulation of the oil and gas sector is consequential to the economies of oil-producing countries. In the literature, there are two types of regulation: indirect regulation through taxes and tariffs, or direct regulation through the creation of a National Oil Company (NOC). In the 1970s, many oil-producing countries nationalized their oil and gas sectors by creating NOCs and giving them ownership rights over oil and gas resources. In light of the success of Norway in regulating its oil and gas resources, over the past two decades several countries have changed their oil governance by changing the rights given to the NOC from ownership rights to mere access rights, like those of other oil companies. However, the empirical literature on these changes in oil governance is quite thin. Thus, this dissertation explores three research questions to investigate these changes in oil governance empirically. First, I investigate empirically the impact of the changes in oil governance on aggregate domestic income. By employing a difference-in-differences method, I will show that a country which changed its oil governance increases its GDP per capita by 10%. However, the impact differs across types of political institution. Second, by observing the changes in oil governance in Indonesia, I explore the impact of the changes on learning-by-doing and the learning spillover effect in offshore exploration drilling. By employing an econometric model which includes interaction terms between various experience variables and a change-in-oil-governance dummy, I will show that the change in oil governance in Indonesia enhances learning-by-doing by the rigs and learning spillover within a basin. Lastly, the impact of the changes in oil governance on expropriation risk and the extraction path will be explored. By employing a difference-in-differences method, this essay will show that the changes in oil governance reduce expropriation, and that the impact differs across sizes of resource stock.

  5. An empirical comparison of alternate regime-switching models for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Janczura, Joanna [Hugo Steinhaus Center, Institute of Mathematics and Computer Science, Wroclaw University of Technology, 50-370 Wroclaw (Poland); Weron, Rafal [Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw (Poland)

    2010-09-15

One of the most profound features of electricity spot prices is the occurrence of price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but it also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)

  6. An empirical comparison of alternate regime-switching models for electricity spot prices

    International Nuclear Information System (INIS)

    Janczura, Joanna; Weron, Rafal

    2010-01-01

One of the most profound features of electricity spot prices is the occurrence of price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but it also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)

  7. Does size matter? : An empirical study modifying Fama & French's three factor model to detect size-effect based on turnover in the Swedish markets

    OpenAIRE

    Boros, Daniel; Eriksson, Claes

    2014-01-01

This thesis investigates whether the estimation of the cost of equity (or the expected return) in the Swedish market should incorporate an adjustment for a company's size. This is what is commonly known as the size effect, first presented by Banz (1980), which has later become part of models for estimating the cost of equity, such as Fama & French's three-factor model (1992). The Fama & French model was developed based on empirical research. Since the model was developed, the research on the...

  8. Collective Labour Supply, Taxes, and Intrahousehold Allocation: An Empirical Approach

    NARCIS (Netherlands)

    Bloemen, H.G.

    2017-01-01

    Most empirical studies of the impact of labour income taxation on the labour supply behaviour of households use a unitary modelling approach. In this paper we empirically analyze income taxation and the choice of working hours by combining the collective approach for household behaviour and the

  9. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  10. Computational optogenetics: empirically-derived voltage- and light-sensitive channelrhodopsin-2 model.

    Directory of Open Access Journals (Sweden)

    John C Williams

Full Text Available Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R) with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically-derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: (1) accurate inward rectification in the current-voltage response across irradiances; (2) empirically-derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and (3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and the model kinetics was adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocyte models, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and the ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally-validated ChR2 model will facilitate virtual experimentation in neural and

  11. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → The performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) located in the North of the country, Nakhon Pathom (13.82°N, 100.04°E) in the Centre and Songkhla (7.20°N, 100.60°E) in the South. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data on the Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for periods which were not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the Northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both global and direct normal irradiances. The performance of the models was also compared with that of other models, and it compared favorably with that of empirical models. Additionally, the accuracy of the irradiances predicted by the proposed models is comparable with that obtained from some

  12. An Empirical Study Of User Acceptance Of Online Social Networks Marketing

    Directory of Open Access Journals (Sweden)

    Olumayowa Mulero

    2013-07-01

Full Text Available The explosion of Internet usage has drawn the attention of researchers towards online Social Networks Marketing (SNM). Research has shown that a number of Internet users are distrustful and indecisive when it comes to the use of social networks marketing systems. Therefore, there is a need for researchers to identify some of the factors that determine users' acceptance of social networks marketing using the Technology Acceptance Model (TAM). This study extended the Technology Acceptance Model theoretical framework to predict consumer acceptance of social networks marketing within the Western Cape Province of South Africa. The research model was tested using data collected from 470 questionnaires and analysed using linear regression. The results showed that user intentions to use SNM are strongly and positively correlated with user acceptance of SNM systems. Empirical results confirmed that perceived credibility and perceived usefulness are the strongest determinants in predicting user intentions to use an SNM system.

  13. Localization in random bipartite graphs: Numerical and empirical study

    Science.gov (United States)

    Slanina, František

    2017-05-01

    We investigate adjacency matrices of bipartite graphs with a power-law degree distribution. Motivation for this study is twofold: first, vibrational states in granular matter and jammed sphere packings; second, graphs encoding social interaction, especially electronic commerce. We establish the position of the mobility edge and show that it strongly depends on the power in the degree distribution and on the ratio of the sizes of the two parts of the bipartite graph. At the jamming threshold, where the two parts have the same size, localization vanishes. We found that the multifractal spectrum is nontrivial in the delocalized phase, but still near the mobility edge. We also study an empirical bipartite graph, namely, the Amazon reviewer-item network. We found that in this specific graph the mobility edge disappears, and we draw a conclusion from this fact regarding earlier empirical studies of the Amazon network.
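
The localization diagnostic typically used in such studies, the inverse participation ratio (IPR) of the adjacency-matrix eigenvectors, can be sketched as follows for a random bipartite graph with heterogeneous degrees; localized states have an IPR of order one, delocalized states an IPR of order 1/N. The construction is generic and is not the Amazon reviewer-item network.

```python
import numpy as np

def bipartite_spectrum_ipr(n_left=300, n_right=600, mean_degree=6.0, gamma=2.5, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.pareto(gamma - 1.0, n_left) + 1.0            # heavy-tailed left-degree weights
    p = np.clip(np.outer(mean_degree * w / w.mean(), np.ones(n_right)) / n_right, 0.0, 1.0)
    B = (rng.random((n_left, n_right)) < p).astype(float)
    N = n_left + n_right
    A = np.zeros((N, N))
    A[:n_left, n_left:] = B                              # bipartite block structure
    A[n_left:, :n_left] = B.T
    eigvals, eigvecs = np.linalg.eigh(A)
    ipr = (eigvecs ** 4).sum(axis=0)                     # inverse participation ratio per state
    return eigvals, ipr

# vals, ipr = bipartite_spectrum_ipr()
# print(vals[ipr.argmax()], ipr.max())                   # most localized state and its IPR
```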

  14. Cloud Computing (SaaS) Adoption as a Strategic Technology: Results of an Empirical Study

    Directory of Open Access Journals (Sweden)

    Pedro R. Palos-Sanchez

    2017-01-01

Full Text Available The present study empirically analyzes the factors that determine the adoption of cloud computing (SaaS model) in firms where this strategy is considered strategic for executing their activity. A research model has been developed to evaluate the factors that influence the intention to use cloud computing, combining the variables found in the technology acceptance model (TAM) with other external variables such as top management support, training, communication, organization size, and technological complexity. Data compiled from 150 companies in Andalusia (Spain) are used to test the formulated hypotheses. The results of this study reflect what critical factors should be considered and how they are interrelated. They also show the organizational demands that must be considered by those companies wishing to implement a real management model adapted to the digital economy, especially those related to cloud computing.

  15. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  16. A Novel Multiscale Ensemble Carbon Price Prediction Model Integrating Empirical Mode Decomposition, Genetic Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bangzhu Zhu

    2012-02-01

    Full Text Available Due to the movement and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast carbon price. Firstly, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, the IMFs and residue are composed into a high frequency component, a low frequency component and a trend component, each with similar frequency characteristics, simple structure and strong regularity, using the fine-to-coarse reconstruction algorithm. Finally, those three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecasting results can be obtained by the sum of these three forecasting results. For verification and testing, two main carbon future prices with different maturity in the European Climate Exchange (ECX) are used to test the effectiveness of the proposed multiscale ensemble forecasting model. Empirical results obtained demonstrate that the proposed multiscale ensemble forecasting model can outperform the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing and the ensemble ARIMA model with EMD preprocessing.
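
    As a rough illustration of the fine-to-coarse reconstruction step, the sketch below groups precomputed IMFs into high-frequency, low-frequency and trend components; the one-sample t-test rule for locating the split is a common reading of the algorithm and not necessarily the exact variant used in the paper, and the IMFs are assumed to come from any EMD implementation.

```python
import numpy as np
from scipy.stats import ttest_1samp

def fine_to_coarse_split(imfs, residue, alpha=0.05):
    """Group IMFs into high-frequency, low-frequency and trend components.

    Adds IMFs from the finest scale onward; the first partial sum whose mean
    departs significantly from zero (one-sample t-test) marks the start of the
    low-frequency component.  This is one common reading of the fine-to-coarse
    rule, not necessarily the paper's exact variant.
    """
    partial = np.zeros_like(imfs[0])
    split = len(imfs)
    for k, imf in enumerate(imfs):
        partial = partial + imf
        if ttest_1samp(partial, 0.0).pvalue < alpha:
            split = k
            break
    high = np.sum(imfs[:split], axis=0) if split > 0 else np.zeros_like(imfs[0])
    low = np.sum(imfs[split:], axis=0) if split < len(imfs) else np.zeros_like(imfs[0])
    return high, low, residue  # the trend component is the EMD residue

# Usage (assumes `imfs` is a list of 1-D arrays and `residue` the EMD residue):
# high, low, trend = fine_to_coarse_split(imfs, residue)
# Each component would then be forecast separately, e.g. with a GA-trained ANN.
```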

  17. Empirical Results of Modeling EUR/RON Exchange Rate using ARCH, GARCH, EGARCH, TARCH and PARCH models

    Directory of Open Access Journals (Sweden)

    Andreea – Cristina PETRICĂ

    2017-03-01

    Full Text Available The aim of this study consists in examining the changes in the volatility of daily returns of the EUR/RON exchange rate using, on the one hand, symmetric GARCH models (ARCH and GARCH) and, on the other hand, asymmetric GARCH models (EGARCH, TARCH and PARCH), since the conditional variance is time-varying. The analysis takes into account daily quotations of the EUR/RON exchange rate over the period 4 January 1999 to 13 June 2016. Thus, we model heteroscedasticity by applying different specifications of GARCH models, followed by looking for significant parameters and low information criteria (minimum Akaike Information Criterion). All models are estimated using the maximum likelihood method under the assumption of several distributions of the innovation terms, such as: Normal (Gaussian) distribution, Student's t distribution, Generalized Error distribution (GED), Student's t with fixed df distribution, and GED with fixed parameter distribution. The predominant models turned out to be the EGARCH and PARCH models, and the empirical results point out that the best model for estimating daily returns of the EUR/RON exchange rate is EGARCH(2,1) with asymmetric order 2 under the assumption of Student's t distributed innovation terms. This can be explained by the fact that in the case of the EGARCH model, the restriction regarding the positivity of the conditional variance is automatically satisfied.
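
    A minimal sketch of this model-selection exercise, assuming the Python arch package is available: it fits a few symmetric and asymmetric specifications to the daily log returns and ranks them by AIC. The candidate orders and distributions shown here are a small illustrative subset of the grid examined in the study, and eur_ron is a hypothetical price series.

```python
import numpy as np
import pandas as pd
from arch import arch_model  # assumes the `arch` package is installed

def fit_volatility_models(eur_ron: pd.Series):
    """Fit a few GARCH-family specifications and rank them by AIC."""
    returns = 100 * np.log(eur_ron).diff().dropna()  # daily log returns in %

    candidates = {
        "GARCH(1,1)-normal": dict(vol="GARCH", p=1, q=1, dist="normal"),
        "EGARCH(2,1)-t":     dict(vol="EGARCH", p=2, o=2, q=1, dist="t"),
        "GARCH(1,1)-GED":    dict(vol="GARCH", p=1, q=1, dist="ged"),
    }
    results = {}
    for name, spec in candidates.items():
        res = arch_model(returns, mean="Constant", **spec).fit(disp="off")
        results[name] = res.aic
    # The smallest AIC indicates the preferred specification.
    return sorted(results.items(), key=lambda kv: kv[1])
```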

  18. Double-dividend analysis with SCREEN: an empirical study for Switzerland

    International Nuclear Information System (INIS)

    Frei, Christoph W.; Haldi, Pierre-Andre; Sarlos, Gerard

    2005-01-01

    This paper presents an empirical study that quantifies the effects of an ecological fiscal reform as recently rejected by the Swiss population. The measure aims to encourage employment and, at the same time, to dissuade from an excessive energy use and thereby decrease energy-induced external costs (CO2, etc.). The analysis is based on the model SCREEN, a general equilibrium model using the complementarity format for the hybrid description of economy-wide production possibilities, where the electricity sector is represented by a bottom-up activity analysis and the other production sectors are characterised by top-down production functions. A dynamic formulation of the activity analysis of technologies allows for the reproduction of endogenous structural change (see Frei, C.W., Haldi, P.-A., Sarlos, G., 2003. Dynamic formulation of a top-down and bottom-up merging energy policy model. Energy Policy 31 (10), 1017-1031). The labour market is formulated according to the microeconomically founded efficiency wages and calibrated for Switzerland. The study includes the development of a consistent set of top-down, bottom-up and labour data for Switzerland. The collection of bottom-up data on the electricity sector, just before liberalisation, was not easy. The data characterising the electricity sector were prepared based on original statistics on about 140 Swiss electricity companies

  19. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    Science.gov (United States)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  20. EMPIRICAL RECONSTRUCTION AND NUMERICAL MODELING OF THE FIRST GEOEFFECTIVE CORONAL MASS EJECTION OF SOLAR CYCLE 24

    International Nuclear Information System (INIS)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-01-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80 deg. from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10 deg. (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  1. Empiric guideline-recommended weight-based vancomycin dosing and mortality in methicillin-resistant Staphylococcus aureus bacteremia: a retrospective cohort study

    Directory of Open Access Journals (Sweden)

    Hall Ronald G

    2012-04-01

    Full Text Available Abstract Background No studies have evaluated the effect of guideline-recommended weight-based dosing on in-hospital mortality of patients with methicillin-resistant Staphylococcus aureus bacteremia. Methods This was a multicenter, retrospective, cohort study of patients with methicillin-resistant Staphylococcus aureus bacteremia receiving at least 48 hours of empiric vancomycin therapy between 01/07/2002 and 30/06/2008. We compared in-hospital mortality for patients treated empirically with weight-based, guideline-recommended vancomycin doses (at least 15 mg/kg/dose) to those treated with less than 15 mg/kg/dose. We used a general linear mixed multivariable model analysis with variables identified a priori through a conceptual framework based on the literature. Results A total of 337 patients who were admitted to the three hospitals were included in the cohort. One-third of patients received vancomycin empirically at the guideline-recommended dose. Guideline-recommended dosing was not associated with in-hospital mortality in the univariable (16% vs. 13%, OR 1.26 [95%CI 0.67-2.39]) or multivariable (OR 0.71, 95%CI 0.33-1.55) analysis. Independent predictors of in-hospital mortality were ICU admission, Pitt bacteremia score of 4 or greater, age 53 years or greater, and nephrotoxicity. Conclusions Empiric use of weight-based, guideline-recommended vancomycin dosing was not associated with reduced mortality in this multicenter study.

  2. Investigating Low-Carbon City: Empirical Study of Shanghai

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2018-04-01

    Full Text Available A low-carbon economy is an inevitable choice for achieving economic and ecological sustainable development. It is of significant importance to analyze a city's low-carbon economy development level scientifically and reasonably. In order to achieve this goal, we propose an urban low-carbon economic development level evaluation model based on the matter-element extension method. First, we select some indicators from the existing indicator system based on past research and experience. Then, a matter-element model is established on the basis of indicator weights to evaluate a city's low-carbon level; the critical value of each index is determined through the classical domain and the section domain, and the correlation degrees of the single indices and of a comprehensive index are calculated. Finally, we analyze the low-carbon economy development status and future development trends according to the analysis results. In this study, we select Shanghai as an empirical case; the results show that Shanghai is a city with a low-carbon level and there is a trend of further improvement in Shanghai's low-carbon economy, but its low-carbon construction and low-carbon technology investment are relatively low. In summary, this method can provide another angle for evaluating a city's low-carbon economy.
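
    For illustration, the sketch below implements one common form of the matter-element extension correlation function (extension distance, classical and section domains, and a weighted comprehensive index); the exact formulas and the degenerate-case convention may differ in detail from the authors' implementation.

```python
def distance(v, interval):
    """Extension distance between value v and interval [a, b]."""
    a, b = interval
    return abs(v - (a + b) / 2.0) - (b - a) / 2.0

def correlation(v, classical, section):
    """Correlation degree of a single indicator value with one grade.

    `classical` is the classical domain [a0, b0] of the grade and `section`
    the section (joint) domain [a, b] covering all grades.  One common
    extenics convention; variants exist in the literature.
    """
    a0, b0 = classical
    d0 = distance(v, classical)
    d = distance(v, section)
    if a0 <= v <= b0:                 # value lies inside the classical domain
        return -d0 / (b0 - a0)
    if d == d0:                       # degenerate case: avoid division by zero
        return -d0 - 1.0
    return d0 / (d - d0)

def comprehensive_index(values, classicals, sections, weights):
    """Weighted sum of single-indicator correlation degrees for one grade."""
    return sum(w * correlation(v, c, s)
               for v, c, s, w in zip(values, classicals, sections, weights))
```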

  3. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    Science.gov (United States)

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

    This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the proposition of Technology Acceptance…

  4. Empirical Studies on Legitimation Strategies: A Case for International Business Research Extension

    DEFF Research Database (Denmark)

    Turcan, Romeo V.; Marinova, Svetla Trifonova; Rana, Mohammad Bakhtiar

    2012-01-01

    The paper focuses on legitimation and legitimation strategies applied by companies. Following the process of systematic review, we analyze empirical studies exploring legitimation and legitimation strategies from different theoretical perspectives. Using the key findings, and by reconnoitering and comparing the theoretical background, approaches, methodologies, and findings of these empirical studies, we outline potential directions for research in the legitimation strategies of firms engaged in international business operations.

  5. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  6. Theoretical Semi-Empirical AM1 studies of Schiff Bases

    International Nuclear Information System (INIS)

    Arora, K.; Burman, K.

    2005-01-01

    The present communication reports the theoretical semi-empirical studies of Schiff bases of 2-amino pyridine along with their comparison with their parent compounds. Theoretical studies reveal that it is the azomethine group, in the Schiff bases under study, that acts as the site for coordination to metals, as reported by many coordination chemists. (author)

  7. Visual Design Principles: An Empirical Study of Design Lore

    Science.gov (United States)

    Kimball, Miles A.

    2013-01-01

    Many books, designers, and design educators talk about visual design principles such as balance, contrast, and alignment, but with little consistency. This study uses empirical methods to explore the lore surrounding design principles. The study took the form of two stages: a quantitative literature review to determine what design principles are…

  8. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d^2 = (1/n) Σ_{i=1}^{n} (ln M_i - ln O_i)^2, where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of the couplets 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this non-desired effect, in some circumstances, may be corrected by means of simple formulae
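
    A minimal sketch of the functional-distance index, with placeholder data rather than values from the study; the normal-approximation confidence band at the end is an added assumption, not part of the original formulation.

```python
import numpy as np

def functional_distance(observed, predicted):
    """d^2 = (1/n) * sum_i (ln M_i - ln O_i)^2, as defined in the EBUA method."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((np.log(observed) - np.log(predicted)) ** 2))

# Illustrative use with placeholder data (not values from the study):
obs = np.array([1.2, 0.8, 2.5, 3.1, 0.6])     # measured contamination levels
mod = np.array([1.0, 1.0, 2.0, 3.5, 0.5])     # corresponding model predictions
d = functional_distance(obs, mod)

# If the log-ratios are roughly normal (an assumption), a multiplicative band
# for the ratio observation/prediction is approximately exp(+-1.96*d).
print(f"d = {d:.3f}; approximate 95% ratio factor: {np.exp(1.96 * d):.2f}")
```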

  9. A one-dimensional semi-empirical model considering transition boiling effect for dispersed flow film boiling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yu-Jou [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Pan, Chin, E-mail: cpan@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China); Low Carbon Energy Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan, ROC (China)

    2017-05-15

    Highlights: • Seven heat transfer mechanisms are studied numerically by the model. • A semi-empirical method is proposed to account for the transition boiling effect. • The parametric effects on the heat transfer mechanisms are investigated. • The thermal non-equilibrium phenomenon between vapor and droplets is investigated. - Abstract: The objective of this paper is to develop a one-dimensional semi-empirical model for dispersed flow film boiling considering transition boiling effects. The proposed model consists of conservation equations, i.e., vapor mass, vapor energy, droplet mass and droplet momentum conservation, and a set of closure relations to address the interactions among wall, vapor and droplets. The results show that the transition boiling effect is of vital importance in the dispersed flow film boiling regime, since the flow conditions downstream are influenced by the conditions upstream. In addition, the present paper, through evaluating the vapor temperature and the amount of heat transferred to droplets, investigates the thermal non-equilibrium phenomenon under different flow conditions. Comparison of the wall temperature predictions with 1394 experimental data points from the literature, covering system pressures of 30–140 bar, heat fluxes of 204–1837 kW/m^2 and mass fluxes of 380–5180 kg/m^2 s, shows very good agreement, with an RMS error of 8.80% and a standard deviation of 8.81%. Moreover, the model well depicts the thermal non-equilibrium phenomenon for dispersed flow film boiling.

  10. The Role of Ethnographic Studies in Empirical Software Engineering

    DEFF Research Database (Denmark)

    Sharp, Helen; Dittrich, Yvonne; Souza, Cleidson R. B. de

    2016-01-01

    Ethnography is a qualitative research method used to study people and cultures. It is largely adopted in disciplines outside software engineering, including different areas of computer science. Ethnography can provide an in-depth understanding of the socio-technological realities surrounding everyday software development practice, i.e., it can help to uncover not only what practitioners do, but also why they do it. Despite its potential, ethnography has not been widely adopted by empirical software engineering researchers, and receives little attention in the related literature. The main goal of this paper is to present ethnography as a useful and usable approach to empirical software engineering research. Throughout the paper, relevant examples of ethnographic studies of software practice are used to illustrate the points being made.

  11. Measuring hospital service quality and its influence on patient satisfaction: An empirical study using structural equation modeling

    OpenAIRE

    Nasim Kazemi; Parisa Ehsani; Farshid Abdi; Mohammad Kazem Bighami

    2013-01-01

    This paper presents an empirical investigation to measure different dimensions of hospital service quality (HSQ) by gap analysis and patient satisfaction (PS). It also attempts to measure patients' satisfaction with three dimensions extracted from exploratory factor analysis (EFA) by the principal component analysis method and confirmatory factor analysis (CFA). In addition, the study analyzes the relationship between HSQ and PS in the context of Iranian hospital services, using structural equation modeling.

  12. Empirical evaluation of the conceptual model underpinning a regional aquatic long-term monitoring program using causal modelling

    Science.gov (United States)

    Irvine, Kathryn M.; Miller, Scott; Al-Chokhachy, Robert K.; Archer, Erik; Roper, Brett B.; Kershner, Jeffrey L.

    2015-01-01

    Conceptual models are an integral facet of long-term monitoring programs. Proposed linkages between drivers, stressors, and ecological indicators are identified within the conceptual model of most mandated programs. We empirically evaluate a conceptual model developed for a regional aquatic and riparian monitoring program using causal models (i.e., Bayesian path analysis). We assess whether data gathered for regional status and trend estimation can also provide insights on why a stream may deviate from reference conditions. We target the hypothesized causal pathways for how anthropogenic drivers of road density, percent grazing, and percent forest within a catchment affect instream biological condition. We found instream temperature and fine sediments in arid sites and only fine sediments in mesic sites accounted for a significant portion of the maximum possible variation explainable in biological condition among managed sites. However, the biological significance of the direct effects of anthropogenic drivers on instream temperature and fine sediments were minimal or not detected. Consequently, there was weak to no biological support for causal pathways related to anthropogenic drivers’ impact on biological condition. With weak biological and statistical effect sizes, ignoring environmental contextual variables and covariates that explain natural heterogeneity would have resulted in no evidence of human impacts on biological integrity in some instances. For programs targeting the effects of anthropogenic activities, it is imperative to identify both land use practices and mechanisms that have led to degraded conditions (i.e., moving beyond simple status and trend estimation). Our empirical evaluation of the conceptual model underpinning the long-term monitoring program provided an opportunity for learning and, consequently, we discuss survey design elements that require modification to achieve question driven monitoring, a necessary step in the practice of

  13. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  14. Choice of Foreign Market Entry Mode - Cognitions from Empirical and Theoretical Studies

    OpenAIRE

    Zhao, Xuemin; Decker, Reinhold

    2004-01-01

    This paper analyzes critically five basic theories on market entry mode decision with respect to existing strengths and weaknesses and the results of corresponding empirical studies. Starting from conflicts in both theories and empirical studies dealing with the entry mode choice problem, we motivate a significant need for further research in this important area of international marketing. Furthermore, we provide implications for managers in practice and outline emerging trends in market entr...

  15. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may do so through some unknown function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations, the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
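
    For readers unfamiliar with PAVA, the following is a compact textbook implementation of the pool-adjacent-violators algorithm for a non-decreasing fit; it illustrates the estimation step only and is not the augmented or empirical-likelihood estimator developed in the paper.

```python
import numpy as np

def pava(y, weights=None):
    """Pool-adjacent-violators algorithm for isotonic (non-decreasing) regression.

    Returns the fitted values minimising the weighted squared error subject to
    monotonicity.  A compact textbook implementation.
    """
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    # Each block stores (weighted mean, total weight, number of points pooled).
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

# Example: noisy increasing trend
x = np.arange(10)
y = x + np.random.default_rng(1).normal(scale=2.0, size=10)
print(pava(y))
```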

  16. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate together continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  17. Strategic Management Tools and Techniques: A Comparative Analysis of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Albana Berisha Qehaja

    2017-01-01

    Full Text Available There is no doubt that strategic management tools and techniques are important parts of the strategic management process. Their use in organizations should be observed in a practice-based context. This paper analyzes the empirical studies on the usage of strategic management tools and techniques. Hence, the main aim of this study is to investigate and analyze which enterprises, according to their country development level, use more strategic management tools and techniques and which of these are used the most. Also, this paper investigates which strategic management tools and techniques are used globally according to the results of empirical studies. The study presents a summary of empirical studies for the period 1990–2015. The research results indicate that more strategic tools and techniques are used in developed countries, followed by developing countries and fewest in countries in transition. This study is likely to contribute to the field of strategic management because it summarizes the most used strategic tools and techniques at the global level according to varying stages of countries’ economic development. Also, the findings from this study may be utilized to maximize the full potential of enterprises and reduce the cases of entrepreneurship failures, through creating awareness of the importance of using strategic management tools and techniques.

  18. Empathy at the confluence of neuroscience and empirical literary studies

    NARCIS (Netherlands)

    Burke, M.; Mangen, Anne; Kuzmicova, Anezka; Schilhab, Theresa

    2016-01-01

    The objective of this article is to review extant empirical studies of empathy in narrative reading in light of (a) contemporary literary theory, and (b) neuroscientific studies of empathy, and to discuss how a closer interplay between neuroscience and literary studies may enhance our understanding

  19. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...... and hence to reduce the Sharpe Ratio, a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of the government bond returns without compromising the ability of the price-dividend ratio to predict excess returns....

  20. Mobile Systems Development: An Empirical Study

    DEFF Research Database (Denmark)

    Hosbond, J. H.

    As part of an ongoing study on mobile systems development (MSD), this paper presents preliminary findings of research-in-progress. The debate on mobility in research has so far been dominated by mobile HCI, technological innovations, and socio-technical issues related to new and emerging mobile work patterns. This paper is about the development of mobile systems. Based on an on-going empirical study I present four case studies of companies, each with different products or services to offer and diverging ways of establishing and sustaining a successful business in the mobile industry. From the case studies I propose a five-layered framework for understanding the structure and segmentation of the industry. This leads to an analysis of the different modes of operation within the mobile industry, exemplified by the four case studies. The contribution of this paper is therefore two-fold: (1) I...

  1. Protein-Ligand Empirical Interaction Components for Virtual Screening.

    Science.gov (United States)

    Yan, Yuna; Wang, Weijun; Sun, Zhaoxi; Zhang, John Z H; Ji, Changge

    2017-08-28

    A major shortcoming of empirical scoring functions is that they often fail to predict binding affinity properly. Removing false positives from docking results is one of the most challenging tasks in structure-based virtual screening. Postdocking filters, making use of all kinds of experimental structure and activity information, may help in solving the issue. We describe a new method based on detailed protein-ligand interaction decomposition and machine learning. Protein-ligand empirical interaction components (PLEIC) are used as descriptors for support vector machine learning to develop a classification model (PLEIC-SVM) to discriminate false positives from true positives. Experimentally derived activity information is used for model training. An extensive benchmark study on 36 diverse data sets from the DUD-E database has been performed to evaluate the performance of the new method. The results show that the new method performs much better than standard empirical scoring functions in structure-based virtual screening. The trained PLEIC-SVM model is able to capture important interaction patterns between ligand and protein residues for one specific target, which is helpful in discarding false positives in postdocking filtering.
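
    A schematic sketch of the postdocking classification step using scikit-learn; the random arrays stand in for real PLEIC descriptors and activity labels, and the kernel and hyperparameters are illustrative rather than those tuned in the benchmark.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X is assumed to be a matrix of per-residue interaction-component descriptors
# (one row per docked pose), and y a 0/1 label separating experimentally
# confirmed actives (true positives) from decoys (false positives).
# Random placeholders stand in for real PLEIC descriptors here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```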

  2. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation was successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  3. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  4. An update on the "empirical turn" in bioethics: analysis of empirical research in nine bioethics journals.

    Science.gov (United States)

    Wangmo, Tenzin; Hauri, Sirin; Gennet, Eloise; Anane-Sarpong, Evelyn; Provoost, Veerle; Elger, Bernice S

    2018-02-07

    A review of literature published a decade ago noted a significant increase in empirical papers across nine bioethics journals. This study provides an update on the presence of empirical papers in the same nine journals. It first evaluates whether the empirical trend is continuing as noted in the previous study, and second, how it is changing, that is, what are the characteristics of the empirical works published in these nine bioethics journals. A review of the same nine journals (Bioethics; Journal of Medical Ethics; Journal of Clinical Ethics; Nursing Ethics; Cambridge Quarterly of Healthcare Ethics; Hastings Center Report; Theoretical Medicine and Bioethics; Christian Bioethics; and Kennedy Institute of Ethics Journal) was conducted for a 12-year period from 2004 to 2015. Data obtained was analysed descriptively and using a non-parametric Chi-square test. Of the total number of original papers (N = 5567) published in the nine bioethics journals, 18.1% (n = 1007) collected and analysed empirical data. Journal of Medical Ethics and Nursing Ethics led the empirical publications, accounting for 89.4% of all empirical papers. The former published significantly more quantitative papers than qualitative, whereas the latter published more qualitative papers. Our analysis reveals no significant difference (χ2 = 2.857; p = 0.091) between the proportion of empirical papers published in 2004-2009 and 2010-2015. However, the increasing empirical trend has continued in these journals with the proportion of empirical papers increasing from 14.9% in 2004 to 17.8% in 2015. This study presents the current state of affairs regarding empirical research published in nine bioethics journals. In the quarter century of data that is available about the nine bioethics journals studied in two reviews, the proportion of empirical publications continues to increase, signifying a trend towards empirical research in bioethics. The growing volume is mainly attributable to two

  5. Comparison of physical and semi-empirical hydraulic models for flood inundation mapping

    Science.gov (United States)

    Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.

    2016-12-01

    Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first-responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off among accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance to potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) by using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, effect of spatial resolution in topographic data (e.g. 10m National Elevation Dataset vs. 0.3m LiDAR). Preliminary results suggest that semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models in flat, urbanized areas with controlled/managed river channels, regardless of topographic resolution. However, in places where there are topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.

  6. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
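
    The weighted-average idea can be sketched as follows for Poisson event counts: pool rates are first aligned with elicited homogenisation factors, a gamma prior is moment-matched to the homogenised pool, and the posterior mean shrinks the observed rate towards the pool mean. This is one standard empirical Bayes recipe offered for illustration, not necessarily the authors' exact estimator, and all numbers are made up.

```python
import numpy as np

def empirical_bayes_rate(k_event, t_event, pool_counts, pool_times, h_factors):
    """Shrink an event's observed rate towards a homogenised pool rate.

    `h_factors` are elicited homogenisation (scaling) constants used to align
    each pool member's frequency with the event of interest.  The gamma-prior
    moment fit below is one standard empirical Bayes recipe, offered as an
    illustration rather than the paper's exact estimator.
    """
    pool_counts = np.asarray(pool_counts, dtype=float)
    pool_times = np.asarray(pool_times, dtype=float)
    h = np.asarray(h_factors, dtype=float)

    rates = h * pool_counts / pool_times          # homogenised pool rates
    mu, var = rates.mean(), rates.var(ddof=1)     # moment-match a gamma prior
    beta = mu / var                               # prior rate parameter
    alpha = mu * beta                             # prior shape parameter

    # Posterior mean for a Poisson count with a gamma prior, i.e. a weighted
    # average of the observed rate and the pool (prior) mean.
    return (alpha + k_event) / (beta + t_event)

# Example with made-up numbers: 1 event in 20 years, pool of similar precursors.
print(empirical_bayes_rate(1, 20.0,
                           pool_counts=[3, 0, 2, 5],
                           pool_times=[40.0, 15.0, 30.0, 60.0],
                           h_factors=[1.0, 0.5, 2.0, 1.0]))
```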

  7. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focusses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitudes in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.

  8. Multiscale empirical modeling of the geomagnetic field: From storms to substorms

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Korth, H.; Gkioulidou, M.; Ukhorskiy, A. Y.; Merkin, V. G.

    2017-12-01

    An advanced version of the TS07D empirical geomagnetic field model, herein called SST17, is used to model the global picture of the geomagnetic field and its characteristic variations on both storm and substorm scales. The new SST17 model uses two regular expansions describing the equatorial currents with each having distinctly different scales, one corresponding to a thick and one to a thin current sheet relative to the thermal ion gyroradius. These expansions have an arbitrary distribution of currents in the equatorial plane that is constrained only by magnetometer data. This multi-scale description allows one to reproduce the current sheet thinning during the growth phase. Additionally, the model uses a flexible description of field-aligned currents that reproduces their spiral structure at low altitudes and provides a continuous transition from region 1 to region 2 current systems. The empirical picture of substorms is obtained by combining magnetometer data from Geotail, THEMIS, Van Allen Probes, Cluster II, Polar, IMP-8, GOES 8, 9, 10 and 12 and then binning this data based on similar values of the auroral index AL, its time derivative and the integral of the solar wind electric field parameter (from ACE, Wind, and IMP-8) in time over substorm scales. The performance of the model is demonstrated for several events, including the 3 July 2012 substorm, which had multi-probe coverage, and a series of substorms during the March 2008 storm. It is shown that the AL binning helps reproduce dipolarization signatures in the northward magnetic field Bz, while the solar wind electric field integral allows one to capture the current sheet thinning during the growth phase. The model allows one to trace the substorm dipolarization from the tail to the inner magnetosphere where the dipolarization of strongly stretched tail field lines causes a redistribution of the tail current resulting in an enhancement of the partial ring current in the premidnight sector.

  9. Understanding users’ motivations to engage in virtual worlds: A multipurpose model and empirical testing

    NARCIS (Netherlands)

    Verhagen, T.; Feldberg, J.F.M.; van den Hooff, B.J.; Meents, S.; Merikivi, J.

    2012-01-01

    Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds

  10. MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes

    Science.gov (United States)

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...

  11. An empirical model of the topside plasma density around 600 km based on ROCSAT-1 and Hinotori observations

    Science.gov (United States)

    Huang, He; Chen, Yiding; Liu, Libo; Le, Huijun; Wan, Weixing

    2015-05-01

    It is an urgent task to improve the ability of ionospheric empirical models to more precisely reproduce the plasma density variations in the topside ionosphere. Based on the Republic of China Satellite 1 (ROCSAT-1) observations, we developed a new empirical model of topside plasma density around 600 km under relatively quiet geomagnetic conditions. The model reproduces the ROCSAT-1 plasma density observations with a root-mean-square-error of 0.125 in units of lg(Ni(cm-3)) and reasonably describes the temporal and spatial variations of plasma density at altitudes in the range from 550 to 660 km. The model results are also in good agreement with observations from Hinotori, Coupled Ion-Neutral Dynamics Investigations/Communications/Navigation Outage Forecasting System satellites and the incoherent scatter radar at Arecibo. Further, we combined ROCSAT-1 and Hinotori data to improve the ROCSAT-1 model and built a new model (R&H model) after the consistency between the two data sets had been confirmed with the original ROCSAT-1 model. In particular, we studied the solar activity dependence of topside plasma density at a fixed altitude by R&H model and find that its feature slightly differs from the case when the orbit altitude evolution is ignored. In addition, the R&H model shows the merging of the two crests of equatorial ionization anomaly above the F2 peak, while the IRI_Nq topside option always produces two separate crests in this range of altitudes.

  12. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    Science.gov (United States)

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, in respect of participants, process and products, the present empirical study was conducted to find the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interview and the analysis of both the CET-4 teaching and testing…

  13. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  14. Development of empirical potential functions for the study of molecular geometry, and applications to chlorophyll a dimers

    Energy Technology Data Exchange (ETDEWEB)

    Oie, Tetsuro [Univ. of Rochester, NY (United States); Univ. of Kansas, Lawrence, KS (United States). Dept. of Chemistry

    1980-07-28

    The purpose of the present studies is twofold: (1) development of an empirical potential function (EPF) and (2) application of it to studies of the photoreaction center chlorophyll a dimer. The reliable estimation of geometric structures and energies of large molecules by quantum mechanical methods is not possible at the present time. An alternative method is, therefore, needed for the study of large molecular systems, and Chapter I is dedicated to the development of this tool, i.e., an empirical potential function, which could serve this purpose. Because of the large number of variable chemical compositions and functional groups characteristically present in a large molecule, it is important to include a large number of structurally diverse molecules in the development of the EPF. In Chapter II, the EPF is applied to study the geometrical structure of a chlorophyll a (Chl a) dimer, which is believed to exist at the photoreaction center of green plants and is known to play an essential role in photosynthetic energy conversion. Although various models have been proposed for this dimer structure, there is still a great need for information concerning the detailed geometric structure of this dimer. Therefore, in this chapter the structural stabilities of various dimer models are examined by the EPF, and detailed and quantitative information on the structure and stability of these models is provided.

  15. Detailed empirical models for the winds of early-type stars

    International Nuclear Information System (INIS)

    Olson, G.L.; Castor, J.I.

    1981-01-01

    Owing to the recent accumulation of ultraviolet data from the IUE satellite, of X-ray data from the Einstein (HEAO 2) satellite, of visible data from ground based electronic detectors, and of radio data from the Very Large Array (VLA) telescope, it is becoming possible to build much more complete models for the winds of early-type stars. The present work takes the empirical approach of assuming that there exists a coronal region at the base of a cool wind (T_e ≈ T_eff). This will be an extension of previous papers by Olson and by Cassinelli and Olson; however, refinements to the model will be presented, and the model will be applied to seven O stars and one B0 star. Ionization equilibria are computed to match the line strengths found in UV spectra. The coronal fluxes that are required to produce the observed abundance of O+5 are compared to the X-ray fluxes observed by the Einstein satellite

  16. An empirical test of stage models of e-government development: evidence from Dutch municipalities

    NARCIS (Netherlands)

    Rooks, G.; Matzat, U.; Sadowski, B.M.

    2017-01-01

    In this article we empirically test stage models of e-government development. We use Lee's classification to make a distinction between four stages of e-government: informational, requests, personal, and e-democracy. We draw on a comprehensive data set on the adoption and development of e-government

  17. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  18. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators are biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.
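
    The sketch below illustrates the source of the bias with a simplified calculation: product-of-position-frequency codon estimates assign probability mass to stop codons that never occur in coding sequences, and simply renormalising over the 61 sense codons is not the same as the paper's correction, which re-estimates the underlying nucleotide frequencies. The function names and the uniform placeholder frequencies are illustrative assumptions.

```python
import itertools
import numpy as np

NUCS = "ACGT"
STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code stop codons

def naive_codon_frequencies(pos_freqs):
    """Product-of-position-frequency ('F3x4'-style) codon frequencies.

    `pos_freqs` is a 3x4 array of nucleotide frequencies per codon position.
    """
    freqs = {}
    for codon in map("".join, itertools.product(NUCS, repeat=3)):
        p = np.prod([pos_freqs[i][NUCS.index(codon[i])] for i in range(3)])
        freqs[codon] = p
    return freqs

def renormalised_sense_frequencies(pos_freqs):
    """Drop stop codons and renormalise over the 61 sense codons.

    This simple renormalisation is NOT the bias correction proposed in the
    paper (which re-estimates the position frequencies themselves), but it
    makes the source of the bias visible: the naive products assign
    probability mass to stop codons that never appear in coding data.
    """
    freqs = naive_codon_frequencies(pos_freqs)
    stop_mass = sum(freqs[c] for c in STOPS)
    return {c: p / (1.0 - stop_mass) for c, p in freqs.items() if c not in STOPS}

pos_freqs = np.full((3, 4), 0.25)  # uniform placeholder position frequencies
naive = naive_codon_frequencies(pos_freqs)
sense = renormalised_sense_frequencies(pos_freqs)
print(len(sense), "sense codons; naive mass assigned to stop codons:",
      sum(naive[c] for c in STOPS))
```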

  19. The Role of Light and Music in Gambling Behaviour: An Empirical Pilot Study

    Science.gov (United States)

    Spenwyn, Jenny; Barrett, Doug J. K.; Griffiths, Mark D.

    2010-01-01

    Empirical research examining the situational characteristics of gambling and their effect on gambling behaviour is limited but growing. This experimental pilot investigation reports the first ever empirical study into the combined effects of both music and light on gambling behaviour. While playing an online version of roulette, 56 participants…

  20. Sources of Currency Crisis: An Empirical Analysis

    OpenAIRE

    Weber, Axel A.

    1997-01-01

    Two types of currency crisis models coexist in the literature: first generation models view speculative attacks as being caused by economic fundamentals which are inconsistent with a given parity. Second generation models claim self-fulfilling speculation as the main source of a currency crisis. Recent empirical research in international macroeconomics has attempted to distinguish between the sources of currency crises. This paper adds to this literature by proposing a new empirical approach ...

  1. An empirical study for measuring the success index of banking industry

    Directory of Open Access Journals (Sweden)

    Mohsen Mardani

    2012-08-01

    Full Text Available Measuring organization performance plays an important role in developing better strategic plans. In today's competitive environment, organizations strive for product quality, service delivery, reliability and customer satisfaction. These properties are not measurable by traditional financial criteria alone, and we need a method that can consider non-financial factors as well. The present study proposes a hybrid of the balanced scorecard (BSC) and data envelopment analysis (DEA) methods for an empirical study of the banking sector. The study proposes a model for assessing the performance of Tose`e Ta`avon Bank, which is an example of a governmental credit and financial services institute. The study determines the important factors associated with each of the four BSC components and uses the analytical hierarchy process to rank the measures. In each part of the BSC implementation, we use DEA to rank different units of the bank, and efficient and inefficient units are determined.

  2. an empirical study of poverty in calabar and its environs.

    African Journals Online (AJOL)

    DJFLEX

    2009-06-17

    Jun 17, 2009 ... one of the poorest nations in the world (CBN, 2001). ... rural development in poor regions, inadequate access to education ...

  3. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich Kf parameter is correlated to sediment surface area (r2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
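
A Freundlich isotherm fit of the kind described (S = Kf * C^n, with n the exponent referenced in the abstract) can be reproduced with a standard nonlinear regression. A minimal sketch with invented batch-sorption data, not the INEEL measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    """Freundlich isotherm: sorbed concentration as a function of aqueous concentration."""
    return kf * np.power(c, n)

# Hypothetical batch-sorption data: aqueous U (ug/L) vs. sorbed U (ug/g)
c_aq = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
s_sorbed = np.array([2.1, 7.9, 13.0, 44.0, 71.0])

(kf, n), cov = curve_fit(freundlich, c_aq, s_sorbed, p0=[1.0, 1.0])
kf_err, n_err = np.sqrt(np.diag(cov))
print(f"K_f = {kf:.2f} +/- {kf_err:.2f}, n = {n:.2f} +/- {n_err:.2f}")
```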

  4. Does age impact self-actualization needs?—an empirical study ...

    African Journals Online (AJOL)

    Thus, although this study supports the existence of needs, the chronology of their dominance may not be as per Maslow's hierarchy pyramid. This empirical study establishes that there may not be progressive increase in self-actualization need as age progresses. Keywords: Maslow, need priorities, age, self-actualization ...

  5. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    Science.gov (United States)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
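
The full coupling to a Newman-type porous electrode model is beyond a short example, but the role of the added electrolyte depletion function can be illustrated with a toy capacity-fade curve: a gradual power-law loss plus a sigmoidal term that produces the sudden end-of-life drop. All parameter values below are illustrative, not fitted to the 20 Ah cells.

```python
import numpy as np

def capacity_retention(cycles, a=0.0006, z=0.8, n_deplete=900.0, width=40.0):
    """Toy semi-empirical fade curve: gradual power-law loss plus a sigmoidal
    'electrolyte depletion' term that triggers a sudden capacity drop."""
    gradual = a * np.power(cycles, z)                                  # slow capacity loss
    depletion = 1.0 / (1.0 + np.exp(-(cycles - n_deplete) / width))    # knee near n_deplete
    return np.clip(1.0 - gradual - 0.5 * depletion, 0.0, 1.0)

cycles = np.arange(0, 1201, 100)
for n, q in zip(cycles, capacity_retention(cycles)):
    print(f"cycle {n:5d}: retention {q:.3f}")
```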

  6. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Vol. 34, No. 9 (2004), pp. 1998-2003, ISSN 0273-1177. R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651. Institutional research plan: CEZ:AV0Z3042911. Keywords: plasma temperatures * topside ionosphere * empirical models. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 0.548, year: 2004

  7. Multimission empirical ocean tide modeling for shallow waters and polar seas

    DEFF Research Database (Denmark)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2011-01-01

    A new global ocean tide model named DTU10 (developed at Technical University of Denmark) representing all major diurnal and semidiurnal tidal constituents is proposed based on an empirical correction to the global tide model FES2004 (Finite Element Solutions), with residual tides determined using...... tide gauge sets show that the new tide model fits the tide gauge measurements favorably to other state of the art global ocean tide models in both the deep and shallow waters, especially in the Arctic Ocean and the Southern Ocean. One example is a comparison with 207 tide gauge data in the East Asian...... marginal seas where the root-mean-square agreement improved by 35.12%, 22.61%, 27.07%, and 22.65% (M-2, S-2, K-1, and O-1) for the DTU10 tide model compared with the FES2004 tide model. A similar comparison in the Arctic Ocean with 151 gauge data improved by 9.93%, 0.34%, 7.46%, and 9.52% for the M-2, S-2...

  8. An empirical study of the information premium on electricity markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Biegler-König, Richard; Kiesel, Rüdiger

    2013-01-01

    Due to the non-storability of electricity and the resulting lack of arbitrage-based arguments to price electricity forward contracts, a significant time-varying risk premium is exhibited. Using EEX data during the introduction of emission certificates and the German “Atom Moratorium” we show that a significant part of the risk premium in electricity forwards is due to different information sets in spot and forward markets. In order to show the existence of the resulting information premium and to analyse its size we design an empirical method based on techniques relating to enlargement of filtrations and the structure of Hilbert spaces. - Highlights: ► Electricity is non-storable and the classical spot–forward-relationship is invalid. ► Future information will cause an information premium for forward contracts. ► We model this premium mathematically using enlargement of filtrations. ► We develop a statistical method testing for the information premium empirically. ► We apply the test to the 2nd phase of the EUETS and the German “Atom Moratorium”

  9. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  10. Empirically Derived Dehydration Scoring and Decision Tree Models for Children With Diarrhea: Assessment and Internal Validation in a Prospective Cohort Study in Dhaka, Bangladesh.

    Science.gov (United States)

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Rege, Soham; Chu, Chieh; Schmid, Christopher H; Alam, Nur H

    2015-08-18

    Diarrhea remains one of the most common and most deadly conditions affecting children worldwide. Accurately assessing dehydration status is critical to determining treatment course, yet no clinical diagnostic models for dehydration have been empirically derived and validated for use in resource-limited settings. In the Dehydration: Assessing Kids Accurately (DHAKA) prospective cohort study, a random sample of children under 5 with acute diarrhea was enrolled between February and June 2014 in Bangladesh. Local nurses assessed children for clinical signs of dehydration on arrival, and then serial weights were obtained as subjects were rehydrated. For each child, the percent weight change with rehydration was used to classify subjects with severe dehydration (>9% weight change), some dehydration (3-9%), or no dehydration (<3%), and two candidate models, the DHAKA Dehydration Score and the DHAKA Dehydration Tree, were derived. Models were assessed for their accuracy using the area under their receiver operating characteristic curve (AUC) and for their reliability through repeat clinical exams. Bootstrapping was used to internally validate the models. A total of 850 children were enrolled, with 771 included in the final analysis. Of the 771 children included in the analysis, 11% were classified with severe dehydration, 45% with some dehydration, and 44% with no dehydration. Both the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant AUCs of 0.79 (95% CI = 0.74, 0.84) and 0.76 (95% CI = 0.71, 0.80), respectively, for the diagnosis of severe dehydration. Additionally, the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant positive likelihood ratios of 2.0 (95% CI = 1.8, 2.3) and 2.5 (95% CI = 2.1, 2.8), respectively, and significant negative likelihood ratios of 0.23 (95% CI = 0.13, 0.40) and 0.28 (95% CI = 0.18, 0.44), respectively, for the diagnosis of severe dehydration. Both models demonstrated 90% agreement between independent raters and good
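
The reported positive and negative likelihood ratios follow directly from the sensitivity and specificity of each model at its diagnostic cut-off. A minimal sketch of that arithmetic; the 2x2 counts below are invented for illustration, not the DHAKA data:

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive and negative likelihood ratios from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical counts for a 'severe dehydration' classification vs. the weight-change reference
lr_pos, lr_neg = likelihood_ratios(tp=70, fp=240, fn=15, tn=446)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```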

  11. Empirical study on flow experience in China tourism e-commerce market

    Directory of Open Access Journals (Sweden)

    Jianling Wang

    2015-04-01

    Full Text Available Purpose: While tourism e-commerce is developing rapidly in China, these channels are still new to both web providers and web consumers, so understanding the nature of these media is of great importance. This study investigates the mediation effects of flow experience on the relationship between motivation and behavior intention in tourism e-commerce. Design/methodology/approach: Based on the technology acceptance model, an empirical study is designed to test this relationship. We estimated the measurement model with 13 manifest indicators and 4 latent constructs by CFA to assess the reliability and validity of the construct measures, then tested hypotheses by OLS regression and a formal three-step mediation procedure. Findings: Overall, the results reveal that trust is incorporated in motivation and plays its role together with other motivations; telepresence and concentration are confirmed as components of flow experience, and both partially mediate the relationship. Research limitations/implications: This study demonstrates that to improve consumers' usage adoption, marketers should pay attention not only to consumers' motivation but also to areas such as flow experience. Originality/value: This study takes flow experience as a new perspective to explore China tourism e-commerce, estimates its measurement and tests its role between motivation and behavior intention.

  12. EMERGE - an empirical model for the formation of galaxies since z ˜ 10

    Science.gov (United States)

    Moster, Benjamin P.; Naab, Thorsten; White, Simon D. M.

    2018-06-01

    We present EMERGE, an Empirical ModEl for the foRmation of GalaxiEs, describing the evolution of individual galaxies in large volumes from z ˜ 10 to the present day. We assign a star formation rate to each dark matter halo based on its growth rate, which specifies how much baryonic material becomes available, and the instantaneous baryon conversion efficiency, which determines how efficiently this material is converted to stars, thereby capturing the baryonic physics. Satellites are quenched following the delayed-then-rapid model, and they are tidally disrupted once their subhalo has lost a significant fraction of its mass. The model is constrained with observed data extending out to high redshift. The empirical relations are very flexible, and the model complexity is increased only if required by the data, assessed by several model selection statistics. We find that for the same final halo mass galaxies can have very different star formation histories. Galaxies that are quenched at z = 0 typically have a higher peak star formation rate compared to their star-forming counterparts. EMERGE predicts stellar-to-halo mass ratios for individual galaxies and introduces scatter self-consistently. We find that at fixed halo mass, passive galaxies have a higher stellar mass on average. The intracluster mass in massive haloes can be up to eight times larger than the mass of the central galaxy. Clustering for star-forming and quenched galaxies is in good agreement with observational constraints, indicating a realistic assignment of galaxies to haloes.
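
The key empirical ingredient described here is the instantaneous baryon conversion efficiency as a function of halo mass. A double power law peaking at a characteristic halo mass is a common parameterization of such an efficiency; the sketch below uses that generic form with illustrative parameter values, not the values constrained in EMERGE.

```python
import numpy as np

def conversion_efficiency(m_halo, eps_n=0.02, m1=10**11.6, beta=1.8, gamma=0.6):
    """Double power-law baryon conversion efficiency vs. halo mass (illustrative values)."""
    x = m_halo / m1
    return 2.0 * eps_n / (x**(-beta) + x**gamma)

def star_formation_rate(m_halo, dmdt_halo, f_baryon=0.156):
    """SFR = efficiency * baryonic growth rate of the halo."""
    return conversion_efficiency(m_halo) * f_baryon * dmdt_halo

for logm in (10.5, 11.6, 13.0):
    sfr = star_formation_rate(10**logm, dmdt_halo=100.0)  # halo growth rate in Msun/yr
    print(f"log M_halo = {logm}: SFR = {sfr:.3f} Msun/yr")
```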

  13. Empirical estimates to reduce modeling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges

    International Nuclear Information System (INIS)

    Mishra, U; Jastrow, J D; Matamala, R; Fan, Z; Miller, R M; Hugelius, G; Kuhry, P; Koven, C D; Riley, W J; Harden, J W; Ping, C L; Michaelson, G J; McGuire, A D; Tarnocai, C; Schaefer, K; Schuur, E A G; Jorgenson, M T; Hinzman, L D

    2013-01-01

    The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges. (letter)

  14. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  15. Empirical membrane lifetime model for heavy duty fuel cell systems

    Science.gov (United States)

    Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik

    2016-12-01

    Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
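
A log-linear life-stress model of the kind described scales a characteristic (Weibull) life from accelerated to use conditions through, for example, an inverse power law in one stress and an Arrhenius-type exponential in temperature. The sketch below shows only that scaling idea with hypothetical stresses and coefficients; it is not the published fuel cell model.

```python
import numpy as np

def characteristic_life(voltage, temperature_k, a=12.0, n=3.0, b=4500.0):
    """Illustrative log-linear life-stress relation: inverse power law in cell voltage,
    Arrhenius-type exponential in absolute temperature."""
    return np.exp(a) * voltage**(-n) * np.exp(b / temperature_k)

# Extrapolate from an accelerated test condition to a milder use condition
life_accel = characteristic_life(voltage=0.90, temperature_k=358.0)
life_use = characteristic_life(voltage=0.70, temperature_k=338.0)
print(f"acceleration factor ~ {life_use / life_accel:.1f}")
```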

  16. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  17. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    Science.gov (United States)

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
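
The HVL comparison in both records rests on interpolating measured transmission versus attenuator thickness to the thickness giving 50% transmission. Below is a minimal sketch of the traditional two-point semilogarithmic (exponential) interpolation used as the baseline method; the paper's Lambert W interpolation replaces this exponential form with its generalized Lambert-Beer expression, which is not reproduced here, and the example numbers are invented.

```python
import math

def hvl_semilog(t1, y1, t2, y2, target=0.5):
    """Two-point semilogarithmic (exponential) interpolation to the thickness at which
    transmission equals `target` (0.5 for HVL, 0.25 for QVL)."""
    # Assume y = exp(a + b*t) between the two measured points
    b = (math.log(y2) - math.log(y1)) / (t2 - t1)
    a = math.log(y1) - b * t1
    return (math.log(target) - a) / b

# Hypothetical transmission measurements bracketing 50% (thickness in mm Al)
print(f"HVL ~ {hvl_semilog(2.0, 0.58, 4.0, 0.41):.2f} mm Al")
```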

  18. Cycling empirical antibiotic therapy in hospitals: meta-analysis and models.

    Directory of Open Access Journals (Sweden)

    Pia Abel zur Wiesch

    2014-06-01

    Full Text Available The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies came to different conclusions regarding the usefulness of rotating first-line therapy (cycling. Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and find that cycling reduced the incidence rate/1000 patient days of both total infections by 4.95 [9.43-0.48] and resistant infections by 7.2 [14.00-0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate metaregressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing. We make the realistic assumption that therapy is changed when first line treatment is ineffective, which we call "adjustable cycling/mixing". In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that "adjustable cycling" is especially useful to suppress emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that too long cycling periods are detrimental. Our results suggest that

  19. Relative performance of empirical and physical models in assessing the seasonal and annual glacier surface mass balance of Saint-Sorlin Glacier (French Alps)

    Science.gov (United States)

    Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime

    2018-04-01

    This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.

  20. Relative performance of empirical and physical models in assessing the seasonal and annual glacier surface mass balance of Saint-Sorlin Glacier (French Alps

    Directory of Open Access Journals (Sweden)

    M. Réveillet

    2018-04-01

    Full Text Available This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996–2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.

  1. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    Science.gov (United States)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

    Hard turning is increasingly employed in machining, lately, to replace the time-consuming conventional turning followed by grinding process. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most of them were developed for a particular work-tool-environment combination. No aggregate model has been developed that can be used to predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different hardnesses of workpiece (HRC40, HRC48 and HRC56) while turning by coated carbide inserts with different configurations (SNMM and SNMG) under both dry and high pressure coolant conditions. Unlike other developed models, this model includes the use of dummy variables along with the base empirical equation to entail the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated adopting the Exponential Associate Function using the experimental results. The coefficient of the dummy variable reflects the shifting of the response from one set of machining conditions to another set of machining conditions, which is determined by simple linear regression. The independent cutting parameters (speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good resemblance with experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for the stated conditions.

  2. Review essay: empires, ancient and modern.

    Science.gov (United States)

    Hall, John A

    2011-09-01

    This essay draws attention to two books on empires by historians which deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth century empires, noting that their demise was caused most of all by the failure of balance of power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.

  3. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: {yields} We developed an informatics-based methodology to minimize the necessary information. {yields} We applied this methodology to descriptors from semi-empirical calculations. {yields} We developed a validation approach for maintaining information from screening. {yields} We classified intermetallics and identified patterns of composition and structure.

  4. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  5. Developmental Relationship Programs: An Empirical Study of the Impact of Peer-Mentoring Programs

    Science.gov (United States)

    Shojai, Siamack; Davis, William J.; Root, Patricia S.

    2014-01-01

    This paper provides an empirical analysis of the impact and effectiveness of developmental relationships provided through academic intervention programs at a medium-size master's level public university in the Northeastern United States. The programs' curriculum follows the Model of Strategic Learning's four pillars of learning and is administered…

  6. Autonomous e-coaching in the wild: Empirical validation of a model-based reasoning system

    OpenAIRE

    Kamphorst, B.A.; Klein, M.C.A.; van Wissen, A.

    2014-01-01

    Autonomous e-coaching systems have the potential to improve people's health behaviors on a large scale. The intelligent behavior change support system eMate exploits a model of the human agent to support individuals in adopting a healthy lifestyle. The system attempts to identify the causes of a person's non-adherence by reasoning over a computational model (COMBI) that is based on established psychological theories of behavior change. The present work presents an extensive, monthlong empiric...

  7. Complex decision-making: initial results of an empirical study

    OpenAIRE

    Pier Luigi Baldi

    2011-01-01

    A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  8. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data on the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference on the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and it can be used as an alternative GARCH model to those with other fat-tailed distributions
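
For context, the baseline being extended is a GARCH(1,1) process, in which returns have a time-varying conditional variance and the error distribution (standard normal below, a rational fat-tailed density in the paper) is the ingredient the study varies. A minimal simulation sketch:

```python
import numpy as np

def simulate_garch11(n, omega=1e-5, alpha=0.08, beta=0.90, seed=0):
    """Simulate returns r_t = sigma_t * eps_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2 and eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha - beta))  # start at the unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

returns, variance = simulate_garch11(1000)
print(f"sample kurtosis = {np.mean(returns**4) / np.mean(returns**2) ** 2:.2f}")  # > 3: fat tails
```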

  9. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

    Empirical melt (EM) models are often preferred to surface energy balance (SEB) models to calculate melt amounts of snow and ice in hydrological modelling of high-elevation catchments. The most common reasons to support this decision are that, in comparison to SEB models, EM models require lower levels of meteorological data, complexity and computational costs. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot be easily reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model on dry high-elevation sites where sublimation amounts - which are not explicitly accounted for by the EM model - represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWS), collected during the 2013-14 ablation season at elevations between 3466 and 4775 m asl on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies in Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied at each study site over the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we
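
The ETI model referenced here computes melt from two index variables, air temperature and net shortwave radiation, with empirical factors calibrated against a reference; sublimation has no dedicated term, which is the limitation examined. A minimal sketch of a commonly used ETI formulation; the coefficient values are placeholders to be calibrated, not the study's:

```python
import numpy as np

def eti_melt(air_temp_c, sw_in, albedo, tf=0.05, srf=0.0094, t_threshold=1.0):
    """Enhanced temperature index melt (mm w.e. per time step):
    M = TF * T + SRF * (1 - albedo) * G  when T > threshold, else 0."""
    melt = tf * air_temp_c + srf * (1.0 - albedo) * sw_in
    return np.where(air_temp_c > t_threshold, np.maximum(melt, 0.0), 0.0)

# Hourly example: air temperature (deg C), incoming shortwave (W m-2), surface albedo
t = np.array([-2.0, 1.5, 4.0, 6.0])
g = np.array([0.0, 250.0, 700.0, 850.0])
print(eti_melt(t, g, albedo=0.35))
```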

  10. Empirical molecular-dynamics study of diffusion in liquid semiconductors

    Science.gov (United States)

    Yu, W.; Wang, Z. Q.; Stroud, D.

    1996-11-01

    We report the results of an extensive molecular-dynamics study of diffusion in liquid Si and Ge (l-Si and l-Ge) and of impurities in l-Ge, using empirical Stillinger-Weber (SW) potentials with several choices of parameters. We use a numerical algorithm in which the three-body part of the SW potential is decomposed into products of two-body potentials, thereby permitting the study of large systems. One choice of SW parameters agrees very well with the observed l-Ge structure factors. The diffusion coefficients D(T) at melting are found to be approximately 6.4×10⁻⁵ cm²/s for l-Si, in good agreement with previous calculations, and about 4.2×10⁻⁵ and 4.6×10⁻⁵ cm²/s for two models of l-Ge. In all cases, D(T) can be fitted to an activated temperature dependence, with activation energies E_d of about 0.42 eV for l-Si, and 0.32 or 0.26 eV for two models of l-Ge, as calculated from either the Einstein relation or from a Green-Kubo-type integration of the velocity autocorrelation function. D(T) for Si impurities in l-Ge is found to be very similar to the self-diffusion coefficient of l-Ge. We briefly discuss possible reasons why the SW potentials give D(T)'s substantially lower than ab initio predictions.
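
Both routes to D(T) mentioned above are straightforward to state; the Einstein-relation route fits the slope of the mean-squared displacement. A minimal sketch for a trajectory array, where a synthetic random walk stands in for MD output and units are arbitrary:

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Einstein relation in 3D: MSD(t) ~ 6 D t; fit the late-time slope of the MSD.
    positions: array of shape (n_frames, n_atoms, 3)."""
    n_frames = positions.shape[0]
    disp = positions - positions[0]                  # displacement from the initial frame
    msd = np.mean(np.sum(disp**2, axis=2), axis=1)   # average over atoms
    t = np.arange(n_frames) * dt
    slope = np.polyfit(t[n_frames // 2:], msd[n_frames // 2:], 1)[0]
    return slope / 6.0

# Synthetic random-walk trajectory standing in for MD output
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.01, size=(2000, 64, 3)), axis=0)
print(f"D ~ {diffusion_coefficient(traj, dt=1.0):.2e} (length^2 / time)")
```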

  11. Comparing cycling world hour records, 1967-1996: modeling with empirical data.

    Science.gov (United States)

    Bassett, D R; Kyle, C R; Passfield, L; Broker, J P; Burke, E R

    1999-11-01

    The world hour record in cycling has increased dramatically in recent years. The present study was designed to compare the performances of former/current record holders, after adjusting for differences in aerodynamic equipment and altitude. Additionally, we sought to determine the ideal elevation for future hour record attempts. The first step was constructing a mathematical model to predict power requirements of track cycling. The model was based on empirical data from wind-tunnel tests, the relationship of body size to frontal surface area, and field power measurements using a crank dynamometer (SRM). The model agreed reasonably well with actual measurements of power output on elite cyclists. Subsequently, the effects of altitude on maximal aerobic power were estimated from published research studies of elite athletes. This information was combined with the power requirement equation to predict what each cyclist's power output would have been at sea level. This allowed us to estimate the distance that each rider could have covered using state-of-the-art equipment at sea level. According to these calculations, when racing under equivalent conditions, Rominger would be first, Boardman second, Merckx third, and Indurain fourth. In addition, about 60% of the increase in hour record distances since Bracke's record (1967) have come from advances in technology and 40% from physiological improvements. To break the current world hour record, field measurements and the model indicate that a cyclist would have to deliver over 440 W for 1 h at sea level, or correspondingly less at altitude. The optimal elevation for future hour record attempts is predicted to be about 2500 m for acclimatized riders and 2000 m for unacclimatized riders.
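
The power-requirement side of such a model is dominated by aerodynamic drag plus rolling resistance, with an air-density correction for altitude. The sketch below shows that generic relationship with illustrative rider parameters; it is not the authors' calibrated equation.

```python
import math

def power_required(speed_ms, cda=0.22, crr=0.0025, mass_kg=80.0,
                   rho=1.225, drivetrain_eff=0.976):
    """Steady-state power on a flat track: aerodynamic drag + rolling resistance."""
    p_aero = 0.5 * rho * cda * speed_ms**3
    p_roll = crr * mass_kg * 9.81 * speed_ms
    return (p_aero + p_roll) / drivetrain_eff

def air_density(altitude_m, rho0=1.225, scale_height_m=8500.0):
    """Simple exponential atmosphere for the altitude correction."""
    return rho0 * math.exp(-altitude_m / scale_height_m)

v = 52.0 / 3.6  # a 52 km/h hour-record pace
print(f"sea level: {power_required(v):.0f} W")
print(f"2500 m:    {power_required(v, rho=air_density(2500.0)):.0f} W")
```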

  12. Reverse logistics: an empirical study for operational framework

    International Nuclear Information System (INIS)

    Yusuf, I.

    2013-01-01

    This paper presents a framework of reverse logistics optimizing stakeholder gain, social gain, economic gain and environmental gain. It identifies the roadblocks that prevail in the recycling industry and describes various types of returns and wastes. The framework of reverse logistics is evolved on the basis of what actually happens to the items shown in tables 1-4, disposed of from the industries shown in table 6. The rejected items require environmental disposal, passing through the different phases described in the flow of the operational framework. An operational framework of reverse logistics is developed by studying fifty organizations. In addition, three best practices of reverse logistics are proposed by consolidating experiential information and rich hands-on industrial experience in the supply chain and reverse logistics area. The research proposes the Social, Stakeholder, Economic and Environmental (SSEE) sustained gain model, optimizing the benefits of stakeholders, and highlights the variety of waste and its operational methodology in Pakistani industry. The proposed framework does not include hospital waste, radioactive waste, hazardous materials waste, municipal waste, agricultural waste and cold chain waste like meat, milk, etc. The operational framework is the existing way of doing things that takes waste materials from the point of origin to the point of recycling. A better understanding of this framework may help researchers and front-line managers to develop better, more accurate models for effective and sustainable utilization of waste materials, benefiting organizations and society by simultaneously enhancing cost effectiveness and improving environmental awareness. The paper provides an operational framework of reverse logistics and the SSEE sustained gain model. Specific applications are examined through empirical research. (author)

  13. An Empirical Study about China: Gender Equity in Science Education.

    Science.gov (United States)

    Wang, Jianjun; Staver, John R.

    A data base representing a random sample of more than 10,000 grade 9 students in an SISS (Second IEA Science Study) Extended Study (SES), a key project supported by the China State Commission of Education in the late 1980s, was employed in this study to investigate gender equity in student science achievement in China. This empirical data analysis…

  14. An empirical investigation of compliance and enforcement problems

    DEFF Research Database (Denmark)

    Kronbak, Lone Grønbæk; Jensen, Frank

    2011-01-01

    The paper contributes to the literature by investigating compliance and enforcement in the empirical case of a mixed trawl fishery targeting Norway lobster in the Kattegat and Skagerrak, north of Denmark, with the help of a simulation model. The paper presents results from two simulation models of the case study: one.... Another conclusion from the case study is that only small welfare effects are obtained by increasing enforcement efforts to reduce non-compliance....

  15. Empirical evidence of design-related bias in studies of diagnostic tests

    NARCIS (Netherlands)

    Lijmer, J. G.; Mol, B. W.; Heisterkamp, S.; Bonsel, G. J.; Prins, M. H.; van der Meulen, J. H.; Bossuyt, P. M.

    1999-01-01

    CONTEXT: The literature contains a large number of potential biases in the evaluation of diagnostic tests. Strict application of appropriate methodological criteria would invalidate the clinical application of most study results. OBJECTIVE: To empirically determine the quantitative effect of study

  16. Testing seasonal and long-term controls of streamwater DOC using empirical and process-based models.

    Science.gov (United States)

    Futter, Martyn N; de Wit, Heleen A

    2008-12-15

    Concentrations of dissolved organic carbon (DOC) in surface waters are increasing across Europe and parts of North America. Several mechanisms have been proposed to explain these increases including reductions in acid deposition, change in frequency of winter storms and changes in temperature and precipitation patterns. We used two modelling approaches to identify the mechanisms responsible for changing surface water DOC concentrations. Empirical regression analysis and INCA-C, a process-based model of stream-water DOC, were used to simulate long-term (1986-2003) patterns in stream water DOC concentrations in a small boreal stream. Both modelling approaches successfully simulated seasonal and inter-annual patterns in DOC concentration. In both models, seasonal patterns of DOC concentration were controlled by hydrology and inter-annual patterns were explained by climatic variation. There was a non-linear relationship between warmer summer temperatures and INCA-C predicted DOC. Only the empirical model was able to satisfactorily simulate the observed long-term increase in DOC. The observed long-term trends in DOC are likely to be driven by in-soil processes controlled by SO4(2-) and Cl(-) deposition, and to a lesser extent by temperature-controlled processes. Given the projected changes in climate and deposition, future modelling and experimental research should focus on the possible effects of soil temperature and moisture on organic carbon production, sorption and desorption rates, and chemical controls on organic matter solubility.

  17. Complex decision-making: initial results of an empirical study

    Directory of Open Access Journals (Sweden)

    Pier Luigi Baldi

    2011-09-01

    Full Text Available A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  18. Theoretical and Empirical Descriptions of Thermospheric Density

    Science.gov (United States)

    Solomon, S. C.; Qian, L.

    2004-12-01

    The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of change in the ephemeris of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.

  19. Decoupling among CSR policies, programs, and impacts : An empirical study

    NARCIS (Netherlands)

    Graafland, Johan; Smid, Hugo

    2016-01-01

    There are relatively few empirical studies on the impacts of corporate social responsibility (CSR) policies and programs. This article addresses the research gap by analyzing the incidence of, and the conditions that affect, decoupling (defined as divergence) among CSR policies, implementation of

  20. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

    Full Text Available The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids in a firm’s effort to create a superior ‘innovation capability’. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts as well as poses significant challenges for management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations for the composition and the structures of the resource compositions have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose in assisting the decision-making that must consider the resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorized list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to gain a model to assess the performance of different business model configurations. Furthermore, different applications for the developed resource configuration may be envisioned.

  1. PENENTUAN BENTUK FUNGSI MODEL EMPIRIK: STUDI KASUS PERMINTAAN KENDARAAN RODA EMPAT BARU

    Directory of Open Access Journals (Sweden)

    Andryan Setyadharma

    2012-01-01

    Full Text Available In many cases, the determination of the form of the regression function of an empirical model, between the linear model and the log-linear model, is neglected when someone starts research. Someone concludes the best model only by comparing the R2 values from the respective function forms and determines the best form of the function model only based on the highest R2 value. This is clearly wrong. This study attempted to find the best regression function model by using two kinds of tests: the MacKinnon, White and Davidson test (MWD test) and the Bera and McAleer test (B-M test). This study showed that the two forms of the empirical function models - both the linear and log-linear functions - could be used to estimate the demand for new four-wheel vehicles in Indonesia. Furthermore, checking using the classical assumptions, we found that the log-linear function model is the best model to estimate the demand for new four-wheel vehicles in Indonesia. Keywords: empirical model, linear model, log-linear model
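
The MacKinnon-White-Davidson (MWD) test mentioned above chooses between linear and log-linear specifications by adding to each regression an auxiliary variable built from the fitted values of the competing form and testing its significance. A minimal sketch on synthetic data; the variable names and coefficients are illustrative, not the study's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.uniform(50, 200, 300)
price = rng.uniform(10, 30, 300)
demand = np.exp(0.5 + 0.006 * income - 0.02 * price + rng.normal(0, 0.05, 300))

X = sm.add_constant(np.column_stack([income, price]))

lin = sm.OLS(demand, X).fit()            # linear specification
log = sm.OLS(np.log(demand), X).fit()    # log-linear specification

# Z1 = ln(fitted linear) - fitted log-linear; significant in the linear model => reject linear form
z1 = np.log(lin.fittedvalues) - log.fittedvalues
p_z1 = sm.OLS(demand, np.column_stack([X, z1])).fit().pvalues[-1]

# Z2 = exp(fitted log-linear) - fitted linear; significant in the log-linear model => reject log-linear form
z2 = np.exp(log.fittedvalues) - lin.fittedvalues
p_z2 = sm.OLS(np.log(demand), np.column_stack([X, z2])).fit().pvalues[-1]

print(f"p(Z1 in linear model) = {p_z1:.3f}, p(Z2 in log-linear model) = {p_z2:.3f}")
```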

  2. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    Science.gov (United States)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a duration model for ground motion, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motive of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Besides that, a significantly lower aleatory variability (σ) makes it a potentially viable alternative to classical regression (on response spectral ordinates) based GMPEs for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
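
The core of the FAS-plus-duration approach is random vibration theory: the FAS is filtered by the single-degree-of-freedom oscillator transfer function, spectral moments give the root-mean-square response, and a peak factor converts that to an expected spectral ordinate. A heavily simplified sketch, using a Davenport-type peak factor, no oscillator duration correction, and a toy FAS rather than a real seismological scenario:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept local to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def psa_from_fas(freq, fas, duration, f_osc, damping=0.05):
    """Expected pseudo-spectral acceleration from an acceleration FAS via RVT."""
    # Pseudo-acceleration transfer function of a damped SDOF oscillator
    h = f_osc**2 / np.sqrt((f_osc**2 - freq**2) ** 2 + (2.0 * damping * freq * f_osc) ** 2)
    y2 = (fas * h) ** 2
    m0 = 2.0 * _trapz(y2, freq)                                  # zeroth spectral moment
    m2 = 2.0 * _trapz((2.0 * np.pi * freq) ** 2 * y2, freq)      # second spectral moment
    y_rms = np.sqrt(m0 / duration)
    n_z = max(duration * np.sqrt(m2 / m0) / np.pi, 1.33)         # expected number of zero crossings
    ln_n = np.log(n_z)
    peak_factor = np.sqrt(2.0 * ln_n) + 0.5772 / np.sqrt(2.0 * ln_n)  # Davenport approximation
    return peak_factor * y_rms

# Toy acceleration FAS (arbitrary units) with a smooth high-frequency roll-off
freq = np.linspace(0.1, 100.0, 5000)
fas = 0.05 * freq / (1.0 + (freq / 3.0) ** 2) * np.exp(-0.02 * freq)
for f0 in (1.0, 5.0, 10.0):
    print(f"PSA({f0:4.1f} Hz) ~ {psa_from_fas(freq, fas, duration=8.0, f_osc=f0):.4f}")
```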

  3. Development and evaluation of an empirical diurnal sea surface temperature model

    Science.gov (United States)

    Weihs, R. R.; Bourassa, M. A.

    2013-12-01

    An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using observations of high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean from low mechanical or shear produced turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation causes heating of the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed [or bulk] layer on the order of a few degrees. It has been shown that capturing the diurnal cycle is important for a variety of applications, including surface heat flux estimates, which have been shown to be underestimated when neglecting diurnal warming, and satellite and buoy calibrations, which can be complicated because of the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MTG's SEVIRI SST experimental hourly data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because the inputs need to incorporate diurnal and air-sea interactive processes, which are vital to the ocean surface dynamics, with a high enough temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), forming a set of coefficients used to fit the data. The coefficient data are combined with
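    As a rough illustration of the Gaussian fitting step described above, the sketch below fits a single-peak Gaussian to a synthetic hourly warming signal; the amplitude, peak hour and width are placeholder values, not SEVIRI-derived coefficients.

```python
# Minimal sketch: fit a single-peak Gaussian to an hourly diurnal warming signal
# (SST minus the pre-dawn foundation SST); the data here are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, t_peak, width):
    """Diurnal warming (K) as a Gaussian bump centred on the afternoon peak."""
    return amplitude * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

rng = np.random.default_rng(0)
hours = np.arange(24.0)
# Hypothetical warming: 0.8 K peak near 15:00 local time plus measurement noise.
warming = gaussian(hours, 0.8, 15.0, 3.0) + rng.normal(0, 0.05, hours.size)

popt, pcov = curve_fit(gaussian, hours, warming, p0=[0.5, 14.0, 2.0])
print("amplitude=%.2f K, peak=%.1f h, width=%.1f h" % tuple(popt))
```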

  4. Conceptual modeling in systems biology fosters empirical findings: the mRNA lifecycle.

    Directory of Open Access Journals (Sweden)

    Dov Dori

    Full Text Available One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to concurrently model the structure and behavior of biological systems. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems formally and intuitively, both diagrammatically and textually, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated accordingly while maintaining consistency.

  5. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    Full Text Available To highlight the shortcomings in conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) toward bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power and drying time. Kinetics was analyzed through empirical modeling of drying data against 10 conventional thin-layer drying equations that were further improvised through the incorporation of Arrhenius, exponential and linear-type expressions. In total, 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica reported similar yields despite a drastic reduction in extraction time (210 s, as against the average time of 10 h in the Soxhlet apparatus). Extract yield for MAE of C. citratus was higher than the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
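    The non-linear regression against a Midilli-type equation can be illustrated as follows; this is a minimal sketch with made-up moisture-ratio data rather than the study's measurements, and assumes scipy is available.

```python
# Minimal sketch: fit the Midilli thin-layer drying equation
# MR = a*exp(-k*t**n) + b*t to moisture-ratio data by non-linear regression.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t ** n) + b * t

t_min = np.array([0, 1, 2, 3, 4, 6, 8, 10, 12], dtype=float)   # illustrative times
mr = np.array([1.00, 0.72, 0.53, 0.40, 0.30, 0.18, 0.11, 0.07, 0.05])

popt, _ = curve_fit(midilli, t_min, mr, p0=[1.0, 0.3, 1.0, 0.0],
                    bounds=([0, 0, 0.1, -0.1], [2, 5, 3, 0.1]))
a, k, n, b = popt
print(f"a={a:.3f}, k={k:.3f}, n={n:.3f}, b={b:.4f}")
```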

  6. Analogical scaffolding and the learning of abstract ideas in physics: Empirical studies

    Directory of Open Access Journals (Sweden)

    Noah S. Podolefsky

    2007-09-01

    Full Text Available Previously, we proposed a model of student reasoning which combines the roles of representation, analogy, and layering of meaning—analogical scaffolding [Podolefsky and Finkelstein, Phys. Rev. ST Phys. Educ. Res. 3, 010109 (2007)]. The present empirical studies build on this model to examine its utility and demonstrate the vital intertwining of representation, analogy, and conceptual learning in physics. In two studies of student reasoning using analogy, we show that representations couple to students’ existing prior knowledge and also lead to the dynamic formation of new knowledge. Students presented with abstract, concrete, or blended (both abstract and concrete) representations produced markedly different response patterns. In the first study, using analogies to scaffold understanding of electromagnetic (EM) waves, students in the blend group were more likely to reason productively about EM waves than students in the abstract group by as much as a factor of 3 (73% vs 24% correct, p=0.002). In the second study, examining representation use within one domain (sound waves), the blend group was more likely to reason productively about sound waves than the abstract group by as much as a factor of 2 (48% vs 23% correct, p=0.002). Using the analogical scaffolding model we examine when and why students succeed and fail to use analogies and interpret representations appropriately.

  7. Analogical scaffolding and the learning of abstract ideas in physics: Empirical studies

    Directory of Open Access Journals (Sweden)

    Noah D. Finkelstein

    2007-09-01

    Full Text Available Previously, we proposed a model of student reasoning which combines the roles of representation, analogy, and layering of meaning—analogical scaffolding [Podolefsky and Finkelstein, Phys. Rev. ST Phys. Educ. Res. 3, 010109 (2007)]. The present empirical studies build on this model to examine its utility and demonstrate the vital intertwining of representation, analogy, and conceptual learning in physics. In two studies of student reasoning using analogy, we show that representations couple to students’ existing prior knowledge and also lead to the dynamic formation of new knowledge. Students presented with abstract, concrete, or blended (both abstract and concrete) representations produced markedly different response patterns. In the first study, using analogies to scaffold understanding of electromagnetic (EM) waves, students in the blend group were more likely to reason productively about EM waves than students in the abstract group by as much as a factor of 3 (73% vs 24% correct, p=0.002). In the second study, examining representation use within one domain (sound waves), the blend group was more likely to reason productively about sound waves than the abstract group by as much as a factor of 2 (48% vs 23% correct, p=0.002). Using the analogical scaffolding model we examine when and why students succeed and fail to use analogies and interpret representations appropriately.

  8. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care.

  10. The patient perspective of clinical training-an empirical study about patient motives to participate.

    Science.gov (United States)

    Drevs, Florian; Gebele, Christoph; Tscheulin, Dieter K

    2014-10-01

    This study introduces a comprehensive model to explain patients' prosocial behavioral intentions to participate in clinical training. Using the helping decision model, the authors analyze the combined impact of factors that affect participation intentions. The model includes intrapersonal and interpersonal appraisals triggered by an awareness of the societal need for clinical training as a practical part of medical education. The results of our empirical study (N=317) show that personal costs and anxiety as negative appraisals and a warm glow as a positive appraisal affect participation intentions and fully mediate the effect of the patient's awareness of the societal need. The study results indicate that communication strategies should address patient beliefs about negative personal consequences of participation rather than highlighting the societal need for practical medical education related to clinical training. Based on the results, medical associations could develop guidelines and provide training for physicians on how to motivate patients to participate in clinical training, resulting in more patient-centered standardized consent discussions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    NARCIS (Netherlands)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-01-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates.

  12. Trade costs in empirical New Economic Geography

    NARCIS (Netherlands)

    Bosker, E.M.; Garretsen, J.H.

    Trade costs are a crucial element of New Economic Geography (NEG) models. Without trade costs there is no role for geography. In empirical NEG studies the unavailability of direct trade cost data calls for the need to approximate these trade costs by introducing a trade cost function. In doing so,

  13. Empirically Exploring Higher Education Cultures of Assessment

    Science.gov (United States)

    Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.

    2016-01-01

    Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…

  14. Empirical modeling of high-intensity electron beam interaction with materials

    Science.gov (United States)

    Koleva, E.; Tsonevska, Ts; Mladenov, G.

    2018-03-01

    The paper proposes an empirical modeling approach to the prediction followed by optimization of the exact shape of the cross-section of a welded seam, as obtained by electron beam welding. The approach takes into account the electron beam welding process parameters, namely, electron beam power, welding speed, and distances from the magnetic lens of the electron gun to the focus position of the beam and to the surface of the samples treated. The results are verified by comparison with experimental results for type 1H18NT stainless steel samples. The ranges considered of the beam power and the welding speed are 4.2 – 8.4 kW and 3.333 – 13.333 mm/s, respectively.

  15. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. To discuss the applicability of existing validation techniques and to present a new method for quantifying the degrees of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit to empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
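    A highly simplified sketch of the underlying idea, scoring validity as the probability that probabilistic sensitivity analysis (PSA) outcomes fall inside a pre-set accuracy interval around the empirical value, is given below; the Beta-Binomial update and all numbers are illustrative assumptions rather than the paper's exact estimator.

```python
# Minimal sketch: probability that PSA model outcomes lie within an accuracy
# interval around an observed value, with a simple Beta posterior on that share.
import numpy as np
from scipy import stats

observed = 100.0                       # empirical outcome (e.g. patients on dialysis)
accuracy = 0.25                        # +/- 25% accuracy interval
lo, hi = observed * (1 - accuracy), observed * (1 + accuracy)

rng = np.random.default_rng(42)
psa_draws = rng.normal(110.0, 20.0, 1000)   # hypothetical PSA outcomes

inside = np.sum((psa_draws >= lo) & (psa_draws <= hi))
posterior = stats.beta(1 + inside, 1 + len(psa_draws) - inside)  # uniform prior

print("P(model outcome within interval) ~ %.2f" % (inside / len(psa_draws)))
print("95%% credible interval: %.2f - %.2f" % posterior.interval(0.95))
```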

  16. Essays on empirical industrial organization : Entry and innovation

    NARCIS (Netherlands)

    Fernandez Machado, Roxana

    2017-01-01

    The dissertation contains three essays on empirical industrial organization devoted to studying firms' strategic interaction in different settings. The first essay develops an entry model to address an important matter in the area of urban economics: the development of cities. In particular, it

  17. Semi-empirical evaluation studies on PCMI for the Fugen fuel rod

    International Nuclear Information System (INIS)

    Domoto, Kazushige; Kaneko, Mitsunobu; Takeuchi, Kiyoshi.

    1980-03-01

    Fugen, a 165 MWe prototype heavy-water-moderated, boiling-water-cooled reactor, has been operated well since March 1979. In order to establish PCIOMR for Fugen fuels, a semi-empirical evaluation code to analyze PCMI during power transients of the fuel rod has been developed. In this paper, the following are described: 1) the general scope of the development work, 2) a description of the modelling, and 3) some results of analyses of out-of-pile and in-pile tests. (author)

  18. Decoupling Analysis of China’s Product Sector Output and Its Embodied Carbon Emissions—An Empirical Study Based on Non-Competitive I-O and Tapio Decoupling Model

    Directory of Open Access Journals (Sweden)

    Jianbo Hu

    2017-05-01

    Full Text Available This paper uses the non-competitive I-O model and the Tapio decoupling model to comprehensively analyze the decoupling relationship between the output of the product sector in China and its embodied carbon emissions under trade openness. For this purpose, Chinese input-output data for 2002, 2005, 2007, 2010, and 2012 are used. This approach helps to identify the direct mechanism behind the increase in carbon emissions in China from a micro perspective and provides a new perspective for subsequent studies of the low-carbon economy. The obtained empirical results are as follows: (1) From an overall perspective, the decoupling elasticity between the output of the product sector and its embodied carbon emissions decreased. Output and embodied carbon emissions showed a growth link from 2002 to 2005 and a weak decoupling relationship for the rest of the study period. (2) Among the 28 industries in the product sector, the increased growth rate of output in more and more product sectors was no longer accompanied by large CO2 emissions. The number of industries with strong decoupling relationships between output and embodied carbon emissions increased. (3) From the perspective of the three industries, output and embodied carbon emissions in the secondary and tertiary industries exhibited a growth link only from 2002 to 2005; the three industries presented weak or strong decoupling for the rest of the study period. Based on this empirical analysis, the paper suggests reducing the carbon emissions of China’s product sector mainly through the construction of ecologically sound, low-carbon agriculture, a low-carbon circular industrial system, and an intensive and efficient service industry.
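    The Tapio decoupling elasticity used in this kind of analysis is simple to compute; the sketch below uses the commonly cited 0.8/1.2 thresholds and illustrative output and emission figures, not the paper's data.

```python
# Minimal sketch of the Tapio decoupling elasticity between embodied carbon
# emissions and sector output, with a coarse classification for growing output.
def tapio_elasticity(co2_0, co2_1, out_0, out_1):
    """Elasticity = % change in emissions / % change in output."""
    return ((co2_1 - co2_0) / co2_0) / ((out_1 - out_0) / out_0)

def classify(e, output_growing):
    if output_growing:
        if e < 0:
            return "strong decoupling"
        if e < 0.8:
            return "weak decoupling"
        if e <= 1.2:
            return "growth link (coupling)"
        return "expansive negative decoupling"
    return "recessive case (output shrinking)"

e = tapio_elasticity(co2_0=520.0, co2_1=560.0, out_0=1000.0, out_1=1300.0)
print(f"elasticity = {e:.2f} -> {classify(e, output_growing=True)}")
```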

  19. Semi-empirical atom-atom interaction models and X-ray crystallography

    International Nuclear Information System (INIS)

    Braam, A.W.M.

    1981-01-01

    Several aspects of semi-empirical energy calculations in crystallography are considered. Solid modifications of ethane have been studied using energy calculations and a fast summation technique has been evaluated. The structure of tetramethylpyrazine has been determined at room temperature and at 100K and accurate structure factors have been derived from measured Bragg intensities. Finally electrostatic properties have been deduced from X-ray structure factors. (C.F.)

  20. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

    Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of the empirical or historical research into the philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular ‘feeling with…’

  1. Empirical Storm-Time Correction to the International Reference Ionosphere Model E-Region Electron and Ion Density Parameterizations Using Observations from TIMED/SABER

    Science.gov (United States)

    Mertens, Christoper J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing

    2007-01-01

    The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap-index is a viable candidate to use as a magnetic driver for our model.

  2. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.

  3. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Directory of Open Access Journals (Sweden)

    E. A

    2011-08-01

    Full Text Available A similar-parameters interpolation method and an empirical orthogonal function analysis are used to construct empirical models of the ionospheric foF2, using observational data from three ground-based ionosonde stations in Japan: Wakkanai (Geographic 45.4° N, 141.7° E), Kokubunji (Geographic 35.7° N, 140.1° E) and Yamagawa (Geographic 31.2° N, 130.6° E) during the years 1971–1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that the missing data of the original foF2 can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term tendency. The 1st order EOF coefficient A1 reflects the components with solar cycle variation. A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, both of which are less pronounced than the solar cycle variation. The 2nd order coefficient A2 contains mainly annual variation components. The 3rd order coefficient A3 and 4th order coefficient A4 contain both annual and semi-annual variation components. The seasonal variation, solar rotation oscillation and small-scale irregularities are also included in the 4th order coefficient A4. The amplitude range and developing tendency of all these coefficients depend on the level of solar activity and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the
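    The EOF step itself can be sketched with a singular value decomposition of a local-time-by-day foF2 matrix; the data below are synthetic placeholders and the decomposition is a generic EOF analysis, not the exact procedure of the paper.

```python
# Minimal sketch: decompose a (local time x day) matrix of foF2 into diurnal
# base functions and daily coefficients via SVD; synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(1)
hours, days = 24, 365
t = np.arange(hours)[:, None]
d = np.arange(days)[None, :]
# Hypothetical foF2 (MHz): diurnal bump modulated by a slow seasonal trend.
fof2 = (6 + 4 * np.sin(np.pi * t / 24) * (1 + 0.3 * np.sin(2 * np.pi * d / 365))
        + rng.normal(0, 0.3, (hours, days)))

anomaly = fof2 - fof2.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

base_functions = U[:, :4]                    # E_k: diurnal shapes
coefficients = np.diag(s[:4]) @ Vt[:4, :]    # A_k(day): long-term behaviour
explained = (s[:4] ** 2) / np.sum(s ** 2)
print("variance explained by first 4 EOFs:", np.round(explained, 3))
```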

  4. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    Full Text Available This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed here for multiband representation of multichannel financial time series together. An autoregressive moving average (ARMA) model is used in the prediction of each individual subband of any time series data. Then all the predicted subband signals are summed up to obtain the overall prediction. The ARMA model works better for stationary signals. With multiband representation, each subband becomes a band-limited (narrow band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, discrete wavelet transform (DWT), and a full band ARMA model in terms of signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
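    A minimal sketch of the subband idea is shown below; it assumes the third-party PyEMD ("EMD-signal") package, substitutes plain univariate EMD for the multivariate MEMD of the paper, and uses synthetic data.

```python
# Minimal sketch: decompose a series with EMD, fit an ARMA model to each
# intrinsic mode function, and sum the one-step forecasts (illustrative only).
import numpy as np
from PyEMD import EMD                              # third-party "EMD-signal" package
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
t = np.arange(500)
price = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1, t.size)

imfs = EMD().emd(price)          # rows: IMFs plus the final residue

forecast = 0.0
for imf in imfs:
    model = ARIMA(imf, order=(2, 0, 1)).fit()      # ARMA(2,1) per subband
    forecast += model.forecast(steps=1)[0]

print("one-step-ahead prediction: %.2f" % forecast)
```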

  5. Adolescents Family Models : A Cross-Cultural Study

    OpenAIRE

    Mayer, Boris

    2009-01-01

    This study explores and compares the family models of adolescents across ten cultures using a typological and multilevel approach. Thereby, it aims to contribute empirically to Kagitcibasi's (2007) theory of family change. This theory postulates the existence of three ideal-typical family models across cultures: a family model of independence prevailing in Western societies, a family model of (total) interdependence prevailing in non-industrialized agrarian cultures, and, as a synthesis of the...

  6. Climate change, income and happiness: An empirical study for Barcelona

    NARCIS (Netherlands)

    Sekulova, F.; van den Bergh, J.C.J.M.

    2013-01-01

    The present article builds upon the results of an empirical study exploring key factors which determine life satisfaction in Barcelona. Based on a sample of 840 individuals we first look at the way changes in income, notably income reductions, associated with the current economic situation in Spain,

  7. A control-oriented real-time semi-empirical model for the prediction of NOx emissions in diesel engines

    International Nuclear Information System (INIS)

    D’Ambrosio, Stefano; Finesso, Roberto; Fu, Lezhong; Mittica, Antonio; Spessa, Ezio

    2014-01-01

    Highlights: • New semi-empirical correlation to predict NOx emissions in diesel engines. • Based on a real-time three-zone diagnostic combustion model. • The model is of fast application, and is therefore suitable for control-oriented applications. - Abstract: The present work describes the development of a fast control-oriented semi-empirical model that is capable of predicting NOx emissions in diesel engines under steady state and transient conditions. The model takes into account the maximum in-cylinder burned gas temperature of the main injection, the ambient gas-to-fuel ratio, the mass of injected fuel, the engine speed and the injection pressure. The evaluation of the temperature of the burned gas is based on a three-zone real-time diagnostic thermodynamic model that has recently been developed by the authors. Two correlations have also been developed in the present study, in order to evaluate the maximum burned gas temperature during the main combustion phase (derived from the three-zone diagnostic model) on the basis of significant engine parameters. The model has been tuned and applied to two diesel engines that feature different injection systems of the indirect acting piezoelectric, direct acting piezoelectric and solenoid type, respectively, over a wide range of steady-state operating conditions. The model has also been validated in transient operation conditions, over the urban and extra-urban phases of an NEDC. It has been shown that the proposed approach is capable of improving the predictive capability of NOx emissions, compared to previous approaches, and is characterized by a very low computational effort, as it is based on a single-equation correlation. It is therefore suitable for real-time applications, and could also be integrated in the engine control unit for closed-loop or feed-forward control tasks

  8. Matrix effect studies with empirical formulations in maize saplings

    International Nuclear Information System (INIS)

    Bansal, Meenakshi; Deep, Kanan; Mittal, Raj

    2012-01-01

    In X-ray fluorescence, matrix effects derived earlier from fundamental relations between the intensities of analyte/matrix elements and basic atomic and experimental setup parameters, and tested on synthetic known samples, were found to be empirically related to the analyte/matrix elemental amounts. The present study involves the application of these relations to the potassium and calcium macronutrients of maize saplings treated with different fertilizers. The novelty of the work lies in determining an element in the presence of its secondary excitation rather than avoiding the secondary fluorescence. Therefore, the possible utility of this procedure is in studying the absorption for intermediate samples within a category of samples with close-Z interfering constituents (such as Ca and K). Once the absorption and enhancement terms are fitted to elemental amounts and the fitted coefficients are determined, with the absorption terms from the fit and the enhancer element amount known from its selective excitation, the next iterative elemental amount can be directly evaluated from the relations. - Highlights: ► Empirical formulation for matrix corrections in terms of amounts of analyte and matrix elements. ► The study is applied to K and Ca nutrients of maize, rice and potato organic materials. ► The formulation provides matrix terms from amounts of analyte/matrix elements and vice versa.

  9. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices, for each scenario.

  10. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of carbon atoms were analyzed as a function of the three potentials mentioned above. The ReaxFF potential is capable of modeling the change of the structure of amorphous carbon, and the MD results are in good agreement with experimental results and density functional theory (DFT) at a low density of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended to model amorphous carbon at a high density of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.

  11. An Empirical Study of Kirkpatrick's Evaluation Model in the Hospitality Industry

    Science.gov (United States)

    Chang, Ya-Hui Elegance

    2010-01-01

    This study examined Kirkpatrick's training evaluation model (Kirkpatrick & Kirkpatrick, 2006) by assessing a sales training program conducted at an organization in the hospitality industry. The study assessed the employees' training outcomes of knowledge and skills, job performance, and the impact of the training upon the organization. By…

  12. Testing isotherm models and recovering empirical relationships for adsorption in microporous carbons using virtual carbon models and grand canonical Monte Carlo simulations

    International Nuclear Information System (INIS)

    Terzyk, Artur P; Furmaniak, Sylwester; Gauden, Piotr A; Harris, Peter J F; Wloch, Jerzy

    2008-01-01

    Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods for describing adsorption data on microporous carbons widely used in adsorption science. Two carbon structures are studied, one with a small distribution of micropores in the range up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that using simulated data some empirical relationships obtained from experiment can be successfully recovered. Next we test the applicability of Dubinin's related models including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the models studied in the field of carbon porosity characterization
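    A Dubinin-Radushkevich plot of the kind referred to above can be reproduced in a few lines; the sketch below linearises a synthetic ideal DR isotherm, so the numbers are placeholders rather than simulated GCMC data.

```python
# Minimal sketch of a Dubinin-Radushkevich (DR) plot: linearise adsorption data
# as ln(W) versus A^2 with A = R*T*ln(p0/p), then read W0 and the (lumped)
# characteristic energy from the linear fit; synthetic isotherm points.
import numpy as np

R = 8.314          # J/(mol K)
T = 298.0          # K
rel_pressure = np.array([1e-4, 1e-3, 1e-2, 0.05, 0.1, 0.3])
W0_true, E_true = 0.45, 20000.0          # cm^3/g and J/mol (beta*E0 lumped)

A = R * T * np.log(1.0 / rel_pressure)               # adsorption potential
W = W0_true * np.exp(-(A / E_true) ** 2)              # ideal DR isotherm
W *= np.exp(np.random.default_rng(2).normal(0, 0.02, W.size))   # mild noise

slope, intercept = np.polyfit(A ** 2, np.log(W), 1)
print("W0 = %.3f cm^3/g, beta*E0 = %.1f kJ/mol" %
      (np.exp(intercept), 1e-3 / np.sqrt(-slope)))
```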

  13. FARIMA MODELING OF SOLAR FLARE ACTIVITY FROM EMPIRICAL TIME SERIES OF SOFT X-RAY SOLAR EMISSION

    International Nuclear Information System (INIS)

    Stanislavsky, A. A.; Burnecki, K.; Magdziarz, M.; Weron, A.; Weron, K.

    2009-01-01

    A time series of soft X-ray emission observed by the Geostationary Operational Environment Satellites from 1974 to 2007 is analyzed. We show that in the solar-maximum periods the energy distribution of soft X-ray solar flares for C, M, and X classes is well described by a fractional autoregressive integrated moving average model with Pareto noise. The model incorporates two effects detected in our empirical studies. One effect is a long-term dependence (long-term memory), and another corresponds to heavy-tailed distributions. The parameters of the model: self-similarity exponent H, tail index α, and memory parameter d are statistically stable enough during the periods 1977-1981, 1988-1992, 1999-2003. However, when the solar activity tends to minimum, the parameters vary. We discuss the possible causes of this evolution and suggest a statistically justified model for predicting the solar flare activity.

  14. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models to estimate Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and the model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (i.e., on 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.

  15. Using MultiMedia Content to Present Business Ethics: An Empirical Study

    Science.gov (United States)

    Stanwick, Peter A.

    2010-01-01

    The purpose of this study is to empirically examine whether presenting a multimedia case study enhances the learning experience of students in an undergraduate management class. A questionnaire was administered before and after the presentation of the case study and the results showed that the multimedia case did indeed enhance the learning…

  16. Supply chain strategy: empirical case study in Europe and Asia:

    OpenAIRE

    Sillanpää, Ilkka; Sillanpää, Sebastian

    2014-01-01

    The purpose of this case study research is to present a literature review of supply chain strategy approaches, develop supply chain strategy framework and to validate a framework in empirical case study. Literature review and case study research are the research methods for this research. This study presents the supply chain strategy framework which merges together business environment, corporate strategy, supply chain demand and supply chain strategy. Research argues that all the different c...

  17. Constraints and Dedication as Drivers for Relationship Commitment: An Empirical Study in a Health-Care Context

    OpenAIRE

    Gaby Odekerken-Schröder; Bloemer Josée

    2002-01-01

    The objective of this study is to empirically determine the role of constraints and dedication as drivers of relationship commitment as most of the existing work is of a conceptual nature only. We assess how and to which extent these two drivers fit into the established relationships between overall service quality, satisfaction, trust and commitment. Using LISREL, we estimate the conceptual model based on a sample of customers of health-care centers. The results indicate that both constraint...

  18. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using four years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
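    The fixed-parameters negative binomial baseline is straightforward to reproduce; the sketch below uses statsmodels on synthetic segment data (the random-parameters extension needs simulated maximum likelihood and is omitted), and all variable names and values are illustrative.

```python
# Minimal sketch of a fixed-parameters negative binomial crash-frequency model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 158
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(0, 2, n),           # noise barrier present (0/1)
    rng.uniform(0, 10, n),           # vertical curves per mile
])
mu = np.exp(0.8 + 0.5 * X[:, 1] + 0.05 * X[:, 2])
crashes = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

nb = sm.NegativeBinomial(crashes, X).fit(disp=False)
print(nb.params)                       # coefficients fixed across all segments
print(nb.get_margeff().summary())      # marginal effects, cf. the comparison above
```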

  19. Empirical Descriptions of Criminal Sentencing Decision-Making

    Directory of Open Access Journals (Sweden)

    Rasmus H. Wandall

    2014-05-01

    Full Text Available The article addresses the widespread use of statistical causal modelling to describe criminal sentencing decision-making empirically in Scandinavia. The article describes the characteristics of this model, and on this basis discusses three aspects of sentencing decision-making that the model does not capture: (1) the role of law and legal structures in sentencing, (2) the processes of constructing law and facts as they occur in the processes of handling criminal cases, and (3) reflecting newer organisational changes to sentencing decision-making. The article argues for a stronger empirically based design of sentencing models and for a more balanced use of different social scientific methodologies and models of sentencing decision-making.

  20. Matchmaking in organizational change : does every employee value participatory leadership? An empirical study

    OpenAIRE

    Rogiest, Sofie; Segers, Jesse; Witteloostuijn, van, Arjen

    2018-01-01

    Abstract: Although leadership is generally considered an important lever to increase commitment during organizational change, empirical research has yet to unravel many of the underlying mechanisms. In this paper, we propose that the impact of participative leadership on affective commitment to change will be contingent on employees' orientation toward leadership. In our empirical study in two police organizations, we find evidence that followers' orientation toward leadership is a useful inter...

  1. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. The Heavy Ion fusion cross section can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

  2. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Natalia MOROIANU-DUMITRESCU

    2015-06-01

    Full Text Available During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the New Member States from Central and Eastern Europe. The most common empirical approaches use β- and σ-convergence, originally developed within a series of neo-classical models. To date, the EU integration process has proven to be accompanied by an increase in regional inequalities. In order to determine whether a similar increase in inequalities exists between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides an empirical modelling of economic convergence that allows the level and evolution of inter-regional inequalities to be evaluated over a period of more than a decade, from 1995 to 2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representations, including non-linear regression, and the associated tables summarizing the numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasised convergence in the middle time subinterval can be correlated with the drastic pre-accession changes at the economic, political and social level, and with the opening of the Schengen borders to the Romanian labor force in 2002.
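    The σ-convergence and weighted coefficient-of-variation measures mentioned above can be computed as in the sketch below; the county GDP-per-capita and population figures are illustrative placeholders, not the Romanian NUTS3 data.

```python
# Minimal sketch of sigma-convergence diagnostics: the plain coefficient of
# variation and a population-weighted (Williamson-type) coefficient of variation.
import numpy as np

def sigma_convergence(gdp_per_capita, population):
    y = np.asarray(gdp_per_capita, dtype=float)
    pop = np.asarray(population, dtype=float)
    share = pop / pop.sum()
    cv = y.std(ddof=0) / y.mean()                              # unweighted CV
    y_bar = np.sum(share * y)                                  # weighted mean
    wcv = np.sqrt(np.sum(share * (y - y_bar) ** 2)) / y_bar    # weighted CV
    return cv, wcv

# Hypothetical county data for two years: falling dispersion signals convergence.
pop = np.array([300, 550, 800, 420, 610], dtype=float)
for year, gdp in {1995: [2.1, 3.4, 5.0, 2.8, 4.1], 2011: [6.0, 7.1, 8.4, 6.6, 7.8]}.items():
    cv, wcv = sigma_convergence(gdp, pop)
    print(f"{year}: CV={cv:.3f}, weighted CV={wcv:.3f}")
```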

  3. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained from DN data, and the model was tested and verified with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85% for near-infrared bands and that the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
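    As a representative member of the semi-empirical family discussed above, the classic C-correction is sketched below; it is not necessarily the authors' exact formulation, and the reflectance and illumination values are synthetic.

```python
# Minimal sketch of a semi-empirical C-correction for topographic effects:
# regress reflectance on the illumination cos(i), then rescale each pixel.
import numpy as np

def c_correction(reflectance, cos_i, cos_sun_zenith):
    slope, intercept = np.polyfit(cos_i, reflectance, 1)
    c = intercept / slope
    return reflectance * (cos_sun_zenith + c) / (cos_i + c)

rng = np.random.default_rng(5)
cos_i = rng.uniform(0.2, 1.0, 10000)                            # per-pixel illumination
rho = 0.25 * cos_i + 0.05 + rng.normal(0, 0.01, cos_i.size)     # topography-affected band
rho_flat = c_correction(rho, cos_i, cos_sun_zenith=np.cos(np.radians(35.0)))

# After correction the reflectance should no longer track cos(i).
print("correlation before: %.2f, after: %.2f" %
      (np.corrcoef(rho, cos_i)[0, 1], np.corrcoef(rho_flat, cos_i)[0, 1]))
```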

  4. Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter

    Science.gov (United States)

    Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.

    1993-01-01

    A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.

  5. Chronic Fatigue Syndrome – A clinically empirical approach to its definition and study

    Directory of Open Access Journals (Sweden)

    Papanicolaou Dimitris A

    2005-12-01

    Full Text Available Abstract Background The lack of standardized criteria for defining chronic fatigue syndrome (CFS) has constrained research. The objective of this study was to apply the 1994 CFS criteria by standardized reproducible criteria. Methods This population-based case control study enrolled 227 adults identified from the population of Wichita with: (1) CFS (n = 58); (2) non-fatigued controls matched to CFS on sex, race, age and body mass index (n = 55); (3) persons with medically unexplained fatigue not CFS, which we term ISF (n = 59); (4) CFS accompanied by melancholic depression (n = 27); and (5) ISF plus melancholic depression (n = 28). Participants were admitted to a hospital for two days and underwent medical history and physical examination, the Diagnostic Interview Schedule, and laboratory testing to identify medical and psychiatric conditions exclusionary for CFS. Illness classification at the time of the clinical study utilized two algorithms: (1) the same criteria as in the surveillance study; (2) a standardized clinically empirical algorithm based on quantitative assessment of the major domains of CFS (impairment, fatigue, and accompanying symptoms). Results One hundred and sixty-four participants had no exclusionary conditions at the time of this study. Clinically empirical classification identified 43 subjects as CFS, 57 as ISF, and 64 as not ill. There was minimal association between the empirical classification and classification by the surveillance criteria. Subjects empirically classified as CFS had significantly worse impairment (evaluated by the SF-36), more severe fatigue (documented by the multidimensional fatigue inventory), and more frequent and severe accompanying symptoms than those with ISF, who in turn had significantly worse scores than the not ill; this was not true for classification by the surveillance algorithm. Conclusion The empirical definition includes all aspects of CFS specified in the 1994 case definition and identifies persons with

  6. Corporate Diversification and Firm Performance: an Empirical Study

    Directory of Open Access Journals (Sweden)

    Olu Ojo

    2009-05-01

    Full Text Available The importance of diversification and performance in the strategic management literature is widely accepted among academics and practitioners. However, the proxies for performance and diversification that have been employed in past strategy research have not been unanimously agreed upon, given the current state of confusion that exists with regard to the impact of corporate diversification on firm performance in selected Nigerian companies. The reason for increased interest in diversification has always been the possibility that diversification is related to corporate performance. However, while this topic is rich in studies, empirical evidence emerging from various studies about the effect of diversification on performance has so far yielded mixed results that are inconclusive and contradictory. In addition, despite the existence of these studies, very little attention has been given to companies in developing countries, including Nigeria. This means that there is a major gap in the relevant literature on developing countries which has to be covered by research. This research attempts to fill this gap by studying the situation of Nigerian companies and providing more empirical evidence on the effects of corporate diversification on firm performance based on individual company-level data. A survey research design was adopted in this study, with simple random sampling applied in selecting our case study companies as well as our respondents. Primary data were collected through a questionnaire. Data were analysed through descriptive statistics, and correlation and the coefficient of determination were used to test our hypotheses. It was discovered that diversification impacted the performance of these companies positively, and we recommend that these companies engage in geographical diversification, in addition to the other forms of diversification they are currently involved in, for maximum performance.

  7. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, we obtained higher correlations than other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably diminishes the instability associated with varying initial values. Such results suggest that the model not only enhances significantly the global mean reconstructions of temperature and sea level but also may have the potential to improve future projections.

  8. What Happened to Remote Usability Testing? An Empirical Study of Three Methods

    DEFF Research Database (Denmark)

    Stage, Jan; Andreasen, M. S.; Nielsen, H. V.

    2007-01-01

    The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote

  9. Empirical Studies on Financial Markets: Private Equity, Corporate Bonds and Emerging Markets

    NARCIS (Netherlands)

    G.J. de Zwart (Gerben)

    2008-01-01

    textabstractThis dissertation consists of five empirical studies on financial markets. Each study can be read independently and covers a specific market, either private equity, corporate bonds or emerging markets. The first study documents that risk factors cannot account for the significant excess

  10. Empirical Modeling of ICMEs Using ACE/SWICS Ionic Distributions

    Science.gov (United States)

    Rivera, Y.; Landi, E.; Lepri, S. T.; Gilbert, J. A.

    2017-12-01

    Coronal Mass Ejections (CMEs) are some of the largest, most energetic events in the solar system releasing an immense amount of plasma and magnetic field into the Heliosphere. The Earth-bound plasma plays a large role in space weather, causing geomagnetic storms that can damage space and ground based instrumentation. As a CME is released, the plasma experiences heating, expansion and acceleration; however, the physical mechanism supplying the heating as it lifts out of the corona still remains uncertain. From previous work we know the ionic composition of solar ejecta undergoes a gradual transition to a state where ionization and recombination processes become ineffective rendering the ionic composition static along its trajectory. This property makes them a good indicator of thermal conditions in the corona, where the CME plasma likely receives most of its heating. We model this so-called `freeze-in' process in Earth-directed CMEs using an ionization code to empirically determine the electron temperature, density and bulk velocity. `Frozen-in' ions from an ensemble of independently modeled plasmas within the CME are added together to fit the full range of observational ionic abundances collected by ACE/SWICS during ICME events. The models derived using this method are used to estimate the CME energy budget to determine a heating rate used to compare with a variety of heating mechanisms that can sustain the required heating with a compatible timescale.

  11. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

    Full Text Available The aim of a price index for road freight transportation (RFT) is to reflect price changes in the road transport market. Firstly, a price index model for RFT based on sample data from the Alibaba logistics platform is built. This model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
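
    The Laspeyres calculation referenced above weights current-period prices with base-period quantities. A minimal sketch of that roll-up, using invented freight lanes, prices and base-period volumes rather than the Alibaba platform data, is:

    # Hypothetical freight lanes: base-period price, current price, base-period quantity.
    lanes = {
        "Hangzhou-Ningbo":  {"p0": 0.52, "p1": 0.55, "q0": 1200.0},
        "Wenzhou-Jinhua":   {"p0": 0.61, "p1": 0.58, "q0": 800.0},
        "Shaoxing-Taizhou": {"p0": 0.47, "p1": 0.51, "q0": 650.0},
    }

    def laspeyres(items):
        """Laspeyres index: sum(p1*q0) / sum(p0*q0) * 100."""
        num = sum(v["p1"] * v["q0"] for v in items.values())
        den = sum(v["p0"] * v["q0"] for v in items.values())
        return 100.0 * num / den

    print(f"RFT price index: {laspeyres(lanes):.1f}")  # a value above 100 means prices rose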

  12. Pluvials, Droughts, Energetics, and the Mongol Empire

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    The success of the Mongol Empire, the largest contiguous land empire the world has ever known, is a historical enigma. At its peak in the late 13th century, the empire influenced areas from Hungary to southern Asia and Persia. Powered by domesticated herbivores, the Mongol Empire grew at the expense of agriculturalists in Eastern Europe, Persia, and China. What environmental factors contributed to the rise of the Mongols? What factors influenced the disintegration of the empire by 1300 CE? Until now, little high resolution environmental data have been available to address these questions. We use tree-ring records of past temperature and water to illuminate the role of energy and water in the evolution of the Mongol Empire. The study of energetics has long been applied to biological and ecological systems but has only recently become a theme in understanding modern coupled natural and human systems (CNH). Because water and energy are tightly linked in human and natural systems, studying their synergies and interactions makes it possible to integrate knowledge across disciplines and human history, yielding important lessons for modern societies. We focus on the role of energy and water in the trajectory of an empire, including its rise, development, and demise. Our research is focused on the Orkhon Valley, seat of the Mongol Empire, where recent paleoenvironmental and archeological discoveries allow high resolution reconstructions of past human and environmental conditions for the first time. Our preliminary records indicate that the period 1210-1230 CE, the height of Chinggis Khan's reign, is one of the longest and most consistent pluvials in our tree ring reconstruction of interannual drought. Reconstructed temperature derived from five millennium-long records from subalpine forests in Mongolia documents warm temperatures beginning in the early 1200s and ending with a plunge into cold temperatures in 1260. Abrupt cooling in central Mongolia at this time is

  13. Empirical tests of the Chicago model and the Easterlin hypothesis: a case study of Japan.

    Science.gov (United States)

    Ohbuchi, H

    1982-05-01

    The objective of this discussion is to test the applicability of the economic theory of fertility with special reference to postwar Japan and to find a clue for forecasting the future trend of fertility. The theories examined are the "Chicago model" and the "Easterlin hypothesis." The major conclusion common among the leading economic theories of fertility, which have their origin with Gary S. Becker (1960, 1965) and Richard A. Easterlin (1966), is the positive income effect, i.e., that the relationship between income and fertility is positive despite the evidence that higher income families have fewer children and that fertility has declined with economic development. To bridge the gap between theory and fact is the primary purpose of the economic theory of fertility, and each offers a different interpretation for it. The point of the Chicago model, particularly of the household decision making model of the "new home economics," is the mechanism that a positive effect of husband's income growth on fertility is offset by a negative price effect caused by the opportunity cost of wife's time. While the opportunity cost of wife's time is independent of the female wage rate for an unemployed wife, it is directly associated with the wage rate for a gainfully employed wife. Thus, the fertility response to female wages occurs only among families with an employed wife. The primary concern of empirical efforts to test the Chicago model has been with the determination of income and price elasticities. An attempt is made to test the relevance of the Chicago model and the Easterlin hypothesis in explaining the fertility movement in postwar Japan. In the case of the Chicago model, the statistical results appeared fairly successful but did not match with the theory. The effect on fertility of a rise in women's real wage (and, therefore, in the opportunity cost of mother's time) and of a rise in the labor force participation rate of married women of childbearing age in recent years could not

  14. Desire for experiential travel, avoidance of rituality and social esteem: An empirical study of consumer response to tourism innovation

    OpenAIRE

    Chan, Wing Yin; To, Chester Kin-man; Chu, Wai Ching

    2016-01-01

    This study investigates tourist consumption responses toward tourism innovation. To measure tourist responses, this study posits three key consumption drivers, namely social esteem, desire for experiential travel, and avoidance of rituality in tourism settings (a subscale of need for uniqueness), and models consumers’ affective response within the context of tourism innovation. It involves 295 respondents in an empirical survey. The findings affirm the three drivers toward tourist consump...

  15. Continued Use of a Chinese Online Portal: An Empirical Study

    Science.gov (United States)

    Shih, Hung-Pin

    2008-01-01

    The evolution of the internet has made online portals a popular means of surfing the internet. In internet commerce, understanding the post-adoption behaviour of users of online portals can help enterprises to attract new users and retain existing customers. For predicting continued use intentions, this empirical study focused on applying and…

  16. An empirical operationalization study of DSM-IV diagnostic criteria for premature ejaculation

    NARCIS (Netherlands)

    Waldinger, M. D.; Hengeveld, M. W.; Zwinderman, A. H.; Olivier, B.

    1998-01-01

    The DSM-IV diagnostic criteria for premature ejaculation remain to be investigated by a clinical study. A prospective study was therefore conducted to investigate the DSM-IV definition and to provide an empirical operationalization of premature ejaculation. In this study 140 men suffering from

  17. Structural properties of silicon clusters: An empirical potential study

    International Nuclear Information System (INIS)

    Gong, X.G.; Zheng, Q.Q.; He Yizhen

    1993-09-01

    By using our newly proposed empirical interatomic potential for silicon, the structure and some dynamical properties of silicon clusters Si_n (10 ≤ n ≤ 24) have been studied. It is found that the obtained results are close to those from ab initio methods. From the present results, we can gain new insight into the experimental data on the Si_n clusters. (author). 20 refs, 6 figs

  18. Intrapreneur organizational culture and gender manager: an empirical study on SMES

    Directory of Open Access Journals (Sweden)

    Monica García Solarte

    2015-07-01

    Full Text Available The aim of this research is to empirically identify the relationship between the gender of the SME manager and the characteristics of the intrapreneur culture. The empirical study was conducted using a sample of 600 SMEs in the region of Murcia (Spain). The intrapreneur culture is analyzed considering the classification of Galvez and Garcia (2011). The results show that companies directed by women promoted, to a greater extent than those directed by men, intrapreneur characteristics of culture such as autonomy and risk taking, teamwork, compensation and support to management. The implications of our research are relevant and can help public authorities and managers in the analysis and development of policies that promote gender equity within organizations.

  19. For a new dialogue between theoretical and empirical studies in evo-devo

    Directory of Open Access Journals (Sweden)

    Giuseppe eFusco

    2015-08-01

    Full Text Available Despite its potentially broad scope, current evo-devo research is largely dominated by empirical developmental studies, whereas comparably little role is played by theoretical research. I argue that this represents an obstacle to a wider appreciation of evo-devo and its integration within a more comprehensive evolutionary theory, and that this situation is causally linked to a limited exchange between theoretical and experimental studies in evo-devo. I discuss some features of current theoretical work in evo-devo, highlighting some possibly concurring impediments to an effective dialogue with experimental studies. Finally, I advance two suggestions for enhancing fruitful cross-fertilization between theoretical and empirical studies in evo-devo: (i) to broaden the scope of evo-devo beyond its current conceptualization, teaming up with other variational approaches to the study of evolution, and (ii) to develop more effective forms of scientific interaction and communication.

  20. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
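
    To give a feel for the kind of relation such a vane model encodes, the sketch below estimates the circulation shed by a single vane from its chord, incidence and the local velocity, using a generic flat-plate lift estimate and the Kutta-Joukowski relation. This is a hypothetical stand-in, not the correlation actually implemented in Wind-US.

    import math

    def vane_circulation(chord, u_local, alpha_deg):
        """Rough bound-circulation estimate for one vane-type vortex generator.

        Assumes a flat-plate lift coefficient Cl ~ 2*pi*sin(alpha) and the
        Kutta-Joukowski relation L' = rho*U*Gamma, giving Gamma ~ 0.5*U*chord*Cl.
        Purely illustrative.
        """
        alpha = math.radians(alpha_deg)
        cl = 2.0 * math.pi * math.sin(alpha)
        return 0.5 * u_local * chord * cl  # m^2/s

    # Example: 2 cm chord vane at 16 degrees incidence in a 60 m/s local flow.
    print(f"Gamma ~ {vane_circulation(0.02, 60.0, 16.0):.3f} m^2/s")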

  1. An Empirical Model and Ethnic Differences in Cultural Meanings Via Motives for Suicide.

    Science.gov (United States)

    Chu, Joyce; Khoury, Oula; Ma, Johnson; Bahn, Francesca; Bongar, Bruce; Goldblum, Peter

    2017-10-01

    The importance of cultural meanings via motives for suicide - what is considered acceptable to motivate suicide - has been advocated as a key step in understanding and preventing development of suicidal behaviors. There have been limited systematic empirical attempts to establish different cultural motives ascribed to suicide across ethnic groups. We used a mixed methods approach and grounded theory methodology to guide the analysis of qualitative data querying for meanings via motives for suicide among 232 Caucasians, Asian Americans, and Latino/a Americans with a history of suicide attempts, ideation, intent, or plan. We used subsequent logistic regression analyses to examine ethnic differences in suicide motive themes. This inductive approach of generating theory from data yielded an empirical model of 6 cultural meanings via motives for suicide themes: intrapersonal perceptions, intrapersonal emotions, intrapersonal behavior, interpersonal, mental health/medical, and external environment. Logistic regressions showed ethnic differences in intrapersonal perceptions (low endorsement by Latino/a Americans) and external environment (high endorsement by Latino/a Americans) categories. Results advance suicide research and practice by establishing 6 empirically based cultural motives for suicide themes that may represent a key intermediary step in the pathway toward suicidal behaviors. Clinicians can use these suicide meanings via motives to guide their assessment and determination of suicide risk. Emphasis on environmental stressors rather than negative perceptions like hopelessness should be considered with Latino/a clients. © 2017 Wiley Periodicals, Inc.

  2. Empirical study of peak-load pricing and investment policies for the domestic market of gas in Great Britain

    Energy Technology Data Exchange (ETDEWEB)

    Tzoannos, J

    1977-06-01

    This paper reports the main results of a study aimed at determining empirically an optimal structure of seasonal tariffs for the domestic sector of gas in Great Britain. The study first involves the development of a peak-load pricing model which maximizes social welfare. The model is then quantified with cost and demand functions which have been statistically estimated for Great Britain using data for town gas. The procedure of estimating these functions is based on an error components model. The seasonal pricing and investment policy resulting from the solution to the above model is then compared with the existing policy for the domestic sector of gas in Great Britain. It is shown that the former is superior to the latter in terms of improvements in social welfare and capacity utilization. 22 references.
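
    As a rough illustration of the welfare logic behind seasonal peak-load pricing, the sketch below applies the textbook firm-peak result: off-peak consumers pay only the marginal operating cost, while peak consumers also carry the capacity cost. The linear demand curves and cost figures are invented and are not the estimates from the study.

    # Textbook peak-load pricing with linear demands Q = a - slope * P.
    b = 2.0      # marginal operating cost per unit of gas (hypothetical)
    beta = 3.0   # marginal capacity cost per unit of peak capacity (hypothetical)

    demand = {
        "winter (peak)":     {"a": 100.0, "slope": 4.0},
        "summer (off-peak)": {"a": 40.0,  "slope": 4.0},
    }

    # Firm-peak case: off-peak price = b, peak price = b + beta.
    prices = {"winter (peak)": b + beta, "summer (off-peak)": b}

    for season, d in demand.items():
        p = prices[season]
        q = d["a"] - d["slope"] * p
        print(f"{season}: price {p:.1f}, quantity {q:.1f}")
    # Capacity is then set by peak-season demand at the peak price.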

  3. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

    LMFBR fuel elements consist of wire-wrapped rod bundles, with triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed in order to obtain the average bundle friction factor, as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the range: P/D = 1.063 - 1.417, H/D = 4 - 50, and are considered satisfactory. (Author) [pt

  4. YOUTH STUDIES – A SPECIFIC GENRE OF THE EMPIRICAL PARADIGM IN SOCIAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Agnė Dorelaitienė

    2017-09-01

    Full Text Available The article presents the situation of youth in contemporary society. The neoliberal economy, an ageing society, rapid globalisation, technological changes and increasing social risk have prompted specific, historically unfamiliar, and fairly difficult to forecast social change. Social adaptation and the construction of one's own identity are becoming challenging for youth as a specific social group in this period of great uncertainty, risk, and opportunity. Youth studies are referred to as one of the means to help understand the youth phenomenon and form the respective policy. The aim of the article is to reveal the role of youth studies as a specific interdisciplinary genre of the empirical-analytic paradigm in the social sciences. Research objectives: (1) to identify the traditions of youth studies and the differences between them; (2) to reveal the specific character of youth studies as an empirical paradigm in the contemporary context. Analysis of scientific sources and document analysis are used to achieve the aim and objectives. Since the 20th century, youth studies have been developing as an independent research discipline and tradition. The perception of the notion of a young person has been changing along with the development of the paradigmatic and methodological research traditions. Modernity has doubtlessly contributed to a young person finding his/her place among other age groups and has put an emphasis on the importance of youth as a specific social group. Recently, youth has been viewed as both a risk group and an opportunity group. Although qualitative research, in which youth emancipation is aspired to, prevails in the contemporary research tradition, the empirical-analytic paradigm has not lost its relevance. The research has demonstrated that the empirical-analytic paradigm is a specific genre of youth studies characterised by a quantitative approach and a strong link to policy and to the practical situation of the phenomenon.

  5. Strategy, Performance Determining Factor of MSMEs: An Empirical Study in Mexico City

    Directory of Open Access Journals (Sweden)

    Francisco Ballina Ríos

    2016-02-01

    Full Text Available The aim of this paper is to analyze the relationship between the strategy used by the company to compete in the market and its performance, considering two different time points: before the economic crisis and in times of crisis. Strategy is measured using the typology of Miles and Snow (1978) and performance using the model proposed by Quinn and Rohrbaugh (1983). For this, an empirical study was conducted on a sample of 983 MSMEs in the Federal District of Mexico. The results show that strategy is an important factor for the development of MSMEs, both in times of growth and of economic crisis, since it significantly influences their growth and profitability. Companies that follow an exploratory strategy, which are those with a greater inclination to innovation, achieve better performance.

  6. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    Double-skin facade (DSF) buildings are being built as an attractive, innovative and energy efficient solution. Nowadays, several design tools are used for assessment of thermal and energy performance of DSF buildings. Existing design tools are well-suited for performance assessment of conventional buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF, performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114, TRNSYS-TUD and BSim), was carried out in the framework of IEA SHC Task 34 / ECBCS Annex 43 "Testing and Validation of Building Energy Simulation Tools". The experimental data for the validation was gathered in a full-scale outdoor test facility. The empirical data sets comprise the key-functioning modes

  7. The hydrodynamic basis of the vacuum cleaner effect in continuous-flow PCNL instruments: an empiric approach and mathematical model.

    Science.gov (United States)

    Mager, R; Balzereit, C; Gust, K; Hüsch, T; Herrmann, T; Nagele, U; Haferkamp, A; Schilling, D

    2016-05-01

    Passive removal of stone fragments in the irrigation stream is one of the characteristics in continuous-flow PCNL instruments. So far the physical principle of this so-called vacuum cleaner effect has not been fully understood yet. The aim of the study was to empirically prove the existence of the vacuum cleaner effect and to develop a physical hypothesis and generate a mathematical model for this phenomenon. In an empiric approach, common low-pressure PCNL instruments and conventional PCNL sheaths were tested using an in vitro model. Flow characteristics were visualized by coloring of irrigation fluid. Influence of irrigation pressure, sheath diameter, sheath design, nephroscope design and position of the nephroscope was assessed. Experiments were digitally recorded for further slow-motion analysis to deduce a physical model. In each tested nephroscope design, we could observe the vacuum cleaner effect. Increase in irrigation pressure and reduction in cross section of sheath sustained the effect. Slow-motion analysis of colored flow revealed a synergism of two effects causing suction and transportation of the stone. For the first time, our model showed a flow reversal in the sheath as an integral part of the origin of the stone transportation during vacuum cleaner effect. The application of Bernoulli's equation provided the explanation of these effects and confirmed our experimental results. We widen the understanding of PCNL with a conclusive physical model, which explains fluid mechanics of the vacuum cleaner effect.
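
    The role of Bernoulli's equation in the proposed explanation can be illustrated with a short calculation: where the irrigation stream narrows and accelerates, the static pressure drops, providing the suction component of the effect. The dimensions and flow rate below are invented for illustration only.

    import math

    RHO = 1000.0  # kg/m^3, irrigation fluid ~ water

    def pressure_drop(flow_lpm, d_wide_mm, d_narrow_mm):
        """Static pressure drop (Pa) between a wide and a narrow cross section
        carrying the same volumetric flow, from Bernoulli:
        p1 + 0.5*rho*v1^2 = p2 + 0.5*rho*v2^2."""
        q = flow_lpm / 1000.0 / 60.0  # m^3/s
        a1 = math.pi * (d_wide_mm / 2000.0) ** 2
        a2 = math.pi * (d_narrow_mm / 2000.0) ** 2
        v1, v2 = q / a1, q / a2
        return 0.5 * RHO * (v2 ** 2 - v1 ** 2)

    # Hypothetical: 0.4 L/min through a narrowing from 5 mm to 2 mm equivalent diameter.
    print(f"dp ~ {pressure_drop(0.4, 5.0, 2.0):.1f} Pa")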

  8. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  9. Empirical Analysis of Closed-Loop Duopoly Advertising Strategies

    OpenAIRE

    Gary M. Erickson

    1992-01-01

    Closed-loop (perfect) equilibria in a Lanchester duopoly differential game of advertising competition are used as the basis for empirical investigation. Two systems of simultaneous nonlinear equations are formed, one from a general Lanchester model and one from a constrained model. Two empirical applications are conducted. In one involving Coca-Cola and Pepsi-Cola, a formal statistical testing procedure is used to detect whether closed-loop equilibrium advertising strategies are used by the c...
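
    The Lanchester advertising model underlying that analysis describes how market share shifts with the rivals' advertising efforts. A minimal simulation sketch of the state dynamics, with made-up effectiveness parameters and constant advertising levels rather than the estimated closed-loop strategies, is:

    import numpy as np

    def simulate_lanchester(x0, a1, a2, beta1, beta2, dt=0.01, horizon=10.0):
        """Market share x of firm 1 under dx/dt = beta1*a1*(1-x) - beta2*a2*x,
        integrated with forward Euler; a1, a2 are constant advertising efforts."""
        steps = int(horizon / dt)
        x = np.empty(steps + 1)
        x[0] = x0
        for k in range(steps):
            x[k + 1] = x[k] + dt * (beta1 * a1 * (1.0 - x[k]) - beta2 * a2 * x[k])
        return x

    # Hypothetical effectiveness parameters and advertising rates.
    shares = simulate_lanchester(x0=0.5, a1=1.2, a2=1.0, beta1=0.4, beta2=0.35)
    print(f"long-run share of firm 1 ~ {shares[-1]:.3f}")
    # Steady state is beta1*a1 / (beta1*a1 + beta2*a2).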

  10. Lessons from empirical studies in product and service variety management.

    Directory of Open Access Journals (Sweden)

    Andrew C.L. Lyons

    2013-07-01

    Full Text Available For many years, a trend for businesses has been to increase market segmentation and extend product and service-variety offerings in order to provide more choice for customers and gain a competitive advantage. However, relatively few variety-related empirical studies have been undertaken. In this research, two empirical studies are presented that address the impact of product and service variety on business and business function performance. In the first (service-variety) study, the focus concerns the relationship between service provision offered by UK-based, third-party logistics (3PL) providers and the operational and financial performance of those providers. Here, the results of a large survey identify the most important services offered by 3PLs and the most important aspects of 3PL operational performance. Also, the research suggests that the range of service variety offered by 3PLs does not directly influence the 3PLs’ financial performance. The second (product-variety) study presents the findings from an analysis of data from 163 manufacturing plants where the impact of product variety on the performance of five business functions is examined. An increase in product variety was found to influence business functions differently depending on the combination of customisation and variety offered to customers

  11. Does Branding Need Web Usability? A Value-Oriented Empirical Study

    Science.gov (United States)

    Bolchini, Davide; Garzotto, Franca; Sorce, Fabio

    Does usability of a web-based communication artifact affect brand, i.e., the set of beliefs, emotions, attitudes, or qualities that people mentally associate to the entity behind that artifact? Intuitively, the answer is “yes”: usability is a fundamental aspect of the quality of the experience with a website, and a “good” experience with a “product” or its reifications tends to translate into “good” brand perception. To date, however, the existence of a connection between web usability and brand perception is shown through anecdotic arguments, and is not supported by published systematic research. This paper discusses a study that empirically investigates this correlation in a more rigorous, analytical, and replicable way. Our main contribution is twofold: on the one hand, we provide empirical evidence to the heuristic principle that web usability influences branding, and we do that through four between subjects controlled experiments that involved 120 subjects. On the other hand, we inform the study with a systematic value-oriented approach to the user experience, and thus provide a conceptual framework that can be reused in other experimental settings, either for replicating our study, or for designing similar studies focusing on the correlation of web branding vs. design factors other than usability.

  12. Empirical modeling of a dewaxing system of lubricant oil using Artificial Neural Network (ANN); Modelagem empirica de um sistema de desparafinacao de oleo lubrificante usando redes neurais artificiais

    Energy Technology Data Exchange (ETDEWEB)

    Fontes, Cristiano Hora de Oliveira; Medeiros, Ana Claudia Gondim de; Silva, Marcone Lopes; Neves, Sergio Bello; Carvalho, Luciene Santos de; Guimaraes, Paulo Roberto Britto; Pereira, Magnus; Vianna, Regina Ferreira [Universidade Salvador (UNIFACS), Salvador, BA (Brazil). Dept. de Engenharia e Arquitetura]. E-mail: paulorbg@unifacs.br; Santos, Nilza Maria Querino dos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)]. E-mail: nilzaq@petrobras.com.br

    2003-07-01

    The MIBK (m-i-b-ketone) dewaxing unit, located at the Landulpho Alves refinery, allows two different operating modes: dewaxing and oil removal. The former comprises an oil-wax separation process, which generates a wax stream with 2 - 5% oil. The latter involves the reprocessing of the wax stream to reduce its oil content. Both involve a two-stage filtration process (primary and secondary) with rotary filters. The general aim of this research is to develop empirical models to predict variables, for both unit-operating modes, to be used in control algorithms, since many data are not available during normal plant operation and therefore need to be estimated. Studies have suggested that the oil content is an essential variable for developing reliable empirical models, and this work is concerned with the development of an empirical model for the prediction of the oil content in the wax stream leaving the primary filters. The model is based on a feed-forward Artificial Neural Network (ANN), and tests with one and two hidden layers indicate very good agreement between experimental and predicted values. (author)
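
    As a schematic counterpart to the network described above, the sketch below fits a small feed-forward network to predict an oil-content target from a few process variables. The variable names and the synthetic data are invented; the original work used plant data and its own topology.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)

    # Hypothetical process variables: feed rate, filtration temperature, solvent ratio.
    X = rng.uniform([10.0, -15.0, 1.5], [40.0, -5.0, 3.5], size=(500, 3))
    # Synthetic "oil content in wax" target with some nonlinearity and noise.
    y = (2.0 + 0.05 * X[:, 0] - 0.1 * X[:, 1] + 0.8 * np.sin(X[:, 2])
         + 0.1 * rng.standard_normal(500))

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print("predicted oil content (%):", model.predict([[25.0, -10.0, 2.5]]).round(2))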

  13. Patient Safety and Satisfaction Drivers in Emergency Departments Re-visited - An Empirical Analysis using Structural Equation Modeling

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Jacobsen, Peter

    2014-01-01

    are entitled safety and satisfaction, waiting time, information delivery, and infrastructure accordingly. As an empirical foundation, a recently published comprehensive survey in 11 Danish EDs is analysed in depth using structural equation modeling (SEM). Consulting the proposed framework, ED decision makers...

  14. An empirically tractable model of optimal oil spills prevention in Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Tsirlin, A. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems

    2001-07-01

    Based on previous theoretical work by Gottinger (1997, 1998), we propose a simple model of optimal monitoring of oil-related activities in harbour areas that is suitable for empirical estimation within the Russian-Ukrainian context, in spite of the poor availability of data in these countries. Specifically, the model indicates how to best allocate at the steady state a given monitoring budget between different monitoring activities. An approximate analytical solution to the optimization problem is derived, and a simple procedure for estimating the model on the basis of the actually available data is suggested. An application using data obtained for several harbours of the Black and Baltic Seas is given. It suggests that the current Russian monitoring practice could be much improved by better allocating the available monitoring resources. (Author)

  15. Beyond Clinical Case Studies in Psychoanalysis: A Review of Psychoanalytic Empirical Single Case Studies Published in ISI-Ranked Journals

    Science.gov (United States)

    Meganck, Reitske; Inslegers, Ruth; Krivzov, Juri; Notaerts, Liza

    2017-01-01

    Single case studies are at the origin of both theory development and research in the field of psychoanalysis and psychotherapy. While clinical case studies are the hallmark of psychoanalytic theory and practice, their scientific value has been strongly criticized. To address problems with the subjective bias of retrospective therapist reports and the uncontrollability of clinical case studies, systematic approaches to investigate psychotherapy process and outcome at the level of the single case have been developed. Such empirical case studies are also able to bridge the famous gap between academic research and clinical practice as they provide clinically relevant insights into how psychotherapy works. This study presents a review of psychoanalytic empirical case studies published in ISI-ranked journals and maps the characteristics of the studies, therapists, patients and therapies that are investigated. Empirical case studies increased in quantity and quality (amount of information and systematization) over time. While future studies could pay more attention to providing contextual information on therapist characteristics and informed consent considerations, the available literature provides a basis to conduct meta-studies of single cases and as such contribute to knowledge aggregation. PMID:29046660

  16. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase of the geomagnetic field data volume available due to many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, required at some point the appropriate qualitative transition in the empirical modeling tools. Classical empirical models, such as T96 and T02, used few custom-tailored modules to represent major magnetospheric current systems and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions as well as by the advanced data-mining algorithms searching for events with the global activity parameters, such as the Sym-H index, similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with the individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires the corresponding transition in the use of the new empirical geomagnetic field models. It becomes more similar to runs-on-request offered by the Community Coordinated Modeling Center for many first principles MHD and kinetic codes. To provide this mode of operation for the TS07D model a new web-based modeling tool has been created and tested at the JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, where the magnetospheric part of the macro-parameters involved in the data-binning process (Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.

  17. An empirical model of L-band scintillation S4 index constructed by using FORMOSAT-3/COSMIC data

    Science.gov (United States)

    Chen, Shih-Ping; Bilitza, Dieter; Liu, Jann-Yenq; Caton, Ronald; Chang, Loren C.; Yeh, Wen-Hao

    2017-09-01

    Modern society relies heavily on the Global Navigation Satellite System (GNSS) technology for applications such as satellite communication, navigation, and positioning on the ground and/or aviation in the troposphere/stratosphere. However, ionospheric scintillations can severely impact GNSS systems and their related applications. In this study, a global empirical ionospheric scintillation model is constructed with S4-index data obtained by the FORMOSAT-3/COSMIC (F3/C) satellites during 2007-2014 (hereafter referred to as the F3CGS4 model). This model describes the S4-index as a function of local time, day of year, dip-latitude, and solar activity using the index PF10.7. The model reproduces the F3/C S4-index observations well, and yields good agreement with ground-based reception of satellite signals. This confirms that the constructed model can be used to forecast global L-band scintillations on the ground and in the near surface atmosphere.

  18. Application of Generalized Student’s T-Distribution In Modeling The Distribution of Empirical Return Rates on Selected Stock Exchange Indexes

    Directory of Open Access Journals (Sweden)

    Purczyńskiz Jan

    2014-07-01

    Full Text Available This paper examines the application of the so-called generalized Student’s t-distribution in modeling the distribution of empirical return rates on selected Warsaw stock exchange indexes. Distribution parameters are estimated by means of the method of logarithmic moments, the maximum likelihood method and the method of moments. The generalized Student’s t-distribution ensures a better fit to the empirical data than the classical Student’s t-distribution.
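
    As a simplified stand-in for the fitting exercise described above, the sketch below fits an ordinary location-scale Student's t-distribution to synthetic return rates by maximum likelihood; the generalized form and the logarithmic-moment estimator used in the paper are not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Synthetic daily return rates with heavy tails (stand-in for index returns).
    returns = stats.t.rvs(df=4.0, loc=0.0005, scale=0.012, size=2500, random_state=rng)

    # Maximum-likelihood fit of a location-scale Student's t.
    df_hat, loc_hat, scale_hat = stats.t.fit(returns)
    print(f"nu ~ {df_hat:.2f}, mu ~ {loc_hat:.5f}, sigma ~ {scale_hat:.5f}")

    # Compare the fit with a normal distribution via log-likelihoods.
    ll_t = np.sum(stats.t.logpdf(returns, df_hat, loc_hat, scale_hat))
    ll_n = np.sum(stats.norm.logpdf(returns, *stats.norm.fit(returns)))
    print(f"log-likelihood: t {ll_t:.1f} vs normal {ll_n:.1f}")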

  19. An Empirical Study of Security Issues Posted in Open Source Projects

    DEFF Research Database (Denmark)

    Zahedi, Mansooreh; Ali Babar, Muhammad; Treude, Christoph

    2018-01-01

    When developers gain thorough understanding and knowledge of software security, they can produce more secure software. This study aims at empirically identifying and understanding the security issues posted on a random sample of GitHub repositories. We tried to understand the presence of security...

  20. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    Science.gov (United States)

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-10-01

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse, and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and parameters that define their behavior and attributes are defined from survey data. Community-level parameters-including social groups, relationships, and communication variables, also from survey data-are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.
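
    A toy version of the opinion-dynamics core of such an agent-based model can be sketched in a few lines: agents hold a continuous attitude toward water reuse, discuss it with random neighbours, move toward sufficiently similar interlocutors, and adopt once their attitude crosses a threshold. The network, parameters and update rule here are illustrative assumptions, not the calibrated "risk publics" model.

    import numpy as np

    rng = np.random.default_rng(42)

    N_AGENTS, N_STEPS = 200, 5000
    CONFIDENCE, STEP, ADOPT_AT = 0.3, 0.1, 0.7

    # Initial attitudes toward water reuse in [0, 1]; a random social network.
    attitude = rng.uniform(0.0, 1.0, N_AGENTS)
    neighbours = [rng.choice(N_AGENTS, size=8, replace=False) for _ in range(N_AGENTS)]

    for _ in range(N_STEPS):
        i = rng.integers(N_AGENTS)
        j = rng.choice(neighbours[i])
        # Bounded-confidence update: only sufficiently similar opinions interact.
        if abs(attitude[i] - attitude[j]) < CONFIDENCE:
            mid = 0.5 * (attitude[i] + attitude[j])
            attitude[i] += STEP * (mid - attitude[i])
            attitude[j] += STEP * (mid - attitude[j])

    adopters = np.mean(attitude > ADOPT_AT)
    print(f"share of households adopting water reuse: {adopters:.2%}")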

  1. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Y. [CancerCare Manitoba (Canada); University of Calgary (Canada); Sommerville, M.; Johnstone, C.D. [San Diego State University (United States); Gräfe, J.; Nygren, I.; Jacso, F. [Tom Baker Cancer Centre (Canada); Khan, R.; Villareal-Barajas, J.E. [University of Calgary (Canada); Tom Baker Cancer Centre (Canada); Tambasco, M. [University of Calgary (Canada); San Diego State University (United States)

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.

  2. Wireless and empire geopolitics radio industry and ionosphere in the British Empire 1918-1939

    CERN Document Server

    Anduaga, Aitor

    2009-01-01

    Although the product of consensus politics, the British Empire was based on communications supremacy and the knowledge of the atmosphere. Focusing on science, industry, government, the military, and education, this book studies the relationship between wireless and Empire throughout the interwar period.

  3. Theory, modeling, and integrated studies in the Arase (ERG) project

    Science.gov (United States)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa

    2018-02-01

    Understanding of underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of the magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand the geospace variations with a focus on the relativistic electron acceleration and loss processes. In order to achieve the goal, the ERG project consists of the three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, GEMSIS-RB and RBW models, CIMI model with global MHD simulation REPPU, GEMSIS-RC model, plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  4. Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model

    Directory of Open Access Journals (Sweden)

    Andreja Križman

    2010-05-01

    Full Text Available The purpose of this paper is to present the research results of a study conducted in the Slovene logistics market of conflicts and opportunism as disturbing factors while examining their impact on cooperation in logistics outsourcing performance. Relationship variables are proposed that directly or indirectly affect logistics performance and conceptualize the hypotheses based on causal linkages for the constructs. On the basis of extant literature and new argumentations that are derived from in-depth interviews of logistics experts, including providers and customers, the measurement and structural models are empirically analyzed. Existing measurement scales for the constructs are slightly modified for this analysis. Purification testing and measurement for validity and reliability are performed. Multivariate statistical methods are utilized and hypotheses are tested. The results show that conflicts have a significantly negative impact on cooperation between customers and logistics service providers (LSPs, while opportunism does not play an important role in these relationships. The observed antecedents of logistics outsourcing performance in the model account for 58.4% of the variance of the goal achievement and 36.5% of the variance of the exceeded goal. KEYWORDS: logistics outsourcing performance; logistics customer–provider relationships; conflicts and cooperation in logistics outsourcing; PLS path modelling

  5. Downside Risk And Empirical Asset Pricing

    NARCIS (Netherlands)

    P. van Vliet (Pim)

    2004-01-01

    textabstractCurrently, the Nobel prize winning Capital Asset Pricing Model (CAPM) celebrates its 40th birthday. Although widely applied in financial management, this model does not fully capture the empirical riskreturn relation of stocks; witness the beta, size, value and momentum effects. These

  6. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  7. Empirical Studies on English Vocabulary Learning Strategies in Mainland China over the Past Two Decades

    OpenAIRE

    Zhongxin Dai; Yao Zhou

    2015-01-01

    Wen and Wang (2004) reviewed the empirical studies over the past two decades (from 1984 to 2003) on learning strategies that Chinese EFL learners used. This article, following their methodological framework, reviews about 45 empirical studies on Chinese EFL learners’ English vocabulary learning strategies, conducted by Mainland Chinese scholars over the past two decades. The review shows that more than half of the Chinese scholars are interested in questionnaire investigation of EFL learners’...

  8. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    The urban public transit system is a typical mixed complex network with dynamic flow, and its evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing’s flow-weighted transit route network (TRN), and we find that Beijing’s TRNs in both 2011 and 2015 exhibit scale-free properties. As such, we propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers’ dynamical behaviors triggered by topological change. In the model, the evolution of the TRN is an iterative process. At each time step, a certain number of new routes are generated driven by travel demands, which leads to dynamical evolution of the new routes’ flow and triggers perturbations in nearby routes that further impact the next round of opening new routes. We present a theoretical analysis based on mean-field theory, as well as a numerical simulation of this model. The results obtained agree well with our empirical analysis, indicating that our model can simulate TRN evolution with scale-free properties for the distributions of node strength and degree. The purpose of this paper is to illustrate the global evolution mechanism of the transit network, which can be used to develop planning and design strategies for real TRNs.
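
    The scale-free claim above can be checked in the simplest possible way by estimating the tail exponent of the node-strength distribution. The sketch below applies the continuous maximum-likelihood (Hill-type) estimator to synthetic strengths; the real analysis would use the observed route flows.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic node strengths with a Pareto (power-law) tail, alpha = 2.5.
    alpha_true, x_min = 2.5, 1.0
    strengths = x_min * (1.0 - rng.uniform(size=2000)) ** (-1.0 / (alpha_true - 1.0))

    def powerlaw_alpha(x, xmin):
        """Continuous MLE for the tail exponent of p(x) ~ x^-alpha, x >= xmin:
        alpha = 1 + n / sum(ln(x_i / xmin))."""
        tail = x[x >= xmin]
        return 1.0 + tail.size / np.sum(np.log(tail / xmin))

    print(f"estimated alpha ~ {powerlaw_alpha(strengths, x_min):.2f} (true {alpha_true})")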

  9. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    Science.gov (United States)

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, the current density 6.0 mA/cm^2, the initial boron concentration 100 mg/L and the solution temperature 293 K. The current density was also an important parameter affecting energy consumption: a high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, since a higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that the energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically; the experimentally obtained values were fitted well by the values predicted from the empirical model, given as [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not the conditions obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  10. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for wireless indoor short-range office environments in the 4.3–7.3 GHz band is presented. The model is developed based on experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes path loss as a function of distance, with a Gaussian random variable X accounting for shadow fading, using linear regression. The path-loss exponent n is fitted as a power function of frequency, and the standard deviation σ of X is likewise modeled as frequency dependent. The presented work should be useful for research on wireless channel characteristics in general indoor short-range environments in the Internet of Things (IoT).
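
    The structure described above is the familiar log-distance shadow-fading form, PL(d) = PL(d0) + 10*n*log10(d/d0) + X, with X Gaussian. A minimal sketch that recovers the exponent n and the shadowing standard deviation from synthetic measurements by linear regression follows; the coefficients are invented and are not those of the published model.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic indoor measurements: distances 1-30 m, true n = 1.8, sigma = 3 dB.
    d0, pl_d0 = 1.0, 45.0
    d = rng.uniform(1.0, 30.0, 400)
    pl = pl_d0 + 10.0 * 1.8 * np.log10(d / d0) + rng.normal(0.0, 3.0, d.size)

    # Linear regression of path loss on 10*log10(d/d0).
    X = np.column_stack([np.ones(d.size), 10.0 * np.log10(d / d0)])
    coef, *_ = np.linalg.lstsq(X, pl, rcond=None)
    pl0_hat, n_hat = coef
    sigma_hat = np.std(pl - X @ coef)
    print(f"PL(d0) ~ {pl0_hat:.1f} dB, n ~ {n_hat:.2f}, sigma ~ {sigma_hat:.2f} dB")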

  11. THE DYNAMIC INTER-RELATIONSHIP BETWEEN OBESITY AND SCHOOL PERFORMANCE: NEW EMPIRICAL EVIDENCE FROM AUSTRALIA.

    Science.gov (United States)

    Nghiem, Son; Hoang, Viet-Ngu; Vu, Xuan-Binh; Wilson, Clevo

    2017-12-04

    This paper proposes a new empirical model for examining the relationship between obesity and school performance using the simultaneous equation modelling approach. The lagged effects of both learning and health outcomes were included to capture both the dynamic and inter-relational aspects of the relationship between obesity and school performance. The empirical application of this study used comprehensive data from the first five waves of the Longitudinal Study of Australian Children (LSAC), which commenced in 2004 (wave 1) and was repeated every two years until 2018. The study sample included 10,000 children, equally divided between two cohorts (infants and children) across Australia. The empirical results show that past learning and obesity status are strongly associated with most indicators of school outcomes, including reading, writing, spelling, grammar and numeracy national tests, and scores from the internationally standardized Peabody Picture Vocabulary Test and the Matrix Reasoning Test. The main findings of this study are robust due to the choice of obesity indicator and estimation methods.

  12. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  13. An Empirical Study on the Preference of Supermarkets with Analytic Hierarchy Process Model

    Science.gov (United States)

    Weng Siew, Lam; Singh, Ranjeet; Singh, Bishan; Weng Hoe, Lam; Kah Fai, Liew

    2018-04-01

    Large-scale retailers are very important to consumers in this fast-paced world. Selecting a desirable market in which to purchase products and services has become a major concern among consumers in their daily lives due to the vast choices available. Therefore, the objective of this paper is to determine the most preferred supermarket among AEON, Jaya Grocer, Tesco, Giant and Econsave by undergraduate students in Malaysia with the Analytic Hierarchy Process (AHP) model. Besides that, this study also aims to determine the priority of the decision criteria in the selection of supermarkets among the undergraduate students with the AHP model. The decision criteria employed in this study are product quality, competitive price, cleanliness, product variety, location, good price labelling, fast checkout and employee courtesy. The results of this study show that AEON is the most preferred supermarket among the students, followed by Jaya Grocer, Tesco, Econsave and Giant, based on the AHP model. Product quality, cleanliness and competitive price are ranked as the top three influential factors in this study. This study is significant because it helps to determine the most preferred supermarket as well as the most influential decision criteria in the preference of supermarkets among undergraduate students with the AHP model.
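
    The AHP computation behind such a ranking reduces to extracting priority weights from a pairwise-comparison matrix, typically via its principal eigenvector, and checking consistency. The 3x3 comparison matrix below is invented for illustration and is far smaller than the eight-criterion problem in the study.

    import numpy as np

    # Hypothetical pairwise comparisons for three criteria
    # (product quality, competitive price, cleanliness) on Saaty's 1-9 scale.
    A = np.array([
        [1.0, 3.0, 2.0],
        [1/3, 1.0, 1/2],
        [1/2, 2.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency ratio: CI = (lambda_max - n) / (n - 1), with RI = 0.58 for n = 3.
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / 0.58
    print("criterion weights:", weights.round(3), "consistency ratio:", round(cr, 3))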

  14. Energy structure, marginal efficiency and substitution rate: An empirical study of China

    International Nuclear Information System (INIS)

    Han Zhiyong; Fan Ying; Jiao Jianling; Yan Jisheng; Wei Yiming

    2007-01-01

    Energy efficiency is an important factor in developing energy policies as it represents the extent to which resources support economic output. In the recent literature, relevant studies have mainly focused on aggregate energy efficiency, but rarely touched on the marginal efficiency of diverse energy resources and their comparative substitution rate. During 1978-2003, China's energy efficiency increased continually and consequently became a hot topic in the contemporary literature. However, there is no empirical study on the relationship between energy structure and energy efficiency. In order to close the gap, this paper reports an empirical study of the impact of China's energy structure on its energy efficiency from 1978 to 2003. The work covers a primary estimation of the marginal efficiency of coal and petroleum in China, as well as the comparative substitution rate. Results indicate that the substitution rate between petroleum and coal is a factor of 5.38

  15. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.; Lopez, A.; Huntingford, C.; Allen, M. R.

    2013-01-01

    This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractional differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
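
    The contrast between the two internal-variability assumptions mentioned above can be sketched as follows (a minimal Python illustration, not the paper's code): a short-memory AR(1) process versus a long-memory fractionally differenced ARFIMA(0,d,0) process, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(phi, n, sigma=1.0):
    """Short-memory null model: x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    e = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def frac_diff_noise(d, n, sigma=1.0, trunc=1000):
    """Long-memory null model ARFIMA(0,d,0), built from the truncated
    MA(inf) expansion of (1-B)^{-d}: psi_0 = 1, psi_j = psi_{j-1}*(j-1+d)/j."""
    psi = np.ones(trunc)
    for j in range(1, trunc):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    e = rng.normal(0.0, sigma, n + trunc)
    return np.convolve(e, psi, mode="full")[trunc - 1: trunc - 1 + n]

n = 1500
for name, x in [("AR(1), phi=0.5", ar1(0.5, n)),
                ("ARFIMA(0,0.3,0)", frac_diff_noise(0.3, n))]:
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(f"{name}: lag-1 autocorrelation = {r1:.2f}")
```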

  17. An Empirical Study on Capital Structure Determinants of Selected ASEAN Countries

    OpenAIRE

    Ngo, Hoang Anh

    2013-01-01

    Capital structure has been a controversial topic for decades. Conflicting arguments in theories and mixed findings in empirical work require further studies on this subject. More importantly, most previous studies have focused on developed countries, and little attention has been paid to emerging economies, especially ASEAN. Therefore, this study attempts to fill the gap by examining the effects of capital structure determinants on different measures of leverage of listed manufacturing companies in se...

  18. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared with the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of the financial crisis. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
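
    A minimal sketch of the Mallows weighting idea is given below (Python), assuming a synthetic series and a small set of nested AR candidate models; the paper's actual implementation may differ. The Mallows criterion C(w) = ||y - Σ_m w_m ŷ_m||² + 2σ̂² Σ_m w_m k_m is minimized over the weight simplex by a crude grid search.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "GDP growth" series following an AR(2) process (illustrative).
n = 120
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.5)

def ar_fit(y, p):
    """OLS fit of an AR(p) model; returns in-sample fitted values and k."""
    m = len(y)
    Y = y[p:]
    X = np.column_stack([np.ones(m - p)] +
                        [y[p - j:m - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return X @ beta, p + 1

pmax = 4
fits = [ar_fit(y, p) for p in range(1, pmax + 1)]
T = n - pmax                                   # common evaluation window
yhat = np.column_stack([f[0][-T:] for f in fits])
k = np.array([f[1] for f in fits], dtype=float)
resid_big = y[-T:] - yhat[:, -1]
sigma2 = resid_big @ resid_big / (T - k[-1])   # sigma^2 from the largest model

def mallows(w):
    """Mallows criterion C(w) = ||y - yhat w||^2 + 2 sigma^2 * w'k."""
    e = y[-T:] - yhat @ w
    return e @ e + 2.0 * sigma2 * (w @ k)

# Grid search over the weight simplex (fine for four candidate models).
best_w, best_c = None, np.inf
grid = np.linspace(0.0, 1.0, 21)
for w1 in grid:
    for w2 in grid:
        for w3 in grid:
            if w1 + w2 + w3 <= 1.0 + 1e-9:
                w = np.array([w1, w2, w3, max(1.0 - w1 - w2 - w3, 0.0)])
                c = mallows(w)
                if c < best_c:
                    best_w, best_c = w, c

print("MMA weights over AR(1)..AR(4):", np.round(best_w, 2))
```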

  19. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...

  20. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  1. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications ABS error is the more natural measure, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that the two metrics not only measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pairwise, spatially explicit comparison of each error component showed that SQ error overstates all error components relative to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
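
    A small Monte Carlo sketch of the squared-error decomposition referred to above (expected SQ error = bias² + variance + noise) is given below, alongside the mean absolute error of the same predictor for comparison; the setup is synthetic, and the ABS-error decomposition derived in the study is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: true value mu, noisy observations y = mu + noise,
# and a simple (deliberately biased, variable) estimator fit on small samples.
mu, noise_sd, n_train, n_mc = 2.0, 1.0, 5, 20000

preds, obs = np.empty(n_mc), np.empty(n_mc)
for i in range(n_mc):
    train = mu + rng.normal(0.0, noise_sd, n_train)
    preds[i] = 0.8 * train.mean()              # biased, sample-dependent estimate
    obs[i] = mu + rng.normal(0.0, noise_sd)    # fresh observation to predict

bias2 = (preds.mean() - mu) ** 2
variance = preds.var()
noise = noise_sd ** 2

sq_error = np.mean((obs - preds) ** 2)
abs_error = np.mean(np.abs(obs - preds))

print("bias^2 + variance + noise =", round(bias2 + variance + noise, 3))
print("mean squared error        =", round(sq_error, 3))
print("mean absolute error       =", round(abs_error, 3))
```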

  2. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including the coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of these notes concentrates on the use of the code in practical calculations, with emphasis on the aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  3. A comparative study of semi-empirical interionic potentials for alkali halides - II

    International Nuclear Information System (INIS)

    Khwaja, F.A.; Naqvi, S.H.

    1985-08-01

    A comprehensive study of some semi-empirical interionic potentials is carried out through the calculation of the cohesive energy, relative stability and pressure induced solid-solid phase transformations in alkali halides. The theoretical values of these properties of the alkali halides are obtained using a new set of van der Waals coefficients and zero-point energy in the expression for interionic potential. From the comparison of the present calculations with some previous sophisticated ab-initio quantum-mechanical calculations and other semi-empirical approaches, it is concluded that the present calculations in the simplest central pairwise interaction description with the new values of the van der Waals coefficients and zero-point energy are in better agreement with the experimental data than the previous calculations. It is also concluded that in some cases the better choice of the interionic potential alone in the simplest semi-empirical picture of interaction gives an agreement of the theoretical predictions with the experimental data much superior to the ab-initio quantum mechanical approaches. (author)

  4. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 to 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ˜1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  5. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)n-NH2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Out of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems to be most promising for identifying major secondary structure types: α-helix, β-strand, left-handed helix (αD), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein and thus they should provide useful constraints for structure determinations of proteins.

  6. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain result in a significant spread in the single bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch’s position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios, resulting in different peak and integrated luminosities. The most important options for after the long shutdown (LS) 1 and 2 are evaluated and compared.

  7. Empirical study on how social media promotes product innovation

    OpenAIRE

    Idota, Hiroki; Bunno, Teruyuki; Tsuji, Masatsugu

    2014-01-01

    Social media such as SNS, Twitter, and blogs have been spreading all over the world, and a large number of firms recognize social media as new communication tools for obtaining information on consumer needs and markets, for developing new goods and services, and for promoting marketing. In spite of its increasing use in practice, academic research on whether or how social media contributes to promoting product innovation is still insufficient. This study thus attempts to analyze empirically how so...

  8. Economic differences among regional public service broadcasters in Spain according to their management model. An empirical analysis for period 2010-2013

    Directory of Open Access Journals (Sweden)

    Víctor Orive Serrano

    2016-03-01

    Purpose: This piece of research quantifies and empirically analyses the economic differences among regional public service television broadcasters in Spain according to the adopted management model (classic or outsourced). Design/methodology/approach: To this end, a comparison of the means of different economic variables studied in the literature is conducted (audience share, total assets, public subsidies, cost of personnel, supplier spending and profit after taxes). In addition, these variables are related so as to calculate the productivity obtained by each of the two groups of television operators. This analysis is conducted for the period 2010-2013, which was marked by a crisis in the Spanish economy. Findings: The management model adopted by each regional broadcaster affects different economic variables such as audience share, total assets, public subsidies, cost of personnel, supplier spending and profit after taxes. Moreover, those public corporations adopting an outsourced management model present better productivity values. Research limitations/implications: Only one country has been analyzed, for a period of four years. Practical implications: Regional public service broadcasters with an outsourced model present smaller economic losses and require fewer public subsidies from their corresponding regional governments. Social implications: Outsourcing part of the value chain can be useful to guarantee the sustainability of regional public service television. Originality/value: It has been proven empirically that the management model of a regional public service television broadcaster affects its economic results.

  9. Network Analysis Approach to Stroke Care and Assistance Provision: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Szczygiel Nina

    2017-06-01

    This study models and analyses stroke care and assistance provision in the Portuguese context from a network perspective, using network theory as its theoretical foundation. The model proposed by Frey et al. (2006) was used to elicit and comprehend possible interactions and relations between organisations expected to be involved in the provision of care and assistance to stroke patients on their pathway to rehabilitation. Providers were identified and contacted to evaluate the nature and intensity of relationships. Network analysis was performed with the NodeXL software package. Analysis of 509 entities based on about 260 000 entries indicates that stroke care provision in the evaluated context is best captured in the coalition-collaboration setting, which appears to best demonstrate the character of the network. Information from the analysis of the collaboration stage was not sufficient to determine the network dynamics. The study applies network theory to understand the interorganisational dynamics of a complex health care context, empirically validates the model proposed by Frey et al. (2006) in terms of its operationalisation and the way it reflects the practical context, and examines interorganisational relationships and their contribution to the management of a compound health care context involving actors from various sectors.

  10. Motivation and Emotions in Competition Systems for Education: An Empirical Study

    Science.gov (United States)

    Muñoz-Merino, Pedro J.; Molina, Manuel Fernández; Muñoz-Organero, Mario; Kloos, Carlos Delgado

    2014-01-01

    A lack of student motivation is a problem in many courses in electrical engineering. Introducing competition between students can enhance their motivation, but it can also generate negative emotions. This paper presents an empirical study of students in a telecommunications engineering degree; it measured their level of motivation, and their…

  11. Empirical study on the feasibility of measures for public self-protection capability enhancement; Empirische Untersuchung der Realisierbarkeit von Massnahmen zur Erhoehung der Selbstschutzfaehigkeit der Bevoelkerung

    Energy Technology Data Exchange (ETDEWEB)

    Goersch, Henning G.; Werner, Ute

    2011-07-01

    The empirical study on the feasibility of measures for public self-protection capability enhancement covers the following issues in several sections: (1) Introduction: scope and structure of the study. (2) Issue coherence: self-protection; reduction and prevention of damage through personal emergency preparedness; personal emergency preparedness in Germany. (3) Solution coherence: scientific approaches; development of practical problem-solving approaches; proposal of a promotion system. (4) Empirical studies: evaluation of the promotion system by experts; a survey of the public; a Delphi study on minimum standards in emergency preparedness; local networks in emergency preparedness. (5) Evaluation of models for personal emergency preparedness (M3P). (6) Integration of all research results into the emergency preparedness approach: scope, recommendations, conclusions.

  12. Statistical microeconomics and commodity prices: theory and empirical results.

    Science.gov (United States)

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices and a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied, and their autocorrelation for up to 300 days is described by the model to an accuracy of R² > 0.90 using only six parameters. © 2015 The Author(s).

  13. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasts for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model as benchmarks. Empirical results show that the proposed model produced better forecasts than the benchmark models.

  14. Investigating the empirical support for therapeutic targets proposed by the temporal experience of pleasure model in schizophrenia: A systematic review.

    Science.gov (United States)

    Edwards, Clementine J; Cella, Matteo; Tarrier, Nicholas; Wykes, Til

    2015-10-01

    Anhedonia and amotivation are substantial predictors of poor functional outcomes in people with schizophrenia and often present a formidable barrier to returning to work or building relationships. The Temporal Experience of Pleasure Model proposes constructs which should be considered therapeutic targets for these symptoms in schizophrenia e.g. anticipatory pleasure, memory, executive functions, motivation and behaviours related to the activity. Recent reviews have highlighted the need for a clear evidence base to drive the development of targeted interventions. To review systematically the empirical evidence for each TEP model component and propose evidence-based therapeutic targets for anhedonia and amotivation in schizophrenia. Following PRISMA guidelines, PubMed and PsycInfo were searched using the terms "schizophrenia" and "anhedonia". Studies were included if they measured anhedonia and participants had a diagnosis of schizophrenia. The methodology, measures and main findings from each study were extracted and critically summarised for each TEP model construct. 80 independent studies were reviewed and executive functions, emotional memory and the translation of motivation into actions are highlighted as key deficits with a strong evidence base in people with schizophrenia. However, there are many relationships that are unclear because the empirical work is limited by over-general tasks and measures. Promising methods for research which have more ecological validity include experience sampling and behavioural tasks assessing motivation. Specific adaptations to Cognitive Remediation Therapy, Cognitive Behavioural Therapy and the utilisation of mobile technology to enhance representations and emotional memory are recommended for future development. Copyright © 2015. Published by Elsevier B.V.

  15. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

    This paper presents some experiences in modeling financial data with three classes of models as alternatives to Gaussian linear models. Dynamic volatility, stable Lévy and jump-diffusion models are considered. The techniques are illustrated with some examples of financial series on currencies, futures and indexes.

  16. An Investigation of Justice in Supply Chain Trust and Relationship Commitment - An Empirical Study of Pakistan

    Directory of Open Access Journals (Sweden)

    Ziaullah Muhammad

    2015-03-01

    In recent years supply chain integration (SCI) has received increasing attention from scholars and practitioners. However, our knowledge of what influences the supply chain integration practice of relationship commitment is still very limited. The objective of this study is to investigate the relationship among supply chain justice (procedural, distributive and interactional), trust and inter-firm relationship commitment in mainland Pakistan. The research variables have considerable importance in the supply chain management (SCM) literature. The conceptual model comprises five hypotheses, which are tested via an empirical study in which data are collected from 170 manufacturers, distributors, suppliers and retailers across the mainstream spectrum of industries in Pakistan. We used exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to examine the validity and reliability of the measurement model, and structural equation modeling (SEM) to test the hypotheses. The findings indicate that supply chain justice can develop relationship commitment (affective and continuance) by establishing trust among supply chain partners. Moreover, this study reveals interesting and useful implications of supply chain justice, trust and relationship commitment for practitioners.

  17. Modelling of volumetric properties of binary and ternary mixtures by CEOS, CEOS/GE and empirical models

    Directory of Open Access Journals (Sweden)

    BOJAN D. DJORDJEVIC

    2007-12-01

    Although many cubic equations of state coupled with van der Waals one-fluid mixing rules including temperature-dependent interaction parameters are sufficient for representing phase equilibria and excess properties (excess molar enthalpy H^E, excess molar volume V^E, etc.), difficulties appear in the correlation and prediction of thermodynamic properties of complex mixtures over various temperature and pressure ranges. Great progress has been made by a new approach based on CEOS/GE models. This paper reviews the progress achieved over the last six years in modelling the volumetric properties of complex binary and ternary systems of non-electrolytes by the CEOS and CEOS/GE approaches. In addition, the vdW1 and TCBT models were used to estimate the excess molar volume V^E of the ternary systems methanol + chloroform + benzene and 1-propanol + chloroform + benzene, as well as the corresponding binaries methanol + chloroform, chloroform + benzene, 1-propanol + chloroform and 1-propanol + benzene at 288.15-313.15 K and atmospheric pressure. Also, prediction of V^E for both ternaries by empirical models (Radojković, Kohler, Jacob-Fitzner, Colinet, Tsao-Smith, Toop, Scatchard, Rastogi) was performed.

  18. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
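
    The kind of empirical relation described above, with the refractive index linear in concentration and temperature at a fixed wavelength, can be sketched as a simple least-squares fit; the data and coefficients below are synthetic and illustrative, not the paper's measurements or its extension of the Barer formula.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measurements": concentration C (g/dL), temperature T (deg C),
# and refractive index n generated from an assumed linear relation plus noise.
C = rng.uniform(0.0, 15.0, 60)
T = rng.uniform(20.0, 40.0, 60)
n_true = 1.335 + 0.0018 * C - 0.0001 * T          # illustrative coefficients
n_meas = n_true + rng.normal(0.0, 5e-5, C.size)

# Least-squares fit of n = a + b*C + c*T at a fixed wavelength.
X = np.column_stack([np.ones_like(C), C, T])
coef, *_ = np.linalg.lstsq(X, n_meas, rcond=None)
a, b, c = coef
print(f"n ~ {a:.4f} + {b:.5f}*C {c:+.6f}*T")

# Predict the refractive index of a new sample (illustrative values).
print("predicted n:", round(a + b * 8.0 + c * 37.0, 5))
```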

  19. An empirical model of the Earth's bow shock based on an artificial neural network

    Science.gov (United States)

    Pallocchia, Giuseppe; Ambrosino, Danila; Trenchi, Lorenzo

    2014-05-01

    All past empirical models of the Earth's bow shock shape were obtained by best-fitting given surfaces to sets of observed crossings. However, the issue of bow shock modeling can be addressed by means of artificial neural networks (ANN) as well. In this regard, we present a perceptron, a simple feedforward network, which computes the bow shock distance along a given direction using the two angular coordinates of that direction, the bow shock distance RF79 predicted by Formisano's model (F79), and the upstream Alfvénic Mach number Ma. After a brief description of the ANN architecture and training method, we discuss the results of a statistical comparison, performed over a test set of 1140 IMP8 crossings, between the prediction accuracies of the ANN and F79 models.
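
    A minimal numpy sketch of a perceptron of this general shape is given below: one hidden layer, inputs [theta, phi, R_F79, Ma], and a scalar distance output, trained by gradient descent on synthetic stand-in data rather than IMP8 crossings; the layer size, weights and target function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for observed crossings: inputs are the two angular
# coordinates, the F79-predicted distance and the Alfvenic Mach number;
# the "observed" distance is an arbitrary smooth function of them.
N = 500
X = np.column_stack([
    rng.uniform(0.0, np.pi, N),        # theta
    rng.uniform(-np.pi, np.pi, N),     # phi
    rng.uniform(10.0, 30.0, N),        # R_F79 (illustrative units)
    rng.uniform(2.0, 12.0, N),         # Ma
])
y = X[:, 2] * (1.0 + 0.05 * np.cos(X[:, 0]) - 0.3 / X[:, 3])

# Standardise inputs and target, then train a one-hidden-layer perceptron
# by full-batch gradient descent on the squared error.
Xs = (X - X.mean(0)) / X.std(0)
ym, ysd = y.mean(), y.std()
yn = (y - ym) / ysd

H, lr = 8, 0.1
W1, b1 = rng.normal(0.0, 0.5, (4, H)), np.zeros(H)
W2, b2 = rng.normal(0.0, 0.5, H), 0.0
for _ in range(3000):
    h = np.tanh(Xs @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                 # linear output (standardised distance)
    err = pred - yn
    gW2, gb2 = h.T @ err / N, err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1, gb1 = Xs.T @ gh / N, gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((pred * ysd + ym - y) ** 2))
print("training RMSE (same units as the distance):", round(rmse, 3))
```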

  20. Empirical Verification of Fault Models for FPGAs Operating in the Subcritical Voltage Region

    DEFF Research Database (Denmark)

    Birklykke, Alex Aaen; Koch, Peter; Prasad, Ramjee

    2013-01-01

    We present a rigorous empirical study of the bit-level error behavior of field programmable gate arrays operating in the subcritical voltage region. This region is of significant interest as voltage scaling under normal circumstances is halted by the first occurrence of errors. However, accurate...

  1. College Education and Attitudes toward Democracy in China: An Empirical Study

    Science.gov (United States)

    Wang, Gang; Wu, Liyun; Han, Rongbin

    2015-01-01

    The modernization theory contends that there is a link between education and democracy. Yet few empirical studies have been done to investigate the role of higher education on promoting democratic values in the Chinese context. Using China General Social Survey 2006, this paper generates several findings which are not completely consistent with…

  2. Empirical complexities in the genetic foundations of lethal mutagenesis.

    Science.gov (United States)

    Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J

    2013-10-01

    From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias (mutagenesis of virions), but find that it is hampered by other problems. Finally, results are derived that reveal difficulties in the mere interpretation of mutations assayed from double-stranded genomes. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than in a mismatch between the empirical setup and model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.

  3. THE "MAN INCULTS" AND PACIFICATION DURING BRAZILIAN EMPIRE: A MODEL OF HISTORICAL INTERPRETATION BUILT FROM THE APPROACH TO HUMAN RIGHTS

    Directory of Open Access Journals (Sweden)

    José Ernesto Pimentel Filho

    2011-06-01

    The construction of peace in the Empire of Brazil was one of the ways in which the dominant sectors of imperial society monopolized public space. On the one hand, the Empire built an urban sociability based on patriarchal relations. On the other hand, the Empire was struggling against all forms of disorder and social deviance, as in a diptych image. The centres of that peace were the provincial capitals. We discuss here how to construct a model for approaching the mentality of combating crime in rural areas, shaped by patriarchal minds, in nineteenth-century Brazil. For this purpose, the case of Ceará has been chosen. A historical hermeneutic is applied to understand the role of poor white men in the social life of the Empire of Brazil. We observe that education, when associated with morality, was seen as able to modify violent behaviour and to shape individual attitudes towards justice and punishment policy. Discrimination and stereotypes are part of our interpretation, as a contribution to the debate on human rights in the history of Brazil.

  4. Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data

    Science.gov (United States)

    Hedin, A. E.; Spencer, N. W.; Killeen, T. L.

    1988-01-01

    Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to above approximately 220 km where the data coverage is best and wind variations with height are reduced by viscosity. The data base is not adequate to detect solar cycle (F10.7) effects at this time but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not to full detail.

  5. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  6. Monthly and Fortnightly Tidal Variations of the Earth's Rotation Rate Predicted by a TOPEX/POSEIDON Empirical Ocean Tide Model

    Science.gov (United States)

    Desai, S.; Wahr, J.

    1998-01-01

    Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.

  7. Franchised fast food brands: An empirical study of factors influencing growth

    OpenAIRE

    Christopher A. Wingrove; Boris Urban

    2017-01-01

    Orientation: Franchising is a popular and multifaceted business arrangement that captures a sizeable portion of the restaurant industry worldwide. Research purpose: The study empirically investigated the influence of various site location and branding factors on the growth of franchised fast food restaurant brands across the greater Gauteng region. Motivation of the study: Researching which factors influence the growth of franchised fast food restaurant brands is important for an emer...

  8. Flexible Modeling of Epidemics with an Empirical Bayes Framework

    Science.gov (United States)

    Brooks, Logan C.; Farrow, David C.; Hyun, Sangwon; Tibshirani, Ryan J.; Rosenfeld, Roni

    2015-01-01

    Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic’s behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the “Predict the Influenza Season Challenge”, with the task of predicting key epidemiological measures for the 2013–2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctors visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to some other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013–2014 U.S. influenza season, and compare the framework’s cross-validated prediction error on historical data to
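
    The idea sketched above, generating candidate epidemic curves by modifying historical seasons and weighting them against the observations seen so far, can be illustrated roughly as follows; the curves, noise levels and weighting scheme are synthetic assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
weeks = np.arange(30)

# Stand-ins for historical %ILI curves from previous seasons (illustrative
# Gaussian-shaped epidemics with different peaks and timing).
history = [a * np.exp(-0.5 * ((weeks - p) / w) ** 2)
           for a, p, w in [(3.0, 14, 4), (4.5, 17, 5), (2.5, 12, 3)]]

def candidate():
    """Prior draw: a past season with a random shift in timing and a
    rescaled intensity (a crude version of 'modified historical curves')."""
    base = history[rng.integers(len(history))]
    shift = rng.integers(-3, 4)
    scale = rng.uniform(0.7, 1.4)
    return scale * np.interp(weeks, weeks + shift, base)

candidates = np.array([candidate() for _ in range(5000)])

# "Observed" first 10 weeks of the current season (synthetic), with noise.
truth = 3.8 * np.exp(-0.5 * ((weeks - 16) / 4.5) ** 2)
obs = truth[:10] + rng.normal(0.0, 0.2, 10)

# Empirical-Bayes style weighting: Gaussian likelihood of the observed
# partial season under each candidate curve.
sigma = 0.3
loglik = -0.5 * np.sum((candidates[:, :10] - obs) ** 2, axis=1) / sigma ** 2
w = np.exp(loglik - loglik.max())
w /= w.sum()

post_mean = w @ candidates
print("posterior-mean peak week:", int(weeks[np.argmax(post_mean)]))
print("posterior-mean peak height:", round(post_mean.max(), 2))
```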

  9. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    The toolbox for empirically exploring the ways that artistic endeavors convey and activate meaning on the part of performers and audiences continues to expand. Current work employing methods at the intersection of performance studies, philosophy, motion capture and neuroscience to better understand musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member.

  10. The frontiers of empirical science: A Thomist-inspired critique of ...

    African Journals Online (AJOL)

    2016-07-08

    Jul 8, 2016 ... of scientism, is, however, self-destructive of scientism because contrary to its ... The theory that only empirical facts have epistemic meaning is supported by the ... (2002:1436). The cyclic model lacks empirical verification.

  11. Empirical testing of Kotler's high-performance factors to increase sales growth

    Directory of Open Access Journals (Sweden)

    Oren Dayan

    2010-12-01

    Purpose and/or objectives: The primary objective of this study is to empirically test Kotler's (2003) high-performance model, which is intended to ensure an increase in sales growth. More specifically, the study explores the influence of process variables (as measured by marketing strategies), resources management (as measured by the management of labour, materials, machines, information technology and energy) and organisational variables (as measured by TQM and organisational culture) on sales growth in the food, motorcar and high-technology manufacturing industries. Problem investigated: Various research studies suggest that the managers of firms are continuously challenged in their attempts to increase their sales (Morre, 2007; Pauwels, Silva Risso, Srinivasan & Hanssens, 2004: 142-143; Gray & Hayes, 2007: 1). Kotler (2003) suggests a model that leads to a high-performing business. The question is posed as to whether this model can be used to increase sales growth in all businesses. This study seeks to develop a generic model to increase sales growth across industries by using an adapted version of Kotler's (2003) high-performance model. The study investigates the application of this adapted model to the food, motorcar and high-technology manufacturing industries. Design and/or methodology and/or approach: An empirical causal research design that includes 770 marketing and product development practitioners from multinational food, motorcar and high-technology manufacturing firms was used in this study. A response rate of 76.1% was achieved, as only 571 useable questionnaires were returned. The internal reliability and discriminant validity of the measuring instrument were assessed by calculating Cronbach alpha coefficients and conducting an exploratory factor analysis, respectively. Structural equation modelling (SEM) was used to statistically test the relationships between the independent variables (marketing strategies, resource management, TQM and

  12. Business Contingency, Strategy Formation, and Firm Performance: An Empirical Study of Chinese Apparel SMEs

    Directory of Open Access Journals (Sweden)

    Ting Chi

    2015-03-01

    This study empirically investigated how small and medium-sized Chinese apparel enterprises (SMEs) formed their strategy as a response to the characteristics of the business environment in order to achieve competitive business performance. An environment-strategy-performance model was proposed and tested. Using primary data gathered by a questionnaire survey of the Chinese apparel industry, factor analysis and structural equation modeling (SEM) were conducted for measurement and structural model analysis and hypothesis testing. Results show the proposed model met parsimonious statistical criteria. The differences in strategic responses to the environment between high- and low-performing firms were striking. Confronting an increasingly turbulent business environment, high performers emphasized a differentiation strategy through higher quality, better delivery performance, and greater flexibility rather than cost reduction. In contrast, low performers prioritized low cost, while quality and flexibility were given certain weights. The lack of a clear strategic focus could result in relatively lower performance. While the process of government-led industrial upgrading continues, forward-looking firms have proactively shifted their strategic focus from solely or mainly cost reduction to a variety of differentiating factors which bring added value and are less imitable by competitors.

  13. An empirical model for trip distribution of commuters in the Netherlands: Transferability in time and space reconsidered.

    NARCIS (Netherlands)

    Thomas, Tom; Tutert, Bas

    2013-01-01

    In this paper, we evaluate the distribution of commute trips in the Netherlands, to assess its transferability in space and time. We used Dutch Travel Surveys from 1995 and 2004-2008 to estimate the empirical distribution from a spatial interaction model as a function of travel time and distance. We

  14. Empirical psychology, common sense, and Kant's empirical markers for moral responsibility.

    Science.gov (United States)

    Frierson, Patrick

    2008-12-01

    This paper explains the empirical markers by which Kant thinks that one can identify moral responsibility. After explaining the problem of discerning such markers within a Kantian framework, I briefly explain Kant's empirical psychology. I then argue that Kant's empirical markers for moral responsibility, linked to higher faculties of cognition, are not sufficient conditions for moral responsibility, primarily because they are empirical characteristics subject to natural laws. Next, I argue that these markers are not necessary conditions of moral responsibility. Given Kant's transcendental idealism, even an entity that lacks these markers could be free and morally responsible, although as a matter of fact Kant thinks that none are. Given that they are neither necessary nor sufficient conditions, I discuss the status of Kant's claim that higher faculties are empirical markers of moral responsibility. Drawing on connections between Kant's ethical theory and 'common rational cognition' (4:393), I suggest that Kant's theory of empirical markers can be traced to ordinary common sense beliefs about responsibility. This suggestion helps explain both why empirical markers are important and what the limits of empirical psychology are within Kant's account of moral responsibility.

  15. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

    As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases.

  16. An Empirical Study of Gender Gap in Children Schooling in Nigeria ...

    African Journals Online (AJOL)

    An Empirical Study of Gender Gap in Children Schooling in Nigeria. Olanrewaju Olaniyan. African Journal of Economic Policy Vol 10(1) 2003: 117-131. http://dx.doi.org/10.4314/ajep.v10i1.24245

  17. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
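
    The four comparison indices listed above can be written down compactly; the sketch below (Python) assumes arrays of observed and estimated daily radiation and computes RMSE, MAE, MARE (as a percentage) and R2 as the coefficient of determination, which may differ slightly from the paper's exact definitions.

```python
import numpy as np

def evaluate(obs, est):
    """Return RMSE, MAE, MARE (%) and determination coefficient R^2."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    err = est - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mare = 100.0 * np.mean(np.abs(err) / obs)      # assumes obs > 0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return rmse, mae, mare, r2

# Illustrative daily solar radiation values (MJ m-2 day-1).
observed = [18.2, 22.5, 25.1, 20.4, 15.8, 27.3]
estimated = [17.5, 23.0, 24.2, 21.6, 16.9, 26.1]
print([round(v, 3) for v in evaluate(observed, estimated)])
```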

  18. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

    Machining of steel with a hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to utilize appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation and progressive flank wear have been depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be used according to user requirements in hard turning.

  19. Project size and common pool size: An empirical test using Danish municipal mergers

    DEFF Research Database (Denmark)

    Hansen, Sune Welling

    The paper examines the proposition that project size tends to increase with common pool size, from the law of 1 over n (Weingast et al., 1981). This remains under-investigated, and a recent study by Primo & Snyder (2008) argues, and empirically substantiates, a reverse law of 1 over n ... across two research designs, two outcome variables, two subsamples, and several model specifications. The implications of the findings, combined with the limited potential for empirically testing Primo & Snyder's alternative model, suggest a re-appreciation of the law of 1 over n as it was originally...

  20. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed by fitting monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of the annual mean wind and the amplitudes and phases of the annual and semiannual harmonics of the wind variations, are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of a comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of an annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)
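
    A minimal sketch of extracting the annual mean and the annual and semiannual harmonics from monthly mean winds by least squares is given below; the monthly values are illustrative stand-ins, not the model's data.

```python
import numpy as np

# Illustrative monthly mean zonal winds (m/s) at one height/latitude bin.
u = np.array([8., 6., 2., -3., -7., -9., -8., -4., 1., 5., 9., 10.])
t = np.arange(12) + 0.5                  # month centres

w = 2.0 * np.pi / 12.0
X = np.column_stack([
    np.ones(12),
    np.cos(w * t), np.sin(w * t),        # annual harmonic
    np.cos(2 * w * t), np.sin(2 * w * t) # semiannual harmonic
])
c, *_ = np.linalg.lstsq(X, u, rcond=None)

annual_mean = c[0]
amp1, ph1 = np.hypot(c[1], c[2]), np.arctan2(c[2], c[1])
amp2, ph2 = np.hypot(c[3], c[4]), np.arctan2(c[4], c[3])

print(f"annual mean: {annual_mean:.1f} m/s")
print(f"annual harmonic: amplitude {amp1:.1f} m/s, phase {np.degrees(ph1):.0f} deg")
print(f"semiannual harmonic: amplitude {amp2:.1f} m/s, phase {np.degrees(ph2):.0f} deg")
```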