WorldWideScience

Sample records for extending empirical models

  1. Development and empirical exploration of an extended model of intragroup conflict

    OpenAIRE

    Hjertø, Kjell B.; Kuvaas, Bård

    2009-01-01

    This is the post-print of the article published in the International Journal of Conflict Management. Purpose - The purpose of this study was to develop and empirically explore a model of four intragroup conflict types (the 4IC model), consisting of an emotional person, a cognitive task, an emotional task, and a cognitive person conflict. The first two conflict types are similar to existing conceptualizations, whereas the latter two represent new dimensions of group conflict. Design/m...

  2. An extended technology acceptance model for detecting influencing factors: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Mohamd Hakkak

    2013-11-01

    Full Text Available The rapid diffusion of the Internet has radically changed the delivery channels applied by the financial services industry. The aim of this study is to identify the influencing factors that encourage customers to adopt online banking in Khorramabad. The research constructs are developed based on the technology acceptance model (TAM) and incorporate some additional important control variables. The model is empirically verified to study the factors influencing the online banking adoption behavior of 210 customers of Tejarat Banks in Khorramabad. The findings of the study suggest that the quality of the internet connection, the awareness of online banking and its benefits, social influence and computer self-efficacy have significant impacts on the perceived usefulness (PU) and perceived ease of use (PEOU) of online banking acceptance. Trust and resistance to change also have a significant impact on the attitude towards adopting online banking.

  3. University staff adoption of iPads: An empirical study using an extended TAM model

    Directory of Open Access Journals (Sweden)

    Michael Steven Lane

    2014-11-01

    Full Text Available This research examined key factors influencing the adoption of iPads by university staff. An online survey collected quantitative data to test hypothesised relationships in an extended TAM model. The findings show that university staff consider iPads easy to use and useful, with a high level of compatibility with their work. Social status had no influence on their attitude to using an iPad. However, older university staff and staff with no previous experience in using a similar technology, such as an iPhone or smartphone, found iPads less easy to use. Furthermore, a lack of formal end user ICT support impacted negatively on the use of iPads.

  4. EXTENDED CRITICAL SUCCESS FACTOR MODEL FOR MANAGEMENT OF MULTIPLE PROJECTS: AN EMPIRICAL VIEW FROM TRANSNET IN SOUTH AFRICA

    Directory of Open Access Journals (Sweden)

    J.M. Nethathe

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Transnet Freight Rail in South Africa has faced project delays in its multi-project environment. This study takes South Africa as representative of developing countries, and develops a Critical Success Factors (CSFs) model for multiple-project success, with the goal of expanding the conventional model by adding the demographic characteristics of the business units involved in the multiple projects. The empirical results show that the greatest number of success factors are people-related, with the focus on team selection and team commitment. Two demographic characteristics are of importance when managing multiple projects: the size of the business unit, and the employees’ project experience.

    AFRIKAANS ABSTRACT (translated): Transnet, a rail freight entity in South Africa, regularly experiences project delays in its multi-project environment. South Africa is used in the study as an example of a developing country, and the study develops a set of success factors for a multi-project environment by adapting an existing conventional model to also incorporate the demographic characteristics of the different business units involved in the organisation. The results of the study show that the greatest number of success factors are people-oriented, with the focus on the composition and commitment of the project teams involved. Two demographic characteristics are important in the management of multiple projects, namely the size of the business unit and the project experience of the employees.

  5. An extended technicolor model

    International Nuclear Information System (INIS)

    Appelquist, T.; Terning, J.

    1994-01-01

    An extended technicolor model is constructed. Quark and lepton masses, spontaneous CP violation, and precision electroweak measurements are discussed. Dynamical symmetry breaking is analyzed using the concept of the big MAC (most attractive channel)

  6. Empirical Vector Autoregressive Modeling

    NARCIS (Netherlands)

    M. Ooms (Marius)

    1993-01-01

    textabstractChapter 2 introduces the baseline version of the VAR model, with its basic statistical assumptions that we examine in the sequel. We first check whether the variables in the VAR can be transformed to meet these assumptions. We analyze the univariate characteristics of the series.

  7. Extended Rayleigh Damping Model

    Directory of Open Access Journals (Sweden)

    Naohiro Nakamura

    2016-07-01

    Full Text Available In dynamic analysis, frequency domain analysis can be used if the entire structure is linear. However, time history analysis is generally used if nonlinear elements are present. Rayleigh damping has been widely used in time history response analysis. Many articles have reported the problems associated with this damping and suggested remedies. A basic problem is that the frequency area across which the damping ratio is almost constant is too narrow. If the area could be expanded while incurring only a small increase in computational cost, this would provide an appropriate remedy for this problem. In this study, a novel damping model capable of expanding the constant frequency area by more than five times was proposed based on the study of a causal damping model. This model was constructed by adding two terms to the Rayleigh damping model and can be applied to the linear elements in the time history analysis of a nonlinear structure. The accuracy and efficiency of the model were confirmed using example analyses.
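
    For orientation, classical Rayleigh damping takes C = αM + βK, giving a modal damping ratio ξ(ω) = α/(2ω) + βω/2 that equals the target value only at the two frequencies used to fit α and β and drifts away elsewhere. The sketch below is a minimal illustration of that classical behaviour, not the extended four-term model proposed above; the target ratio and the fitting frequencies are assumed for the example.

      # Classical Rayleigh damping ratio versus frequency (illustrative parameters;
      # the paper's two extra terms are not implemented here).
      import numpy as np

      xi_target = 0.05                             # assumed target damping ratio
      w1, w2 = 2 * np.pi * 1.0, 2 * np.pi * 10.0   # assumed fitting frequencies (rad/s)

      # Solve xi = alpha/(2 w) + beta w/2 for alpha, beta so xi = xi_target at w1 and w2.
      alpha = 2.0 * xi_target * w1 * w2 / (w1 + w2)
      beta = 2.0 * xi_target / (w1 + w2)

      for f_hz in (0.3, 1.0, 3.0, 10.0, 30.0):
          w = 2 * np.pi * f_hz
          print(f"{f_hz:5.1f} Hz -> damping ratio {alpha / (2 * w) + beta * w / 2:.3f}")
      # The ratio equals 0.05 only at 1 and 10 Hz, dips to about 0.03 in between, and
      # grows rapidly outside that band, illustrating the narrow almost-constant region.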

  8. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  9. Axelrod Model with Extended Conservativeness

    Science.gov (United States)

    Dybiec, Bartłomiej

    2012-11-01

    Similarity of opinions and memory about recent interactions are two main factors determining likelihood of social contacts. Here, we explore the Axelrod model with an extended conservativeness which incorporates not only similarity between individuals but also a preference to the last source of accepted information. The additional preference given to the last source of information increases the initial decay of the number of ideas in the system, changes the character of the phase transition between homogeneous and heterogeneous final states and could increase the number of stable regions (clusters) in the final state.
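
    As a concrete illustration of the mechanism just described, the sketch below implements one interaction step of an Axelrod-type model in which the usual similarity-based interaction probability receives an extra bonus when the partner was the agent's last accepted source of information. The number of features and traits and the size of the memory bonus are illustrative assumptions, not the authors' parameterisation.

      # Illustrative Axelrod-style interaction step with a preference for the last
      # accepted source of information (assumed form, not the paper's exact rule).
      import random

      F, Q = 5, 10      # assumed number of cultural features and traits per feature
      W_LAST = 0.2      # assumed extra weight given to the last accepted source

      def interact(culture, last_source, i, j):
          """Agent i possibly copies one differing trait from neighbour j."""
          shared = sum(a == b for a, b in zip(culture[i], culture[j])) / F
          prob = shared + (W_LAST if last_source.get(i) == j else 0.0)
          if 0 < shared < 1 and random.random() < min(prob, 1.0):
              k = random.choice([f for f in range(F) if culture[i][f] != culture[j][f]])
              culture[i][k] = culture[j][k]
              last_source[i] = j    # remember where the accepted idea came from

      culture = {a: [random.randrange(Q) for _ in range(F)] for a in range(10)}
      last_source = {}
      for _ in range(1000):
          i, j = random.sample(range(10), 2)
          interact(culture, last_source, i, j)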

  10. Center for Extended Magnetohydrodynamics Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Jesus [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-02-14

    This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. This project developed advanced simulation tools to study the non-linear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.

  11. Extended Analysis of Empirical Citations with Skinner's "Verbal Behavior": 1984-2004

    Science.gov (United States)

    Dixon, Mark R.; Small, Stacey L.; Rosales, Rocio

    2007-01-01

    The present paper comments on and extends the citation analysis of verbal operant publications based on Skinner's "Verbal Behavior" (1957) by Dymond, O'Hora, Whelan, and O'Donovan (2006). Variations in population parameters were evaluated for only those studies that Dymond et al. categorized as empirical. Preliminary results indicate that the…

  12. Empirical high-latitude electric field models

    International Nuclear Information System (INIS)

    Heppner, J.P.; Maynard, N.C.

    1987-01-01

    Electric field measurements from the Dynamics Explorer 2 satellite have been analyzed to extend the empirical models previously developed from dawn-dusk OGO 6 measurements (J.P. Heppner, 1977). The analysis embraces large quantities of data from polar crossings entering and exiting the high latitudes in all magnetic local time zones. Paralleling the previous analysis, the modeling is based on the distinctly different polar cap and dayside convective patterns that occur as a function of the sign of the Y component of the interplanetary magnetic field. The objective, which is to represent the typical distributions of convective electric fields with a minimum number of characteristic patterns, is met by deriving one pattern (model BC) for the northern hemisphere with a +Y interplanetary magnetic field (IMF) and southern hemisphere with a -Y IMF and two patterns (models A and DE) for the northern hemisphere with a -Y IMF and southern hemisphere with a +Y IMF. The most significant large-scale revisions of the OGO 6 models are (1) on the dayside where the latitudinal overlap of morning and evening convection cells reverses with the sign of the IMF Y component, (2) on the nightside where a westward flow region poleward from the Harang discontinuity appears under model BC conditions, and (3) magnetic local time shifts in the positions of the convection cell foci. The modeling above was followed by a detailed examination of cases where the IMF Z component was clearly positive (northward). Neglecting the seasonally dependent cases where irregularities obscure pattern recognition, the observations range from reasonable agreement with the new BC and DE models, to cases where different characteristics appeared primarily at dayside high latitudes

  13. Model uncertainty in growth empirics

    NARCIS (Netherlands)

    Prüfer, P.

    2008-01-01

    This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high

  14. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    The collective behaviour of groups of social animals has been an active topic of study ... Models have been successful at reproducing qualitative features of ... quantitative and detailed empirical results for a range of animal systems. ... standard method [23], the redundant information recorded by the cameras can be used to.

  15. Multistate modelling extended by behavioural rules: An application to migration.

    Science.gov (United States)

    Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias

    2017-10-01

    We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available. Parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while sticking to the general framework of multistate modelling. Our simulations yield that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.
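
    A minimal sketch of the general idea described above: a Theory of Planned Behaviour style intention score modulating an age- or period-specific multistate transition rate. The linear weights, the logistic link and the base rate below are illustrative assumptions, not the empirically determined or calibrated parameters of the paper.

      # Illustrative coupling of a Theory-of-Planned-Behaviour intention score to a
      # multistate migration rate (assumed functional form, not the paper's calibration).
      import math

      def intention(attitude, subjective_norm, perceived_control,
                    w_att=0.4, w_norm=0.3, w_pbc=0.3):
          """Linear TPB-style intention on a 0-1 scale (weights are assumptions)."""
          return w_att * attitude + w_norm * subjective_norm + w_pbc * perceived_control

      def migration_rate(base_rate, intent, steepness=4.0):
          """Scale an age/period-specific base rate by a logistic function of intention."""
          return base_rate / (1.0 + math.exp(-steepness * (intent - 0.5)))

      # Hypothetical agent with a favourable attitude but low perceived behavioural control.
      intent = intention(attitude=0.8, subjective_norm=0.6, perceived_control=0.2)
      print(migration_rate(base_rate=0.02, intent=intent))  # annual transition probability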

  16. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  17. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical, or multistage, empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example in which the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
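
    A minimal sketch of the first stage of such a scheme: a plain gamma-Poisson update for a failure intensity, with the gamma hyperparameters taken as if fitted empirically from pooled data. The contaminated-gamma family and the second-stage update over the secondary parameters described above are not reproduced; prior values and data are assumptions for illustration.

      # Gamma-Poisson conjugate update for a Poisson intensity (first stage only;
      # the paper's contaminated-gamma second stage is omitted). With prior
      # Gamma(a, b) on the intensity, k events in exposure time T give the
      # posterior Gamma(a + k, b + T).
      a, b = 1.6, 400.0    # assumed hyperparameters, e.g. fitted from pooled data
      k, T = 3, 1200.0     # observed events and exposure time (hours), illustrative

      a_post, b_post = a + k, b + T
      print("posterior mean intensity:", a_post / b_post)        # events per unit time
      print("posterior std deviation :", a_post ** 0.5 / b_post)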

  18. Modeling of extended defects in silicon

    International Nuclear Information System (INIS)

    Law, M.E.; Jones, K.S.; Earles, S.K.; Lilak, A.D.; Xu, J.W.

    1997-01-01

    Transient Enhanced Diffusion (TED) is one of the biggest modeling challenges present in predicting scaled technologies. Damage from implantation of dopant ions changes the diffusivities of the dopants and precipitates to form complex extended defects. Developing a quantitative model for the extended defect behavior during short time, low temperature anneals is a key to explaining TED. This paper reviews some of the modeling developments over the last several years, and discusses some of the challenges that remain to be addressed. Two examples of models compared to experimental work are presented and discussed

  19. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  20. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Hiddo Velsink

    2015-01-01

    Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices

  1. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Velsink, H.

    2015-01-01

    This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation

  2. Selfish mothers? An empirical test of parent-offspring conflict over extended parental care.

    Science.gov (United States)

    Paul, Manabi; Sen Majumder, Sreejani; Bhadra, Anindita

    2014-03-01

    Parent-offspring conflict (POC) theory is an interesting conceptual framework for understanding the dynamics of parental care. However, this theory is not easy to test empirically, as exact measures of parental investment in an experimental set-up are difficult to obtain. We have used free-ranging dogs Canis familiaris in India, to study POC in the context of extended parental care. We observed females and their pups in their natural habitat for the mother's tendency to share food given by humans with her pups in the weaning and post-weaning stages. Since these dogs are scavengers, and depend largely on human provided food for their sustenance, voluntary sharing of food by the mother with her pups is a good surrogate for extended parental care. Our behavioural observations convincingly demonstrate an increase of conflict and decrease of cooperation by the mother with her offspring over given food within a span of 4-6 weeks. We also demonstrate that the competition among the pups in a litter scales with litter size, an indicator of sib-sib competition. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Consistent spectroscopy for an extended gauge model

    International Nuclear Information System (INIS)

    Oliveira Neto, G. de.

    1990-11-01

    Consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with an extended U(1) group symmetry. Consistent spectroscopy is understood here as the determination of the quantum physical properties described by the model in a manner independent of the particular parametrizations adopted in their description. (L.C.J.A.)

  4. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
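
    The consistency metric described above (the model's uncertainty range overlapping the study's confidence interval) amounts to a simple interval-overlap check. The sketch below applies it to the two 30-year outcomes quoted in the abstract; the helper function and its name are ours, not the authors'.

      # Interval-overlap check used as the consistency metric described above
      # (helper name and structure are ours; the numbers come from the abstract).
      def consistent(model_range, study_ci):
          """True if the model's uncertainty range overlaps the study's confidence interval."""
          (m_lo, m_hi), (s_lo, s_hi) = model_range, study_ci
          return m_lo <= s_hi and s_lo <= m_hi

      # 30-year cumulative cancer risk, inadequately vs appropriately treated CIN:
      print(consistent((30.9, 49.7), (28.4, 48.3)))   # True -> model consistent with study
      print(consistent((0.7, 1.3), (0.4, 3.3)))       # True -> model consistent with study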

  5. Empirical atom model of Vegard's law

    International Nuclear Information System (INIS)

    Zhang, Lei; Li, Shichun

    2014-01-01

    Vegard's law seldom holds true for most binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements will change to satisfy the continuity requirement of electron density at the interface between component atom A and atom B, so that the atom with the larger electron density will expand and the atom with the smaller one will contract. If the expansion and contraction of the atomic radii of A and B, respectively, are equal in magnitude, Vegard's law will hold true. However, the expansion and contraction of the two component atoms are not equal in most situations. The magnitude of the variation will depend on the cohesive energy of the corresponding element crystals. An empirical atom model of Vegard's law has been proposed to account for the signs of the deviations according to the electron density at the Wigner–Seitz cell from the Thomas–Fermi–Dirac–Cheng model.
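
    For reference, Vegard's law for a binary solid solution A(1-x)B(x) and the deviation that the empirical atom model seeks to explain can be written as follows (standard textbook form, not notation taken from the paper):

      % Vegard's law for the lattice parameter of a binary solid solution A_{1-x}B_x,
      % and the deviation delta(x) attributed to unequal expansion/contraction of the
      % component atomic radii.
      a_{\mathrm{Vegard}}(x) = (1 - x)\,a_A + x\,a_B ,
      \qquad
      \delta(x) = a_{\mathrm{measured}}(x) - a_{\mathrm{Vegard}}(x)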

  6. Phase diagram of an extended Agassi model

    Science.gov (United States)

    García-Ramos, J. E.; Dukelsky, J.; Pérez-Fernández, P.; Arias, J. M.

    2018-05-01

    Background: The Agassi model [D. Agassi, Nucl. Phys. A 116, 49 (1968), 10.1016/0375-9474(68)90482-X] is an extension of the Lipkin-Meshkov-Glick (LMG) model [H. J. Lipkin, N. Meshkov, and A. J. Glick, Nucl. Phys. 62, 188 (1965), 10.1016/0029-5582(65)90862-X] that incorporates the pairing interaction. It is a schematic model that describes the interplay between particle-hole and pair correlations. It was proposed in the 1960s by D. Agassi as a model to simulate the properties of the quadrupole plus pairing model. Purpose: The aim of this work is to extend a previous study by Davis and Heiss [J. Phys. G: Nucl. Phys. 12, 805 (1986), 10.1088/0305-4616/12/9/006], generalizing the Agassi model, and to analyze in detail the phase diagram of the model as well as the different regions with coexistence of several phases. Method: We solve the model Hamiltonian through the Hartree-Fock-Bogoliubov (HFB) approximation, introducing two variational parameters that play the role of order parameters. We also compare the HFB calculations with the exact ones. Results: We obtain the phase diagram of the model and classify the order of the different quantum phase transitions appearing in the diagram. The phase diagram presents broad regions where several phases, up to three, coexist. Moreover, there is also a line and a point where four and five phases are degenerate, respectively. Conclusions: The phase diagram of the extended Agassi model presents a rich variety of phases. Phase coexistence is present in extended areas of the parameter space. The model could be an important tool for benchmarking novel many-body approximations.

  7. Modeling of PWR fuel at extended burnup

    International Nuclear Information System (INIS)

    Dias, Raphael Mejias

    2016-01-01

    This work studies the modifications implemented over successive versions of the empirical models in the computer program FRAPCON, which is used to simulate the steady-state irradiation performance of Pressurized Water Reactor (PWR) fuel rods under high burnup conditions. In the study, the empirical models presented in the official FRAPCON documentation were analyzed. A literature study was conducted on the effects of high burnup in nuclear fuels in order to improve the understanding of the models used by the FRAPCON program under these conditions. A steady-state fuel performance analysis was conducted for a typical PWR fuel rod using FRAPCON versions 3.3, 3.4, and 3.5. The results given by the different versions of the program were compared in order to verify the impact of the model changes on the output parameters of the program. It was observed that the changes brought significant differences in the fuel rod thermal and mechanical parameters, especially in the evolution from FRAPCON-3.3 to FRAPCON-3.5. Lower temperatures, lower cladding stress and strain, and lower cladding oxide layer thickness were obtained for the fuel rod analyzed with FRAPCON-3.5. (author)

  8. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.
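
    A minimal sketch of an extended linear model with a Gaussian prior on the weights: project the inputs into a (finite) feature space and compute the Gaussian posterior over the weights in closed form. The Gaussian basis, noise level and prior precision are assumptions for illustration; the report's EM updates for the hyperparameters (as in the Relevance Vector Machine) are not reproduced.

      # Extended linear model sketch: Gaussian-basis features plus Bayesian linear
      # regression with an isotropic Gaussian prior on the weights (illustrative
      # hyperparameters; the EM hyperparameter updates from the report are omitted).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 40)
      y = np.sin(x) + 0.1 * rng.standard_normal(x.size)          # toy data

      centres = np.linspace(-3, 3, 12)
      Phi = np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2)  # non-linear feature map

      alpha, beta = 1.0, 100.0        # prior precision and noise precision (assumed)
      A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
      mean_w = beta * np.linalg.solve(A, Phi.T @ y)              # posterior mean weights
      cov_w = np.linalg.inv(A)                                   # posterior covariance

      x_new = np.array([0.5])
      phi_new = np.exp(-0.5 * (x_new[:, None] - centres[None, :]) ** 2)
      pred_mean = phi_new @ mean_w
      pred_var = 1.0 / beta + np.einsum('ij,jk,ik->i', phi_new, cov_w, phi_new)
      print(pred_mean, np.sqrt(pred_var))    # predictive mean and standard deviation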

  9. Extended FMEA for Sustainable Manufacturing: An Empirical Study in the Non-Woven Fabrics Industry

    Directory of Open Access Journals (Sweden)

    Thanh-Lam Nguyen

    2016-09-01

    Full Text Available Failure modes and effects analysis (FMEA) substantially facilitates the efforts of industrial manufacturers in prioritizing failures that require corrective actions to continuously improve product quality. However, the conventional approach fails to provide a satisfactory explanation of the aggregate effects of a failure from different perspectives, such as technical severity, economic severity, and production capacity, in some practical applications. To fill this gap in the FMEA literature, this paper proposes an extension that considers the associated quality cost and the capability of the failure detection system as additional determinants of the priority level for each failure mode. Quality cost and capacity are considered key factors for the sustainable survival and development of an industrial manufacturer in today's fiercely competitive market. The performance of the extended scheme was tested in an empirical case at a non-woven fabrics manufacturer. Analytical results indicate that the proposed approach outperforms the traditional one and remarkably reduces the percentage of defective fabrics from about 2.41% before the trial period to 1.13%, thus significantly reducing waste and increasing operational efficiency, thereby providing valuable advantages for improving organizational competitiveness and supporting sustainable growth.
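
    A minimal sketch of the kind of extension described: the conventional risk priority number RPN = severity × occurrence × detection supplemented by a quality-cost term and a detection-capability term. The abstract does not give the exact aggregation, scales or weights, so the formula below is a labelled assumption, not the authors' scheme.

      # Conventional FMEA risk priority number plus an illustrative extension with
      # quality cost and detection-system capability (weights and formula are
      # assumptions, not the aggregation actually proposed in the paper).
      def rpn(severity, occurrence, detection):
          """Classical RPN on 1-10 scales."""
          return severity * occurrence * detection

      def extended_priority(severity, occurrence, detection,
                            quality_cost, detection_capability,
                            w_cost=0.5, w_capability=0.5):
          """Scale the classical RPN by normalised cost and lack of detection capability."""
          cost_factor = 1.0 + w_cost * quality_cost                  # quality_cost in [0, 1]
          capability_factor = 1.0 + w_capability * (1.0 - detection_capability)
          return rpn(severity, occurrence, detection) * cost_factor * capability_factor

      failures = {   # hypothetical failure modes for a non-woven fabric line
          "uneven web thickness": (7, 5, 4, 0.8, 0.3),
          "edge curl":            (4, 6, 3, 0.2, 0.9),
      }
      for name, args in failures.items():
          print(name, round(extended_priority(*args), 1))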

  10. Extended Higgs sectors in radiative neutrino models

    Directory of Open Access Journals (Sweden)

    Oleg Antipin

    2017-05-01

    Full Text Available Testable Higgs partners may be sought within the extensions of the SM Higgs sector aimed at generating neutrino masses at the loop level. We study the viability of extended Higgs sectors for two selected models of radiative neutrino masses: a one-loop mass model, providing the Higgs partner within a real triplet scalar representation, and a three-loop mass model, providing it within its two-Higgs-doublet sector. The Higgs sector in the one-loop model may remain stable and perturbative up to the Planck scale, whereas the three-loop model calls for a UV completion around 10^6 GeV. Additional vector-like lepton and exotic scalar fields, which are required to close the one- and three-loop neutrino-mass diagrams, play a decisive role for the testability of the respective models. We constrain the parameter space of these models using LHC bounds on diboson resonances.

  11. MILES extended : Stellar population synthesis models from the optical to the infrared

    NARCIS (Netherlands)

    Rock, B.; Vazdekis, A.; Ricciardelli, E.; Peletier, R. F.; Knapen, J. H.; Falcon-Barroso, J.

    We present the first single-burst stellar population models which cover the optical and infrared wavelength range between 3500 and 50 000 angstrom and which are exclusively based on empirical stellar spectra. To obtain these joint models, we combined the extended MILES models in the optical

  12. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR).

  13. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = χ_total/5, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e·χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but, nontrivially, the correct density profiles are also obtained for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles.

  14. Exploring Social Structures in Extended Team Model

    DEFF Research Database (Denmark)

    Zahedi, Mansooreh; Ali Babar, Muhammad

    2013-01-01

    Extended Team Model (ETM), as a type of offshore outsourcing, is an increasingly popular mode of Global Software Development (GSD). There is little knowledge about the social structures in ETM and their impact on collaboration. Within a large interdisciplinary project to develop the next generation of GSD technologies, we are exploring the role of social structures to support collaboration. This paper reports some details of our research design and initial findings about the mechanisms to support social structures and their impact on collaboration in an ETM.

  15. Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement

    International Nuclear Information System (INIS)

    Sovinec, Carl R.

    2008-01-01

    The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior - not unlike computational weather prediction - to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU-DIST software library for solving large sparse matrices using direct methods on parallel computers. Switching to this solver library boosted NIMROD's performance by a factor of five in typical large nonlinear simulations, which has been publicized

  16. Empirical information on nuclear matter fourth-order symmetry energy from an extended nuclear mass formula

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2017-10-01

    Full Text Available We establish a relation between the equation of state of nuclear matter and the fourth-order symmetry energy a_sym,4(A) of finite nuclei in a semi-empirical nuclear mass formula by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass. Such a relation allows us to extract information on the nuclear matter fourth-order symmetry energy E_sym,4(ρ0) at normal nuclear density ρ0 from analyzing nuclear mass data. Based on the recent precise extraction of a_sym,4(A) via the double difference of the “experimental” symmetry energy extracted from nuclear masses, for the first time, we estimate a value of E_sym,4(ρ0) = 20.0 ± 4.6 MeV. Such a value of E_sym,4(ρ0) is significantly larger than the predictions from mean-field models and thus suggests the importance of considering effects beyond the mean-field approximation in nuclear matter calculations.
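
    For context, the fourth-order symmetry energy is defined through the standard isospin-asymmetry expansion of the energy per nucleon of nuclear matter (standard notation, not reproduced verbatim from the paper):

      % Isospin-asymmetry expansion with delta = (rho_n - rho_p)/rho; E_sym is the usual
      % (second-order) symmetry energy and E_{sym,4} the fourth-order term whose value at
      % normal density rho_0 is estimated in the abstract above.
      \frac{E}{A}(\rho,\delta) = E_0(\rho) + E_{\mathrm{sym}}(\rho)\,\delta^{2}
        + E_{\mathrm{sym},4}(\rho)\,\delta^{4} + \mathcal{O}(\delta^{6})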

  17. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of the empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect on the performance prediction curves at high mass flow rates. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  18. Constraints based analysis of extended cybernetic models.

    Science.gov (United States)

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss on its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. An Organization's Extended (Soft) Competencies Model

    Science.gov (United States)

    Rosas, João; Macedo, Patrícia; Camarinha-Matos, Luis M.

    One of the steps usually undertaken in partnership formation is the assessment of organizations’ competencies. Typically, the competencies considered are of a functional or technical nature and provide specific outcomes; these can be regarded as hard competencies. Yet the very act of collaboration has its own specific requirements, for which the involved organizations must be able to exercise other types of competencies that affect their own performance and the partnership's success. These competencies are more behavioral in nature, and can be termed soft competencies. This research aims at addressing the effects of the soft competencies on the performance of the hard ones. An extended competencies model is thus proposed, allowing the construction of adjusted competencies profiles, in which the competency levels are adjusted dynamically according to the requirements of collaboration opportunities.

  20. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  1. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. The Heavy Ion fusion cross section can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, which are linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)

  2. Corporate social and financial performance : An extended stakeholder theory, and empirical test with accounting measures

    NARCIS (Netherlands)

    van der Laan, G.; van Ees, H.; van Witteloostuijn, A.

    Although agreement on the positive sign of the relationship between corporate social and financial performance is observed in the literature, the mechanisms that constitute this relationship are not yet well-known. We address this issue by extending management's stakeholder theory by adding insights

  3. A theoretical and empirical evaluation and extension of the Todaro migration model.

    Science.gov (United States)

    Salvatore, D

    1981-11-01

    "This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt

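    Schematically, the contrast drawn in the abstract above can be written as follows (a stylised rendering in standard notation, not the exact specification estimated in the paper):

      % Todaro: migration M responds to the expected rural-urban income differential,
      % with p the probability of urban employment (taken here as 1 minus the urban
      % unemployment rate u). Salvatore's extension enters the income differential and
      % the unemployment rate as separate, independent explanatory variables.
      M_{\text{Todaro}} = f\bigl(p\,w_u - w_r\bigr), \quad p = 1 - u ;
      \qquad
      M_{\text{extended}} = g\bigl(w_u - w_r,\; u\bigr)
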
  4. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.

  5. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measure in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.

  6. Childhood urinary tract infection caused by extended-spectrum β-lactamase-producing bacteria: Risk factors and empiric therapy.

    Science.gov (United States)

    Uyar Aksu, Nihal; Ekinci, Zelal; Dündar, Devrim; Baydemir, Canan

    2017-02-01

    This study investigated risk factors of childhood urinary tract infection (UTI) associated with extended-spectrum β-lactamase (ESBL)-producing bacteria (ESBL-positive UTI) and evaluated antimicrobial resistance as well as empiric treatment of childhood UTI. The records of children with positive urine culture between 1 January 2008 and 31 December 2012 were evaluated. Patients with positive urine culture for ESBL-producing bacteria were defined as the ESBL-positive group, whereas patients of the same gender and similar age with positive urine culture for non-ESBL-producing bacteria were defined as the ESBL-negative group. Each ESBL-positive patient was matched with two ESBL-negative patients. The ESBL-positive and negative groups consisted of 154 and 308 patients, respectively. Potential risk factors for ESBL-positive UTI were identified as presence of underlying disease, clean intermittent catheterization (CIC), hospitalization, use of any antibiotic and history of infection in the last 3 months (all statistically significant); CIC, hospitalization and history of infection in the last 3 months were identified as independent risk factors. In the present study, 324 of 462 patients had empiric therapy. Empiric therapy was inappropriate in 90.3% of the ESBL-positive group and in 4.5% of the ESBL-negative group. Resistance to nitrofurantoin was similar between groups (5.1% vs 1.2%, P = 0.072); resistance to amikacin was low in the ESBL-positive group (2.6%) and there was no resistance in the ESBL-negative group. Clean intermittent catheterization, hospitalization and history of infection in the last 3 months should be considered as risk factors for ESBL-positive UTI. The combination of ampicillin plus amikacin should be taken into consideration for empiric therapy in patients with acute pyelonephritis who have the risk factors for ESBL-positive UTI. Nitrofurantoin seems to be a logical choice for the empiric therapy of cystitis. © 2016 Japan Pediatric Society.

  7. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, performance of the Baranyi model was compared with those of the empirical modified Gompertz and logistic models and Huang models. Higher values of R2, modeling efficiency and lower absolute values of mean bias error, root mean square error, mean percentage error and chi-square were obtained with ...

  8. Forecasting Inflation through Econometrics Models: An Empirical ...

    African Journals Online (AJOL)

    This article aims at modeling and forecasting inflation in Pakistan. For this purpose a number of econometric approaches are implemented and their results are compared. In ARIMA models, adding additional lags for p and/or q necessarily reduced the sum of squares of the estimated residuals. When a model is estimated ...

  9. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination

    Science.gov (United States)

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-01

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural “prototypes” shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.
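
    The degree of ultrametricity can be operationalised in several ways; one minimal illustration is to sample triples of opinion vectors and measure how often they satisfy the ultrametric condition, i.e. that the two largest of the three pairwise distances are equal. The sketch below uses Hamming distance on toy vectors and is an assumed operationalisation, not the exact measure used in the paper.

      # Fraction of sampled triples of culture vectors satisfying the ultrametric
      # condition (the two largest pairwise distances are equal) under Hamming distance.
      # An illustrative measure, not the paper's exact ultrametricity index.
      import random

      def hamming(u, v):
          return sum(a != b for a, b in zip(u, v))

      def is_ultrametric_triple(x, y, z):
          d = sorted((hamming(x, y), hamming(y, z), hamming(x, z)))
          return d[1] == d[2]    # isosceles with the two longest sides equal

      def ultrametric_fraction(vectors, samples=10000, seed=1):
          rng = random.Random(seed)
          return sum(is_ultrametric_triple(*rng.sample(vectors, 3))
                     for _ in range(samples)) / samples

      # Toy culture vectors: 50 agents, 10 features, 3 traits each (illustrative).
      rng = random.Random(0)
      vectors = [tuple(rng.randrange(3) for _ in range(10)) for _ in range(50)]
      print(ultrametric_fraction(vectors))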

  10. Extended equivalent dipole model for radiated emissions

    OpenAIRE

    Obiekezie, Chijioke S.

    2016-01-01

    This work is on the characterisation of radiated fields from electronic devices. An equivalent dipole approach is used. Previous work showed that this was an effective approach for single-layer printed circuit boards where an infinite ground plane can be assumed. In this work, this approach is extended for the characterisation of more complex circuit boards or electronic systems. For complex electronic radiators with finite ground planes, the main challenge is characterising field diffract...

  11. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    2015-02-04

    Feb 4, 2015 ... The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, ...

  12. Salt intrusion study in Cochin estuary - Using empirical models

    Digital Repository Service at National Institute of Oceanography (India)

    Jacob, B.; Revichandran, C.; NaveenKumar, K.R.

    been applied to the Cochin estuary in the present study to identify the most suitable model for predicting the salt intrusion length. Comparison of the obtained results indicate that the model of Van der Burgh (1972) is the most suitable empirical model...

  13. On the empirical relevance of the transient in opinion models

    Energy Technology Data Exchange (ETDEWEB)

    Banisch, Sven, E-mail: sven.banisch@universecity.d [Mathematical Physics, Physics Department, Bielefeld University, 33501 Bielefeld (Germany); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal); Araujo, Tanya, E-mail: tanya@iseg.utl.p [Research Unit on Complexity in Economics (UECE), ISEG, TULisbon, 1249-078 Lisbon (Portugal); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal)

    2010-07-12

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows to relate transient opinion configurations to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.

  14. On the empirical relevance of the transient in opinion models

    International Nuclear Information System (INIS)

    Banisch, Sven; Araujo, Tanya

    2010-01-01

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows to relate transient opinion configurations to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.
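
    A minimal sketch of the two ingredients described above: k-bit opinion strings that interact with probability proportional to their similarity, and a proximity-vote tally over a set of candidate opinion strings. Bit length, population size, number of steps and the candidate set are illustrative assumptions, not the calibration used in the Letter.

      # Illustrative k-bit opinion dynamics ("similarity leads to interaction") plus a
      # proximity-voting tally; parameters are assumptions, not the Letter's calibration.
      import random

      K, N = 8, 200                  # bits per opinion and number of agents (assumed)
      rng = random.Random(42)
      agents = [rng.getrandbits(K) for _ in range(N)]
      candidates = [rng.getrandbits(K) for _ in range(4)]   # hypothetical candidates

      def similarity(a, b):
          return K - bin(a ^ b).count("1")   # number of shared bits

      def step(agents):
          i, j = rng.sample(range(N), 2)
          if rng.random() < similarity(agents[i], agents[j]) / K:
              diff = agents[i] ^ agents[j]
              if diff:
                  bit = rng.choice([b for b in range(K) if diff >> b & 1])
                  agents[i] ^= 1 << bit      # agent i adopts one of j's differing bits

      def proximity_vote(agents, candidates):
          votes = [0] * len(candidates)
          for a in agents:
              votes[max(range(len(candidates)),
                        key=lambda c: similarity(a, candidates[c]))] += 1
          return votes

      for _ in range(20000):
          step(agents)
      print(proximity_vote(agents, candidates))   # transient vote shares for candidates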

  15. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  16. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  17. Empirical soot formation and oxidation model

    Directory of Open Access Journals (Sweden)

    Boussouara Karima

    2009-01-01

    Full Text Available Modelling internal combustion engines can follow different approaches, depending on the type of problem to be simulated. A diesel combustion model has been developed and implemented in a full-cycle engine simulation; the model accounts for transient fuel spray evolution, fuel-air mixing, ignition, combustion, and soot pollutant formation. Models of turbulent diffusion-flame combustion apply to the diffusion flames encountered in industry, typically in diesel engines. Particulate emission represents one of the most deleterious pollutants generated during diesel combustion. Stringent standards on particulate emission, along with specific emphasis on the size of the emitted particulates, have resulted in increased interest in a fundamental understanding of the mechanisms of soot particulate formation and oxidation in internal combustion engines. A phenomenological numerical model which can predict the particle size distribution of the emitted soot will be very useful in explaining the observations above and will also be of use in developing better particulate control techniques. The diesel engine chosen for simulation is a version of the Caterpillar 3406. We are interested in employing a standard finite-volume computational fluid dynamics code, KIVA3V-RELEASE2.

  18. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    textabstractIn the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models,

  19. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  20. Empirical Comparison of Criterion Referenced Measurement Models

    Science.gov (United States)

    1976-10-01

...instrument consisting of a large number of items. The models would then be used to estimate mastery using a smaller and more realistic number of items. This approach is empirical and more directly oriented to practical applications where testing time and the

  1. An empirical and model study on automobile market in Taiwan

    Science.gov (United States)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain a larger possession rate in the market under certain rules. Numerical simulations based on the model display a competition development process, which qualitatively and quantitatively agrees with our empirical investigation results.

  2. Bankruptcy risk model and empirical tests

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Urošević, Branko; Stanley, H. Eugene

    2010-01-01

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor—the debt-to-asset ratio R—in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes’s theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees—although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers. PMID:20937903
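The Zipf-scaling and Bayes'-theorem steps described above can be made concrete with a short sketch. The Python snippet below is purely illustrative (not the authors' code): it estimates a Pareto/Zipf tail exponent for debt-to-asset ratios with a Hill estimator and combines empirical tail probabilities with an assumed prior via Bayes' theorem; all data, sample sizes and the prior are synthetic.

```python
import numpy as np

# Illustrative sketch only: synthetic debt-to-asset ratios R, a Hill estimate of
# the Pareto/Zipf tail exponent, and Bayes' theorem linking P(R | bankrupt) to
# P(bankrupt | R). Sample sizes and the prior are assumptions for the demo.
rng = np.random.default_rng(0)
R_bankrupt = 1 + rng.pareto(2.0, 2737)   # ratios of "bankrupt" firms (synthetic)
R_all = 1 + rng.pareto(3.0, 50000)       # ratios of all firms (synthetic)
p_bankrupt = 2737 / 50000                # assumed prior probability of bankruptcy

def hill_exponent(x, k=500):
    """Hill estimator of the Pareto tail exponent from the k largest values."""
    top = np.sort(x)[-k:]
    return k / np.sum(np.log(top / top[0]))

def tail_prob(x, data):
    """Empirical survival function P(R > x)."""
    return np.mean(data > x)

def p_bankrupt_given_r(x):
    # Bayes: P(bankrupt | R > x) = P(R > x | bankrupt) * P(bankrupt) / P(R > x)
    return tail_prob(x, R_bankrupt) * p_bankrupt / tail_prob(x, R_all)

print("Zipf/Pareto tail exponent (bankrupt firms):", round(hill_exponent(R_bankrupt), 2))
print("P(bankrupt | R > 3):", round(p_bankrupt_given_r(3.0), 3))
```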

  3. Micro dosimetry model. An extended version

    International Nuclear Information System (INIS)

    Vroegindewey, C.

    1994-07-01

In an earlier study a relatively simple mathematical model was constructed to simulate the energy transfer on a cellular scale and thus gain insight into the fundamental processes of BNCT. Based on this work, a more realistic micro dosimetry model has been developed. The new facets of the model are: the treatment of proton recoil, the calculation of the distribution of energy depositions, and the determination of the number of particles crossing the target nucleus, subdivided by place of origin. Besides these extensions, new stopping power tables for the emitted particles are generated and biased Monte Carlo techniques are used to reduce computer time. (orig.)

  4. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate knowledge of the acid dew point is therefore essential. By investigating previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is derived first. A semi-empirical prediction model is then proposed, which is validated with both field-test and experimental data and compared with the previous models.
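As a concrete illustration of how such a semi-empirical correlation can be calibrated (a sketch only, not the correlation derived in the paper), the snippet below fits the coefficients of a Verhoff-Banchero-type functional form, 1000/T_dp = a + b·ln p_H2O + c·ln p_SO3 + d·ln p_H2O·ln p_SO3, to measured dew points by least squares; all data points are synthetic.

```python
import numpy as np

# Hedged sketch (not the paper's correlation): calibrate a Verhoff-Banchero-type
# form  1000/T_dp = a + b*ln(p_H2O) + c*ln(p_SO3) + d*ln(p_H2O)*ln(p_SO3)
# against measured dew points by least squares. All data below are synthetic.
p_h2o = np.array([60.0, 75.0, 90.0, 110.0])     # H2O partial pressure, mmHg
p_so3 = np.array([1e-3, 3e-3, 8e-3, 2e-2])      # SO3 partial pressure, mmHg
t_dp  = np.array([395.0, 402.0, 410.0, 418.0])  # measured dew points, K

X = np.column_stack([np.ones_like(p_h2o), np.log(p_h2o), np.log(p_so3),
                     np.log(p_h2o) * np.log(p_so3)])
coeffs, *_ = np.linalg.lstsq(X, 1000.0 / t_dp, rcond=None)

def dew_point(ph2o, pso3):
    """Predicted acid dew point in K from the fitted correlation."""
    x = np.array([1.0, np.log(ph2o), np.log(pso3), np.log(ph2o) * np.log(pso3)])
    return 1000.0 / float(x @ coeffs)

print("coefficients a, b, c, d:", np.round(coeffs, 4))
print("T_dp at p_H2O = 80 mmHg, p_SO3 = 5e-3 mmHg:", round(dew_point(80.0, 5e-3), 1), "K")
```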

  5. A Novel Biped Pattern Generator Based on Extended ZMP and Extended Cart-Table Model

    Directory of Open Access Journals (Sweden)

    Guangbin Sun

    2015-07-01

Full Text Available This paper focuses on planning patterns for biped walking on complex terrains. Two problems are solved: ZMP (zero moment point) cannot be used on uneven terrain, and the conventional cart-table model does not allow vertical CM (centre of mass) motion. For the ZMP definition problem, we propose the extended ZMP (EZMP) concept as an extension of ZMP to uneven terrains. It can be used to judge dynamic balance on universal terrains. We achieve a deeper insight into the connection and difference between ZMP and EZMP by adding different constraints. For the model problem, we extend the cart-table model by using a dynamic constraint instead of a constant height constraint, which results in a mathematically symmetric set of three equations. In this way, vertical motion is enabled and the resultant equations are still linear. Based on the extended ZMP concept and the extended cart-table model, a biped pattern generator using triple preview controllers is constructed and applied simultaneously in all three dimensions. Using the proposed pattern generator, the Atlas robot is simulated. The simulation results show the robot can walk stably on rather complex terrains by accurately tracking the extended ZMP.
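For reference, the point-mass relations that underlie cart-table-type pattern generation are sketched below in standard notation (this is background under assumed conventions, not a reproduction of the paper's extended formulation): the classic cart-table ZMP with constant CM height z_c, and the form obtained when vertical CM motion is allowed.

```latex
\begin{align}
  p_x &= x - \frac{z_c}{g}\,\ddot{x}
      && \text{(cart-table model, constant CM height } z_c\text{)} \\
  p_x &= x - \frac{z\,\ddot{x}}{\ddot{z} + g}
      && \text{(point mass with vertical CM motion allowed)}
\end{align}
```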

  6. Extended nonabelian symmetries for free fermionic model

    International Nuclear Information System (INIS)

    Zaikov, R.P.

    1993-08-01

The higher spin symmetry for both Dirac and Majorana massless free fermionic field models is considered. An infinite Lie algebra which is a linear realization of the higher spin extension of the cross products of the Virasoro and affine Kac-Moody algebras is obtained. The corresponding current algebra is closed, which is not the case for the analogous current algebra in the WZNW model. The gauging procedure for the higher spin symmetry is also given. (author). 12 refs

  7. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations that are independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models had to be found with which it is possible to create our own data and use them to estimate the flow. The estimated design floods illustrate the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This more complex model was chosen for its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
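The frequency-analysis step at the core of such design-flood estimation can be sketched as follows (an illustration with synthetic annual maxima rather than Anseghmir data): fit a Gumbel (EV1) distribution by the method of moments and read off quantiles for chosen return periods.

```python
import numpy as np

# Illustrative frequency-analysis step (synthetic annual maxima, not Anseghmir
# data): fit a Gumbel (EV1) distribution by the method of moments and compute
# design floods for chosen return periods.
annual_max = np.array([42., 55., 61., 38., 70., 49., 83., 57., 66., 91.,
                       45., 74., 52., 60., 88.])   # annual maximum flow, m3/s

mean, std = annual_max.mean(), annual_max.std(ddof=1)
beta = std * np.sqrt(6.0) / np.pi       # Gumbel scale parameter
mu = mean - 0.5772 * beta               # Gumbel location (Euler-Mascheroni constant)

def design_flood(return_period_years):
    """Gumbel quantile for non-exceedance probability F = 1 - 1/T."""
    f = 1.0 - 1.0 / return_period_years
    return mu - beta * np.log(-np.log(f))

for T in (10, 50, 100):
    print(f"T = {T:>3} yr design flood ≈ {design_flood(T):.1f} m3/s")
```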

  8. Extending the prevalent consumer loyalty modelling

    DEFF Research Database (Denmark)

    Olsen, Svein Ottar; Tudoran, Ana Alina; Brunsø, Karen

    2013-01-01

    Purpose: This study addresses the role of habit strength in explaining loyalty behaviour. Design/methodology/approach: The study uses 2063 consumers’ data from a survey in Denmark and Spain, and multigroup structural equation modelling to analyse the data. The paper describes an approach employing...... the psychological meanings of the habit construct, such as automaticity, lack of awareness or very little conscious deliberation. Findings: The findings suggest that when habits start to develop and gain strength, less planning is involved, and that the loyalty behaviour sequence mainly occurs guided...... by automaticity and inertia. A new model with habit strength as a mediator between satisfaction and loyalty behaviour provides a substantial increase in explained variance in loyalty behaviour over the traditional model with intention as a mediator. Originality/value: This study contributes to the existent...

  9. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists has developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice such model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  10. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving the model behaviour has finally been obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  11. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    Science.gov (United States)

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  12. Extending Social Cognition Models of Health Behaviour

    Science.gov (United States)

    Abraham, Charles; Sheeran, Paschal; Henderson, Marion

    2011-01-01

    A cross-sectional study assessed the extent to which indices of social structure, including family socio-economic status (SES), social deprivation, gender and educational/lifestyle aspirations correlated with adolescent condom use and added to the predictive utility of a theory of planned behaviour model. Analyses of survey data from 824 sexually…

  13. Characterising and modelling extended conducted electromagnetic emission

    CSIR Research Space (South Africa)

    Grobler, Inus

    2013-06-01

    Full Text Available , such as common mode and differential mode separation, calibrated with an EMC ETS-Lindgren current probe. Good and workable model accuracies were achieved with the basic Step-Up and Step-Down circuits over the conducted emission frequency band and beyond...

  14. Building metaphors and extending models of grief.

    Science.gov (United States)

    VandeCreek, L

    1985-01-01

    Persons in grief turn to metaphors as they seek to understand and express their experience. Metaphors illustrated in this article include "grief is a whirlwind," "grief is the Great Depression all over again" and "grief is gray, cloudy and rainy weather." Hospice personnel can enhance their bereavement efforts by identifying and cultivating the expression of personal metaphors from patients and families. Two metaphors have gained wide cultural acceptance and lie behind contemporary scientific explorations of grief. These are "grief is recovery from illness" (Bowlby and Parkes) and "death is the last stage of growth and grief is the adjustment reaction to this growth" (Kubler-Ross). These models have developed linear perspectives of grief but have neglected to study the fluctuating intensity of symptoms. Adopting Worden's four-part typology of grief, the author illustrates how the pie graph can be used to display this important aspect of the grief experience, thus enhancing these models.

  15. Rare top quark decays in extended models

    International Nuclear Information System (INIS)

    Gaitan, R.; Miranda, O. G.; Cabral-Rosetti, L. G.

    2006-01-01

Flavor changing neutral current (FCNC) decays t → H0 + c, t → Z + c, and H0 → t + c-bar are discussed in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions, where FCNC decays may take place at tree level and are only suppressed by the mixing between ordinary top and charm quarks, which is poorly constrained by current experimental values. The non-manifest case is also briefly discussed.

  16. Macroeconomic model of national economy development (extended

    Directory of Open Access Journals (Sweden)

    M. Diaconova

    1997-08-01

Full Text Available The macroeconomic model offered in this paper describes the complex functioning of a national economy and can be used for forecasting possible directions of its development depending on various economic policies. It is an extension of [2] and an adaptation of [3]. In order to determine the influence of state policies in the field of taxes and the exchange rate, the national economy is considered within the framework of three sectors: government, private and the external world.

  17. Top quark decays in extended models

    International Nuclear Information System (INIS)

    Gaitan, R.; Cabral-Rosetti, L.G.

    2011-01-01

We evaluate the FCNC decays t → H0 + c at tree level and t → γ + c at one-loop level in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions; in the first case, the FCNC decays occur at tree level and are only suppressed by the mixing between ordinary top and charm quarks. (author)

  18. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for an interval of three years (2010–2012), measured using various instruments for Yola, from recorded data collected at the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  19. Empirical Model for Predicting Rate of Biogas Production | Adamu ...

    African Journals Online (AJOL)

    Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...

  20. A semi-empirical two phase model for rocks

    International Nuclear Information System (INIS)

    Fogel, M.B.

    1993-01-01

    This article presents data from an experiment simulating a spherically symmetric tamped nuclear explosion. A semi-empirical two-phase model of the measured response in tuff is presented. A comparison is made of the computed peak stress and velocity versus scaled range and that measured on several recent tuff events

  1. Extending Ansoff’s Strategic Diagnosis Model

    Directory of Open Access Journals (Sweden)

    Daniel Kipley

    2012-01-01

Full Text Available Given the complex and disruptive open-ended dynamics in the current dynamic global environment, senior management recognizes the need for a formalized, consistent, and comprehensive framework to analyze the firm’s strategic posture. Modern assessment tools, such as H. Igor Ansoff’s seminal contributions to strategic diagnosis, primarily focused on identifying and enhancing the firm’s strategic performance potential through the analysis of the industry’s environmental turbulence level relative to the firm’s aggressiveness and responsiveness of capability. Other epistemic modeling techniques envisage Porter’s generic strategic positions, Strengths, Weaknesses, Opportunities, Threats (SWOT), and the Resource-Based View as useful methodologies to aid in the planning process. All are complex and involve multiple managerial perspectives. Over the last two decades, attempts have been made to comprehensively classify the firm’s future competitive position. Most of these proposals utilized matrices to depict the position, such as the Boston Consulting Group matrix, point positioning, and dispersed positioning. The GE/McKinsey matrix later enhanced this typology by expanding it to 3 × 3, contributing to management’s deeper understanding of the firm’s position. Both types of assessments, Ansoff’s strategic diagnosis and positional matrices, are invaluable strategic tools for firms. However, it could be argued that these positional analyses singularly reflect a blind spot in modeling the firm’s future strategic performance potential, as neither considers the interactions of the other. This article is conceptual and takes a different approach from earlier methodologies. Although conceptual, the article aims to present a robust model combining Ansoff’s strategic diagnosis with elements of the performance matrices to provide the management with an enriched capability to evaluate the firm’s current and future performance position.

  2. Modeling of PWR fuel at extended burnup

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Raphael M.; Silva, Antonio Teixeira, E-mail: rmdias@ipen.br, E-mail: teixeira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

Since the FRAPCON-3 series was rolled out, many improvements have been implemented in fuel performance codes, based on the most recent literature, to promote better predictions against current data. These advances include improved fuel gas release prediction, the hydrogen pickup model, cladding corrosion, and many others. An example of these modifications is the addition of new cladding materials to the hydrogen pickup model to support the M5™, ZIRLO™, and Optimized ZIRLO™ families under pressurized water reactor (PWR) conditions. Recently, research on the USNRC's steady-state fuel performance code and assessments against FUMEX-III data have concluded that FRAPCON provides a best-estimate calculation of fuel performance. In view of this, a study is required to summarize all these modifications and new implementations, and to compare the results against FRAPCON's older versions, scrutinizing the FRAPCON-3 series documentation to understand the real goal and literature basis of each improvement. We have concluded that FRAPCON's latest modifications are based on a strong literature review. These modifications were tested against the most recent data to ensure that the results are the best evaluation possible. Many improvements have been made to give the USNRC an audit tool incorporating the latest improvements. (author)

  3. Extending the enterprise evolution contextualisation model

    Science.gov (United States)

    de Vries, Marné; van der Merwe, Alta; Gerber, Aurona

    2017-07-01

    Enterprise engineering (EE) emerged as a new discipline to encourage comprehensive and consistent enterprise design. Since EE is multidisciplinary, various researchers study enterprises from different perspectives, which resulted in a plethora of applicable literature and terminology, but without shared meaning. Previous research specifically focused on the fragmentation of knowledge for designing and aligning the information and communication technology (ICT) subsystem of the enterprise in order to support the business organisation subsystem of the enterprise. As a solution for this fragmented landscape, a business-IT alignment model (BIAM) was developed inductively from existing business-IT alignment approaches. Since most of the existing alignment frameworks addressed the alignment between the ICT subsystem and the business organisation subsystem, BIAM also focused on the alignment between these two subsystems. Yet, the emerging EE discipline intends to address a broader scope of design, evident in the existing approaches that incorporate a broader scope of design/alignment/governance. A need was identified to address the knowledge fragmentation of the EE knowledge base by adapting BIAM to an enterprise evolution contextualisation model (EECM), to contextualise a broader set of approaches, as identified by Lapalme. The main contribution of this article is the incremental development and evaluation of EECM. We also present guiding indicators/prerequisites for applying EECM as a contextualisation tool.

  4. Modeling of PWR fuel at extended burnup

    International Nuclear Information System (INIS)

    Dias, Raphael M.; Silva, Antonio Teixeira

    2015-01-01

Since the FRAPCON-3 series was rolled out, many improvements have been implemented in fuel performance codes, based on the most recent literature, to promote better predictions against current data. These advances include improved fuel gas release prediction, the hydrogen pickup model, cladding corrosion, and many others. An example of these modifications is the addition of new cladding materials to the hydrogen pickup model to support the M5™, ZIRLO™, and Optimized ZIRLO™ families under pressurized water reactor (PWR) conditions. Recently, research on the USNRC's steady-state fuel performance code and assessments against FUMEX-III data have concluded that FRAPCON provides a best-estimate calculation of fuel performance. In view of this, a study is required to summarize all these modifications and new implementations, and to compare the results against FRAPCON's older versions, scrutinizing the FRAPCON-3 series documentation to understand the real goal and literature basis of each improvement. We have concluded that FRAPCON's latest modifications are based on a strong literature review. These modifications were tested against the most recent data to ensure that the results are the best evaluation possible. Many improvements have been made to give the USNRC an audit tool incorporating the latest improvements. (author)

  5. An Examination of Extended x-Rescaling Model

    Institute of Scientific and Technical Information of China (English)

    YAN Zhan-Yuan; DUAN Chun-Gui; HE Zhen-Min

    2001-01-01

The extended x-rescaling model can explain the quark's nuclear effect very well. Whether it can also explain the gluon's nuclear effect should be investigated further. Associated J/ψ and γ production with large PT is a very clean channel to probe the gluon distribution in a proton or nucleus. In this paper, using the extended x-rescaling model, the PT distribution of the nuclear effect factors of the p + Fe → J/ψ + γ + X process is calculated and discussed. By comparing our theoretical results with future experimental data, the extended x-rescaling model can be examined.

  6. Topics in dual models and extended solutions

    International Nuclear Information System (INIS)

    Roth, R.S.

    1977-01-01

    Two main topics are explored. The first deals with the infinities arising from the one loop planar string diagram of the standard dual model. It is shown that for the number of dimensions d = 25 or 26, these infinities lead to a renormalization of the slope of the Regge trajectories, in addition to a renormalization of the coupling constant. The second topic deals with the propagator for a confined particle (monopole) in a field theory. When summed to all orders, this propagator is altogether free of singularities in the finite momentum plane, and an attempt is made to illustrate this. The Bethe-Salpeter equation is examined and it is shown that ladder diagrams are not sufficient to obtain this result. However, in a nonrelativistic approximation confinement is obtained and all poles disappear

  7. An Extended Model of Knowledge Governance

    Science.gov (United States)

    Karvalics, Laszlo Z.; Dalal, Nikunj

    In current times, we are seeing the emergence of a new paradigm to describe, understand, and analyze the expanding "knowledge domain". This overarching framework - called knowledge governance - draws from and builds upon knowledge management and may be seen as a kind of meta-layer of knowledge management. The emerging knowledge governance approach deals with issues that lie at the intersection of organization and knowledge processes. Knowledge governance has two main interpretation levels in the literature: the company- (micro-) and the national (macro-) level. We propose a three-layer model instead of the previous two-layer version, adding a layer of "global" knowledge governance. Analyzing and separating the main issues in this way, we can re-formulate the focus of knowledge governance research and practice in all layers.

  8. Polaron as the extended particle model

    International Nuclear Information System (INIS)

    Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.

    1977-01-01

    The polaron (a moving electron with concomitant lattice distortion) mass and energy are calculated. The problem of finding the Green function in the polaron model is solved. A number of the simplest approximations corresponding to the approximation in the picture of straight-line paths is considered. The case of strong coupling requires more detailed study of the particle motion in the effective field, caused by the significant polarization of vacuum near the particle. As a consequence, a more complex approximation of functional integrals is required. A variation method is used in this case. The bound state of a polaron interacting not only with photons, but also with some external classical field is investigated as well. A classical potential is considered as an example

  9. Developing and Extending a Cyberinfrastructure Model

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Rosio

    2007-11-13

Increasingly, research and education institutions are realizing the strategic value and challenge of deploying and supporting institutional cyberinfrastructure (CI). Cyberinfrastructure is composed of high performance computing systems, massive storage systems, visualization systems, and advanced networks to interconnect the components within and across institutions and research communities. CI also includes the professionals with expertise in scientific application and algorithm development and parallel systems operation. Unlike 'regular' IT infrastructure, the manner in which the components are configured and skills to do so are highly specific and specialized. Planning and coordinating these assets is a fundamental step toward enhancing an institution's research competitiveness and return on personnel, technology, and facilities investments. Coordinated deployment of CI assets has implications across the institution. Consider the VC for Research whose new faculty in the Life Sciences are now asking for simulation systems rather than wet labs, or the Provost who lost another faculty candidate to a peer institution that offered computational support for research, or the VC for Administration who has seen a spike in power and cooling demands from many of the labs and office spaces being converted to house systems. These are just some of the issues that research institutions are wrestling with as research becomes increasingly computational, data-intensive and interdisciplinary. This bulletin will discuss these issues and will present an approach for developing a cyberinfrastructure model that was successfully developed at one institution and then deployed across institutions.

  10. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature...... of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests....

  11. Quantifying the levitation picture of extended states in lattice models

    OpenAIRE

    Pereira, Ana. L. C.; Schulz, P. A.

    2002-01-01

    The behavior of extended states is quantitatively analyzed for two-dimensional lattice models. A levitation picture is established for both white-noise and correlated disorder potentials. In a continuum limit window of the lattice models we find simple quantitative expressions for the extended states levitation, suggesting an underlying universal behavior. On the other hand, these results point out that the quantum Hall phase diagrams may be disorder dependent.

  12. Clinical and Economic Impact of Empirical Extended-Infusion Piperacillin-Tazobactam in a Community Medical Center.

    Science.gov (United States)

    Brunetti, Luigi; Poustchi, Shirin; Cunningham, Daniel; Toscani, Michael; Nguyen, Joanne; Lim, Jeremy; Ding, Yilun; Nahass, Ronald G

    2015-07-01

Current medical center practice allows for the automatic conversion of all piperacillin/tazobactam orders from intermittent to extended infusion (EI). To compare the clinical and cost impact of empirical extended-infusion piperacillin/tazobactam. All consecutive patients treated with piperacillin/tazobactam for >48 hours were reviewed for inclusion. Patients were stratified into 2 groups: (1) traditional infusion (TI), preprotocol implementation, and (2) EI, postprotocol implementation. Patient demographics and primary and secondary diagnoses were extracted from the hospital discharge database. All patients were assessed for the primary end point of all-cause 14-day in-hospital mortality. Secondary outcomes included length of hospital stay, duration of antibiotic therapy, cost per treatment course, and occurrence of Clostridium difficile infection. A total of 2150 patients were included (EI = 632; TI = 1518). After adjusting for comorbidity, length of stay, and age, 14-day in-hospital mortality was similar between groups (odds ratio = 1.16; 95% CI = 0.85-1.58; P = 0.37). Length of stay was similar between the EI and TI groups (mean ± SD: 12.5 ± 9.58 days vs 11.8 ± 9.58 days, respectively; P = 0.10) after adjusting for age and Charlson-Deyo comorbidity index. Total cost per treatment course was reduced in the EI group by 13% compared with the TI group ($565.90 ± $257.70 vs $648.30 ± $349.20, respectively; P < 0.0001). Automatic substitution of EI for TI piperacillin/tazobactam is safe and associated with significant cost savings. EI piperacillin/tazobactam was not associated with a reduction in mortality or length of stay. © The Author(s) 2015.

  13. Building and testing models with extended Higgs sectors

    Science.gov (United States)

    Ivanov, Igor P.

    2017-07-01

    Models with non-minimal Higgs sectors represent a mainstream direction in theoretical exploration of physics opportunities beyond the Standard Model. Extended scalar sectors help alleviate difficulties of the Standard Model and lead to a rich spectrum of characteristic collider signatures and astroparticle consequences. In this review, we introduce the reader to the world of extended Higgs sectors. Not pretending to exhaustively cover the entire body of literature, we walk through a selection of the most popular examples: the two- and multi-Higgs-doublet models, as well as singlet and triplet extensions. We will show how one typically builds models with extended Higgs sectors, describe the main goals and the challenges which arise on the way, and mention some methods to overcome them. We will also describe how such models can be tested, what are the key observables one focuses on, and illustrate the general strategy with a subjective selection of results.

  14. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    Science.gov (United States)

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force to weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled as well as for humanoid robots or machines. The present study develops empirical models of the PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling based on observations of real behaviors. The differing dynamic behaviors of each PAM actuator are due not only to the structures of the PAM actuators themselves, but also to the variations of their material properties in manufacturing processes. To overcome these difficulties, the proposed empirical models are derived experimentally from the real physical behaviors of the PAM actuators being implemented. In the case studies, the simulated results, which are in good agreement with the experimental results, show that the proposed methodology can be applied to describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
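One simple way to build such an empirical model is sketched below under assumptions of our own (synthetic data and an arbitrarily chosen low-order polynomial, not the authors' model structure): regress measured force on gauge pressure and contraction ratio by least squares.

```python
import numpy as np

# Hedged sketch: fit an empirical PAM force model F(P, eps) as a low-order
# polynomial in gauge pressure P and contraction ratio eps by least squares.
# Data points and the polynomial form are assumptions for illustration only.
P   = np.array([100, 200, 300, 400, 500, 100, 300, 500], float)   # kPa
eps = np.array([0.00, 0.00, 0.05, 0.10, 0.15, 0.20, 0.20, 0.25])  # contraction
F   = np.array([ 90, 210, 280, 320, 360,  20, 120, 160], float)   # N (synthetic)

X = np.column_stack([np.ones_like(P), P, eps, P * eps, eps ** 2])
coef, *_ = np.linalg.lstsq(X, F, rcond=None)

def pam_force(p_kpa, contraction):
    """Predicted force from the fitted empirical model."""
    x = np.array([1.0, p_kpa, contraction, p_kpa * contraction, contraction ** 2])
    return float(x @ coef)

print("predicted force at P = 350 kPa, eps = 0.10:", round(pam_force(350, 0.10), 1), "N")
```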

  15. Semi-empirical corrosion model for Zircaloy-4 cladding

    International Nuclear Information System (INIS)

    Nadeem Elahi, Waseem; Atif Rana, Muhammad

    2015-01-01

The Zircaloy-4 cladding tube in Pressurized Water Reactors (PWRs) undergoes corrosion due to fast neutron flux, coolant temperature, and water chemistry. The thickness of the Zircaloy-4 cladding tube may decrease as corrosion penetration increases, which may affect the integrity of the fuel rod. The tin content and intermetallic particle sizes have been found to significantly influence the magnitude of oxide thickness. In the present study we have developed a semi-empirical corrosion model by modifying the Arrhenius equation for corrosion with acceleration factors for tin content and accumulative annealing. This developed model has been incorporated into a fuel performance computer code. The cladding oxide thickness data obtained from the semi-empirical corrosion model have been compared with experimental results, i.e., numerous cases of measured cladding oxide thickness from UO2 fuel rods irradiated in various PWRs. The results of both studies lie within an error band of 20 μm, which confirms the validity of the developed semi-empirical corrosion model. Key words: corrosion, Zircaloy-4, tin content, accumulative annealing factor, semi-empirical, PWR. (author)
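A minimal sketch of the kind of modified-Arrhenius formulation described above is given below; the pre-exponential factor, activation energy, and acceleration factors are placeholder values (assumptions), and only a linear post-transition growth regime is integrated.

```python
import numpy as np

# Minimal sketch of a modified-Arrhenius corrosion rate with multiplicative
# acceleration factors for tin content and accumulative annealing; the
# pre-exponential, activation energy and factor values are placeholders
# (assumptions), and only a linear post-transition growth regime is integrated.
R_GAS = 8.314            # J/(mol K)
A_PRE = 2.6e8            # pre-exponential, um/day (assumed)
Q_ACT = 1.2e5            # activation energy, J/mol (assumed)

def oxide_growth(days, T_kelvin, f_tin=1.1, f_anneal=0.95, dt=1.0):
    """Integrate d(oxide)/dt = A * exp(-Q/RT) * f_tin * f_anneal over time."""
    rate = A_PRE * np.exp(-Q_ACT / (R_GAS * T_kelvin)) * f_tin * f_anneal
    t = np.arange(0.0, days, dt)
    return np.cumsum(np.full_like(t, rate * dt))      # oxide thickness history, um

thickness = oxide_growth(days=1000, T_kelvin=620.0)
print(f"oxide after ~1000 days: {thickness[-1]:.1f} um (illustrative values only)")
```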

  16. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us...... to explore the process model in its entirety, including different drivers of subsidiary absorptive capacity (organizational mechanisms and contextual drivers), the three original dimensions of absorptive capacity (recognition, assimilation, application), and related outcomes (implementation...... and internalization of the best practice). The study’s findings reveal that managers have discretion in promoting absorptive capacity through the application of specific organizational mechanism and that the impact of contextual drivers on subsidiary absorptive capacity is not direct, but mediated...

  17. An empirical model for the melt viscosity of polymer blends

    International Nuclear Information System (INIS)

    Dobrescu, V.

    1981-01-01

    On the basis of experimental data for blends of polyethylene with different polymers an empirical equation is proposed to describe the dependence of melt viscosity of blends on component viscosities and composition. The model ensures the continuity of viscosity vs. composition curves throughout the whole composition range, the possibility of obtaining extremum values higher or lower than the viscosities of components, allows the calculation of flow curves of blends from the flow curves of components and their volume fractions. (orig.)

  18. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements...... of N uptake in spring barley and ryegrass catch crops, supplemented with data from the literature on the temporal release of organic residues in soil. The model estimates a faster mineralisation rate for organic N in pig slurry compared with cattle slurry, and the description includes an initial N...

  19. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed the IT infrastructure capability in a holistic way and then presented a concept model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  20. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

Full Text Available Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
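A small sketch of a gravity p-median objective is given below (the Huff-type allocation rule, notation, and parameter values are assumptions consistent with the description above, not the authors' implementation): customers patronise open facilities with probability proportional to attractiveness divided by distance raised to a power, and the best p-subset minimises the expected demand-weighted distance.

```python
import numpy as np
from itertools import combinations

# Hedged sketch of a gravity p-median objective with a Huff-type allocation
# (notation and parameter values are assumptions, not the authors' code).
rng = np.random.default_rng(1)
n_customers, n_sites, p, beta = 40, 8, 3, 2.0
cust = rng.uniform(0, 100, (n_customers, 2))        # customer coordinates
sites = rng.uniform(0, 100, (n_sites, 2))           # candidate facility sites
demand = rng.integers(1, 20, n_customers)           # demand weights
attract = rng.uniform(0.5, 2.0, n_sites)            # facility attractiveness
dist = np.linalg.norm(cust[:, None, :] - sites[None, :, :], axis=2) + 1e-9

def gravity_cost(open_sites):
    """Expected demand-weighted distance under Huff-type patronage."""
    idx = list(open_sites)
    d = dist[:, idx]
    w = attract[idx] * d ** (-beta)                 # Huff utilities
    prob = w / w.sum(axis=1, keepdims=True)         # patronage probabilities
    return float((demand[:, None] * prob * d).sum())

best = min(combinations(range(n_sites), p), key=gravity_cost)
print("best facility subset:", best, "objective:", round(gravity_cost(best), 1))
```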

  1. A study on online monitoring system development using empirical models

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2010-02-15

Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is built on the real-time diagnosis of impending failures and/or the prognosis of the residual lifetime of equipment by monitoring health conditions using various sensors. The success of CBM, therefore, hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of methods to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. I have focused on the latter method (sometimes referred to as empirical modeling based on statistical learning) because of some practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, it should be noted that these do not seem to be generally competitive against conventional physical models. As a result of investigating the bottlenecks of previous attempts, I have recognized the need for a novel strategy for grouping correlated variables such that an empirical model can accept not only statistical correlation but also some extent of physical knowledge of a system. Detailed examples of the problems are as follows: (1) missing important signals in a group caused by the lack of observations, (2) problems with time-delayed signals, and (3) the problem of optimal kernel bandwidth. In this study an improved statistical learning framework including the proposed strategy, together with case studies illustrating the performance of the method, is presented.

  2. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

Machining of steel material having hardness above 45 HRC (Hardness-Rockwell C) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear has been depicted in this review paper. An effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries so that an appropriate model can be used according to user requirements in hard turning.

  3. Extended Hubbard models for ultracold atoms in optical lattices

    Energy Technology Data Exchange (ETDEWEB)

    Juergensen, Ole

    2015-06-05

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.
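For orientation, the minimal form of an extended Hubbard Hamiltonian with a nearest-neighbour (offsite) interaction is usually written as follows (standard textbook notation, not a formula quoted from the thesis):

```latex
\begin{equation}
  \hat{H} = -t \sum_{\langle i,j\rangle,\sigma}
      \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)
      + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
      + V \sum_{\langle i,j\rangle} \hat{n}_{i} \hat{n}_{j},
\end{equation}
```

where t is the tunnelling amplitude, U the onsite interaction, and V the offsite (nearest-neighbour) interaction responsible for the additional correlated phases mentioned above.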

  4. An Extended Optimal Velocity Model with Consideration of Honk Effect

    International Nuclear Information System (INIS)

    Tang Tieqiao; Li Chuanyao; Huang Haijun; Shang Huayan

    2010-01-01

Based on the OV (optimal velocity) model, we present in this paper an extended OV model that takes the honk effect into consideration. The analytical and numerical results illustrate that the honk effect can improve the velocity and flow of uniform flow, but that the increments are related to the density. (interdisciplinary physics and related areas of science and technology)
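For reference, the baseline OV car-following law that such extensions start from is commonly written as below (the standard form due to Bando et al.; the honk-effect term itself is specific to the paper and is not reproduced here):

```latex
\begin{equation}
  \frac{dv_n(t)}{dt} = a\left[ V\!\big(\Delta x_n(t)\big) - v_n(t) \right],
  \qquad
  V(\Delta x) = \frac{v_{\max}}{2}\left[ \tanh(\Delta x - h_c) + \tanh(h_c) \right],
\end{equation}
```

where v_n and Δx_n are the velocity and headway of vehicle n, a is the driver sensitivity, and h_c a safety distance.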

  5. Extended Hubbard models for ultracold atoms in optical lattices

    International Nuclear Information System (INIS)

    Juergensen, Ole

    2015-01-01

    In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.

  6. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
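For context, the conventional extended-Drude relations that such an alternative parametrisation modifies are usually quoted as follows (standard Gaussian-units expressions with a constant plasma frequency ω_p; this is background, not the paper's new model):

```latex
\begin{align}
  \sigma(\omega) &= \frac{\omega_p^2}{4\pi}\,
      \frac{1}{1/\tau(\omega) - i\,\omega\, m^*(\omega)/m_b},\\
  \frac{1}{\tau(\omega)} &= \frac{\omega_p^2}{4\pi}\,
      \mathrm{Re}\!\left[\frac{1}{\sigma(\omega)}\right],\qquad
  \frac{m^*(\omega)}{m_b} = -\frac{\omega_p^2}{4\pi\,\omega}\,
      \mathrm{Im}\!\left[\frac{1}{\sigma(\omega)}\right].
\end{align}
```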

  7. Statistical model of stress corrosion cracking based on extended

    Indian Academy of Sciences (India)

The mechanism of stress corrosion cracking (SCC) has been discussed for decades. Here I propose a model of SCC that reflects the features of brittle fracture, based on the variational principle under an approximately assumed thermal equilibrium. In that model the functionals are expressed with extended forms of ...

  8. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
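Two of the widely used empirical stomatal formulations referred to above can be sketched in a few lines (standard textbook forms and illustrative parameter values; the exact model set and fitted parameters tested in the paper may differ):

```python
# Hedged sketch of two widely used empirical stomatal-conductance formulations
# (standard textbook forms; the models and parameters actually tested in the
# paper may differ). A in umol m-2 s-1, Cs/Ca in umol mol-1, gs in mol m-2 s-1.

def ball_berry(A, rh, cs, g0=0.01, g1=9.0):
    """Ball-Berry: gs = g0 + g1 * A * RH / Cs."""
    return g0 + g1 * A * rh / cs

def medlyn(A, vpd_kpa, ca, g0=0.01, g1=4.0):
    """Medlyn-type: gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca."""
    return g0 + 1.6 * (1.0 + g1 / vpd_kpa ** 0.5) * A / ca

# Example: A = 12 umol m-2 s-1, RH = 0.6, VPD = 1.5 kPa, CO2 = 400 umol mol-1
print("Ball-Berry gs:", round(ball_berry(12, 0.6, 400), 3), "mol m-2 s-1")
print("Medlyn gs:    ", round(medlyn(12, 1.5, 400), 3), "mol m-2 s-1")
```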

  9. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...... and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....... Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM...

  10. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credits. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini index, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergency. Commonly it is computed by discretisation of the data into bins using deciles, and one constraint is required to be met in this case: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using some numerical method of integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k, where k is a positive integer, observations of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: if we choose too small a value, we obtain an overestimate of the Information value, and vice versa. An adjusted square root of the number of bad clients seems to be a reasonable compromise.
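The Information value and one possible greedy reading of the supervised interval selection described above can be sketched as follows (an illustration with synthetic scores, not the authors' exact algorithm):

```python
import numpy as np

def information_value(scores, is_bad, n_start_bins=20, k=5):
    """IV = sum_i (g_i/G - b_i/B) * ln((g_i/G)/(b_i/B)) over score bins, where
    adjacent starting bins are merged until every bin holds at least k good
    and k bad clients (one greedy reading of the supervised selection idea)."""
    edges = np.unique(np.quantile(scores, np.linspace(0, 1, n_start_bins + 1)))
    idx = np.digitize(scores, edges[1:-1])           # bin index 0 .. len(edges)-2
    counts = [(int(np.sum((idx == i) & ~is_bad)), int(np.sum((idx == i) & is_bad)))
              for i in range(len(edges) - 1)]
    merged, cur_g, cur_b = [], 0, 0
    for g, b in counts:                              # merge left-to-right
        cur_g, cur_b = cur_g + g, cur_b + b
        if cur_g >= k and cur_b >= k:
            merged.append((cur_g, cur_b))
            cur_g, cur_b = 0, 0
    if (cur_g or cur_b) and merged:                  # fold remainder into last bin
        g, b = merged[-1]
        merged[-1] = (g + cur_g, b + cur_b)
    G = sum(g for g, _ in merged)
    B = sum(b for _, b in merged)
    return sum((g / G - b / B) * np.log((g / G) / (b / B)) for g, b in merged)

# Synthetic example: bad clients tend to score lower than good clients.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.15, 800), rng.normal(0.6, 0.15, 4200)])
is_bad = np.concatenate([np.ones(800, bool), np.zeros(4200, bool)])
print("Information value ≈", round(information_value(scores, is_bad), 3))
```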

  11. The dialogically extended mind

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Gangopadhyay, Nivedita; Tylén, Kristian

    2014-01-01

    A growing conceptual and empirical literature is advancing the idea that language extends our cognitive skills. One of the most influential positions holds that language – qua material symbols – facilitates individual thought processes by virtue of its material properties. Extending upon this model...... relate our approach to other ideas about collective minds and review a number of empirical studies to identify the mechanisms enabling the constitution of interpersonal cognitive systems....

  12. Coleman-Weinberg phase transition in extended Higgs models

    International Nuclear Information System (INIS)

    Sher, M.

    1996-01-01

    In Coleman-Weinberg symmetry breaking, all dimensionful parameters vanish and the symmetry is broken by loop corrections. Before Coleman-Weinberg symmetry breaking in the standard model was experimentally ruled out, it had already been excluded on cosmological grounds. In this Brief Report, the cosmological analysis is carried out for Coleman-Weinberg models with extended Higgs sectors, which are not experimentally ruled out, and general constraints on such models are given. copyright 1996 The American Physical Society

  13. Empirical STORM-E Model. I. Theoretical and Observational Basis

    Science.gov (United States)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
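
    As a purely illustrative sketch of the linear impulse-response framework mentioned above (not the actual STORM-E coefficients, indices, or fitting procedure), the snippet below models a storm-to-quiet enhancement ratio as the convolution of a synthetic geomagnetic index with an exponentially decaying impulse response; every number is invented.

```python
import numpy as np

ap = np.zeros(240)
ap[100:110] = 150.0                                # synthetic 3-hourly geomagnetic index
tau, gain, dt = 12.0, 0.002, 3.0                   # decay time (h), fit gain, time step (h)
lags = np.arange(0.0, 48.0, dt)
g = gain * np.exp(-lags / tau)                     # assumed impulse response g(lag)

# storm-to-quiet ratio as a discrete convolution of the index with g
ratio = 1.0 + np.convolve(ap, g)[:ap.size] * dt
print("peak modelled enhancement ratio:", ratio.max())
```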

  14. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

    Full Text Available Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premium that would be needed to cover the total or insured catastrophic losses in different world regions, using the Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and on the total economic and insured catastrophe losses in seven regions of the world over the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
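
    As an illustration of the Bühlmann credibility premium underlying this kind of analysis (equal exposure weights, without the Bühlmann-Straub volume measures), the sketch below shrinks each region's mean loss toward the overall mean; the loss matrix is invented and is not the Sigma Swiss Re data.

```python
import numpy as np

def buhlmann_premiums(X):
    """X[i, t] = observed loss of region i in year t; returns credibility premiums."""
    r, n = X.shape
    region_means = X.mean(axis=1)
    overall_mean = region_means.mean()
    s2 = X.var(axis=1, ddof=1).mean()            # expected process variance
    a = region_means.var(ddof=1) - s2 / n        # variance of hypothetical means
    a = max(a, 0.0)
    z = n * a / (n * a + s2) if a > 0 else 0.0   # credibility factor
    return z * region_means + (1 - z) * overall_mean

losses = np.array([[120.0,  95.0, 140.0, 110.0],
                   [300.0, 420.0, 380.0, 350.0],
                   [ 60.0,  75.0,  55.0,  80.0]])
print(buhlmann_premiums(losses))
```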

  15. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; especially variables capturing small-scale neighbourhood conditions are hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application to new house price data from Helsinki, Finland, we find support for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small-scale neighbourhood effects when we know they exist but lack proper variables to measure them.
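
    For reference, the spatial Durbin model referred to above has the standard form below; the notation is generic rather than taken from the paper (W is the spatial weight matrix, ρ the spatial autoregressive parameter, θ the coefficients on the spatially lagged regressors).

```latex
% Spatial Durbin model in standard notation:
y = \rho W y + X\beta + W X\theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_n)
```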

  16. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable, power law and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power law model while remaining practical to implement for VaR measurement.
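
    The sketch below illustrates the two-stage idea in a deliberately simplified form: a rolling-volatility rule stands in for the paper's HMM regime classification, and a generalized Pareto tail fit provides a regime-conditional Value-at-Risk. The data and parameters are synthetic and the code is not the authors' implementation.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2500) * 0.01        # synthetic daily returns

# Stage 1: crude regime classification (volatile vs quiet windows) as an HMM stand-in
vol = np.array([returns[max(0, i - 20):i + 1].std() for i in range(len(returns))])
crisis = vol > np.median(vol)

def var_evt(losses, p=0.99, threshold_q=0.90):
    """VaR from a GPD fit to exceedances over a high threshold (peaks-over-threshold)."""
    u = np.quantile(losses, threshold_q)
    exceed = losses[losses > u] - u
    xi, loc, beta = genpareto.fit(exceed, floc=0.0)
    n, n_u = len(losses), len(exceed)
    return u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1)

losses = -returns
print("VaR99 crisis:", var_evt(losses[crisis]))
print("VaR99 steady:", var_evt(losses[~crisis]))
```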

  17. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation- (PDE- based models in general. Specifically, the proper orthogonal decomposition (POD of a high dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA. This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
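
    As a minimal illustration of the POD step described above (not the paper's full ERA and balanced-truncation pipeline), the snippet below extracts a reduced basis from a snapshot matrix via the singular value decomposition; the snapshot data are random stand-ins for simulation or experimental fields.

```python
import numpy as np

rng = np.random.default_rng(6)
snapshots = rng.normal(size=(1000, 200))            # (grid points) x (time snapshots)
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, svals, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.99)) + 1          # modes capturing 99% of the energy
basis = U[:, :r]                                    # reduced POD basis
coeffs = basis.T @ (snapshots - mean_flow)          # Galerkin/projection coordinates
print("retained modes:", r)
```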

  18. An extended geometric criterion for chaos in the Dicke model

    International Nuclear Information System (INIS)

    Li Jiangdan; Zhang Suying

    2010-01-01

    We extend HBLSL's (Horwitz, Ben Zion, Lewkowicz, Schiffer and Levitan) new Riemannian geometric criterion for chaotic motion to Hamiltonian systems of weak coupling of potential and momenta by defining the 'mean unstable ratio'. We discuss the Dicke model of an unstable Hamiltonian system in detail and show that our results are in good agreement with that of the computation of Lyapunov characteristic exponents.

  19. The Extended Parallel Process Model: Illuminating the Gaps in Research

    Science.gov (United States)

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  20. Ground state phase diagram of extended attractive Hubbard model

    International Nuclear Information System (INIS)

    Robaszkiewicz, S.; Chao, K.A.; Micnas, R.

    1980-08-01

    The ground state phase diagram of the extended Hubbard model with intraatomic attraction has been derived in the Hartree-Fock approximation formulated in terms of the Bogoliubov variational approach. For a given value of electron density, the nature of the ordered ground state depends essentially on the sign and the strength of the nearest neighbor coupling. (author)

  1. Extended Cellular Automata Models of Particles and Space-Time

    Science.gov (United States)

    Beedle, Michael

    2005-04-01

    Models of particles and space-time are explored through simulations and theoretical models that use Extended Cellular Automata. The expanded cellular automata go beyond simple scalar binary cell-fields to discrete multi-level group representations such as SO(2), SU(2), SU(3) and SPIN(3,1). The propagation and evolution of these expanded cellular automata are then compared with quantum field theories based on the "harmonic paradigm", i.e. built from an infinite number of harmonic oscillators, and with gravitational models.

  2. Business models of micro businesses: Empirical evidence from creative industries

    Directory of Open Access Journals (Sweden)

    Pfeifer Sanja

    2017-01-01

    Full Text Available Business model describes how a business identifies and creates value for customers and how it organizes itself to capture some of this value in a profitable manner. Previous studies of business models in creative industries have only recently identified the unresolved issues in this field of research. The main objective of this article is to analyse the structure and diversity of business models and to deduce how these components interact or change in the context of micro and small businesses in creative services such as advertising, architecture and design. The article uses a qualitative approach. Case studies and semi-structured, in-depth interviews with six owners/managers of micro businesses in Croatia provide rich data. Structural coding in data analysis has been performed manually. The qualitative analysis has indicative relevance for the assessment and comparison of business models, however, it provides insights into which components of business models seem to be consolidated and which seem to contribute to the diversity of business models in creative industries. The article contributes to the advancement of empirical evidence and conceptual constructs that might lead to more advanced methodological approaches and proposition of the core typologies or classifications of business models in creative industries. In addition, a more detailed mapping of different choices available in managing value creation, value capturing or value networking might be a valuable help for owners/managers who want to change or cross-fertilize their business models.

  3. An extended chain Ising model and its Glauber dynamics

    International Nuclear Information System (INIS)

    Zhao Xing-Yu; Fan Xiao-Hui; Huang Yi-Neng; Huang Xin-Ru

    2012-01-01

    An extended chain Ising (ECI) model is first proposed, containing the Ising chain model, single-spin double-well potentials and a pure phonon heat bath with a specific energy exchange with the spins. The extension method is easy to apply to higher-dimensional cases. The single spin-flip probability (rate) of the ECI model is then deduced based on the Boltzmann principle and general statistical principles of independent events, and the model is simplified to an extended chain Glauber-Ising (ECGI) model. Moreover, the relaxation dynamics of the ECGI model were simulated by the Monte Carlo method and compared with the predictions of the special chain Glauber-Ising (SCGI) model. The results of the two models are consistent with each other when the Ising chain is long enough and the temperature is relatively low, which is the most relevant case for applications of the model. These results show that the ECI model provides a firm physical basis for the widely used single spin-flip rate proposed by Glauber and a possible route to obtain single spin-flip rates of other forms and even multi-spin-flip rates. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  4. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  5. Empirical models of wind conditions on Upper Klamath Lake, Oregon

    Science.gov (United States)

    Buccola, Norman L.; Wood, Tamara M.

    2010-01-01

    Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate-Adaptive Regressive Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded as compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds. The simulated tracer concentration was similar between model results when
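
    As a rough illustration of the gap-filling idea (not the report's actual ANN or MARS configuration), the sketch below trains a small neural network on synthetic shoreline wind data to predict a mid-lake raft record and fills a short gap with its predictions; all variable names and numbers are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 5000
shore = rng.gamma(2.0, 2.0, size=(n, 3))                     # wind speed at 3 shore sites
raft = 0.6 * shore.mean(axis=1) + 0.5 * rng.normal(size=n)   # mid-lake target series

gap = np.zeros(n, dtype=bool)
gap[3000:3050] = True                                         # a short gap at the raft

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(shore[~gap], raft[~gap])                            # train on non-gap records
raft_filled = raft.copy()
raft_filled[gap] = model.predict(shore[gap])                  # fill the gap
print("filled values:", raft_filled[3000:3005])
```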

  6. Exotic superconducting states in the extended attractive Hubbard model.

    Science.gov (United States)

    Nayak, Swagatam; Kumar, Sanjeev

    2018-04-04

    We show that the extended attractive Hubbard model on a square lattice allows for a variety of superconducting phases, including exotic mixed-symmetry phases with [Formula: see text] and [Formula: see text] symmetries, and a novel [Formula: see text] state. The calculations are performed within the Hartree-Fock Bardeen-Cooper-Schrieffer framework. The ground states of the mean-field Hamiltonian are obtained via a minimization scheme that relaxes the symmetry constraints on the superconducting solutions, hence allowing for a mixing of s-, p- and d-wave order parameters. The results are obtained within the assumption of uniform-density states. Our results show that extended attractive Hubbard model can serve as an effective model for investigating properties of exotic superconductors.

  7. Low-energy limit of the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L x SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f = flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  8. Dynamical quantum phase transitions in extended transverse Ising models

    Science.gov (United States)

    Bhattacharjee, Sourav; Dutta, Amit

    2018-04-01

    We study the dynamical quantum phase transitions (DQPTs) manifested in the subsequent unitary dynamics of an extended Ising model with an additional three spin interactions following a sudden quench. Revisiting the equilibrium phase diagram of the model, where different quantum phases are characterized by different winding numbers, we show that in some situations the winding number may not change across a gap closing point in the energy spectrum. Although, usually there exists a one-to-one correspondence between the change in winding number and the number of critical time scales associated with DQPTs, we show that the extended nature of interactions may lead to unusual situations. Importantly, we show that in the limit of the cluster Ising model, three critical modes associated with DQPTs become degenerate, thereby leading to a single critical time scale for a given sector of Fisher zeros.

  9. Empirical atom model of Vegard's law

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lei, E-mail: zhleile2002@163.com [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China); School of Electromechanical Automobile Engineering, Yantai University, Yantai 264005 (China); Li, Shichun [Materials Department, College of Electromechanical Engineering, China University of Petroleum, Qingdao 266555 (China)

    2014-02-01

    Vegard's law seldom holds true for binary continuous solid solutions. When two components form a solid solution, the atomic radii of the component elements change to satisfy the continuity requirement of the electron density at the interface between component atom A and atom B, so that the atom with the larger electron density expands and the atom with the smaller one contracts. If the expansion and contraction of the atomic radii of A and B are equal in magnitude, Vegard's law holds true. However, the expansion and contraction of the two component atoms are not equal in most situations; the magnitude of the variation depends on the cohesive energy of the corresponding elemental crystals. An empirical atom model of Vegard's law is proposed to account for the sign of the deviation according to the electron density at the Wigner–Seitz cell boundary obtained from the Thomas–Fermi–Dirac–Cheng model.
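
    For reference, Vegard's law and the deviation from it that the empirical atom model addresses can be written as follows (standard notation; a_A and a_B are the lattice parameters of the pure components).

```latex
% Vegard's law for a binary solid solution A_{1-x}B_x and the deviation from it:
a_{\mathrm{Vegard}}(x) = (1 - x)\, a_A + x\, a_B,
\qquad
\Delta a(x) = a_{\mathrm{obs}}(x) - a_{\mathrm{Vegard}}(x)
```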

  10. Semi-empirical neural network models of controlled dynamical systems

    Directory of Open Access Journals (Sweden)

    Mihail V. Egorchev

    2017-12-01

    Full Text Available A simulation approach is discussed for maneuverable aircraft motion treated as a nonlinear controlled dynamical system under multiple and diverse uncertainties, including imperfect knowledge of the simulated plant and of its environment. The suggested approach is based on merging theoretical knowledge of the plant with the training tools of the artificial neural network field. The efficiency of this approach is demonstrated using the example of motion modeling and identification of the aerodynamic characteristics of a maneuverable aircraft. A semi-empirical recurrent neural network based model learning algorithm is proposed for the multi-step-ahead prediction problem. This algorithm sequentially states and solves numerical optimization subproblems of increasing complexity, using each solution as the initial guess for the subsequent subproblem. We also consider a procedure for acquiring a representative training set that utilizes multisine control signals.

  11. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and empirical distributions of the various flow parameters, providing a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
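
    Two of the ancillary parameters mentioned above follow directly from standard definitions; the small sketch below computes them from quantities typically available at a gauged section (the example numbers are invented).

```python
import math

def froude_number(velocity, hydraulic_depth, g=9.81):
    """Fr = V / sqrt(g * D); Fr < 1 is subcritical flow, Fr > 1 supercritical."""
    return velocity / math.sqrt(g * hydraulic_depth)

def stream_power(discharge, slope, rho=1000.0, g=9.81):
    """Total stream power per unit channel length, Omega = rho * g * Q * S (W/m)."""
    return rho * g * discharge * slope

print(froude_number(velocity=1.2, hydraulic_depth=0.9))   # example section
print(stream_power(discharge=35.0, slope=0.002))
```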

  12. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence, like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only the contacts but also the times of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past, i.e., individuals are affected only by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
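
    A toy implementation of the time-window threshold rule described above is sketched below: a node adopts once at least a fraction phi of its contacts within the past `window` time units were adopters at contact time. The contact list, thresholds and seeds are all invented, and the code is not the authors' implementation.

```python
import random
from collections import defaultdict, deque

random.seed(3)
N, T, window, phi = 50, 500, 30, 0.3
contacts = sorted((random.randrange(T), random.randrange(N), random.randrange(N))
                  for _ in range(2000))

adopted = {0, 1}                                   # seed adopters
recent = defaultdict(deque)                        # node -> deque of (time, neighbor adopted?)

for t, i, j in contacts:
    if i == j:
        continue                                   # ignore self-contacts
    for a, b in ((i, j), (j, i)):
        recent[a].append((t, b in adopted))
        while recent[a] and recent[a][0][0] < t - window:
            recent[a].popleft()                    # drop contacts outside the window
        if a not in adopted:
            frac = sum(s for _, s in recent[a]) / len(recent[a])
            if frac >= phi:
                adopted.add(a)                     # threshold reached: adopt

print(f"{len(adopted)} of {N} nodes adopted")
```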

  13. Empirical membrane lifetime model for heavy duty fuel cell systems

    Science.gov (United States)

    Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik

    2016-12-01

    Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
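
    The general log-linear Weibull life-stress framework mentioned above can be sketched as follows; the specific stress transformations and coefficient values used for the membrane model are not reproduced here, so the symbols are generic.

```latex
% General log-linear Weibull life-stress form: the Weibull scale parameter eta
% depends log-linearly on transformed stresses x_j (x_j = ln(stress) for an
% inverse power law term, x_j = 1/T for an exponential/Arrhenius-type term),
% while the shape parameter beta is held constant:
\ln \eta = \alpha_0 + \sum_j \alpha_j x_j,
\qquad
R(t) = \exp\!\left[-\left(t/\eta\right)^{\beta}\right]
```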

  14. The extended RBAC model based on grid computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Jian-gang; WANG Ru-chuan; WANG Hai-yan

    2006-01-01

    This article proposes an extended role-based access control (RBAC) model for solving dynamic and multidomain problems in grid computing, and a formal description of the model is provided. The introduction of context and of the context-to-role and context-to-permission mapping relations helps the model adapt to the dynamic properties of the grid environment. The multidomain role inheritance relation, established by the authorization agent service, realizes multidomain authorization among autonomous domains. A function is proposed for resolving role inheritance conflicts during the establishment of the multidomain role inheritance relation.

  15. Constructing Multidatabase Collections Using Extended ODMG Object Model

    Directory of Open Access Journals (Sweden)

    Adrian Skehill; Mark Roantree

    1999-11-01

    Full Text Available Collections are an important feature in database systems. They provide us with the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model and a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources.

  16. Non-Fermi liquid behaviour in an extended Anderson model

    International Nuclear Information System (INIS)

    Liu Yuliang; Su Zhaobin; Yu Lu.

    1996-08-01

    An extended Anderson model, including screening channels (non-hybridizing, but interacting with the local orbit), is studied within the Anderson-Yuval approach, originally devised for the single-channel Kondo problem. By comparing the perturbation expansions of this model and a generalized resonant level model, the spin-spin correlation functions are calculated, which show a non-Fermi-liquid exponent that depends on the strength of the scattering potential. The relevance of this result to experiments in some heavy fermion systems is briefly discussed. (author). 31 refs

  17. Empirical modeling of oxygen uptake of flow

    African Journals Online (AJOL)

    eobe

    Keywords: stepped chute, skimming flow, aeration.

  18. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

    Full Text Available The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids in a firm’s effort to create a superior ‘innovation capability’. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts as well as poses significant challenges for management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations for the composition and the structures of the resource compositions have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose in assisting the decision-making that must consider the resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorized list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to gain a model to assess the performance of different business model configurations. Furthermore, different applications for the developed resource configuration may be envisioned.

  19. EMERGE - an empirical model for the formation of galaxies since z ˜ 10

    Science.gov (United States)

    Moster, Benjamin P.; Naab, Thorsten; White, Simon D. M.

    2018-06-01

    We present EMERGE, an Empirical ModEl for the foRmation of GalaxiEs, describing the evolution of individual galaxies in large volumes from z ˜ 10 to the present day. We assign a star formation rate to each dark matter halo based on its growth rate, which specifies how much baryonic material becomes available, and the instantaneous baryon conversion efficiency, which determines how efficiently this material is converted to stars, thereby capturing the baryonic physics. Satellites are quenched following the delayed-then-rapid model, and they are tidally disrupted once their subhalo has lost a significant fraction of its mass. The model is constrained with observed data extending out to high redshift. The empirical relations are very flexible, and the model complexity is increased only if required by the data, assessed by several model selection statistics. We find that for the same final halo mass galaxies can have very different star formation histories. Galaxies that are quenched at z = 0 typically have a higher peak star formation rate compared to their star-forming counterparts. EMERGE predicts stellar-to-halo mass ratios for individual galaxies and introduces scatter self-consistently. We find that at fixed halo mass, passive galaxies have a higher stellar mass on average. The intracluster mass in massive haloes can be up to eight times larger than the mass of the central galaxy. Clustering for star-forming and quenched galaxies is in good agreement with observational constraints, indicating a realistic assignment of galaxies to haloes.

  20. Empiric Piperacillin-Tazobactam versus Carbapenems in the Treatment of Bacteraemia Due to Extended-Spectrum Beta-Lactamase-Producing Enterobacteriaceae.

    Science.gov (United States)

    Ng, Tat Ming; Khong, Wendy X; Harris, Patrick N A; De, Partha P; Chow, Angela; Tambyah, Paul A; Lye, David C

    2016-01-01

    Extended-spectrum beta-lactamase (ESBL)-producing Enterobacteriaceae are a common cause of bacteraemia in endemic countries and may be associated with high mortality; carbapenems are considered the drug of choice. Limited data suggest piperacillin-tazobactam could be equally effective. We aimed to compare 30-day mortality of patients treated empirically with piperacillin-tazobactam versus a carbapenem in a multi-centre retrospective cohort study in Singapore. Only patients with active empiric monotherapy with piperacillin-tazobactam or a carbapenem were included. A propensity score for empiric carbapenem therapy was derived and an adjusted multivariate analysis of mortality was conducted. A total of 394 patients had ESBL-Escherichia coli and ESBL-Klebsiella pneumoniae bacteraemia, of which 23.1% were community-acquired cases. One hundred and fifty-one received initial active monotherapy comprising piperacillin-tazobactam (n = 94) or a carbapenem (n = 57). Patients who received carbapenems were less likely to have health-care-associated risk factors and an unknown source of bacteraemia, but were more likely to have a urinary source. Thirty-day mortality was comparable between those who received empiric piperacillin-tazobactam and a carbapenem (29 [30.9%] vs. 17 [29.8%], P = 0.89). Those who received empiric piperacillin-tazobactam had a lower 30-day acquisition of multi-drug-resistant and fungal infections (7 [7.4%] vs. 14 [24.6%]) than those who received an empiric carbapenem.

  1. Impact of empirical treatment in extended-spectrum beta-lactamase-producing Escherichia coli and Klebsiella spp. bacteremia. A multicentric cohort study

    Directory of Open Access Journals (Sweden)

    Peralta Galo

    2012-10-01

    Full Text Available Abstract Background The objective of this study is to analyze the factors associated with the adequacy of empirical antibiotic therapy and its impact on mortality in a large cohort of patients with extended-spectrum β-lactamase (ESBL)-producing Escherichia coli and Klebsiella spp. bacteremia. Methods Cases of ESBL-producing Enterobacteriaceae (ESBL-E) bacteremia were collected from 2003 through 2008 in 19 hospitals in Spain. Statistical analysis was performed using multivariate logistic regression. Results We analyzed 387 cases of ESBL-E bloodstream infection. The main sources of bacteremia were the urinary tract (55.3%), biliary tract (12.7%), intra-abdominal (8.8%) and unknown origin (9.6%). Among all 387 episodes, E. coli was isolated from blood cultures in 343, and in 45.71% of cases the ESBL-E isolate was multidrug resistant. Empirical antibiotic treatment was adequate in 48.8% of the cases and in-hospital mortality was 20.9%. In a multivariate analysis, adequacy was associated with lower mortality [adjusted OR (95% CI): 0.39 (0.31-0.97); P = 0.04], but not in patients without severe sepsis or shock. The class of antibiotic used empirically was not associated with prognosis in adequately treated patients. Conclusion ESBL-E bacteremia has a relatively high mortality that is partly related to the low adequacy of empirical antibiotic treatment. In selected subgroups the relevance of the adequacy of empirical therapy is limited.

  2. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
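
    For reference, a standard two-level (KL)E nested CES specification of the type discussed above can be written as below; the symbols (A, α, β, ρ, ρ₁) are generic and are not taken from the paper.

```latex
% Two-level (KL)E nested CES production function, with substitution elasticities
% sigma_1 = 1/(1 - rho_1) inside the capital-labour nest and
% sigma = 1/(1 - rho) between the nest and energy:
V = \left[\beta K^{\rho_1} + (1-\beta) L^{\rho_1}\right]^{1/\rho_1},
\qquad
Y = A\left[\alpha V^{\rho} + (1-\alpha) E^{\rho}\right]^{1/\rho}
```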

  3. Power spectrum model of visual masking: simulations and empirical data.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.

  4. Model Calibration of Exciter and PSS Using Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu

    2012-07-26

    Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
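
    The sketch below illustrates the EKF prediction-correction idea on a toy first-order model whose unknown gain is appended to the state vector; it is not the exciter or PSS model calibrated in the paper, and all numbers are invented.

```python
import numpy as np

dt, T_true, K_true, steps = 0.01, 0.5, 2.0, 2000
rng = np.random.default_rng(4)

# simulate "field test" data for x' = (K*u - x)/T with a step input
u = np.ones(steps)
x = np.zeros(steps)
for k in range(1, steps):
    x[k] = x[k-1] + dt * (K_true * u[k-1] - x[k-1]) / T_true
z = x + 0.02 * rng.normal(size=steps)          # noisy measurements of x

# EKF with augmented state s = [x, K]; time constant T assumed known
s = np.array([0.0, 1.0])                       # initial guess: K = 1.0
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])
R = np.array([[0.02**2]])
H = np.array([[1.0, 0.0]])

for k in range(1, steps):
    xk, Kk = s                                 # predict (nonlinear: state contains K)
    s = np.array([xk + dt * (Kk * u[k-1] - xk) / T_true, Kk])
    F = np.array([[1 - dt / T_true, dt * u[k-1] / T_true],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    y = z[k] - H @ s                           # correct against the measurement
    S = H @ P @ H.T + R
    Kgain = P @ H.T @ np.linalg.inv(S)
    s = s + Kgain @ y
    P = (np.eye(2) - Kgain @ H) @ P

print("estimated gain K:", s[1])               # should approach 2.0
```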

  5. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  6. Extended cox regression model: The choice of timefunction

    Science.gov (United States)

    Isik, Hatice; Tutkun, Nihal Ata; Karasoy, Durdu

    2017-07-01

    The Cox regression model (CRM), which takes into account the effect of censored observations, is one of the most widely applied models in survival analysis for evaluating the effects of covariates. Proportional hazards (PH), which requires a constant hazard ratio over time, is the key assumption of the CRM. Using the extended CRM provides a test of the PH assumption by including a time-dependent covariate, or an alternative model in the case of nonproportional hazards. In this study, different types of real data sets are used to choose the time function, and the differences between time functions are analyzed and discussed.

  7. Magnetization plateaux in an extended Shastry-Sutherland model

    International Nuclear Information System (INIS)

    Schmidt, Kai Phillip; Dorier, Julien; Mila, Frederic

    2009-01-01

    We study an extended two-dimensional Shastry-Sutherland model in a magnetic field where, besides the usual Heisenberg exchanges of the Shastry-Sutherland model, two additional SU(2)-invariant couplings are included. Perturbative continuous unitary transformations are used to determine the leading-order effects of the additional couplings on the pure hopping and on the long-range interactions between the triplons, which are the most relevant terms for small magnetization. We then compare the energy of various magnetization plateaux in the classical limit and discuss the implications for the two-dimensional quantum magnet SrCu2(BO3)2.

  8. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and of a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a homogeneous Poisson process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
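
    As a simplified illustration of the idea (not the authors' exact estimator), the sketch below rescales pool event rates with expert-elicited homogenisation factors and then shrinks the target event's rate toward the homogenised pool mean using a moment-matched gamma-Poisson empirical Bayes weight; all numbers are invented.

```python
import numpy as np

counts = np.array([0, 3, 7, 2, 5])                      # observed events per pool item
exposure = np.array([4.0, 10.0, 12.0, 6.0, 9.0])        # observation years per item
h = np.array([1.0, 0.5, 0.3, 0.8, 0.4])                 # homogenisation factors -> target scale

rates = h * counts / exposure                           # homogenised rate estimates
m, v = rates.mean(), rates.var(ddof=1)                  # moments of the homogenised pool

beta = m / v                                            # moment-matched gamma prior
alpha = m * beta

# empirical Bayes posterior-mean rate for the target item (index 0, factor 1.0):
# a weighted average of its own rate and the homogenised pool mean
post_rate = (alpha + counts[0]) / (beta + exposure[0])
print("pooled prior mean:", m, " EB estimate for item 0:", post_rate)
```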

  9. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
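
    As a small illustration of the Freundlich fit mentioned above (q = K_f * C^n), the sketch below estimates K_f and n by linear regression in log-log space; the concentration and sorbed-mass data are synthetic, not the INEEL measurements.

```python
import numpy as np

C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])     # aqueous concentration (synthetic)
q = 1.8 * C**0.7 * np.exp(0.05 * np.random.default_rng(5).normal(size=C.size))

# log q = n * log C + log K_f, so a straight-line fit recovers both parameters
n, logKf = np.polyfit(np.log(C), np.log(q), 1)
print("Freundlich n =", n, " Kf =", np.exp(logKf))
```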

  10. Empirical Modeling of ICMEs Using ACE/SWICS Ionic Distributions

    Science.gov (United States)

    Rivera, Y.; Landi, E.; Lepri, S. T.; Gilbert, J. A.

    2017-12-01

    Coronal Mass Ejections (CMEs) are some of the largest, most energetic events in the solar system releasing an immense amount of plasma and magnetic field into the Heliosphere. The Earth-bound plasma plays a large role in space weather, causing geomagnetic storms that can damage space and ground based instrumentation. As a CME is released, the plasma experiences heating, expansion and acceleration; however, the physical mechanism supplying the heating as it lifts out of the corona still remains uncertain. From previous work we know the ionic composition of solar ejecta undergoes a gradual transition to a state where ionization and recombination processes become ineffective rendering the ionic composition static along its trajectory. This property makes them a good indicator of thermal conditions in the corona, where the CME plasma likely receives most of its heating. We model this so-called `freeze-in' process in Earth-directed CMEs using an ionization code to empirically determine the electron temperature, density and bulk velocity. `Frozen-in' ions from an ensemble of independently modeled plasmas within the CME are added together to fit the full range of observational ionic abundances collected by ACE/SWICS during ICME events. The models derived using this method are used to estimate the CME energy budget to determine a heating rate used to compare with a variety of heating mechanisms that can sustain the required heating with a compatible timescale.

  11. Phenomenological comparison of models with extended Higgs sectors

    International Nuclear Information System (INIS)

    Muehlleitner, Margarete

    2017-01-01

    Beyond the Standard Model (SM) extensions usually include extended Higgs sectors. Models with singlet or doublet fields are the simplest ones that are compatible with the ρ parameter constraint. The discovery of new non-SM Higgs bosons and the identification of the underlying model requires dedicated Higgs properties analyses. In this paper, we compare several Higgs sectors featuring 3 CP-even neutral Higgs bosons that are also motivated by their simplicity and their capability to solve some of the flaws of the SM. They are: the SM extended by a complex singlet field (C x SM), the singlet extension of the 2-Higgs-Doublet Model (N2HDM), and the Next-to-Minimal Supersymmetric SM extension (NMSSM). In addition, we analyse the CP-violating 2-Higgs-Doublet Model (C2HDM), which provides 3 neutral Higgs bosons with a pseudoscalar admixture. This allows us to compare the effects of singlet and pseudoscalar admixtures. Through dedicated scans of the allowed parameter space of the models, we analyse the phenomenologically viable scenarios from the view point of the SM-like Higgs boson and of the signal rates of the non-SM-like Higgs bosons to be found. In particular, we analyse the effect of singlet/pseudoscalar admixture, and the potential to differentiate these models in the near future. This is supported by a study of couplings sums of the Higgs bosons to massive gauge bosons and to fermions, where we identify features that allow us to distinguish the models, in particular when only part of the Higgs spectrum is discovered. Our results can be taken as guidelines for future LHC data analyses, by the ATLAS and CMS experiments, to identify specific benchmark points aimed at revealing the underlying model.

  12. Phenomenological comparison of models with extended Higgs sectors

    Energy Technology Data Exchange (ETDEWEB)

    Muehlleitner, Margarete [Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany). Inst. for Theoretical Physics; Sampaio, Marco O.P. [Aveiro Univ. e CIDMA (Portugal). Dept. de Fisica; Santos, Rui [Instituto Politecnico de Lisboa (Portugal). ISEL - Instituto Superior de Engenharia de Lisboa; Lisboa Univ. (Portugal). Centro de Fisica Teorica e Computacional; Univ. do Minho, Braga (Portugal). LIP, Dept. de Fisica; Wittbrodt, Jonas [Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany). Inst. for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2017-03-22

    Beyond the Standard Model (SM) extensions usually include extended Higgs sectors. Models with singlet or doublet fields are the simplest ones that are compatible with the ρ parameter constraint. The discovery of new non-SM Higgs bosons and the identification of the underlying model requires dedicated Higgs properties analyses. In this paper, we compare several Higgs sectors featuring 3 CP-even neutral Higgs bosons that are also motivated by their simplicity and their capability to solve some of the flaws of the SM. They are: the SM extended by a complex singlet field (C x SM), the singlet extension of the 2-Higgs-Doublet Model (N2HDM), and the Next-to-Minimal Supersymmetric SM extension (NMSSM). In addition, we analyse the CP-violating 2-Higgs-Doublet Model (C2HDM), which provides 3 neutral Higgs bosons with a pseudoscalar admixture. This allows us to compare the effects of singlet and pseudoscalar admixtures. Through dedicated scans of the allowed parameter space of the models, we analyse the phenomenologically viable scenarios from the view point of the SM-like Higgs boson and of the signal rates of the non-SM-like Higgs bosons to be found. In particular, we analyse the effect of singlet/pseudoscalar admixture, and the potential to differentiate these models in the near future. This is supported by a study of couplings sums of the Higgs bosons to massive gauge bosons and to fermions, where we identify features that allow us to distinguish the models, in particular when only part of the Higgs spectrum is discovered. Our results can be taken as guidelines for future LHC data analyses, by the ATLAS and CMS experiments, to identify specific benchmark points aimed at revealing the underlying model.

  13. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  14. The emergence of a temporally extended self and factors that contribute to its development: from theoretical and empirical perspectives.

    Science.gov (United States)

    2013-04-01

    The main aims of the current research were to determine when children develop a temporally extended self (TES) and what factors contribute to its development. However, in order to address these aims it was important to, first, assess whether the test of delayed self-recognition (DSR) is a valid measure for the development of the TES, and, second, to propose and evaluate a theoretical model that describes what factors influence the development of the TES. The validity of the DSR test was verified by comparing the performance of 57 children on the DSR test to their performance on a meta-representational task (modified false belief task) and to a task that was essentially the same as the DSR test but was specifically designed to rely on the capacity to entertain secondary representations (i.e., surprise body task). Longitudinal testing of the children showed that at the mental age (MA) of 2.5 years they failed the DSR test, despite training them to understand the intended functions of the medium used in the DSR test; whereas, with training, children at the MA of 3.0 and 3.5 years exhibited DSR. Children at the MA of 4 years exhibited DSR without any training. Finally, results suggest that children's meta-representational ability was the only factor that contributed to the prediction of successful performance on the DSR test, and thus to the emergence of the TES. Furthermore, prospective longitudinal data revealed that caregiver conversational style was the only factor that contributed to the prediction of level of training required to pass the DSR test. That is, children of low-elaborative caregivers required significantly more training to pass the DSR test than children of high-elaborative caregivers, indicating that children who received more elaborative conversational input from their caregivers had a more advanced understanding of their TES. © 2013 The Society for Research in Child Development, Inc.

  15. Modeling of heavy metal salt solubility using the Extended UNIQUAC model

    DEFF Research Database (Denmark)

    Iliuta, Maria Cornelia; Thomsen, Kaj; Rasmussen, Peter

    2002-01-01

    Solid-liquid equilibria in complex aqueous systems involving a heavy metal cation (Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+) and one or more ions for which Extended UNIQUAC parameters have been published previously are modeled using the Extended UNIQUAC model. Model parameters are determined...

  16. Wave speeds in the macroscopic extended model for ultrarelativistic gases

    Energy Technology Data Exchange (ETDEWEB)

    Borghero, F., E-mail: borghero@unica.it [Dip. Matematica e Informatica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy); Demontis, F., E-mail: fdemontis@unica.it [Dip. Matematica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); Pennisi, S., E-mail: spennisi@unica.it [Dip. Matematica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy)

    2013-11-15

    Equations determining wave speeds for a model of ultrarelativistic gases are investigated. This model is already present in the literature; it deals with an arbitrary number of moments and it was proposed in the context of exact macroscopic approaches in Extended Thermodynamics. We find these results: the whole system for the determination of the wave speeds can be divided into independent subsystems which are expressed by linear combinations, through scalar coefficients, of tensors all of the same order; some wave speeds, but not all of them, are expressed by square roots of rational numbers; finally, we prove that these wave speeds for the macroscopic model are the same as those furnished by the kinetic model.

  17. Higgs detectability in the extended supersymmetric standard model

    International Nuclear Information System (INIS)

    Kamoshita, Jun-ichi

    1995-01-01

    Higgs detectability at a future linear collider is discussed in the minimal supersymmetric standard model (MSSM) and a supersymmetric standard model with a gauge singlet Higgs field (NMSSM). First, in the MSSM at least one of the neutral scalar Higgs bosons is shown to be detectable, irrespective of the parameters of the model, at a future e+e− linear collider at √s = 300-500 GeV. Next, the Higgs sector of the NMSSM is considered; since the lightest Higgs boson can be singlet dominated and therefore decouple from the Z0 boson, it is important to consider the production of heavier Higgses. It is shown that also in this case at least one of the neutral scalar Higgs bosons will be detectable at a future linear collider. We extend the analysis and show that the same is true even if three singlets are included. Thus the detectability of the Higgs bosons of these models is guaranteed. (author)

  18. Extended Smoluchowski models for interpreting relaxation phenomena in liquids

    International Nuclear Information System (INIS)

    Polimeno, A.; Frezzato, D.; Saielli, G.; Moro, G.J.; Nordio, P.L.

    1998-01-01

    Interpretation of the dynamical behaviour of single molecules or collective modes in liquids has been increasingly centered, in the last decade, on complex liquid systems, including ionic solutions, polymeric liquids, supercooled fluids and liquid crystals. This has been made necessary by the need to interpret dynamical data obtained by advanced experiments, like optical Kerr effect, time-dependent fluorescence shift experiments, two-dimensional Fourier-transform and high-field electron spin resonance, and scattering experiments like quasi-elastic neutron scattering. This communication is centered on the definition, treatment and application of several extended stochastic models, which have proved to be very effective tools for interpreting and rationalizing complex relaxation phenomena in liquid structures. First, applications of standard Fokker-Planck equations for the orientational relaxation of molecules in isotropic and ordered liquid phases are reviewed. In particular, attention is focused on the interpretation of neutron scattering in nematics. Next, an extended stochastic model is used to interpret time-domain resolved fluorescence emission experiments. A two-body stochastic model allows the theoretical interpretation of dynamical Stokes shift effects in fluorescence emission spectra, performed on probes in isotropic and ordered polar phases. Finally, for the case of isotropic fluids made of small rigid molecules, a very detailed model is considered, which includes as basic ingredients a Fokker-Planck description of the molecular vibrational motion and the slow diffusive motion of a persistent cage structure, together with the decay processes related to the changing structure of the cage. (author)

  19. Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming

    2015-06-01

    Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the Typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important for leading to significant fluctuation of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at lead times of 0 (end of April), 1, and 2 months, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) the tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow enhanced Philippine Sea anticyclonic anomalies and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increased Taiwan Meiyu rainfall. Note that the identified precursors at the various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability with significant cross-validated temporal correlation coefficient skills of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead times, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.
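
    A minimal sketch of the kind of scheme the record describes: a three-predictor linear regression evaluated by leave-one-out cross-validation. The predictor and rainfall arrays below are synthetic stand-ins; the study's actual indices and data are not reproduced here.

```python
import numpy as np

def loo_cross_validated_skill(X, y):
    """Leave-one-out cross-validation of a linear regression model.

    X : (n_years, n_predictors) standardized predictor anomalies
    y : (n_years,) observed rainfall anomalies
    Returns the correlation between LOO-predicted and observed values.
    """
    n = len(y)
    y_hat = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        # Fit regression coefficients on all years except year i.
        A = np.column_stack([np.ones(keep.sum()), X[keep]])
        beta, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        y_hat[i] = np.concatenate(([1.0], X[i])) @ beta
    return np.corrcoef(y_hat, y)[0, 1]

# Hypothetical example: 34 years and three predictors (synthetic data only).
rng = np.random.default_rng(0)
X = rng.standard_normal((34, 3))
y = X @ np.array([0.6, 0.4, 0.3]) + 0.5 * rng.standard_normal(34)
print(f"cross-validated skill: {loo_cross_validated_skill(X, y):.2f}")
```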

  20. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
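
    The record names the shape of the correlation (linear in concentration and temperature, non-linear in wavelength) without giving coefficients. The sketch below fits a correspondingly shaped empirical form to hypothetical measurements; the functional form and all numbers are illustrative assumptions, not the paper's correlation.

```python
import numpy as np

def design_matrix(c, T, lam):
    """Empirical form: linear in concentration c (g/dL) and temperature T (deg C),
    Cauchy-like 1/lambda^2 term for wavelength lam (nm). Shape is illustrative only."""
    return np.column_stack([np.ones_like(c), c, T, 1.0 / lam**2])

# Hypothetical calibration measurements (not the paper's data).
rng = np.random.default_rng(1)
c = rng.uniform(0, 30, 200)          # hemoglobin concentration
T = rng.uniform(20, 40, 200)         # temperature
lam = rng.uniform(450, 650, 200)     # wavelength
n_obs = (1.333 + 1.9e-3 * c - 1.1e-4 * T + 3.0e3 / lam**2
         + 1e-4 * rng.standard_normal(200))

coef, *_ = np.linalg.lstsq(design_matrix(c, T, lam), n_obs, rcond=None)
n_pred = design_matrix(np.array([10.0]), np.array([37.0]), np.array([589.0])) @ coef
print("fitted coefficients:", np.round(coef, 6))
print("n(10 g/dL, 37 C, 589 nm) =", round(float(n_pred[0]), 4))
```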

  1. Empirical modelling to predict the refractive index of human blood

    International Nuclear Information System (INIS)

    Yahya, M; Saghir, M Z

    2016-01-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. The results analysis revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy. (paper)

  2. OSeMOSYS Energy Modeling Using an Extended UTOPIA Model

    Science.gov (United States)

    Lavigne, Denis

    2017-01-01

    The OSeMOSYS project offers open-access energy modeling to a wide audience. Its relative simplicity makes it appealing for academic research and governmental organizations to study the impacts of policy decisions on an energy system in the context of possibly severe greenhouse gases emissions limitations. OSeMOSYS is a tool that enhances the…

  3. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    Science.gov (United States)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk-free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk-free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk-free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk-free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as being driven by the US forward interest rates plus a spread - having its own term structure - above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar - showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent for the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  4. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling

    Directory of Open Access Journals (Sweden)

    Miguel Aguilera

    2016-09-01

    Full Text Available The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioural metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioural preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioural flexibility with an equivalent model from the point of view of 'internalist neuroscience'. A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioural patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling

  5. Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling.

    Science.gov (United States)

    Aguilera, Miguel; Bedia, Manuel G; Barandiaran, Xabier E

    2016-01-01

    The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of "internalist neuroscience." A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We

  6. Flexible Modeling of Epidemics with an Empirical Bayes Framework

    Science.gov (United States)

    Brooks, Logan C.; Farrow, David C.; Hyun, Sangwon; Tibshirani, Ryan J.; Rosenfeld, Roni

    2015-01-01

    Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic’s behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the “Predict the Influenza Season Challenge”, with the task of predicting key epidemiological measures for the 2013–2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctors visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to some other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013–2014 U.S. influenza season, and compare the framework’s cross-validated prediction error on historical data to
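
    A minimal sketch, under assumed curve-transformation rules, of the core idea described above: build candidate epidemic curves by perturbing past seasons (shifting the peak, rescaling its height), then weight candidates by their likelihood against the partially observed current season to obtain an approximate posterior. The transformation rules, noise model and data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def candidate_curves(past_seasons, n_samples=2000, rng=None):
    """Generate candidate curves by randomly shifting and rescaling
    historical seasonal curves (each a length-52 weekly array)."""
    rng = rng or np.random.default_rng(0)
    weeks = np.arange(52)
    out = []
    for _ in range(n_samples):
        base = past_seasons[rng.integers(len(past_seasons))]
        shift = rng.integers(-4, 5)          # peak-timing perturbation (weeks)
        scale = rng.uniform(0.7, 1.4)        # peak-height perturbation
        out.append(scale * np.interp(weeks, weeks + shift, base))
    return np.array(out)

def posterior_weights(candidates, observed, obs_sigma=0.5):
    """Weight each candidate by a Gaussian likelihood of the weeks observed so far."""
    t = len(observed)
    resid = candidates[:, :t] - observed
    loglik = -0.5 * np.sum((resid / obs_sigma) ** 2, axis=1)
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

# Hypothetical historical seasons and a partially observed current season.
rng = np.random.default_rng(2)
past = [3 * np.exp(-0.5 * ((np.arange(52) - p) / 4.0) ** 2) for p in (20, 24, 28)]
obs = past[1][:15] + 0.3 * rng.standard_normal(15)
cand = candidate_curves(past, rng=rng)
w = posterior_weights(cand, obs)
print("posterior-mean predicted peak week:", np.round(np.sum(w * cand.argmax(axis=1)), 1))
```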

  7. Extending Primitive Spatial Data Models to Include Semantics

    Science.gov (United States)

    Reitsma, F.; Batcheller, J.

    2009-04-01

    Our traditional geospatial data model involves associating some measurable quality, such as temperature, or observable feature, such as a tree, with a point or region in space and time. When capturing data we implicitly subscribe to some kind of conceptualisation. If we can make this explicit in an ontology and associate it with the captured data, we can leverage formal semantics to reason with the concepts represented in our spatial data sets. To do so, we extend our fundamental representation of geospatial data by including a URI in our basic data model that links it to the ontology defining our conceptualisation. We thus extend Goodchild et al.'s geo-atom [1] with the addition of a URI: (x, Z, z(x), URI). This provides us with pixel- or feature-level knowledge and the ability to create layers of data from a set of pixels or features that might be drawn from a database based on their semantics. Using open source tools, we present a prototype that involves simple reasoning as a proof of concept. References [1] M.F. Goodchild, M. Yuan, and T.J. Cova. Towards a general theory of geographic representation in GIS. International Journal of Geographical Information Science, 21(3):239-260, 2007.
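
    The extended geo-atom (x, Z, z(x), URI) maps directly onto a small data structure. A minimal sketch (field names and the example URI are my own, not from the record):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoAtom:
    """Geo-atom extended with a semantic link: (x, Z, z(x), URI)."""
    location: tuple[float, float]   # x: point (or region reference) in space-time
    property_name: str              # Z: the measurable quality or observable feature
    value: float                    # z(x): the value of Z at x
    concept_uri: str                # URI linking the datum to its ontology concept

# Example: a temperature observation tied to an (illustrative) ontology concept.
atom = GeoAtom((55.95, -3.19), "air_temperature", 281.4,
               "http://example.org/ontology#AirTemperature")
print(atom)
```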

  8. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  9. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  10. Extending cavitation models to subcooled and superheated nozzle flow

    International Nuclear Information System (INIS)

    Schmidt, D.P.; Corradini, M.L.

    1997-01-01

    Existing models for cavitating flow are extended to apply to discharge of hot liquid through nozzles. Two types of models are considered: an analytical model and a two-dimensional numerical model. The analytical model of cavitating nozzle flow is reviewed and shown to apply to critical nozzle flow where the liquid is subcooled with respect to the downstream conditions. In this model the liquid and vapor are assumed to be in thermodynamic equilibrium. The success of this analytical model suggests that hydrodynamic effects dominate the subcooled nozzle flow. For more detailed predictions an existing multi-dimensional cavitation model based on hydrodynamic non-equilibrium is modified to apply to discharge of hot liquid. Non-equilibrium rate data from experimental measurements are used to close the equations. The governing equations are solved numerically in time and in two spatial dimensions on a boundary fitted grid. Results are shown for flow through sharp nozzles, and the coefficient of discharge is found to agree with experimental measurements for both subcooled and flashing fluid. (author)

  11. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n^2 log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  12. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    Full Text Available This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  13. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through
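
    A minimal sketch of the regression-type model described above, with ordinary least squares standing in for the iterative multiple-regression/neural-network procedure. All inputs are synthetic stand-ins for the OMNI and MM100 data; the predictor choices and coefficients are illustrative only.

```python
import numpy as np

# Hypothetical hourly solar-wind inputs (synthetic stand-ins for OMNI data).
rng = np.random.default_rng(3)
v_sw = rng.uniform(300, 700, 1000)            # solar wind speed [km/s]
cone = rng.uniform(0, 90, 1000)               # IMF cone angle [deg]
n_sw = rng.lognormal(1.5, 0.5, 1000)          # proton density [cm^-3]
p_dyn = 1.6726e-6 * n_sw * v_sw**2            # dynamic pressure [nPa]

# Synthetic "observed" Pc3 activity: favours fast wind, low cone angle,
# and non-vanishing density (mimicking the qualitative dependencies described).
pc3 = 0.004 * v_sw - 0.01 * cone + 0.6 * np.log(n_sw + 0.1) + 0.2 * rng.standard_normal(1000)

# Multiple linear regression on SW-based input parameters.
X = np.column_stack([np.ones_like(v_sw), v_sw, np.cos(np.radians(cone)) ** 2, np.log(p_dyn)])
beta, *_ = np.linalg.lstsq(X, pc3, rcond=None)
corr = np.corrcoef(X @ beta, pc3)[0, 1]
print("fitted coefficients:", np.round(beta, 3), " model-data correlation:", round(corr, 2))
```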

  14. Conformal standard model with an extended scalar sector

    Energy Technology Data Exchange (ETDEWEB)

    Latosiński, Adam [Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut),Mühlenberg 1, D-14476 Potsdam (Germany); Lewandowski, Adrian; Meissner, Krzysztof A. [Faculty of Physics, University of Warsaw,Pasteura 5, 02-093 Warsaw (Poland); Nicolai, Hermann [Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut),Mühlenberg 1, D-14476 Potsdam (Germany)

    2015-10-26

    We present an extended version of the Conformal Standard Model (characterized by the absence of any new intermediate scales between the electroweak scale and the Planck scale) with an enlarged scalar sector coupling to right-chiral neutrinos. The scalar potential and the Yukawa couplings involving only right-chiral neutrinos are invariant under a new global symmetry SU(3)_N that complements the standard U(1)_{B−L} symmetry, and is broken explicitly only by the Yukawa interaction, of order O(10^{−6}), coupling right-chiral neutrinos and the electroweak lepton doublets. We point out four main advantages of this enlargement, namely: (1) the economy of the (non-supersymmetric) Standard Model, and thus its observational success, is preserved; (2) thanks to the enlarged scalar sector the RG improved one-loop effective potential is everywhere positive with a stable global minimum, thereby avoiding the notorious instability of the Standard Model vacuum; (3) the pseudo-Goldstone bosons resulting from spontaneous breaking of the SU(3)_N symmetry are natural Dark Matter candidates with calculable small masses and couplings; and (4) the Majorana Yukawa coupling matrix acquires a form naturally adapted to leptogenesis. The model is made perturbatively consistent up to the Planck scale by imposing the vanishing of quadratic divergences at the Planck scale (‘softly broken conformal symmetry’). Observable consequences of the model occur mainly via the mixing of the new scalars and the standard model Higgs boson.

  15. Extended particle model with quark confinement and charmonium spectroscopy

    International Nuclear Information System (INIS)

    Hasenfratz, Peter; Kuti, Julius; Szalay, A.S.

    Extended particle-like vector gluon bubbles (bags) are introduced which are stabilized against free expansion by a surface tension or a volume tension. Since quarks are coupled to the gluon field, they are confined to the inside of the gluon bag without any further mechanism. Only color singlet gluon bags are allowed. Nonlinear boundary conditions are not imposed on the quark field in the model. A massless abelian gauge field confined by a surface tension is first considered; in a four-dimensional relativistic picture the surface of the gauge field bubble appears as a tube with a three-dimensional surface. As a first application, the model is used to study bound states of heavy charmed quarks (charmonium). Similarly to the Born-Oppenheimer approximation in molecular physics, heavy charmed quarks are treated as nonrelativistic in their motion, whereas the gluon bag and light quarks (u, d, s) are treated in an adiabatic approximation

  16. Properties of hybrid stars in an extended MIT bag model

    International Nuclear Information System (INIS)

    Bao Tmurbagan; Liu Guangzhou; Zhu Mingfeng

    2009-01-01

    The properties of hybrid stars are investigated in the framework of relativistic mean field theory (RMFT) and an MIT bag model with a density-dependent bag constant, used to describe the hadron phase (HP) and quark phase (QP), respectively. We find that the density-dependent bag constant B(ρ) decreases with baryon density ρ; this decrease makes strange quark matter more energetically favorable, which lowers the threshold densities of the hadron-quark phase transition below those of the original constant-bag case. In this case, the hyperon degrees of freedom need not be considered. As a result, the equations of state of a star in the mixed phase (MP) become softer, whereas those in the QP become stiffer, and the radii of the star obviously decrease. This indicates that the extended MIT bag model is more suitable for describing hybrid stars with small radii. (authors)

  17. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa-predicting methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...
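
    The record truncates before the working equations; for orientation, the generic Born expression that such a desolvation term is usually built from (background only, not the paper's exact parameterization) is

```latex
% Born desolvation penalty for a charge q in a cavity of radius r_B when it is
% transferred from water (\varepsilon_{\mathrm{w}}) to a lower-dielectric
% protein interior (\varepsilon_{\mathrm{p}}).
\Delta G_{\mathrm{desolv}}
  = \frac{q^{2}}{8\pi\varepsilon_{0}\,r_{B}}
    \left(\frac{1}{\varepsilon_{\mathrm{p}}} - \frac{1}{\varepsilon_{\mathrm{w}}}\right)
```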

  18. Analysis of the phase structure in extended Higgs models

    Energy Technology Data Exchange (ETDEWEB)

    Seniuch, M.

    2006-07-07

    We study the generation of the baryon asymmetry in the context of electroweak baryogenesis in two different extensions of the Standard Model. First, we consider an effective theory, in which the Standard Model is augmented by an additional dimension-six Higgs operator. The effects of new physics beyond a cut-off scale are parameterized by this operator. The second model is the two-Higgs-doublet model, whose particle spectrum is extended by two further neutral and two charged heavy Higgs bosons. In both cases we focus on the properties of the electroweak phase transition, especially on its strength and the profile of the nucleating bubbles. After reviewing some general aspects of the electroweak phase transition and baryogenesis we derive the respective thermal effective potentials to one-loop order. We systematically study the parameter spaces, using numerical methods, and compute the strength of the phase transition and the wall thickness as a function of the Higgs masses. We find a strong first order transition for a light Higgs state with a mass up to about 200 GeV. In case of the dimension-six model the cut-off scale has to stay between 500 and 850 GeV, in the two-Higgs-doublet model one needs at least one heavy Higgs mass of 300 GeV. The wall thickness varies for both theories in the range roughly from two to fifteen, in units of the inverse critical temperature. We also estimate the size of the electron and neutron electric dipole moments, since new sources of CP violation give rise to them. In wide ranges of the parameter space we are not in conflict with the experimental bounds. Finally the baryon asymmetry, which is predicted by these models, is related to the Higgs mass and the other appropriate input parameters. In both models the measured baryon asymmetry can be achieved for natural values of the model parameters. (orig.)

  19. Modelling grain growth in the framework of Rational Extended Thermodynamics

    International Nuclear Information System (INIS)

    Kertsch, Lukas; Helm, Dirk

    2016-01-01

    Grain growth is a significant phenomenon for the thermomechanical processing of metals. Since the mobility of the grain boundaries is thermally activated and energy stored in the grain boundaries is released during their motion, a mutual interaction with the process conditions occurs. To model such phenomena, a thermodynamic framework for the representation of thermomechanical coupling phenomena in metals including a microstructure description is required. For this purpose, Rational Extended Thermodynamics appears to be a useful tool. We apply an entropy principle to derive a thermodynamically consistent model for grain coarsening due to the growth and shrinkage of individual grains. Despite the rather different approaches applied, we obtain a grain growth model which is similar to existing ones and can be regarded as a thermodynamic extension of that by Hillert (1965) to more general systems. To demonstrate the applicability of the model, we compare our simulation results to grain growth experiments in pure copper by different authors, which we are able to reproduce very accurately. Finally, we study the implications of the energy release due to grain growth on the energy balance. The present unified approach combining a microstructure description and continuum mechanics is ready to be further used to develop more elaborate material models for complex thermo-chemo-mechanical coupling phenomena. (paper)
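
    For reference, the classical Hillert (1965) growth law that the derived model is said to extend reads, in its standard form (the record does not reproduce the authors' exact equations),

```latex
% Hillert (1965): curvature-driven growth rate of a grain of radius R_i,
% with mobility M, grain-boundary energy \gamma, geometric factor \alpha,
% and critical radius R_{cr} separating growing from shrinking grains.
\frac{\mathrm{d}R_i}{\mathrm{d}t}
  = \alpha\, M\, \gamma \left(\frac{1}{R_{\mathrm{cr}}} - \frac{1}{R_i}\right)
```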

  20. Extended Nambu models: Their relation to gauge theories

    Science.gov (United States)

    Escobar, C. A.; Urrutia, L. F.

    2017-05-01

    Yang-Mills theories supplemented by an additional coordinate constraint, which is solved and substituted in the original Lagrangian, provide examples of the so-called Nambu models, in the case where such constraints arise from spontaneous Lorentz symmetry breaking. Some explicit calculations have shown that, after additional conditions are imposed, Nambu models are capable of reproducing the original gauge theories, thus making Lorentz violation unobservable and allowing the interpretation of the corresponding massless gauge bosons as the Goldstone bosons arising from the spontaneous symmetry breaking. A natural question posed by this approach in the realm of gauge theories is to determine under which conditions the recovery of an arbitrary gauge theory from the corresponding Nambu model, defined by a general constraint over the coordinates, becomes possible. We refer to these theories as extended Nambu models (ENM) and emphasize the fact that the defining coordinate constraint is not treated as a standard gauge fixing term. At this level, the mechanism for generating the constraint is irrelevant and the case of spontaneous Lorentz symmetry breaking is taken only as a motivation, which naturally bring this problem under consideration. Using a nonperturbative Hamiltonian analysis we prove that the ENM yields the original gauge theory after we demand current conservation for all time, together with the imposition of the Gauss laws constraints as initial conditions upon the dynamics of the ENM. The Nambu models yielding electrodynamics, Yang-Mills theories and linearized gravity are particular examples of our general approach.

  1. Modelling grain growth in the framework of Rational Extended Thermodynamics

    Science.gov (United States)

    Kertsch, Lukas; Helm, Dirk

    2016-05-01

    Grain growth is a significant phenomenon for the thermomechanical processing of metals. Since the mobility of the grain boundaries is thermally activated and energy stored in the grain boundaries is released during their motion, a mutual interaction with the process conditions occurs. To model such phenomena, a thermodynamic framework for the representation of thermomechanical coupling phenomena in metals including a microstructure description is required. For this purpose, Rational Extended Thermodynamics appears to be a useful tool. We apply an entropy principle to derive a thermodynamically consistent model for grain coarsening due to the growth and shrinkage of individual grains. Despite the rather different approaches applied, we obtain a grain growth model which is similar to existing ones and can be regarded as a thermodynamic extension of that by Hillert (1965) to more general systems. To demonstrate the applicability of the model, we compare our simulation results to grain growth experiments in pure copper by different authors, which we are able to reproduce very accurately. Finally, we study the implications of the energy release due to grain growth on the energy balance. The present unified approach combining a microstructure description and continuum mechanics is ready to be further used to develop more elaborate material models for complex thermo-chemo-mechanical coupling phenomena.

  2. Specification and Aggregation Errors in Environmentally Extended Input-Output Models

    NARCIS (Netherlands)

    Bouwmeester, Maaike C.; Oosterhaven, Jan

    This article considers the specification and aggregation errors that arise from estimating embodied emissions and embodied water use with environmentally extended national input-output (IO) models, instead of with an environmentally extended international IO model. Model specification errors result
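
    For orientation, the standard environmentally extended input-output calculation that such comparisons build on computes emissions embodied in final demand from the Leontief inverse. The sketch below uses this standard form with a tiny hypothetical two-sector example; it does not reproduce the article's national versus international model variants.

```python
import numpy as np

def embodied_emissions(A, f, y):
    """Environmentally extended input-output calculation (standard Leontief form).

    A : (n, n) technical coefficient matrix
    f : (n,)   direct emission intensities per unit of output
    y : (n,)   final demand vector
    Returns total emissions embodied in y.
    """
    x = np.linalg.solve(np.eye(len(y)) - A, y)   # total output required to supply y
    return float(f @ x)

# Tiny hypothetical two-sector example (numbers are illustrative only).
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])
f = np.array([0.8, 0.2])      # kg CO2 per unit of output
y = np.array([100.0, 50.0])   # final demand
print("embodied emissions:", round(embodied_emissions(A, f, y), 1), "kg CO2")
```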

  3. Ising tricriticality in the extended Hubbard model with bond dimerization

    Science.gov (United States)

    Fehske, Holger; Ejima, Satoshi; Lange, Florian; Essler, Fabian H. L.

    We explore the quantum phase transition between Peierls and charge-density-wave insulating states in the one-dimensional, half-filled, extended Hubbard model with explicit bond dimerization. We show that the critical line of the continuous Ising transition terminates at a tricritical point, belonging to the universality class of the tricritical Ising model with central charge c=7/10. Above this point, the quantum phase transition becomes first order. Employing a numerical matrix-product-state based (infinite) density-matrix renormalization group method we determine the ground-state phase diagram, the spin and two-particle charge excitations gaps, and the entanglement properties of the model with high precision. Performing a bosonization analysis we can derive a field description of the transition region in terms of a triple sine-Gordon model. This allows us to derive field theory predictions for the power-law (exponential) decay of the density-density (spin-spin) and bond-order-wave correlation functions, which are found to be in excellent agreement with our numerical results. This work was supported by Deutsche Forschungsgemeinschaft (Germany), SFB 652, project B5, and by the EPSRC under Grant No. EP/N01930X/1 (FHLE).

  4. A multifluid model extended for strong temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.

  5. Extended timescale atomistic modeling of crack tip behavior in aluminum

    International Nuclear Information System (INIS)

    Baker, K L; Warner, D H

    2012-01-01

    Traditional molecular dynamics (MD) simulations are limited not only by their spatial domain, but also by the time domain that they can examine. Considering that many of the events associated with plasticity are thermally activated, and thus rare at atomic timescales, the limited time domain of traditional MD simulations can present a significant challenge when trying to realistically model the mechanical behavior of materials. A wide variety of approaches have been developed to address the timescale challenge, each having their own strengths and weaknesses dependent upon the specific application. Here, we have simultaneously applied three distinct approaches to model crack tip behavior in aluminum at timescales well beyond those accessible to traditional MD simulation. Specifically, we combine concurrent multiscale modeling (to reduce the degrees of freedom in the system), parallel replica dynamics (to parallelize the simulations in time) and hyperdynamics (to accelerate the exploration of phase space). Overall, the simulations (1) provide new insight into atomic-scale crack tip behavior at more typical timescales and (2) illuminate the potential of common extended timescale techniques to enable atomic-scale modeling of fracture processes at typical experimental timescales. (paper)

  6. Process health management using success tree and empirical model

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Suyoung [BNF Technology, Daejeon (Korea, Republic of); Sung, Wounkyoung [Korea South-East Power Co. Ltd., Seoul (Korea, Republic of)

    2012-03-15

    Interests in predictive or condition-based maintenance are heightening in power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize the maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels such as a component, a system, or a plant. In order to achieve this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter, which is the result of kernel regression using the accumulated operation data and the current plant boundary conditions (referred to as an empirical model), is statistically meaningful. This deviation is converted into an index considering the margin to set points which are associated with safety. This index is referred to as a PHI, and the PHIs can be monitored for an individual parameter as well as at a component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree the PHI represents the health status of the whole plant, while the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.

  7. Process health management using success tree and empirical model

    International Nuclear Information System (INIS)

    Heo, Gyunyoung; Kim, Suyoung; Sung, Wounkyoung

    2012-01-01

    Interests in predictive or condition-based maintenance are heightening in power industries. The ultimate goal of condition-based maintenance is to prioritize and optimize the maintenance resources by taking a reasonable decision-making process depending on the plant's conditions. Such a decision-making process should be able not only to observe the deviation from a normal state but also to determine the severity or impact of the deviation at different levels such as a component, a system, or a plant. In order to achieve this purpose, a Plant Health Index (PHI) monitoring system was developed, which is operational in more than 10 units of large steam turbine cycles in Korea as well as in desalination plants in Saudi Arabia as a prototype demonstration. The PHI monitoring system has the capability to detect whether the deviation between a measured parameter and an estimated parameter, which is the result of kernel regression using the accumulated operation data and the current plant boundary conditions (referred to as an empirical model), is statistically meaningful. This deviation is converted into an index considering the margin to set points which are associated with safety. This index is referred to as a PHI, and the PHIs can be monitored for an individual parameter as well as at a component, system, or plant level. In order to organize the PHIs at the component, system, or plant level, a success tree was developed. At the top of the success tree the PHI represents the health status of the whole plant, while the PHI nodes in the middle of the success tree represent the health status of a component or a system. The concept and definition of the PHI, the key methodologies, the architecture of the developed system, and a practical case of using the PHI monitoring system are described in this article.
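
    A minimal sketch of the two ingredients described in these two records, under assumed forms (not the deployed implementation): a Nadaraya-Watson kernel-regression estimate of a signal from historical operating data, and a conversion of the measured-minus-estimated deviation into a 0-100 health index using the margin to a safety set point.

```python
import numpy as np

def kernel_estimate(X_hist, y_hist, x_now, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: expected value of a parameter given
    historical operating states X_hist (rows) and recorded values y_hist."""
    d2 = np.sum((X_hist - x_now) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    return float(w @ y_hist / w.sum())

def health_index(measured, estimated, margin_to_setpoint):
    """Map the normalized deviation onto a 0-100 index (100 = healthy).
    The linear mapping is an assumption; the deployed PHI definition may differ."""
    dev = abs(measured - estimated) / max(margin_to_setpoint, 1e-12)
    return 100.0 * max(0.0, 1.0 - min(dev, 1.0))

# Hypothetical single-parameter example with synthetic plant boundary conditions.
rng = np.random.default_rng(4)
X_hist = rng.normal(size=(500, 3))
y_hist = 2.0 * X_hist[:, 0] + 0.1 * rng.normal(size=500)
x_now = np.array([0.5, -0.2, 0.1])
est = kernel_estimate(X_hist, y_hist, x_now, bandwidth=0.8)
print("PHI =", round(health_index(measured=1.35, estimated=est, margin_to_setpoint=1.0), 1))
```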

  8. Modeling gallic acid production rate by empirical and statistical analysis

    Directory of Open Access Journals (Sweden)

    Bratati Kar

    2000-01-01

    Full Text Available For predicting the rate of the enzymatic reaction, an empirical correlation based on the experimental results obtained under various operating conditions has been developed. The models represent both the activation and the deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). The tannase activity was found to be maximum at an incubation time of 5 min, reaction temperature of 40ºC, pH 4.0, initial enzyme concentration of 0.12 v/v, initial substrate concentration of 0.42 mg/ml, and ionic strength of 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 μmoles/ml/min.

  9. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    Full Text Available An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed from the fitting of monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of annual mean wind, amplitudes and phases of annual and semiannual harmonics of wind variations are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, a good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and Atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)
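
    A minimal sketch of the kind of decomposition the record describes: fitting an annual mean plus annual and semiannual harmonics to monthly mean winds by least squares. The monthly means below are synthetic, not the meteor-radar/MF-radar data.

```python
import numpy as np

def fit_harmonics(monthly_means):
    """Fit u(t) = a0 + a1*cos(wt) + b1*sin(wt) + a2*cos(2wt) + b2*sin(2wt)
    to 12 monthly means; return the annual mean and the amplitude/phase of the
    annual and semiannual harmonics."""
    t = (np.arange(12) + 0.5) / 12.0          # month centres in years
    w = 2 * np.pi
    G = np.column_stack([np.ones(12),
                         np.cos(w * t), np.sin(w * t),
                         np.cos(2 * w * t), np.sin(2 * w * t)])
    a0, a1, b1, a2, b2 = np.linalg.lstsq(G, monthly_means, rcond=None)[0]
    amp = lambda a, b: np.hypot(a, b)
    phase = lambda a, b: np.degrees(np.arctan2(b, a))
    return a0, (amp(a1, b1), phase(a1, b1)), (amp(a2, b2), phase(a2, b2))

# Hypothetical zonal-wind monthly means at one height/latitude (m/s).
u = np.array([12, 15, 10, 4, -2, -6, -8, -5, 0, 6, 10, 13], dtype=float)
mean, annual, semi = fit_harmonics(u)
print(f"annual mean {mean:.1f} m/s, annual harmonic {annual[0]:.1f} m/s @ {annual[1]:.0f} deg, "
      f"semiannual {semi[0]:.1f} m/s @ {semi[1]:.0f} deg")
```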

  10. "Let's Move" campaign: applying the extended parallel process model.

    Science.gov (United States)

    Batchelder, Alicia; Matusitz, Jonathan

    2014-01-01

    This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels a concern about the issue or situation, and (b) when he or she believes to have the capability of dealing with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as it is postulated by EPPM). For example, part of the campaign's strategies is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."

  11. Fidelity study of superconductivity in extended Hubbard models

    Science.gov (United States)

    Plonka, N.; Jia, C. J.; Wang, Y.; Moritz, B.; Devereaux, T. P.

    2015-07-01

    The Hubbard model with local on-site repulsion is generally thought to possess a superconducting ground state for appropriate parameters, but the effects of more realistic long-range Coulomb interactions have not been studied extensively. We study the influence of these interactions on superconductivity by including nearest- and next-nearest-neighbor extended Hubbard interactions in addition to the usual on-site terms. Utilizing numerical exact diagonalization, we analyze the signatures of superconductivity in the ground states through the fidelity metric of quantum information theory. We find that nearest and next-nearest neighbor interactions have thresholds above which they destabilize superconductivity regardless of whether they are attractive or repulsive, seemingly due to competing charge fluctuations.
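
    For reference, the ground-state fidelity and the fidelity susceptibility extracted from its leading decay, as commonly defined in such studies (standard definitions, not specific to this paper's parameter choices), are

```latex
% Overlap of ground states at neighbouring Hamiltonian parameters \lambda and
% \lambda+\delta\lambda; \chi_F governs the quadratic decay and peaks near
% quantum phase transitions.
F(\lambda,\delta\lambda)
  = \bigl|\langle \psi_0(\lambda)\,\vert\,\psi_0(\lambda+\delta\lambda)\rangle\bigr|,
\qquad
F \simeq 1 - \tfrac{1}{2}\,\chi_F(\lambda)\,\delta\lambda^{2}
```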

  12. Extended Group Contribution Model for Polyfunctional Phase Equilibria

    DEFF Research Database (Denmark)

    Abildskov, Jens

    Material and energy balances and equilibrium data form the basis of most design calculations. While material and energy balances may be stated without much difficulty, the design engineer is left with a choice between a wide variety of models for describing phase equilibria in the design of physical separation processes. In a thermodynamic sense, design requires detailed knowledge of activity coefficients in the phases at equilibrium. The prediction of these quantities from a minimum of experimental data is the broad scope of this thesis. Adequate equations exist for predicting vapor-liquid equilibria from data on binary mixtures, composed of structurally simple molecules with a single functional group. More complex is the situation with mixtures composed of structurally more complicated molecules or molecules with more than one functional group. The UNIFAC method is extended to handle...

  13. Baryon and meson phenomenology in the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Giacosa, Francesco; Habersetzer, Anja; Teilab, Khaled; Eshraim, Walaa; Divotgey, Florian; Olbrich, Lisa; Gallas, Susanna; Wolkanowski, Thomas; Janowski, Stanislaus; Heinz, Achim; Deinet, Werner; Rischke, Dirk H. [Institute for Theoretical Physics, J. W. Goethe University, Max-von-Laue-Str. 1, 60438 Frankfurt am Main (Germany); Kovacs, Peter; Wolf, Gyuri [Institute for Particle and Nuclear Physics, Wigner Research Center for Physics, Hungarian Academy of Sciences, H-1525 Budapest (Hungary); Parganlija, Denis [Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2014-07-01

    The vacuum phenomenology obtained within the so-called extended Linear Sigma Model (eLSM) is presented. The eLSM Lagrangian is constructed by including from the very beginning vector and axial-vector d.o.f., and by requiring dilatation invariance and chiral symmetry. After a general introduction of the approach, particular attention is devoted to the latest results. In the mesonic sector the strong decays of the scalar and the pseudoscalar glueballs, the weak decays of the tau lepton into vector and axial-vector mesons, and the description of masses and decays of charmed mesons are shown. In the baryonic sector the omega production in proton-proton scattering and the inclusion of baryons with strangeness are described.

  14. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  15. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
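
    As a rough illustration of the prior-knowledge-input idea described above, the sketch below feeds the output of a fixed single-bias empirical model into a small neural network together with the bias point, so that the network learns the bias dependence. All data, parameter names and the network size are assumptions made for the example (NumPy and scikit-learn assumed available); this is not the authors' implementation.

        # Hypothetical sketch of a prior-knowledge-input (PKI) network: the output of a
        # fixed empirical model is fed to the ANN together with the bias point, and the
        # network learns the bias-dependent correction. Not the authors' implementation.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def empirical_model(freq_ghz):
            # Placeholder single-bias empirical model (e.g., |S21| vs. frequency).
            return 10.0 / (1.0 + (freq_ghz / 20.0) ** 2)

        # Synthetic "measurements": the true response also depends on the bias (Vds, Ids).
        freq = rng.uniform(1, 40, 2000)          # GHz
        vds = rng.uniform(1, 4, 2000)            # V
        ids_ = rng.uniform(5, 50, 2000)          # mA
        target = empirical_model(freq) * (0.8 + 0.05 * vds + 0.002 * ids_)

        # PKI input: bias point, frequency, and the prior-knowledge value.
        X = np.column_stack([freq, vds, ids_, empirical_model(freq)])

        pki_ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
        pki_ann.fit(X, target)
        print("training R^2:", pki_ann.score(X, target))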

  16. Phenomenological study of extended seesaw model for light sterile neutrino

    International Nuclear Information System (INIS)

    Nath, Newton; Ghosh, Monojit; Goswami, Srubabati; Gupta, Shivani

    2017-01-01

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model which can give rise to ∼ eV scale sterile neutrinos. In this model, three right handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right handed neutrinos and the singlet. The model predicts one of the light neutrino masses to vanish. We systematically investigate the zero textures in M_D and observe that maximum five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms for M_R (one diagonal and three off diagonal) and two different forms of (M_S) containing one zero. Remarkably we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0) having inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on MES mechanism with a discrete Abelian flavor symmetry group Z_8 × Z_2.

  17. A Semi-empirical Model of the Stratosphere in the Climate System

    Science.gov (United States)

    Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.

    2014-12-01

    Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmospheric-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.
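
    A minimal sketch of the semi-empirical idea, assuming a simple linear response of the column-ozone anomaly to EESC and CO2. The coefficients, forcing paths and data below are synthetic assumptions for illustration and are not the MAGICC parameterization.

        # Illustrative semi-empirical ozone regression: column-ozone anomaly expressed as a
        # linear response to EESC and CO2. Coefficients and data are synthetic; this is not
        # the MAGICC parameterization, only a sketch of the general approach.
        import numpy as np

        years = np.arange(1960, 2101)
        eesc = np.interp(years, [1960, 2000, 2100], [1.0, 3.5, 1.2])       # relative EESC (assumed)
        co2 = np.interp(years, [1960, 2000, 2100], [317.0, 370.0, 550.0])  # ppm (assumed path)

        # "Observed" anomalies used to fit the response (synthetic for the example).
        true_a, true_b = -12.0, 0.05
        ozone_anom = true_a * (eesc - eesc[0]) + true_b * (co2 - co2[0]) \
                     + np.random.default_rng(1).normal(0, 2, years.size)

        # Least-squares fit of the two response coefficients.
        A = np.column_stack([eesc - eesc[0], co2 - co2[0]])
        coef, *_ = np.linalg.lstsq(A, ozone_anom, rcond=None)
        print("fitted EESC and CO2 sensitivities:", coef)

        # Date of return to the 1980 level under the fitted response.
        model = A @ coef
        i1980 = np.searchsorted(years, 1980)
        recovered = years[(years > 2000) & (model >= model[i1980])]
        print("return to 1980 level:", recovered[0] if recovered.size else "not before 2100")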

  18. Very light Higgs bosons in extended models at the LHC

    International Nuclear Information System (INIS)

    Belyaev, Alexander; Guedes, Renato; Santos, Rui; Moretti, Stefano

    2010-01-01

    The Large Electron-Positron (LEP) collider experiments have constrained the mass of the standard model (SM) Higgs boson to be above 114.4 GeV. This bound applies to all extensions of the SM where the coupling of a Higgs boson to the Z boson and also the Higgs decay profile do not differ much from the SM one. However, in scenarios with extended Higgs sectors, this coupling can be made very small by a suitable choice of the parameters of the model. In such cases, the lightest CP-even Higgs boson mass can in turn be made very small. Such a very light Higgs state, with a mass of the order of the Z boson one or even smaller, could have escaped detection at LEP. In this work we perform a detailed parton level study on the feasibility of the detection of such a very light Higgs particle at the Large Hadron Collider (LHC) in the production process pp → hj → τ⁺τ⁻ j, where j is a resolved jet. We conclude that there are several models where such a Higgs state could be detected at the LHC with early data.

  19. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c² ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
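
    As a rough illustration of the Taylor-expansion-plus-regression approach described above, the sketch below fits a second-order polynomial efficiency map to synthetic measurements. The choice of regressors (pressure ratio and corrected speed), the polynomial order and all data are assumptions for the example, not the specific form derived in the paper.

        # Minimal sketch of building an empirical turbine-efficiency map by polynomial
        # regression (a Taylor-like expansion around a reference operating point).
        import numpy as np

        rng = np.random.default_rng(2)
        pr = rng.uniform(1.5, 4.0, 300)          # expansion pressure ratio
        n_c = rng.uniform(0.4, 1.0, 300)         # corrected speed (normalized)

        # Synthetic "measured" efficiency with noise.
        eta_meas = 0.85 - 0.08 * (pr - 2.5) ** 2 - 0.20 * (n_c - 0.8) ** 2 + rng.normal(0, 0.005, 300)

        # Second-order polynomial basis and least-squares fit.
        X = np.column_stack([np.ones_like(pr), pr, n_c, pr**2, n_c**2, pr * n_c])
        beta, *_ = np.linalg.lstsq(X, eta_meas, rcond=None)

        eta_fit = X @ beta
        ss_res = np.sum((eta_meas - eta_fit) ** 2)
        ss_tot = np.sum((eta_meas - eta_meas.mean()) ** 2)
        print("R^2 =", 1 - ss_res / ss_tot)
        print("mean abs. percentage deviation =",
              100 * np.mean(np.abs((eta_fit - eta_meas) / eta_meas)), "%")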

  20. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  1. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

    The present investigation evaluates the influence of three different step chute geometry when skimming flow was allowed over them with the aim of determining the aerated flow length which is a significant factor when developing empirical equations for estimating aeration efficiency of flow. Overall, forty experiments were ...

  2. On the Complete Instability of Empirically Implemented Dynamic Leontief Models

    NARCIS (Netherlands)

    Steenge, A.E.

    1990-01-01

    On theoretical grounds, real world implementations of forward-looking dynamic Leontief systems were expected to be stable. Empirical work, however, showed the opposite to be true: all investigated systems proved to be unstable. In fact, an extreme form of instability ('complete instability')
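
    As a sketch of how such (in)stability can be checked numerically, the snippet below examines the eigenvalues of the matrix governing the homogeneous forward-looking dynamic Leontief system x_t = A x_t + B (x_{t+1} − x_t), assuming B is invertible (which real capital-coefficient matrices often are not). The matrices are illustrative, not an empirical input-output table.

        # Hedged stability check for a forward-looking dynamic Leontief system: the
        # homogeneous dynamics are governed by M = I + B^{-1}(I - A); eigenvalues with
        # modulus above one indicate instability.
        import numpy as np

        A = np.array([[0.2, 0.3, 0.1],
                      [0.1, 0.1, 0.3],
                      [0.2, 0.2, 0.2]])      # flow (technical) coefficients
        B = np.array([[0.10, 0.02, 0.01],
                      [0.01, 0.12, 0.02],
                      [0.02, 0.01, 0.08]])   # capital coefficients

        M = np.eye(3) + np.linalg.solve(B, np.eye(3) - A)
        eigvals = np.linalg.eigvals(M)
        print("eigenvalues of M:", np.round(eigvals, 3))
        print("stable (all |eig| < 1)?", bool(np.all(np.abs(eigvals) < 1)))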

  3. Extended Jiles-Atherton model for modelling the magnetic characteristics of isotropic materials

    International Nuclear Information System (INIS)

    Szewczyk, Roman; Bienkowski, Adam; Salach, Jacek

    2008-01-01

    This paper presents the idea of the extension of the Jiles-Atherton model applied for modelling of the magnetic characteristics of Mn-Zn, as well as Ni-Zn ferrites. The presented extension of the model takes into account changes of the parameter k during the magnetisation process, which is physically justified. The extended Jiles-Atherton model gives a novel possibility of modelling the hysteresis loops of isotropic materials. For one set of the extended model parameters, a good agreement between experimental data and modelled hysteresis loops is observed, for different values of maximal magnetising field. As a result, the extended Jiles-Atherton model presented in the paper may be applied for both technical applications and fundamental research, focused on understanding the physical aspects of the magnetisation process of anisotropic soft magnetic materials
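
    For orientation, the sketch below integrates the classic Jiles-Atherton magnetization equations with a magnetization-dependent pinning parameter k(M) standing in for the extension described above. The specific k-dependence used by the authors is not reproduced; all parameter values and the k(M) form are illustrative assumptions.

        # Sketch of a Jiles-Atherton hysteresis loop with an assumed k(M) dependence.
        import numpy as np

        Ms, a, alpha, c, k0 = 4.0e5, 2.0e2, 1.0e-4, 0.2, 3.0e2

        def man(he):
            # Anhysteretic (Langevin) magnetization, small-argument limit handled.
            x = he / a
            if abs(x) < 1e-6:
                return Ms * x / 3.0
            return Ms * (1.0 / np.tanh(x) - 1.0 / x)

        def k_of_m(m):
            # Assumed dependence of the pinning parameter on magnetization (illustrative).
            return k0 * (1.0 - 0.5 * (m / Ms) ** 2)

        def sweep(h_values, m0=0.0, mirr0=0.0):
            m, mirr, out = m0, mirr0, []
            for h_prev, h in zip(h_values[:-1], h_values[1:]):
                dh = h - h_prev
                delta = 1.0 if dh > 0 else -1.0
                m_an = man(h + alpha * m)
                mirr += (m_an - mirr) / (delta * k_of_m(m) - alpha * (m_an - mirr)) * dh
                m = c * m_an + (1.0 - c) * mirr
                out.append(m)
            return np.array(out)

        h = np.concatenate([np.linspace(0, 1e3, 500), np.linspace(1e3, -1e3, 1000),
                            np.linspace(-1e3, 1e3, 1000)])
        loop = sweep(h)
        print("peak magnetization (A/m):", float(np.max(loop)))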

  4. Baryon-Baryon Interactions ---Nijmegen Extended-Soft-Core Models---

    Science.gov (United States)

    Rijken, T. A.; Nagels, M. M.; Yamamoto, Y.

    We review the Nijmegen extended-soft-core (ESC) models for the baryon-baryon (BB) interactions of the SU(3) flavor-octet of baryons (N, Lambda, Sigma, and Xi). The interactions are basically studied from the meson-exchange point of view, in the spirit of the Yukawa-approach to the nuclear force problem [H. Yukawa, ``On the interaction of Elementary Particles I'', Proceedings of the Physico-Mathematical Society of Japan 17 (1935), 48], using generalized soft-core Yukawa-functions. These interactions are supplemented with (i) multiple-gluon-exchange, and (ii) structural effects due to the quark-core of the baryons. We present in some detail the most recent extended-soft-core model, henceforth referred to as ESC08, which is the most complete, sophisticated, and successful interaction-model. Furthermore, we discuss briefly its predecessor the ESC04-model [Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044007; Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044008; Th. A. Rijken and Y. Yamamoto, nucl-th/0608074]. For the soft-core one-boson-exchange (OBE) models we refer to the literature [Th. A. Rijken, in Proceedings of the International Conference on Few-Body Problems in Nuclear and Particle Physics, Quebec, 1974, ed. R. J. Slobodrian, B. Cuec and R. Ramavataram (Presses Université Laval, Quebec, 1975), p. 136; Th. A. Rijken, Ph. D. thesis, University of Nijmegen, 1975; M. M. Nagels, Th. A. Rijken and J. J. de Swart, Phys. Rev. D 17 (1978), 768; P. M. M. Maessen, Th. A. Rijken and J. J. de Swart, Phys. Rev. C 40 (1989), 2226; Th. A. Rijken, V. G. J. Stoks and Y. Yamamoto, Phys. Rev. C 59 (1999), 21; V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C 59 (1999), 3009]. All ingredients of these latter models are also part of ESC08, and so a description of ESC08 comprises all models so far in principle. The extended-soft-core (ESC) interactions consist of local- and non-local-potentials due to (i) one-boson-exchanges (OBE), which are the members of nonets of

  5. Phenomenological study of extended seesaw model for light sterile neutrino

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Newton [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Indian Institute of Technology,Gandhinagar, Ahmedabad-382424 (India); Ghosh, Monojit [Department of Physics, Tokyo Metropolitan University,Hachioji, Tokyo 192-0397 (Japan); Goswami, Srubabati [Physical Research Laboratory,Navarangpura, Ahmedabad 380 009 (India); Gupta, Shivani [Center of Excellence for Particle Physics (CoEPP), University of Adelaide,Adelaide SA 5005 (Australia)

    2017-03-14

    We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model which can give rise to ∼ eV scale sterile neutrinos. In this model, three right handed neutrinos and one extra singlet S are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, m_ν, depends on the Dirac neutrino mass matrix (M_D), Majorana neutrino mass matrix (M_R) and the mass matrix (M_S) coupling the right handed neutrinos and the singlet. The model predicts one of the light neutrino masses to vanish. We systematically investigate the zero textures in M_D and observe that maximum five zeros in M_D can lead to viable zero textures in m_ν. For this study we consider four different forms for M_R (one diagonal and three off diagonal) and two different forms of (M_S) containing one zero. Remarkably we obtain only two allowed forms of m_ν (m_eτ = 0 and m_ττ = 0) having inverted hierarchical mass spectrum. We re-analyze the phenomenological implications of these two allowed textures of m_ν in the light of recent neutrino oscillation data. In the context of the MES model, we also express the low energy mass matrix, the mass of the sterile neutrino and the active-sterile mixing in terms of the parameters of the allowed Yukawa matrices. The MES model leads to some extra correlations which disallow some of the Yukawa textures obtained earlier, even though they give allowed one-zero forms of m_ν. We show that the allowed textures in our study can be realized in a simple way in a model based on MES mechanism with a discrete Abelian flavor symmetry group Z_8 × Z_2.

  6. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    OpenAIRE

    Zee, van der, F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economics. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy developmen...

  7. Disorder structure of free-flow and global jams in the extended BML model

    International Nuclear Information System (INIS)

    Zhao Xiaomei; Xie Dongfan; Jia Bin; Jiang Rui; Gao Ziyou

    2011-01-01

    The original BML model is extended by introducing extended sites, which can hold several vehicles at each time-step. Unexpectedly, the flow in the extended model transits sharply from free flow to global jams, whereas the transition is not first-order in the original BML model. Congestion also appears more easily in the extended model. This can be ascribed to the mixture of vehicles from different directions in one site, which reduces the capacity of the site. Furthermore, the typical configuration of free flow and global jams in the extended model is disordered, in contrast to the regular structure in the original model.

  8. Streamflow data assimilation in SWAT model using Extended Kalman Filter

    Science.gov (United States)

    Sun, Leqiang; Nistor, Ioan; Seidou, Ousmane

    2015-12-01

    The Extended Kalman Filter (EKF) is coupled with the Soil and Water Assessment Tools (SWAT) model in the streamflow assimilation of the upstream Senegal River in West Africa. Given the large number of distributed variables in SWAT, only the average watershed scale variables are included in the state vector and the Hydrological Response Unit (HRU) scale variables are updated with the a posteriori/a priori ratio of their watershed scale counterparts. The Jacobian matrix is calculated numerically by perturbing the state variables. Both the soil moisture and CN2 are significantly updated in the wet season, yet they have opposite update patterns. A case study for a large flood forecast shows that for up to seven days, the streamflow forecast is moderately improved using the EKF-subsequent open loop scheme but significantly improved with a newly designed quasi-error update scheme. The former has better performances in the flood rising period while the latter has better performances in the recession period. For both schemes, the streamflow forecast is improved more significantly when the lead time is shorter.
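
    A generic sketch of the extended Kalman filter update used above, applied to a one-state toy storage-discharge model with the Jacobian obtained by numerical perturbation as the abstract describes. The SWAT coupling itself is not reproduced; the model equations, noise levels and data below are illustrative assumptions.

        # Generic EKF for assimilating a streamflow observation into a one-state storage model.
        import numpy as np

        def model_step(storage, rain):
            # Toy nonlinear storage model: S_{t+1} = f(S_t, P_t).
            return storage + rain - 0.05 * storage ** 1.2

        def observe(storage):
            # Toy rating relation: discharge as a nonlinear function of storage.
            return 0.05 * storage ** 1.2

        def numerical_jacobian(f, x, eps=1e-4):
            return (f(x + eps) - f(x - eps)) / (2 * eps)

        S, P_var = 100.0, 4.0          # state estimate and its variance
        Q_proc, R_obs = 2.0, 1.0       # process and observation error variances
        rng = np.random.default_rng(3)

        for t in range(50):
            rain = max(0.0, rng.normal(5.0, 3.0))
            # Forecast step.
            S = model_step(S, rain)
            F = numerical_jacobian(lambda s: model_step(s, rain), S)
            P_var = F * P_var * F + Q_proc
            # Update step with a synthetic discharge observation.
            q_obs = observe(S) + rng.normal(0.0, R_obs ** 0.5)
            H = numerical_jacobian(observe, S)
            K = P_var * H / (H * P_var * H + R_obs)
            S = S + K * (q_obs - observe(S))
            P_var = (1.0 - K * H) * P_var

        print("final state estimate and variance:", round(S, 2), round(P_var, 3))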

  9. The one-dimensional extended Bose–Hubbard model

    Indian Academy of Sciences (India)

    Unknown

    method to obtain the zero-temperature phase diagram of the one-dimensional, extended ... Progress in this field has been driven by an interplay between ... superconductor-insulator transition in thin films of superconducting materials like bis-.

  10. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
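
    As a rough illustration of fitting a probability-of-short-circuit curve as a function of voltage, the sketch below uses logistic regression on synthetic pass/fail data. The data and the logistic form are assumptions for the example; the actual empirical model and NASA test data are not reproduced here.

        # Hedged sketch: probability of a tin-whisker short vs. applied voltage.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        voltage = rng.uniform(0, 50, 500).reshape(-1, 1)            # applied voltage (V)
        true_p = 1.0 / (1.0 + np.exp(-(voltage.ravel() - 20.0) / 4.0))
        shorted = (rng.random(500) < true_p).astype(int)            # 1 = short occurred

        clf = LogisticRegression()
        clf.fit(voltage, shorted)

        grid = np.linspace(0, 50, 6).reshape(-1, 1)
        for v, p in zip(grid.ravel(), clf.predict_proba(grid)[:, 1]):
            print(f"V = {v:5.1f} V  ->  P(short) = {p:.3f}")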

  11. Semi-Empirical Models for Buoyancy-Driven Ventilation

    DEFF Research Database (Denmark)

    Terpager Andersen, Karl

    2015-01-01

    A literature study is presented on the theories and models dealing with buoyancy-driven ventilation in rooms. The models are categorised into four types according to how the physical process is conceived: column model, fan model, neutral plane model and pressure model. These models are analysed and compared with a reference model. Discrepancies and differences are shown, and the deviations are discussed. It is concluded that a reliable buoyancy model based solely on the fundamental flow equations is desirable.

  12. Cognitive style and depressive symptoms in elderly people - extending the empirical evidence for the cognitive vulnerability-stress hypothesis.

    Science.gov (United States)

    Meyer, Thomas D; Gudgeon, Emma; Thomas, Alan J; Collerton, Daniel

    2010-10-01

    Depression is common in older people and its identification and treatment has been highlighted as one of the major challenges in an ageing world. Poor physical and cognitive health, bereavement, and prior depression are important risk factors for depression in elderly people. Attributional or cognitive style has been identified as a risk factor for depression in children, adolescents and younger adults but its relevance for depression and mood in elderly people has not been investigated in the context of other risk factors. Sixty-four older adults from an 'extra care' living scheme (aged 59-97) were recruited for a 6-week prospective study to examine the relationships between cognitive style and depressive symptoms. Regression analyses revealed that, when other risk factors were controlled for, cognitive style and its interaction with stress predicted changes in depressive symptoms, therefore partially replicating prior research. Cognitive-stress-vulnerability models also apply to elderly populations, but may be rather predictive of changes in depression when facing lower levels of stress. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. EMPIRE-II 2.18, Comprehensive Nuclear Model Code, Nucleons, Ions Induced Cross-Sections

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Panini, Gian Carlo

    2003-01-01

    1 - Description of program or function: EMPIRE-II is a flexible code for calculation of nuclear reactions in the frame of combined optical, Multi-step Direct (TUL), Multi-step Compound (NVWY) and statistical (Hauser-Feshbach) models. Incident particle can be a nucleon or any nucleus(Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron induced reactions and extends up to several hundreds of MeV for the Heavy Ion induced reactions. IAEA1169/06: This version corrects an error in the Absoft compile procedure. 2 - Method of solution: For projectiles with A<5 EMPIRE calculates fusion cross section using spherical optical model transmission coefficients. In the case of Heavy Ion induced reactions the fusion cross section can be determined using various approaches including simplified coupled channels method (code CCFUS). Pre-equilibrium emission is treated in terms of quantum-mechanical theories (TUL-MSD and NVWY-MSC). MSC contribution to the gamma emission is taken into account. These calculations are followed by statistical decay with arbitrary number of subsequent particle emissions. Gamma-ray competition is considered in detail for every decaying compound nucleus. Different options for level densities are available including dynamical approach with collective effects taken into account. EMPIRE contains following third party codes converted into subroutines: - SCAT2 by O. Bersillon, - ORION and TRISTAN by H. Lenske and H. Wolter, - CCFUS by C.H. Dasso and S. Landowne, - BARMOM by A. Sierk. 3 - Restrictions on the complexity of the problem: The code can be easily adjusted to the problem by changing dimensions in the dimensions.h file. The actual limits are set by the available memory. In the current formulation up to 4 ejectiles plus gamma are allowed. This limit can be relaxed

  14. Thermodynamic modelling of acid gas removal from natural gas using the Extended UNIQUAC model

    DEFF Research Database (Denmark)

    Sadegh, Negar; Stenby, Erling Halfdan; Thomsen, Kaj

    2017-01-01

    Thermodynamics of the natural gas sweetening process needs to be known for proper design of natural gas treating plants. Absorption with aqueous N-Methyldiethanolamine is currently the most commonly used process for removal of acid gas (CO2 and H2S) impurities from natural gas. Model parameters for the Extended UNIQUAC model have already been determined by the same authors to calculate single acid gas solubility in aqueous MDEA. In this study, the model is further extended to estimate solubility of CO2 and H2S and their mixture in aqueous MDEA at high pressures with methane as a makeup gas.

  15. Consciousness extended

    DEFF Research Database (Denmark)

    Carrara-Augustenborg, Claudia

    2012-01-01

    There is no consensus yet regarding a conceptualization of consciousness able to accommodate all the features of such a complex phenomenon. Different theoretical and empirical models lend strength to both the occurrence of a non-accessible informational broadcast, and to the mobilization of specific brain areas responsible for the emergence of the individual's explicit and variable access to given segments of such broadcast. Rather than advocating one model over others, this chapter proposes to broaden the conceptualization of consciousness by letting it embrace both mechanisms. Within such an extended framework, I propose conceptual and functional distinctions between consciousness (global broadcast of information), awareness (individual's ability to access the content of such broadcast) and unconsciousness (focally isolated neural activations). My hypothesis is that a demarcation in terms...

  16. Modeling of carbon dioxide absorption by aqueous ammonia solutions using the Extended UNIQUAC model

    DEFF Research Database (Denmark)

    Darde, Victor Camille Alfred; van Well, Willy J. M.; Stenby, Erling Halfdan

    2010-01-01

    An upgraded version of the Extended UNIQUAC thermodynamic model for the carbon dioxide-ammonia-water system has been developed, based on the original version proposed by Thomsen and Rasmussen. The original model was valid in the temperature range 0-110°C, the pressure range 0-10 MPa...... properties of carbon dioxide and ammonia to supercritical conditions....

  17. Neutron star moment-of-inertia in the extended Zimanyi-Moszkowski model

    CERN Document Server

    Miyazaki, K

    2006-01-01

    We revisit the extended Zimanyi-Moszkowski (EZM) model of dense neutron star (NS) core matter. In contrast to our previous work we treat the vector potentials of baryons on an equal footing with the effective masses, and solve a set of 6 equations to determine the three independent effective masses and vector potentials and a set of 2 equations to determine the conditions of beta-equilibrated NS matter, simultaneously. According to an expectation that the precisely measurable moment-of-inertia of J0737-3039A will impose a significant constraint on the nuclear equation-of-state (EOS), it is calculated using the two sets of hyperon coupling constants, EZM-SU6 and EZM-P, derived from the SU(6) symmetry and the empirical data of hypernuclei. We find I_{45}=1.23 and 1.64 that are close to the values in the EOSs of "APR" and "MS1" calculated by Morrison et al., while their mass-radius relations are rather different from the EZM models. The uniqueness of the EZM model is also apparent in the correlation map between...

  18. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  20. Temporal structure of neuronal population oscillations with empirical mode decomposition

    International Nuclear Information System (INIS)

    Li Xiaoli

    2006-01-01

    Frequency analysis of neuronal oscillation is very important for understanding the neural information processing and mechanisms of disorder in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) are obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze the neuronal oscillations in the hippocampus of epileptic rats in vivo. The results show that the neuronal oscillations have different time-frequency structures during the pre-ictal, seizure-onset and ictal periods of the epileptic EEG at different frequency bands. This new method is helpful for examining the temporal structure of neural oscillations.
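
    A minimal sketch of the EMD-plus-Hilbert workflow on a synthetic oscillation, assuming the PyEMD package (pip install EMD-signal) and SciPy are available; this is not the author's implementation, and the signal is synthetic.

        # EMD of a synthetic "neuronal" signal followed by Hilbert-based instantaneous frequency.
        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD

        fs = 500.0
        t = np.arange(0, 4.0, 1.0 / fs)
        # Synthetic oscillation: theta rhythm + late gamma burst + noise.
        signal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) * (t > 2) \
                 + 0.2 * np.random.default_rng(5).normal(size=t.size)

        imfs = EMD()(signal)                      # intrinsic mode functions

        for i, imf in enumerate(imfs):
            analytic = hilbert(imf)
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
            print(f"IMF {i}: median instantaneous frequency = {np.median(inst_freq):.1f} Hz")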

  1. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  2. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as a computational term in an embodied conversational agent system by employing empirical and theoretical approaches. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures in different social relationships. First, we introduce the theories of social and cultural characteristics. Then, we carried out a corpus analysis of human interaction in two cultures in two different social situations and extracted empirical data. Finally, by integrating the socio-cultural characteristics with the empirical data, we establish a parameterized network model that generates culture-specific non-verbal expressions in different social relationships.

  3. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  4. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
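
    A simplified sketch of the entropy-based comparison, restricted to degree centrality only (the paper also uses betweenness and closeness). A generated graph stands in for the empirical network, and networkx, NumPy and SciPy are assumed to be available.

        # Shannon entropy of the degree distribution: "empirical" graph vs. model ensembles.
        import numpy as np
        import networkx as nx
        from scipy.stats import entropy

        def degree_entropy(g):
            degrees = np.array([d for _, d in g.degree()])
            counts = np.bincount(degrees)
            return entropy(counts[counts > 0] / counts.sum())

        # Stand-in "empirical" graph (in practice, load the real network here).
        empirical = nx.barabasi_albert_graph(1000, 3, seed=42)
        n, m = empirical.number_of_nodes(), empirical.number_of_edges()
        h_emp = degree_entropy(empirical)

        candidates = {
            "Erdos-Renyi": lambda s: nx.gnm_random_graph(n, m, seed=s),
            "Barabasi-Albert": lambda s: nx.barabasi_albert_graph(n, 3, seed=s),
        }
        for name, gen in candidates.items():
            hs = [degree_entropy(gen(s)) for s in range(20)]
            print(f"{name:16s} |H_model - H_empirical| = {abs(np.mean(hs) - h_emp):.3f}")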

  5. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    The diagnostics of the energy conversion systems' operation is realised as a result of collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state. It requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The worked-out mathematical simulation model contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  6. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han

    2014-01-01

    Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Community Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and extended, CCR+, model. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models.

  7. Hazard identification by extended multilevel flow modelling with function roles

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Jørgensen, Sten Bay

    2014-01-01

    Multilevel flow modelling (MFM) is extended with function roles to complete HAZOP studies in principle. A graphical MFM editor, which is combined with the reasoning engine (MFM Workbench) developed by DTU, is applied to automate HAZOP studies. The method is proposed to support the ‘brain-storming’ sessions in traditional HAZOP analysis...

  8. Creating a Generic Extended Enterprise Management Model using GERAM

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Kaas-Pedersen, Carsten; Vesterager, Johan

    1998-01-01

    The two main themes of the Globeman21 (Global Manufacturing in the 21st century) project are product life cycle management and extended enterprise management. This article focuses on the latter of these subjects, and an illustration of the concept is given together with a discussion of the concept

  9. An extended rational thermodynamics model for surface excess fluxes

    NARCIS (Netherlands)

    Sagis, L.M.C.

    2012-01-01

    In this paper, we derive constitutive equations for the surface excess fluxes in multiphase systems, in the context of an extended rational thermodynamics formalism. This formalism allows us to derive Maxwell–Cattaneo type constitutive laws for the surface extra stress tensor, the surface thermal

  10. Sectoral patterns of interactive learning : an empirical exploration using an extended resource based model

    NARCIS (Netherlands)

    Meeus, M.T.H.; Oerlemans, L.A.G.; Hage, J.

    1999-01-01

    This paper pursues the development of a theoretical framework that explains interactive learning between innovating firms and external actors in the knowledge infrastructure and the production chain. The research question is: what kinds of factors explain interactive learning of innovating firms

  11. On the Reconciliation of the Extended Nelson-Siegel and the Extended Vasicek Models (with a View Towards Swap and Swaption Valuation)

    DEFF Research Database (Denmark)

    Jørgensen, Peter Løchte

    Extended Nelson-Siegel models are widely used by, e.g., practitioners and central banks to estimate current term structures of riskless zero-coupon interest rates, whereas other models such as the extended Vasicek model (a.k.a. the Hull-White model) are popular for pricing interest rate derivatives. This paper establishes theoretical consistency between these two types of models by showing how to specify the extended Vasicek model such that its implied initial term structure curve precisely matches a given extended Nelson-Siegel specification. That is, we show how to reconcile the two classes of models

  12. Empirically derived neighbourhood rules for urban land-use modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2012-01-01

    Land-use modelling and spatial scenarios have gained attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision making. Many of the recent modelling efforts incorporate cellular automata to accomplish spatially explicit land-use-change modelling. Spatial...

  13. Poisson-generalized gamma empirical Bayes model for disease ...

    African Journals Online (AJOL)

    In spatial disease mapping, the use of Bayesian estimation techniques is becoming popular for smoothing relative risk estimates. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...

  14. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  15. Theories of extended objects and composite models of particles

    International Nuclear Information System (INIS)

    Barut, A.O.

    1992-05-01

    The goal of the relativistic theory of extended objects is to predict and correlate the experimentally observed mass spectra, form factors, inelastic transitions, polarizabilities, structure functions of particles from different probes (photons, neutrinos, electrons), and eventually, the break-up, pair production of the system, and scattering of extended objects among themselves. The internal structure may be classified by the nature and number of the internal variables: discrete (fundamental particles), finite number of continuous variables (bound systems), infinite number of continuous variables (p-membranes or localized fields). The algebraic group theoretical S-matrix approach allows us to formulate all the above properties in a unified manner. Different structures are then characterized by different specific parameters. (author). Refs, 4 figs, 1 tab

  16. Business Processes Modeling Recommender Systems: User Expectations and Empirical Evidence

    Directory of Open Access Journals (Sweden)

    Michael Fellmann

    2018-04-01

    Recommender systems are in widespread use in many areas, especially electronic commerce solutions. In this contribution, we apply recommender functionalities to business process modeling and investigate their potential for supporting process modeling. To do so, we have implemented two prototypes, demonstrated them at a major fair and collected user feedback. After analysis of the feedback, we have confronted the findings with the results of the experiment. Our results indicate that fairgoers expect increased modeling speed as the key advantage and completeness of models as the most unlikely advantage. This stands in contrast to an initial experiment revealing that modelers, in fact, increase the completeness of their models when adequate knowledge is presented while time consumption is not necessarily reduced. We explain possible causes of this mismatch and finally hypothesize on two “sweet spots” of process modeling recommender systems.

  17. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data on the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference on the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that a good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and can be used as an alternative GARCH model to those with other fat-tailed distributions
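
    For orientation, the sketch below evaluates a GARCH(1,1) log-likelihood with a fat-tailed rational error density f(z) = 2 / (π (1 + z²)²), which has zero mean and unit variance. This density is an illustrative choice, not the specific rational function used in the paper, and the Bayesian Metropolis-Hastings estimation step is omitted; the data are synthetic.

        # GARCH(1,1) log-likelihood with a rational (fat-tailed) standardized error density.
        import numpy as np

        def garch_loglik(params, returns):
            omega, alpha, beta = params
            if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
                return -np.inf
            var = np.var(returns)          # initial conditional variance
            ll = 0.0
            for r in returns:
                z = r / np.sqrt(var)
                ll += np.log(2.0 / np.pi) - 2.0 * np.log1p(z * z) - 0.5 * np.log(var)
                var = omega + alpha * r * r + beta * var
            return ll

        # Synthetic return series standing in for stock price data.
        rng = np.random.default_rng(6)
        true = (0.05, 0.1, 0.85)
        var, rets = 1.0, []
        for _ in range(2000):
            var = true[0] + true[1] * (rets[-1] ** 2 if rets else 0.0) + true[2] * var
            rets.append(np.sqrt(var) * rng.standard_t(5) / np.sqrt(5 / 3))
        rets = np.array(rets)

        print("log-likelihood at the true parameters :", round(garch_loglik(true, rets), 1))
        print("log-likelihood at a poor guess        :", round(garch_loglik((0.5, 0.3, 0.3), rets), 1))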

  18. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  19. An Empirical Comparison of Default Swap Pricing Models

    NARCIS (Netherlands)

    P. Houweling (Patrick); A.C.F. Vorst (Ton)

    2002-01-01

    In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced form model with a constant recovery rate outperforms the market practice of directly comparing bonds' credit spreads to default swap premiums. We find that the

  20. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether a farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  1. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in the GIS-based map. First, the rationality is proved by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate various characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. These analyses verify that the model is a typical scale-free, small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the model construction and simulation of the system by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.
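
    A sketch of the network diagnostics mentioned above (mean clustering coefficient, mean distance, and a rough power-law check on the degree distribution), with a generated graph standing in for the simulated supply chain network; networkx and NumPy are assumed available.

        # Scale-free and small-world diagnostics on a stand-in network.
        import numpy as np
        import networkx as nx

        g = nx.barabasi_albert_graph(2000, 2, seed=7)   # stand-in for the simulated network

        print("mean clustering coefficient:", round(nx.average_clustering(g), 4))
        print("mean shortest-path length  :", round(nx.average_shortest_path_length(g), 3))

        # Crude scale-free check: slope of the log-log degree distribution.
        degrees = np.array([d for _, d in g.degree()])
        values, counts = np.unique(degrees, return_counts=True)
        mask = (values >= 2) & (counts >= 5)
        slope, _ = np.polyfit(np.log(values[mask]), np.log(counts[mask] / counts.sum()), 1)
        print("approximate degree-distribution exponent:", round(-slope, 2))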

  2. An Empirical Test of a Model of Resistance to Persuasion.

    Science.gov (United States)

    And Others; Burgoon, Michael

    1978-01-01

    Tests a model of resistance to persuasion based upon variables not considered by earlier congruity and inoculation models. Supports the prediction that the kind of critical response set induced and the target of the criticism are mediators of resistance to persuasion. (JMF)

  3. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  4. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  5. Assessing and improving the quality of modeling : a series of empirical studies about the UML

    NARCIS (Netherlands)

    Lange, C.F.J.

    2007-01-01

    This thesis addresses the assessment and improvement of the quality of modeling in software engineering. In particular, we focus on the Unified Modeling Language (UML), which is the de facto standard in

  6. Empirical model of subdaily variations in the Earth rotation from GPS and its stability

    Science.gov (United States)

    Panafidina, N.; Kurdubov, S.; Rothacher, M.

    2012-12-01

    The model recommended by the IERS for subdaily variations in the Earth rotation at diurnal and semidiurnal periods has been computed from an ocean tide model and comprises 71 terms in polar motion and Universal Time. In the present study we compute an empirical model of variations in the Earth rotation at tidal frequencies from homogeneously re-processed GPS observations over 1994-2007, available as free daily normal equations. We discuss the reliability of the obtained amplitudes of the ERP variations and compare results from GPS and VLBI data to identify technique-specific problems and instabilities of the empirical tidal models.
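
    As a rough illustration of estimating such an empirical tidal model, the sketch below fits cosine and sine amplitudes at a few known tidal periods to a synthetic polar-motion-like series by least squares. The full 71-term IERS expansion and the real GPS normal equations are not used; all values are assumptions for the example.

        # Least-squares estimation of tidal-term amplitudes from a synthetic ERP series.
        import numpy as np

        periods_h = np.array([23.93, 25.82, 12.42, 12.00])   # K1, O1, M2, S2 periods (hours)
        t = np.arange(0, 3 * 365.25 * 24, 6.0)               # 3 years, 6-hour sampling (hours)

        rng = np.random.default_rng(8)
        true_amp = rng.uniform(5, 30, (periods_h.size, 2))    # microarcseconds (cos, sin)
        series = sum(a * np.cos(2 * np.pi * t / p) + b * np.sin(2 * np.pi * t / p)
                     for (a, b), p in zip(true_amp, periods_h))
        series += rng.normal(0, 20, t.size)                   # observation noise

        # Design matrix with a cosine and a sine column per tidal period.
        cols = []
        for p in periods_h:
            cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
        A = np.column_stack(cols)
        est, *_ = np.linalg.lstsq(A, series, rcond=None)

        for p, a_true, a_est in zip(periods_h, true_amp, est.reshape(-1, 2)):
            print(f"period {p:6.2f} h: true {np.round(a_true, 1)}  estimated {np.round(a_est, 1)}")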

  7. An empirical model of global spread-f occurrence

    International Nuclear Information System (INIS)

    Singleton, D.G.

    1974-09-01

    A method of combining models of ionospheric F-layer peak electron density and irregularity incremental electron density into a model of the occurrence probability of the frequency spreading component of spread-F is presented. The predictions of the model are compared with spread-F occurrence data obtained under sunspot maximum conditions. Good agreement is obtained for latitudes less than 70° geomagnetic. At higher latitudes, the inclusion of a 'blackout factor' in the model allows it to accurately represent the data and, in so doing, resolves an apparent discrepancy in the occurrence statistics at high latitudes. The blackout factor is ascribed to the effect of polar blackout on the spread-F statistics and/or the lack of a definitive incremental electron density model for irregularities at polar latitudes. Ways of isolating these effects and assessing their relative importance in the blackout factor are discussed. The model, besides providing estimates of spread-F occurrence on a worldwide basis, which will be of value in the engineering of HF and VHF communications, also furnishes a means of further checking the irregularity incremental electron density model on which it is based. (author)

  8. Cold light dark matter in extended seesaw models

    Science.gov (United States)

    Boulebnane, Sami; Heeck, Julian; Nguyen, Anne; Teresi, Daniele

    2018-04-01

    We present a thorough discussion of light dark matter produced via freeze-in in two-body decays A→ B DM . If A and B are quasi-degenerate, the dark matter particle has a cold spectrum even for keV masses. We show this explicitly by calculating the transfer function that encodes the impact on structure formation. As examples for this setup we study extended seesaw mechanisms with a spontaneously broken global U(1) symmetry, such as the inverse seesaw. The keV-scale pseudo-Goldstone dark matter particle is then naturally produced cold by the decays of the quasi-degenerate right-handed neutrinos.

  9. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17 designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including Coupled-Channel mechanism, exciton model, Monte Carlo approach to preequilibrium emission, use of microscopic level densities, widths fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on the aspects relevant to nuclear data evaluation. In particular, adjusting model parameters is discussed in detail. (author)

  10. Empirical evaluation of a forecasting model for successful facilitation ...

    African Journals Online (AJOL)

    During 2000 the annual Facilitator Customer Satisfaction Survey was ... the forecasting model is successful concerning the CSI value and a high positive linear ... namely that of human behaviour to incorporate other influences than just the ...

  11. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process,and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different than the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural networks based empirical models in some cases appear to provide a serious alternative to first principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted
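
    For orientation, the sketch below builds an input-output empirical model of a toy dynamic process using lagged inputs and outputs (a NARX-style structure) and a feedforward network. The recurrent multilayer perceptron and the dynamic gradient descent algorithm of the paper are not reproduced; the plant, lag structure and network size are assumptions for the example, with NumPy and scikit-learn assumed available.

        # NARX-style feedforward approximation of an empirical dynamic model.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(9)

        # Toy nonlinear dynamic "plant" standing in for the U-tube steam generator.
        u = np.clip(np.cumsum(rng.normal(0, 0.05, 3000)), -1, 1)   # slowly varying input
        y = np.zeros_like(u)
        for k in range(1, u.size):
            y[k] = 0.9 * y[k - 1] + 0.2 * np.tanh(u[k - 1]) + rng.normal(0, 0.01)

        # Regressors: lagged outputs and inputs.
        lags = 3
        X = np.column_stack([y[lags - i - 1:-i - 1] for i in range(lags)] +
                            [u[lags - i - 1:-i - 1] for i in range(lags)])
        target = y[lags:]

        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
        model.fit(X[:2000], target[:2000])
        print("one-step-ahead R^2 on held-out data:", round(model.score(X[2000:], target[2000:]), 3))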

  12. Thermodynamic modeling of CO2 absorption in aqueous N-Methyldiethanolamine using Extended UNIQUAC model

    DEFF Research Database (Denmark)

    Sadegh, Negar; Stenby, Erling Halfdan; Thomsen, Kaj

    2015-01-01

    A Thermodynamic model that can predict the behavior of the gas sweetening process over the applicable conditions is of vital importance in industry. In this work, Extended UNIQUAC model parameters optimized for the CO2-MDEA-H2O system are presented. Different types of experimental data consisting...... model accurately represents thermodynamic and thermal properties of the studied systems. The model parameters are valid in the temperature range from -15 to 200 °C, MDEA mass% of 5-75 and CO2 partial pressure of 0-6161.5 kPa....

  13. Empirical assessment of a threshold model for sylvatic plague

    DEFF Research Database (Denmark)

    Davis, Stephen; Leirs, Herwig; Viljugrein, H.

    2007-01-01

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...... examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate...

  14. Empirical justification of the elementary model of money circulation

    Science.gov (United States)

    Schinckus, Christophe; Altukhov, Yurii A.; Pokrovskii, Vladimir N.

    2018-03-01

    This paper proposes an elementary model describing the money circulation for a system composed of a production system, the government, a central bank, commercial banks and their customers. A set of equations for the system determines the main features of interaction between production and the money circulation. It is shown that the money system can evolve independently of the evolution of production. The model can be applied to any national economy, but we will illustrate our claim in the context of the Russian monetary system.

  15. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of pay-offs or cash flows discounted for risk and time lags. The difficulty arising from the discounting process is that the relevant factors affecting the pay-offs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills the gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  16. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  17. An auto-calibration procedure for empirical solar radiation models

    NARCIS (Netherlands)

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  18. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  19. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  20. Modeling social networks in geographic space: approach and empirical application

    NARCIS (Netherlands)

    Arentze, T.A.; Berg, van den P.E.W.; Timmermans, H.J.P.

    2012-01-01

    Social activities are responsible for a large proportion of travel demands of individuals. Modeling of the social network of a studied population offers a basis to predict social travel in a more comprehensive way than currently is possible. In this paper we develop a method to generate a whole

  1. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, W.; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten J.

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling

  2. An extended dual search space model of scientific discovery learning

    NARCIS (Netherlands)

    van Joolingen, Wouter; de Jong, Anthonius J.M.

    1997-01-01

    This article describes a theory of scientific discovery learning which is an extension of Klahr and Dunbar's Scientific Discovery as Dual Search (SDDS) model. We present a model capable of describing and understanding scientific discovery learning in complex domains in terms of the SDDS

  3. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks' theorem which can be used to construct a block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  4. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using the data of 237 Jiangsu logistics firms, this paper empirically studies the relationship among organizational learning capability, business model innovation and strategic flexibility. The results show the following: organizational learning capability has positive impacts on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning and exploitative learning plays a significant role in both radical and incremental business model innovation.

  5. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  6. General Friction Model Extended by the Effect of Strain Hardening

    DEFF Research Database (Denmark)

    Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels

    2016-01-01

    An extension to the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady state sliding. The original model for the real...... contact area as function of the normalized contact pressure is based on slip-line analysis and hence on the assumption of rigid-ideally plastic material behavior. In the present work, a general finite element model is established to, firstly, reproduce the original model under the assumption of rigid...

  7. Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS

    Science.gov (United States)

    Willison, A.; Bedard, D.

    This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogenous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects, where

  8. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  9. Empirical study on entropy models of cellular manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    Zhifeng Zhang; Renbin Xiao

    2009-01-01

    From the theoretical point of view, the states of manufacturing resources can be monitored and assessed through the amount of information needed to describe their technological structure and operational state. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: the structural entropy and the operational entropy. Based on the Shannon entropy, the models of the structural entropy and the operational entropy of cellular manufacturing systems are developed, and the cognizance of the states of manufacturing resources is also illustrated. Scheduling is introduced to measure the entropy models of cellular manufacturing systems, and the feasible concepts of maximum schedule horizon and schedule adherence are advanced to quantitatively evaluate the effectiveness of schedules. Finally, an example is used to demonstrate the validity of the proposed methodology.
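    The entropy measures discussed above are built on the Shannon entropy of discrete state probabilities. The sketch below shows the basic computation; the machine states and probability values are hypothetical placeholders, not taken from the paper.

    ```python
    # Minimal sketch: Shannon-entropy measures of manufacturing-resource states.
    # The state probabilities below are hypothetical placeholders.
    import numpy as np

    def shannon_entropy(p):
        """H = -sum p_i log2 p_i for a discrete probability vector p."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                      # ignore zero-probability states
        return float(-(p * np.log2(p)).sum())

    # Operational states of one machine cell: e.g. running / idle / setup / breakdown
    p_states = [0.6, 0.2, 0.15, 0.05]
    print(f"operational entropy of the cell: {shannon_entropy(p_states):.3f} bits")

    # A system-level measure could sum the entropies of independent cells (a simplification)
    cells = [[0.6, 0.2, 0.15, 0.05], [0.5, 0.5], [0.8, 0.1, 0.1]]
    print("system entropy:", round(sum(shannon_entropy(p) for p in cells), 3), "bits")
    ```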

  10. An Empirical Model of Wage Dispersion with Sorting

    DEFF Research Database (Denmark)

    Bagger, Jesper; Lentz, Rasmus

    This paper studies wage dispersion in an equilibrium on-the-job-search model with endogenous search intensity. Workers differ in their permanent skill level and firms differ with respect to productivity. Positive (negative) sorting results if the match production function is supermodular (submodular). The model is estimated on Danish matched employer-employee data. We find evidence of positive assortative matching. In the estimated equilibrium match distribution, the correlation between worker skill and firm productivity is 0.12. The assortative matching has a substantial impact on wage ... to mismatch by asking how much greater output would be if the estimated population of matches were perfectly positively assorted. In this case, output would increase by 7.7%.

  11. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    2014-08-01

    Solar radio F10.7 proxy and magnetic activity measurements are used to calculate the baseline orbit. This approach is applied to compare the daily ... approach is to calculate along-track errors for these models and compare them against the baseline error based on the “ground truth” neutral density data ...

  12. Towards an Empirical-Relational Model of Supply Chain Flexibility

    OpenAIRE

    Santanu Mandal

    2015-01-01

    Supply chains are prone to disruptions and associated risks. To develop capabilities for risk mitigation, supply chains need to be flexible. A flexible supply chain can respond better to environmental contingencies. Based on the theoretical tenets of resource-based view, relational view and dynamic capabilities theory, the current study develops a relational model of supply chain flexibility comprising trust, commitment, communication, co-operation, adaptation and interdependence. Subsequentl...

  13. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through the explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012, Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  14. An empirical firn-densification model comprising ice-lences

    DEFF Research Database (Denmark)

    Reeh, Niels; Fisher, D.A.; Koerner, R.M.

    2005-01-01

    a suitable value of the surface snow density. In the present study, a simple densification model is developed that specifically accounts for the content of ice lenses in the snowpack. An annual layer is considered to be composed of an ice fraction and a firn fraction. It is assumed that all meltwater formed...... changes reflect a volume change of the ice sheet with no corresponding change of mass, i.e. a volume change that does not influence global sea level....

  15. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries with a common aim to estimate the Equilibrium Line Altitudes (ELAs) and, from these, infer palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. Recently, ice surface modelling enables an equilibrium profile reconstruction tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2) was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution, requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to

  16. Students Working Online for Group Projects: A Test of an Extended Theory of Planned Behaviour Model

    Science.gov (United States)

    Cheng, Eddie W. L.

    2017-01-01

    This study examined an extended theory of planned behaviour (TPB) model that specified factors affecting students' intentions to collaborate online for group work. Past behaviour, past experience and actual behavioural control were incorporated in the extended TPB model. The mediating roles of attitudes, subjective norms and perceived behavioural…

  17. Thermodynamic admissibility of the extended Pom-Pom model for branched polymers

    NARCIS (Netherlands)

    Soulages, J.; Hütter, M.; Öttinger, H.C.

    2006-01-01

    The thermodynamic consistency of the eXtended Pom-Pom (XPP) model for branched polymers of Verbeeten et al. [W.M.H. Verbeeten, G.W.M. Peters, F.P.T. Baaijens, Differential constitutive equations for polymer melts: the extended pom-pom model, J. Rheol. 45 (4) (2001) 823–843; W.M.H. Verbeeten, G.W.M.

  18. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  19. An improved empirical model for diversity gain on Earth-space propagation paths

    Science.gov (United States)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
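    A hedged sketch of how such an empirical model can be fitted to measurements is given below. The saturating functional form, the parameter names and the synthetic data are illustrative assumptions only; they are not the model published in the record.

    ```python
    # Illustrative sketch of fitting an empirical diversity-gain model to measured data.
    # The functional form and the synthetic data are assumptions, not the paper's model.
    import numpy as np
    from scipy.optimize import curve_fit

    def diversity_gain(X, a, b, c, d_coef):
        """G(dB) = a*(1 - exp(-b*dist)), scaled by simple frequency/elevation/azimuth factors."""
        dist, freq, elev, azim = X
        return (a * (1.0 - np.exp(-b * dist)) * np.exp(-c * freq)
                * (1.0 + d_coef * np.cos(np.radians(azim))) * (elev / 30.0) ** 0.1)

    rng = np.random.default_rng(1)
    n = 200
    dist = rng.uniform(2, 40, n)      # terminal separation, km
    freq = rng.uniform(10, 30, n)     # link frequency, GHz
    elev = rng.uniform(10, 50, n)     # elevation angle, deg
    azim = rng.uniform(0, 90, n)      # baseline-to-path azimuth angle, deg
    true = diversity_gain((dist, freq, elev, azim), 8.0, 0.2, 0.02, 0.1)
    meas = true + rng.normal(0, 0.7, n)          # ~0.7 dB measurement scatter

    popt, _ = curve_fit(diversity_gain, (dist, freq, elev, azim), meas,
                        p0=[5, 0.1, 0.01, 0.0], maxfev=20000)
    rms = np.sqrt(np.mean((diversity_gain((dist, freq, elev, azim), *popt) - meas) ** 2))
    print("fitted parameters:", np.round(popt, 3), "RMS error (dB):", round(rms, 2))
    ```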

  20. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  1. A semi-empirical model for predicting crown diameter of cedrela ...

    African Journals Online (AJOL)

    A semi-empirical model relating age and breast height has been developed to predict individual tree crown diameter for Cedrela odorata (L) plantation in the moist evergreen forest zones of Ghana. The model was based on field records of 269 trees, and could determine the crown cover dynamics, forecast time of canopy ...

  2. Extending a configuration model to find communities in complex networks

    International Nuclear Information System (INIS)

    Jin, Di; Hu, Qinghua; He, Dongxiao; Yang, Bo; Baquero, Carlos

    2013-01-01

    Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, which has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. Towards decomposing and combining sub-graphs according to the soft community memberships, our model incorporates the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree sequence preserving into a single unified model, which gives better community results compared with other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities. (paper)
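    A simplified sketch of the soft-membership idea is given below: a synthetic planted-partition graph is factorized with nonnegative matrix factorization, and nodes are assigned to their highest-membership community. This is not the paper's degree-preserving generative model or its consensus-clustering step, just an illustration of NMF-based soft memberships.

    ```python
    # Simplified sketch of soft community detection by factorizing an adjacency matrix with NMF.
    # Illustrative only: not the degree-preserving model of the paper. Graph is synthetic.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(5)
    n, k = 30, 2
    labels = np.repeat([0, 1], n // 2)

    # Planted-partition graph: dense within communities, sparse between them
    p_in, p_out = 0.4, 0.05
    prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = (rng.random((n, n)) < prob).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                    # symmetric, no self-loops

    W = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0).fit_transform(A)
    membership = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # soft community memberships
    found = membership.argmax(axis=1)
    agreement = max(np.mean(found == labels), np.mean(found != labels))  # up to label swap
    print(f"fraction of nodes recovered: {agreement:.2f}")
    ```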

  3. Stochastic Modeling of Empirical Storm Loss in Germany

    Science.gov (United States)

    Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.

    2012-04-01

    Based on German insurance loss data for residential property, we derive storm damage functions that relate daily loss to maximum gust wind speed. Over a wide range of loss, steep power-law relationships are found, with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to 3. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, at daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables the estimation of daily losses over a wide range of magnitudes.
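    The following sketch illustrates the type of power-law damage function and Monte Carlo aggregation described above. All numerical values (scale factor, exponent, gust statistics, noise level) are placeholders chosen for the example, not the calibrated German parameters.

    ```python
    # Sketch of a power-law storm damage function with Monte Carlo loss aggregation.
    # All parameter values are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    gamma = 10.0          # power-law exponent, within the reported 8-12 range
    scale = 1e-12         # scale factor chosen only to give convenient loss units

    def daily_loss(v_gust, n_draws=1000):
        """Monte Carlo draws of daily loss for a maximum gust wind speed v_gust (m/s)."""
        mean_loss = scale * v_gust ** gamma
        # multiplicative lognormal scatter represents unresolved local variability
        return mean_loss * rng.lognormal(mean=0.0, sigma=0.5, size=n_draws)

    # One storm season of daily maximum gusts (synthetic)
    gusts = rng.gamma(shape=9.0, scale=2.0, size=180)
    draws = np.array([daily_loss(v) for v in gusts])        # (days, n_draws)
    annual = draws.sum(axis=0)                              # annual loss per Monte Carlo run

    lo, med, hi = np.percentile(annual, [5, 50, 95])
    print(f"annual loss estimate: median {med:.2f}, 90% confidence interval [{lo:.2f}, {hi:.2f}]")
    ```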

  4. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for the gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.
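    A minimal sketch of a two-compartment PK model with a linear clearance-covariate relationship is shown below, solved with SciPy. The compartment parameters and the gestational-age scaling are illustrative assumptions; only the baseline clearance figure is taken from the abstract.

    ```python
    # Minimal sketch of a two-compartment PK model with first-order elimination (SciPy ODE solver).
    # Parameter values and the gestational-age scaling of clearance are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    V1, V2, Q = 10.0, 8.0, 5.0        # central volume (L), peripheral volume (L), flow (L/h) - placeholders
    CL0 = 0.119 * 60                  # baseline clearance from the abstract, converted to L/h

    def clearance(gestational_age_weeks, slope=0.02):
        """Hypothetical linear covariate model: CL increases with gestational age."""
        return CL0 * (1.0 + slope * gestational_age_weeks)

    def two_compartment(t, A, CL):
        A1, A2 = A                    # drug amounts in central / peripheral compartments
        dA1 = -(CL / V1) * A1 - (Q / V1) * A1 + (Q / V2) * A2
        dA2 = (Q / V1) * A1 - (Q / V2) * A2
        return [dA1, dA2]

    dose = 1000.0                     # mg IV bolus into the central compartment (placeholder)
    CL = clearance(gestational_age_weeks=30)
    sol = solve_ivp(two_compartment, (0, 12), [dose, 0.0], args=(CL,), dense_output=True)
    t = np.linspace(0, 12, 7)
    conc = sol.sol(t)[0] / V1         # concentration in the central compartment (mg/L)
    print(np.round(conc, 2))
    ```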

  5. Modeling Active Aging and Explicit Memory: An Empirical Study.

    Science.gov (United States)

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  6. Toward an Empirically-based Parametric Explosion Spectral Model

    Science.gov (United States)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.

  7. Model and Empirical Study on Several Urban Public Transport Networks in China

    Science.gov (United States)

    Ding, Yimin; Ding, Zhuo

    2012-07-01

    In this paper, we present the results of an empirical investigation of urban public transport networks (PTNs) and propose a model to understand the results obtained. We investigate several urban public traffic networks in China, namely those of Beijing, Guangzhou, Wuhan and other cities. The empirical results for the big cities show that the accumulative act-degree distributions of PTNs take neither power-function forms nor exponential-function forms, but are described by a shifted power function, and the accumulative act-degree distributions of PTNs in medium-sized or small cities follow the same law. In the end, we propose a model to show a possible evolutionary mechanism for the emergence of such networks. The analytic results obtained from this model are in good agreement with the empirical results.
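    The shifted power law mentioned above can be fitted as sketched below. The cumulative act-degree data are synthetic stand-ins, and the functional form P(k) ~ (k + k0)^(-gamma) is written generically rather than as the exact expression used in the paper.

    ```python
    # Sketch of fitting a shifted power law to a cumulative act-degree distribution.
    # The data below are synthetic placeholders, not the surveyed PTN data.
    import numpy as np
    from scipy.optimize import curve_fit

    def shifted_power_law(k, c, k0, gamma):
        return c * (k + k0) ** (-gamma)

    # Synthetic cumulative distribution values for act-degrees 1..50
    k = np.arange(1, 51, dtype=float)
    noise = np.exp(np.random.default_rng(0).normal(0, 0.05, k.size))
    p_cum = shifted_power_law(k, 1.0, 4.0, 2.0) * noise

    popt, _ = curve_fit(shifted_power_law, k, p_cum, p0=[1.0, 1.0, 1.5], maxfev=10000)
    c_hat, k0_hat, gamma_hat = popt
    print(f"fitted shift k0 = {k0_hat:.2f}, exponent gamma = {gamma_hat:.2f}")
    ```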

  8. Hyperstate matrix models : extending demographic state spaces to higher dimensions

    NARCIS (Netherlands)

    Roth, G.; Caswell, H.

    2016-01-01

    1. Demographic models describe population dynamics in terms of the movement of individuals among states (e.g. size, age, developmental stage, parity, frailty, physiological condition). Matrix population models originally classified individuals by a single characteristic. This was enlarged to two

  9. Model-based segmentation and classification of trajectories (Extended abstract)

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Buchin, K.; Buchin, M.; Sijben, S.; Westenberg, M.A.

    2014-01-01

    We present efficient algorithms for segmenting and classifying a trajectory based on a parameterized movement model like the Brownian bridge movement model. Segmentation is the problem of subdividing a trajectory into parts such that each part is homogeneous in its movement characteristics. We

  10. An extended gravity model with substitution applied to international trade

    NARCIS (Netherlands)

    Bikker, J.A.|info:eu-repo/dai/nl/06912261X

    The traditional gravity model has been applied many times to international trade flows, especially in order to analyze trade creation and trade diversion. However, there are two fundamental objections to the model: it cannot describe substitutions between flows and it lacks a cogent theoretical

  11. A 'theory of everything'? [Extending the Standard Model

    International Nuclear Information System (INIS)

    Ross, G.G.

    1993-01-01

    The Standard Model provides us with an amazingly successful theory of the strong, weak and electromagnetic interactions. Despite this, many physicists believe it represents only a step towards understanding the ultimate 'theory of everything'. In this article we describe why the Standard Model is thought to be incomplete and some of the suggestions for its extension. (Author)

  12. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  13. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged to neutral particles. However, in analytic considerations approximations are needed to include the characteristics of hadron production. We discuss extensions of the simple cascade model that provide an analytic description of air showers including the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly-used composition-sensitive observable.

  14. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
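    Once linear correlations between zone enrichments and core responses are available, the maximization can be posed as a linear program, as sketched below. The coefficients, limits and bounds are arbitrary placeholders, not the correlations generated by the diffusion-depletion calculations.

    ```python
    # Sketch of casting cycle-length maximization as a linear program once linear correlations
    # between control variables (zone enrichments) and core responses are available.
    # The coefficient values, limits and bounds below are arbitrary placeholders.
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: enrichment in 3 control zones, x = (x1, x2, x3)
    # Objective: maximize predicted cycle length  L = l0 + l @ x  ->  minimize -l @ x
    l = np.array([0.8, 1.0, 0.6])

    # Linear correlations predicting peaking factor and reactivity demand from x
    A_ub = np.array([
        [0.30, 0.45, 0.25],    # power-peaking response must stay below its limit
        [0.50, 0.40, 0.60],    # reactivity (soluble boron) response must stay below its limit
    ])
    b_ub = np.array([3.2, 4.0])

    res = linprog(c=-l, A_ub=A_ub, b_ub=b_ub, bounds=[(2.0, 4.5)] * 3, method="highs")
    print("optimal zone enrichments (w/o):", np.round(res.x, 2))
    print("predicted cycle-length gain (arbitrary units):", round(l @ res.x, 2))
    ```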

  15. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
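    The sketch below illustrates the recognition task with two hand-made probabilistic finite automata: an observed action sequence is scored under each automaton and assigned to the most likely one. It is a deterministic-structure simplification written for illustration; the automata, symbols and probabilities are assumptions, not models learned from the Roomba data.

    ```python
    # Minimal sketch of behavioural recognition with probabilistic finite automata (PFAs):
    # score an observed action sequence under each candidate PFA and pick the most likely one.
    # The two hand-made automata below are illustrative, not learned from data.
    import math

    # A PFA here: transitions[state][symbol] = (next_state, probability); per-state probs sum to 1.
    PFA_SWEEP = {0: {"forward": (0, 0.8), "turn": (1, 0.2)},
                 1: {"forward": (0, 0.6), "turn": (1, 0.4)}}
    PFA_SPIRAL = {0: {"forward": (1, 0.3), "turn": (0, 0.7)},
                  1: {"forward": (1, 0.5), "turn": (0, 0.5)}}

    def log_likelihood(pfa, sequence, start_state=0):
        state, ll = start_state, 0.0
        for symbol in sequence:
            if symbol not in pfa[state]:
                return float("-inf")            # impossible action under this behaviour model
            state, prob = pfa[state][symbol]
            ll += math.log(prob)
        return ll

    observed = ["forward", "forward", "turn", "forward", "turn", "turn"]
    scores = {"sweep": log_likelihood(PFA_SWEEP, observed),
              "spiral": log_likelihood(PFA_SPIRAL, observed)}
    print("recognised behaviour:", max(scores, key=scores.get), scores)
    ```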

  16. Semi-continuous and multigroup models in extended kinetic theory

    International Nuclear Information System (INIS)

    Koller, W.

    2000-01-01

    The aim of this thesis is to study energy discretization of the Boltzmann equation in the framework of extended kinetic theory. When external fields can be neglected, the semi-continuous Boltzmann equation yields a sound basis for various generalizations. Semi-continuous kinetic equations describing a three-component gas mixture interacting with monochromatic photons, as well as a four-component gas mixture undergoing chemical reactions, are established and investigated. These equations reflect all major aspects (conservation laws, equilibria, H-theorem) of the full continuous kinetic description. For the treatment of the spatial dependence, an expansion of the distribution function in terms of Legendre polynomials is carried out. An implicit finite differencing scheme is combined with the operator splitting method. The obtained numerical schemes are applied to the space homogeneous study of binary chemical reactions and to spatially one-dimensional laser-induced acoustic waves. In the presence of external fields, the developed overlapping multigroup approach (with the spline-interpolation as its extension) is well suited for numerical studies. Furthermore, two formulations of consistent multigroup approaches to the non-linear Boltzmann equation are presented. (author)

  17. Extended UNIQUAC model for thermodynamic modeling of CO2 absorption in aqueous alkanolamine solutions

    DEFF Research Database (Denmark)

    Faramarzi, Leila; Kontogeorgis, Georgios; Thomsen, Kaj

    2009-01-01

    The extended UNIQUAC model [K. Thomsen, R. Rasmussen, Chem. Eng. Sci. 54 (1999) 1787-1802] was applied to the thermodynamic representation of carbon dioxide absorption in aqueous monoethanolamine (MEA), methyldiethanolamine (MDEA) and varied strength mixtures of the two alkanolamines (MEA-MDEA). ... are included in the parameter estimation process. The previously unavailable standard state properties of the alkanolamine ions appearing in this work, i.e. MEA protonate, MEA carbamate and MDEA protonate, are determined. The concentration of the species in both MEA and MDEA solutions containing CO2 ...

  18. Extending the Modelling Framework for Gas-Particle Systems

    DEFF Research Database (Denmark)

    Rosendahl, Lasse Aistrup

    , with very good results. Single particle combustion has been tested using a number of different particle combustion models applied to coal and straw particles. Comparing the results of these calculations to measurements on straw burnout, the results indicate that for straw, existing heterogeneous combustion...... models perform well, and may be used in high temperature ranges. Finally, the particle tracking and combustion model is applied to an existing coal and straw co- fuelled burner. The results indicate that again, the straw follows very different trajectories than the coal particles, and also that burnout...

  19. Modelling heavy-ion energy deposition in extended media

    International Nuclear Information System (INIS)

    Mishustin, I.; Pshenichnov, I.; Greiner, W.; Mishustin, I.; Pshenichnov, I.

    2010-01-01

    We present recent developments of the Monte Carlo model for heavy-ion therapy (MCHIT), which is currently based on the Geant4 tool-kit of version 9.2. The major advancement of the model concerns the modelling of violent fragmentation reactions by means of the Fermi break-up model, which is used to simulate decays of hot fragments created after the first stage of nucleus-nucleus collisions. By means of MCHIT we study the dose distributions from therapeutic beams of carbon nuclei in tissue-like materials, like water and PMMA. The contributions to the total dose from primary beam nuclei and from charged secondary fragments produced in nuclear fragmentation reactions are calculated. The build-up of secondary fragments along the beam axis is calculated and compared with available experimental data. Finally, we demonstrate the impact of violent multifragment decays on energy distributions of secondary neutrons produced by carbon nuclei in water. (authors)

  20. Modelling heavy-ion energy deposition in extended media

    Energy Technology Data Exchange (ETDEWEB)

    Mishustin, I.; Pshenichnov, I.; Greiner, W. [Frankfurt Institute for Advanced Studies, J.-W. Goethe University, Frankfurt am Main (Germany); Mishustin, I. [Kurchatov Institute, Russian Research Center, Moscow (Russian Federation); Pshenichnov, I. [Institute for Nuclear Research, Russian Academy of Science, Moscow (Russian Federation)

    2010-10-15

    We present recent developments of the Monte Carlo model for heavy-ion therapy (MCHIT), which is currently based on the Geant4 tool-kit of version 9.2. The major advancement of the model concerns the modelling of violent fragmentation reactions by means of the Fermi break-up model, which is used to simulate decays of hot fragments created after the first stage of nucleus-nucleus collisions. By means of MCHIT we study the dose distributions from therapeutic beams of carbon nuclei in tissue-like materials, like water and PMMA. The contributions to the total dose from primary beam nuclei and from charged secondary fragments produced in nuclear fragmentation reactions are calculated. The build-up of secondary fragments along the beam axis is calculated and compared with available experimental data. Finally, we demonstrate the impact of violent multifragment decays on energy distributions of secondary neutrons produced by carbon nuclei in water. (authors)

  1. Modern elementary particle physics explaining and extending the standard model

    CERN Document Server

    Kane, Gordon

    2017-01-01

    This book is written for students and scientists wanting to learn about the Standard Model of particle physics. Only an introductory course knowledge about quantum theory is needed. The text provides a pedagogical description of the theory, and incorporates the recent Higgs boson and top quark discoveries. With its clear and engaging style, this new edition retains its essential simplicity. Long and detailed calculations are replaced by simple approximate ones. It includes introductions to accelerators, colliders, and detectors, and several main experimental tests of the Standard Model are explained. Descriptions of some well-motivated extensions of the Standard Model prepare the reader for new developments. It emphasizes the concepts of gauge theories and Higgs physics, electroweak unification and symmetry breaking, and how force strengths vary with energy, providing a solid foundation for those working in the field, and for those who simply want to learn about the Standard Model.

  2. Quark-flavour phenomenology of models with extended gauge symmetries

    International Nuclear Information System (INIS)

    Carlucci, Maria Valentina

    2013-01-01

    Gauge invariance is one of the fundamental principles of the Standard Model of particles and interactions, and it is reasonable to believe that it also regulates the physics beyond it. In this thesis we have studied the theory and phenomenology of two New Physics models based on gauge symmetries that are extensions of the Standard Model group. Both of them are particularly interesting because they provide some answers to the question of the origin of flavour, which is still unexplained. Moreover, the flavour sector represents a promising field for the research of indirect signatures of New Physics, since after the first run of LHC we do not have any direct hint of it yet. The first model assumes that flavour is a gauge symmetry of nature, SU(3)_f^3, spontaneously broken by the vacuum expectation values of new scalar fields; the second model is based on the gauge group SU(3)_c x SU(3)_L x U(1)_X, the simplest non-abelian extension of the Standard Model group. We have traced the complete theoretical building of the models, from the gauge group, passing through the nonanomalous fermion contents and the appropriate symmetry breakings, up to the spectra and the Feynman rules, with particular attention to the treatment of the flavour structure, of tree-level Flavour Changing Neutral Currents and of new CP-violating phases. In fact, these models present an interesting flavour phenomenology, and for both of them we have analytically calculated the contributions to the ΔF=2 and ΔF=1 down-type transitions arising from new tree-level and box diagrams. Subsequently, we have performed a comprehensive numerical analysis of the phenomenology of the two models. In both cases we have found it very effective to first identify the quantities able to provide the strongest constraints on the parameter space, and then to systematically scan the allowed regions of the latter in order to obtain indications about the key flavour observables, namely the mixing parameters of

  3. Financial power laws: Empirical evidence, models, and mechanisms

    International Nuclear Information System (INIS)

    Lux, Thomas; Alfarano, Simone

    2016-01-01

    Financial markets (share markets, foreign exchange markets and others) are all characterized by a number of universal power laws. The most prominent example is the ubiquitous finding of a robust, approximately cubic power law characterizing the distribution of large returns. A similarly robust feature is long-range dependence in volatility (i.e., hyperbolic decline of its autocorrelation function). The recent literature adds temporal scaling of trading volume and multi-scaling of higher moments of returns. Increasing awareness of these properties has recently spurred attempts at theoretical explanations of the emergence of these key characteristics from the market process. In principle, different types of dynamic processes could be responsible for these power laws. Examples to be found in the economics literature include multiplicative stochastic processes as well as dynamic processes with multiple equilibria. Though both types of dynamics are characterized by intermittent behavior which occasionally generates large bursts of activity, they can be based on fundamentally different perceptions of the trading process. The present paper reviews both the analytical background of the power laws emerging from the above data-generating mechanisms as well as pertinent models proposed in the economics literature.
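    The approximately cubic tail can be checked on return data with a standard tail-index estimator, e.g. the Hill estimator sketched below. The Student-t returns are a synthetic stand-in for market data, and the choice of tail fractions k is illustrative.

    ```python
    # Sketch of checking the "cubic law" of returns with the Hill tail-index estimator.
    # The returns are synthetic Student-t draws (3 degrees of freedom), used only as a stand-in.
    import numpy as np

    def hill_estimator(x, k):
        """Hill estimate of the tail exponent alpha from the k largest absolute values."""
        order = np.sort(np.abs(x))[::-1]           # descending order statistics
        top, threshold = order[:k], order[k]
        return k / np.sum(np.log(top / threshold))

    rng = np.random.default_rng(7)
    returns = rng.standard_t(df=3, size=100_000)   # heavy tails with true exponent ~3
    for k in (200, 500, 1000):
        print(f"k = {k:4d}  alpha_hat = {hill_estimator(returns, k):.2f}")
    ```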

  4. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  5. Screening enterprising personality in youth: an empirical model.

    Science.gov (United States)

    Suárez-Álvarez, Javier; Pedrosa, Ignacio; García-Cueto, Eduardo; Muñiz, José

    2014-02-20

    Entrepreneurial attitudes of individuals are determined by different variables, some of them related to the cognitive and personality characteristics of the person, and others focused on contextual aspects. The aim of this study is to review the essential dimensions of enterprising personality and develop a test that will permit their thorough assessment. Nine dimensions were identified: achievement motivation, risk taking, innovativeness, autonomy, internal locus of control, external locus of control, stress tolerance, self-efficacy and optimism. For the assessment of these dimensions, 161 items were developed which were applied to a sample of 416 students, 54% male and 46% female (M = 17.89 years old, SD = 3.26). After conducting several qualitative and quantitative analyses, the final test was composed of 127 items with acceptable psychometric properties. Alpha coefficients for the subscales ranged from .81 to .98. The validity evidence relative to the content was provided by experts (V = .71, 95% CI = .56 - .85). Construct validity was assessed using different factorial analyses, obtaining a dimensional structure in accordance with the proposed model of nine interdependent dimensions as well as a global factor that groups these nine dimensions (explained variance = 49.07%; χ2/df = 1.78; GFI= .97; SRMR = .07). Nine out of the 127 items showed Differential Item Functioning as a function of gender (p .035). The results obtained are discussed and future lines of research analyzed.

  6. Model for extended Pati-Salam gauge symmetry

    International Nuclear Information System (INIS)

    Foot, R.; Lew, H.; Volkas, R.R.

    1990-11-01

    The possibility of constructing non-minimal models of the Pati-Salam type is investigated. The most interesting examples are found to have an SU(6) x SU(2)_L x SU(2)_R gauge invariance. Two interesting symmetry breaking patterns are analysed: one leading to the theory of SU(5) colour at an intermediate scale, the other to the quark-lepton symmetric model. 15 refs

  7. Extended model of restricted beam for FSO links

    Science.gov (United States)

    Poliak, Juraj; Wilfert, Otakar

    2012-10-01

    Modern wireless optical communication systems in many respects outperform wire or radio communications. Their advantages are license-free operation and the broad bandwidth they offer. The medium in free-space optical (FSO) links is the atmosphere. The operation of outdoor FSO links struggles with many atmospheric phenomena that deteriorate the phase and amplitude of the transmitted optical beam. This beam originates in the transmitter and is affected by its individual parts, especially by the lens socket and the transmitter aperture, where attenuation and diffraction effects take place. Both of these phenomena unfavourably influence the beam and cause degradation of link availability, or its total malfunction. Therefore, both of these phenomena should be modelled and simulated, so that one can judge the link function prior to the realization of the system. Not only link availability and reliability are concerned, but also economic aspects. In addition, the transmitted beam is generally not circularly symmetrical, which makes the link simulation more difficult. In a comprehensive model, it is necessary to take into account the ellipticity of the beam that is restricted by a circularly symmetrical aperture where the attenuation and diffraction then occur. The general model is too computationally extensive; therefore, simplification of the calculations by means of analytical and numerical approaches will be discussed. The presented model is not only simulated on a computer, but also experimentally verified. One can then judge the ability of the model to describe reality and estimate how far one can go with approximations, i.e. the limitations of the model are discussed.

  8. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and Integral Equation Model (IEM) simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).

  9. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    Science.gov (United States)

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599
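    The idea of the extended tensor model, augmenting the log-linear diffusion tensor fit with regressors derived from peripheral recordings, can be sketched as follows. The gradient scheme, the cardiac-phase regressors and the synthetic voxel signal are assumptions for illustration, not the authors' implementation.

    ```python
    # Sketch of an "extended" tensor fit: the log-linear DTI design matrix is augmented with
    # columns derived from peripheral (pulse/respiration) recordings. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    n_dir = 60
    bval = 1000.0                                   # s/mm^2
    g = rng.normal(size=(n_dir, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # unit gradient directions

    # Standard DTI design: log S = log S0 - b * (gx^2 Dxx + gy^2 Dyy + gz^2 Dzz
    #                                            + 2 gx gy Dxy + 2 gx gz Dxz + 2 gy gz Dyz)
    gx, gy, gz = g.T
    B = -bval * np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
    X = np.column_stack([np.ones(n_dir), B])        # column for log S0 plus six tensor elements

    # Extended model: append physiological regressors (e.g. cardiac-phase terms)
    cardiac_phase = rng.uniform(0, 2*np.pi, n_dir)
    X_ext = np.column_stack([X, np.sin(cardiac_phase), np.cos(cardiac_phase)])

    # Synthetic log-signal for an isotropic voxel plus cardiac-locked fluctuation and noise
    D_true = np.array([7e-4, 7e-4, 7e-4, 0, 0, 0])
    log_S = np.log(1000.0) + B @ D_true + 0.05*np.sin(cardiac_phase) + rng.normal(0, 0.02, n_dir)

    coef, *_ = np.linalg.lstsq(X_ext, log_S, rcond=None)
    print("estimated mean diffusivity:", round(coef[1:4].mean(), 6), "mm^2/s")
    ```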

  10. Empirical wind retrieval model based on SAR spectrum measurements

    Science.gov (United States)

    Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad

    The present paper considers polarimetric SAR wind vector applications. Remote-sensing measurements of the near-surface wind over the ocean are of great importance for understanding atmosphere-ocean interaction. In recent years, investigations of wind vector retrieval using Synthetic Aperture Radar (SAR) data have been performed. In contrast with scatterometers, a SAR has a finer spatial resolution, which makes it a more suitable microwave instrument for exploring wind conditions in marginal ice zones, coastal regions and lakes. The wind speed retrieval procedure from scatterometer data matches the measured radar backscattering signal with a geophysical model function (GMF). The GMF determines the dependence of the radar cross section on the wind speed and on the wind direction with respect to the azimuthal angle of the radar beam. Scatterometers provide information on wind speed and direction simultaneously because each wind vector cell (WVC) is observed at several azimuth angles. However, SAR is not designed to be used as a high-resolution scatterometer: each WVC is observed at only one single azimuth angle. That is why additional information, such as the orientation of wind streaks over the sea surface, is normally required for wind vector determination. It is shown that the wind vector can be obtained using polarimetric SAR without additional information. The main idea is to analyze the spectrum of a homogeneous SAR image area instead of the backscattering normalized radar cross section. Preliminary numerical simulations revealed that the positions of the SAR image spectral maxima depend on the wind vector. Thus the following method for wind speed retrieval is proposed. In the first stage of the algorithm, the SAR spectrum maxima are determined. This procedure estimates the wind speed and direction with ambiguities separated by 180 degrees due to the symmetry of the SAR spectrum. The second stage of the algorithm allows us to select the correct wind direction
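
    A minimal sketch of the first stage described above (peak finding in the image spectrum) might look as follows; the patch, pixel spacing and DC handling are illustrative assumptions, and the mapping from peak position to wind speed is deliberately omitted.

```python
import numpy as np

def spectrum_peak(patch, pixel_spacing=10.0):
    """Locate the dominant spectral maximum of a homogeneous SAR image patch."""
    patch = patch - patch.mean()
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    kx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1], d=pixel_spacing)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(patch.shape[0], d=pixel_spacing)) * 2 * np.pi
    spec[spec.shape[0] // 2, spec.shape[1] // 2] = 0.0   # suppress residual DC component
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    # direction of the peak wavenumber vector; ambiguous by 180 degrees because the
    # spectrum of a real-valued image is point-symmetric
    direction = np.degrees(np.arctan2(ky[iy], kx[ix])) % 180.0
    return kx[ix], ky[iy], direction
```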

  11. An empirical probability model of detecting species at low densities.

    Science.gov (United States)

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
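
    The detection-curve idea translates directly into a logistic regression of detection outcomes on sampling intensity and target density. The sketch below uses synthetic data (all coefficients and ranges are made up) rather than the authors' field measurements.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic detection data: effort (minutes searched) and target density (per m^2).
rng = np.random.default_rng(0)
n = 400
effort = rng.uniform(1, 60, n)
density = rng.uniform(0.1, 5, n)
logit_p = -3.0 + 0.05 * effort + 0.8 * density        # hypothetical true curve
detected = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([effort, density]))
fit = sm.Logit(detected, X).fit(disp=False)
print(fit.params)                                      # fitted detection-curve coefficients

# Estimated probability of a false negative at 20 minutes of effort and density 0.5:
p_detect = fit.predict([[1.0, 20.0, 0.5]])[0]
print(f"P(false negative) = {1 - p_detect:.2f}")
```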

  12. Semi empirical model for astrophysical nuclear fusion reactions of 1≤Z≤15

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Seenappa, L.; Sridhar, K.N.

    2017-01-01

    The fusion reaction is one of the most important reactions in stellar evolution. Due to the complicated reaction mechanism of fusion, there is great uncertainty in the reaction rate, which limits our understanding of various stellar objects. Low-Z elements are formed through many fusion reactions such as ⁴He+¹²C→¹⁶O, ¹²C+¹²C→²⁰Ne+⁴He, ¹²C+¹²C→²³Na, ¹²C+¹²C→²³Mg, ¹⁶O+¹⁶O→²⁸Si+⁴He, ¹²C+¹H→¹³N and ¹³C+⁴He→¹⁶O. A detailed study of the Coulomb and nuclear interactions involved in the formation of low-Z elements in stars through fusion reactions is therefore required. For astrophysics, the important energy range extends from 1 MeV to 3 MeV in the centre-of-mass frame, which is only partially covered by experiments. In the present work, we have studied the basic fusion parameters such as the barrier heights (V_B), barrier positions (R_B), curvature of the inverted parabola (ħω_1) for the fusion barrier, the cross section and the compound-nucleus formation probability (P_CN), as well as the fusion process in the formation of low-Z elements (1≤Z≤15). For each isotope, we have studied all possible projectile-target combinations. We have also studied the astrophysical S(E) factor for these reactions. Based on this study, we have formulated semi-empirical relations for the barrier heights (V_B), positions (R_B) and curvature of the inverted parabola, and hence for the fusion cross section and astrophysical S(E) factor. The values produced by the present model are compared with the experiments and data available in the literature. (author)
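
    For orientation, the astrophysical S(E) factor mentioned above is conventionally defined through S(E) = E sigma(E) exp(2*pi*eta), with the Sommerfeld parameter eta; the snippet below evaluates this standard definition using the common low-energy approximation 2*pi*eta ~ 31.29 Z1 Z2 sqrt(mu/E[keV]). The cross-section value in the example is invented, and the paper's own semi-empirical relations are not reproduced here.

```python
import numpy as np

def s_factor(E_keV, sigma_barn, Z1, Z2, A1, A2):
    """Standard astrophysical S-factor, S(E) = E * sigma(E) * exp(2*pi*eta) [keV*barn]."""
    mu = A1 * A2 / (A1 + A2)                     # reduced mass in amu
    two_pi_eta = 31.29 * Z1 * Z2 * np.sqrt(mu / E_keV)
    return E_keV * sigma_barn * np.exp(two_pi_eta)

# Example: a 12C + 12C point (the cross-section value is made up for illustration)
print(s_factor(E_keV=2500.0, sigma_barn=1e-8, Z1=6, Z2=6, A1=12, A2=12))
```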

  13. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model)

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    Full Text Available This paper considers tests of parameter instability and structural change with known, unknown or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation, using strong rules for detecting the number of breaks in a time series. For that, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (the ARMA model and the simple linear regression model).
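
    As an illustration of the simplest of these tests, the sketch below implements a textbook Chow test for a single known breakpoint by comparing the pooled residual sum of squares with that of the two sub-samples; it is not tied to the paper's data or to its treatment of unknown or multiple breaks.

```python
import numpy as np

def chow_test(X, y, break_idx):
    """Chow F-statistic for a structural break at a known index."""
    def rss(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        resid = ys - Xs @ beta
        return resid @ resid

    k = X.shape[1]
    n1, n2 = break_idx, len(y) - break_idx
    rss_pooled = rss(X, y)
    rss_split = rss(X[:break_idx], y[:break_idx]) + rss(X[break_idx:], y[break_idx:])
    F = ((rss_pooled - rss_split) / k) / (rss_split / (n1 + n2 - 2 * k))
    return F   # compare with F(k, n1 + n2 - 2k) critical values
```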

  14. Generative probabilistic models extend the scope of inferential structure determination

    DEFF Research Database (Denmark)

    Olsson, Simon; Boomsma, Wouter; Frellsen, Jes

    2011-01-01

    Conventional methods for protein structure determination from NMR data rely on the ad hoc combination of physical forcefields and experimental data, along with heuristic determination of free parameters such as the weight of experimental data relative to a physical forcefield. Recently, a theoretically... We demonstrate that the use of generative probabilistic models instead of physical forcefields in the Bayesian formalism is not only conceptually attractive, but also improves precision and efficiency. Our results open new vistas for the use of sophisticated probabilistic models of biomolecular structure...

  15. Extending MBI Model using ITIL and COBIT Processes

    Directory of Open Access Journals (Sweden)

    Sona Karkoskova

    2015-10-01

    Full Text Available Most organizations today operate in a highly complex and competitive business environment and need to be able to react to rapidly changing market conditions. IT management frameworks are widely used to provide effective support for business objectives through aligning IT with business and optimizing the use of IT resources. In this paper we analyze three IT management frameworks (ITIL, COBIT and MBI) with the objective of identifying the relationships between these frameworks and mapping ITIL and COBIT processes to MBI tasks. As a result of this analysis we propose extensions to the MBI model to incorporate IT Performance Management and a Capability Maturity Model.

  16. Elementary particles, dark matter candidate and new extended standard model

    Science.gov (United States)

    Hwang, Jaekwang

    2017-01-01

    Elementary particle decays and reactions are discussed in terms of the three-dimensional quantized space model beyond the standard model. Three generations of leptons and quarks correspond to the lepton charges. Three heavy leptons and three heavy quarks are introduced, and the bastons (new particles) are proposed as possible candidates for dark matter. The dark matter force, weak force and strong force are explained consistently. Possible rest masses of the new particles are tentatively proposed for experimental searches. For more details, see the conference paper at https://www.researchgate.net/publication/308723916.

  17. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    The use of semi-empirical equations for modeling a proton exchange membrane fuel cell is proposed to provide a tool for the design and analysis of complete fuel cell systems. The focus of this study is to derive an empirical model, including process variations, to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations such as gas pressure, temperature, humidity, and utilization, to cover the operating processes, which are important factors in determining the real performance of a fuel cell. The modelling results compare well with known experimental results; the comparison shows good agreement between the model and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)
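
    A widely used semi-empirical polarization form of the kind referred to above combines open-circuit, activation, ohmic and concentration terms; the sketch below assumes that form (the paper's own equations and coefficients may differ), with all parameter values chosen purely for illustration.

```python
import numpy as np

def cell_voltage(i, E0=1.0, b=0.05, R=2.5e-4, m=3e-5, n=8e-3):
    """Semi-empirical polarization curve V(i) = E0 - b*ln(i) - R*i - m*exp(n*i).

    i: current density [mA/cm^2]; all parameters are hypothetical fit constants.
    """
    return E0 - b * np.log(i) - R * i - m * np.exp(n * i)

i = np.linspace(1, 1000, 200)     # current density sweep [mA/cm^2]
V = cell_voltage(i)               # polarization curve
P = V * i                         # power density curve
```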

  18. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  19. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  20. Searches for Neutral Higgs Bosons in Extended Models

    CERN Document Server

    Abdallah, J; Adam, W; Adzic, P; Albrecht, T; Alderweireld, T; Alemany-Fernandez, R; Allmendinger, T; Allport, P P; Amaldi, Ugo; Amapane, N; Amato, S; Anashkin, E; Andreazza, A; Andringa, S; Anjos, N; Antilogus, P; Apel, W D; Arnoud, Y; Ask, S; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Ballestrero, A; Bambade, P; Barbier, R; Bardin, Dimitri Yuri; Barker, G J; Baroncelli, A; Battaglia, Marco; Baubillier, M; Becks, K H; Begalli, M; Behrmann, A; Ben-Haim, E; Benekos, N C; Benvenuti, Alberto C; Bérat, C; Berggren, M; Berntzon, L; Bertrand, D; Besançon, M; Besson, N; Bloch, D; Blom, M; Bluj, M; Bonesini, M; Boonekamp, M; Booth, P S L; Borisov, G; Botner, O; Bouquet, B; Bowcock, T J V; Boyko, I; Bracko, M; Brenner, R; Brodet, E; Brückman, P; Brunet, J M; Bugge, L; Buschmann, P; Calvi, M; Camporesi, T; Canale, V; Carena, F; Castro, N; Cavallo, F R; Chapkin, M M; Charpentier, P; Checchia, P; Chierici, R; Shlyapnikov, P; Chudoba, J; Chung, S U; Cieslik, K; Collins, P; Contri, R; Cosme, G; Cossutti, F; Costa, M J; Crennell, D J; Cuevas-Maestro, J; D'Hondt, J; Dalmau, J; Da Silva, T; Da Silva, W; Della Ricca, G; De Angelis, A; de Boer, Wim; De Clercq, C; De Lotto, B; De Maria, N; De Min, A; De Paula, L S; Di Ciaccio, L; Di Simone, A; Doroba, K; Drees, J; Dris, M; Eigen, G; Ekelöf, T J C; Ellert, M; Elsing, M; Espirito-Santo, M C; Fanourakis, G K; Fassouliotis, D; Feindt, M; Fernández, J; Ferrer, A; Ferro, F; Flagmeyer, U; Föth, H; Fokitis, E; Fulda-Quenzer, F; Fuster, J A; Gandelman, M; García, C; Gavillet, P; Gazis, E N; Gokieli, R; Golob, B; Gómez-Ceballos, G; Gonçalves, P; Graziani, E; Grosdidier, G; Grzelak, K; Guy, J; Haag, C; Hallgren, A; Hamacher, K; Hamilton, K; Haug, S; Hauler, F; Hedberg, V; Hennecke, M; Herr, H; Hoffman, J; Holmgren, S O; Holt, P J; Houlden, M A; Hultqvist, K; Jackson, J N; Jarlskog, G; Jarry, P; Jeans, D; Johansson, E K; Johansson, P D; Jonsson, P; Joram, C; Jungermann, L; Kapusta, F; Katsanevas, S; Katsoufis, E C; Kernel, G; Kersevan, B P; Kerzel, U; Kiiskinen, A P; King, B T; Kjaer, N J; Kluit, P; Kokkinias, P; Kourkoumelis, C; Kuznetsov, O; Krumshtein, Z; Kucharczyk, M; Lamsa, J; Leder, G; Ledroit, F; Leinonen, L; Leitner, R; Lemonne, J; Lepeltier, V; Lesiak, T; Liebig, W; Liko, D; Lipniacka, A; Lopes, J H; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J; Malek, A; Maltezos, S; Mandl, F; Marco, J; Marco, R; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Martínez-Rivero, C; Masik, J; Mastroyiannopoulos, N; Matorras, F; Matteuzzi, C; Mazzucato, F; Mazzucato, M; McNulty, R; Meroni, C; Migliore, E; Mitaroff, W A; Mjörnmark, U; Moa, T; Moch, M; Mönig, K; Monge, R; Montenegro, J; Moraes, D; Moreno, S; Morettini, P; Müller, U; Münich, K; Mulders, M; Mundim, L; Murray, W; Muryn, B; Myatt, G; Myklebust, T; Nassiakou, M; Navarria, Francesco Luigi; Nawrocki, K; Nicolaidou, R; Nikolenko, M; Oblakowska-Mucha, A; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, R; Österberg, K; Ouraou, A; Oyanguren, A; Paganoni, M; Paiano, S; Palacios, J P; Palka, H; Papadopoulou, T D; Pape, L; Parkes, C; Parodi, F; Parzefall, U; Passeri, A; Passon, O; Peralta, L; Perepelitsa, V F; Perrotta, A; Petrolini, A; Piedra, J; Pieri, L; Pierre, F; Pimenta, M; Piotto, E; Podobnik, T; Poireau, V; Pol, M E; Polok, G; Pozdnyakov, V; Pukhaeva, N; Pullia, A; Rames, J; Read, A; Rebecchi, P; Rehn, J; Reid, D; Reinhardt, R; Renton, P B; Richard, F; Rídky, J; Rivero, M; Rodríguez, D; Romero, A; Ronchese, P; Roudeau, P; Rovelli, T; Ruhlmann-Kleider, V; Ryabtchikov, D; 
Sadovskii, A; Salmi, L; Salt, J; Sander, C; Savoy-Navarro, A; Schwickerath, U; Segar, A; Sekulin, R L; Siebel, M; Sissakian, A N; Smadja, G; Smirnova, O G; Sokolov, A; Sopczak, A; Sosnowski, R; Spassoff, Tz; Stanitzki, M; Stocchi, A; Strauss, J; Stugu, B; Szczekowski, M; Szeptycka, M; Szumlak, T; Tabarelli de Fatis, T; Taffard, A C; Tegenfeldt, F; Timmermans, J; Tkatchev, L G; Tobin, M; Todorovova, S; Tomé, B; Tonazzo, A; Tortosa, P; Travnicek, P; Treille, D; Tristram, G; Trochimczuk, M; Troncon, C; Turluer, M L; Tyapkin, I A; Tyapkin, P; Tzamarias, S; Uvarov, V; Valenti, G; van Dam, P; Van Eldik, J; Van Lysebetten, A; Van Remortel, N; Van Vulpen, I; Vegni, G; Veloso, F; Venus, W; Verdier, P; Verzi, V; Vilanova, D; Vitale, L; Vrba, V; Wahlen, H; Washbrook, A J; Weiser, C; Wicke, D; Wickens, J; Wilkinson, G; Winter, M; Witek, M; Yushchenko, O P; Zalewska-Bak, A; Zalewski, P; Zavrtanik, D; Zhuravlov, V; Zimin, N I; Zintchenko, A; Zupan, M

    2004-01-01

    Searches for neutral Higgs bosons produced at LEP in association with Z bosons, in pairs and in the Yukawa process are presented in this paper. Higgs boson decays into b quarks, tau leptons, or other Higgs bosons are considered, giving rise to four-b, four-b+jets, six-b and four-tau final states, as well as mixed modes with b quarks and tau leptons. The whole mass domain kinematically accessible at LEP in these topologies is searched. The analysed data set covers both the LEP1 and LEP2 energy ranges and exploits most of the luminosity recorded by the DELPHI experiment. No convincing evidence for a signal is found, and results are presented in the form of mass-dependent upper bounds on coupling factors (in units of model-independent reference cross-sections) for all processes, allowing interpretation of the data in a large class of models.

  1. Non-leptonic decays in an extended chiral quark model

    Energy Technology Data Exchange (ETDEWEB)

    Eeg, J. O. [Dept. of Physics, Univ. of Oslo, P.O. Box 1048 Blindern, N-0316 Oslo (Norway)

    2012-10-23

    We consider the color-suppressed (non-factorizable) amplitude for the decay mode B_d⁰ → π⁰π⁰. We treat the b-quark in the heavy-quark limit and the energetic light (u, d, s) quarks within a variant of Large Energy Effective Theory combined with an extension of chiral quark models. Our calculated amplitude for B_d⁰ → π⁰π⁰ is suppressed by a factor of order Λ_QCD/m_b with respect to the factorized amplitude, as it should be according to QCD factorization. Further, for reasonable values of the (model-dependent) gluon condensate and the constituent quark mass, the calculated non-factorizable amplitude for B_d⁰ → π⁰π⁰ can easily accommodate the experimental value. Unfortunately, the color-suppressed amplitude is very sensitive to the values of these model-dependent parameters. Therefore fine-tuning is necessary in order to obtain an amplitude compatible with the experimental result for B_d⁰ → π⁰π⁰.

  2. Extended objects

    International Nuclear Information System (INIS)

    Creutz, M.

    1976-01-01

    After some disconnected comments on the MIT bag and string models for extended hadrons, I review current understanding of extended objects in classical conventional relativistic field theories and their quantum mechanical interpretation

  3. On extended liability in a model of adverse selection

    OpenAIRE

    Dieter Balkenborg

    2004-01-01

    We consider a model where a judgment-proof firm needs finance to realize a project. This project might cause an environmental hazard with a probability that is the private knowledge of the firm. Thus there is asymmetric information with respect to the environmental riskiness of the project. We consider the implications of a simple joint and strict liability rule on the lender and the firm where, in case of a damage, the lender is responsible for that part of the liability which the judgment-p...

  4. On Extending Temporal Models in Timed Influence Networks

    Science.gov (United States)

    2009-06-01

    among variables in a system. A situation where the impact of a variable takes some time to reach the affected variable(s) cannot be modeled by either of... [the record's worked example, listing influence parameters h11-h14 for nodes A1-A4, is garbled in the source]. The posterior probability of B captures the impact of an affecting event on B and can be plotted as a

  5. Extending PSA models including ageing and asset management - 15291

    International Nuclear Information System (INIS)

    Martorell, S.; Marton, I.; Carlos, S.; Sanchez, A.I.

    2015-01-01

    This paper proposes a new approach to Ageing Probabilistic Safety Assessment (APSA) modelling, which is intended to support risk-informed decisions on the effectiveness of maintenance management programs and technical specification requirements of critical equipment of Nuclear Power Plants (NPP), within the framework of Risk-Informed Decision Making according to R.G. 1.174 principles. This approach focuses on explicitly incorporating not only equipment ageing but also the effectiveness of maintenance and the efficiency of surveillance testing into APSA models and data. The methodology is applied to a motor-operated valve of the auxiliary feed water system (AFWS) of a PWR. This simple example of application focuses on a critical piece of safety-related equipment of an NPP in order to evaluate the risk impact of considering different approaches to APSA and the combined effect of equipment ageing and of maintenance and testing alternatives over the NPP design life. The risk impact of several alternative maintenance strategies is discussed

  6. The Action of Chain Extenders in Nylon-6, PET, and Model Compounds

    NARCIS (Netherlands)

    Loontjens, T.; Pauwels, K.; Derks, F.; Neilen, M.; Sham, C.K.; Serné, M.

    1997-01-01

    The action of two complementary chain extenders is studied in model systems as well as in poly(ethylene terephthalate) (PET) and nylon–6. Chain extenders are low molecular weight compounds that can be used to increase the molecular weight of polymers in a short time. The reaction must preferably be

  7. Minimal representations of supersymmetry and 1D N-extended σ-models

    International Nuclear Information System (INIS)

    Toppan, Francesco

    2008-01-01

    We discuss the minimal representations of the 1D N-Extended Supersymmetry algebra (the Z_2-graded symmetry algebra of Supersymmetric Quantum Mechanics) linearly realized on a finite number of fields depending on a real parameter t, the time. Knowledge of these representations allows one to construct one-dimensional sigma-models with extended off-shell supersymmetries without using superfields. (author)

  8. Perceived Convenience in an Extended Technology Acceptance Model: Mobile Technology and English Learning for College Students

    Science.gov (United States)

    Chang, Chi-Cheng; Yan, Chi-Fang; Tseng, Ju-Shih

    2012-01-01

    Since convenience is one of the features of mobile learning, does it affect attitudes towards and intentions of using mobile technology? The technology acceptance model (TAM), proposed by Davis (1989), was extended with perceived convenience in the present study. With regard to English-language mobile learning, the variables in the extended TAM and its…

  9. Modelling mobile health systems: an application of augmented MDA for the extended healthcare enterprise

    NARCIS (Netherlands)

    Jones, Valerie M.; Rensink, Arend; Brinksma, Hendrik

    2005-01-01

    Mobile health systems can extend the enterprise computing system of the healthcare provider by bringing services to the patient any time and anywhere. We propose a model-driven design and development methodology for the development of the m-health components in such extended enterprise computing

  10. A stochastic empirical model for heavy-metal balances in agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,

  11. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...

  12. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  13. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October-January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March-June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March-June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968-99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  14. Understanding users’ motivations to engage in virtual worlds: A multipurpose model and empirical testing

    NARCIS (Netherlands)

    Verhagen, T.; Feldberg, J.F.M.; van den Hooff, B.J.; Meents, S.; Merikivi, J.

    2012-01-01

    Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds

  15. MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes

    Science.gov (United States)

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...

  16. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

    An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ to 2.0 × 10⁶ cubic meters) along the coasts...

  17. An empirical test of stage models of e-government development: evidence from Dutch municipalities

    NARCIS (Netherlands)

    Rooks, G.; Matzat, U.; Sadowski, B.M.

    2017-01-01

    In this article we empirically test stage models of e-government development. We use Lee's classification to make a distinction between four stages of e-government: informational, requests, personal, and e-democracy. We draw on a comprehensive data set on the adoption and development of e-government

  18. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L⁻¹) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  19. Integrating social science into empirical models of coupled human and natural systems

    Science.gov (United States)

    Jeffrey D. Kline; Eric M. White; A Paige Fischer; Michelle M. Steen-Adams; Susan Charnley; Christine S. Olsen; Thomas A. Spies; John D. Bailey

    2017-01-01

    Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social...

  20. Top quark decays with flavor violation in extended models

    International Nuclear Information System (INIS)

    Aranda, J I; Gómez, D E; Ramírez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I

    2016-01-01

    We analyze the top quark decays t → cg and t → cγ mediated by a new neutral gauge boson, identified as Z', in the context of the sequential Z model. We focus our attention on the corresponding branching ratios, which are a function of the Z' boson mass. The study range is taken from 2 TeV to 6 TeV, which is compatible with the resonant region of the dileptonic channel reported by the ATLAS and CMS Collaborations. Finally, our preliminary results tell us that the branching ratios of the t → cg and t → cγ processes can be of the order of 10⁻¹¹ and 10⁻¹³, respectively. (paper)

  1. EXTENDED MODEL OF COMPETITIVITY THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work rests on hypotheses drawn from experts' experience, since the infrastructure for applying the New Approach directives has not been examined until now: it is not known which Serbian products or industries are covered by the New Approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate the existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential increase in profit from fulfilling the requirements of the New Approach directives.

  2. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  3. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performances. Existing models, physical, semi-empirical, or empirical, do not allow for a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that are validated in numerous studies. Never before has a backscattering model been built and validated on such a large dataset as the one used in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involving many SAR sensors. The results show that the new model gives a very good performance for different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.

  4. Topological superconductivity in the extended Kitaev-Heisenberg model

    Science.gov (United States)

    Schmidt, Johann; Scherer, Daniel D.; Black-Schaffer, Annica M.

    2018-01-01

    We study superconducting pairing in the doped Kitaev-Heisenberg model by taking into account the recently proposed symmetric off-diagonal exchange Γ. By performing a mean-field analysis, we classify all possible superconducting phases in terms of symmetry, explicitly taking into account effects of spin-orbit coupling. Solving the resulting gap equations self-consistently, we map out a phase diagram that involves several topologically nontrivial states. For Γ < 0 we find a time-reversal symmetry-breaking chiral phase with Chern number ±1 and a time-reversal symmetric nematic phase that breaks the rotational symmetry of the lattice. On the other hand, for Γ ≥ 0 we find a time-reversal symmetric phase that preserves all the lattice symmetries, thus yielding clearly distinguishable experimental signatures for all superconducting phases. Both of the time-reversal symmetric phases display a transition to a Z2 nontrivial phase at high doping levels. Finally, we also include a symmetry-allowed spin-orbit coupling kinetic energy and show that it destroys a tentative symmetry-protected topological order at lower doping levels. However, it can be used to tune the time-reversal symmetric phases into a Z2 nontrivial phase even at lower doping.

  5. Bipolarons in one-dimensional extended Peierls-Hubbard models

    Science.gov (United States)

    Sous, John; Chakraborty, Monodeep; Krems, Roman; Berciu, Mona

    2017-04-01

    We study two particles in an infinite chain and coupled to phonons by interactions that modulate their hopping as described by the Peierls/Su-Schrieffer-Heeger (SSH) model. In the case of hard-core bare particles, we show that exchange of phonons generates effective nearest-neighbor repulsion between particles and also gives rise to interactions that move the pair as a whole. The two-polaron phase diagram exhibits two sharp transitions, leading to light dimers at strong coupling and the flattening of the dimer dispersion at some critical values of the parameters. This dimer (quasi)self-trapping occurs at coupling strengths where single polarons are mobile. On the other hand, in the case of soft-core particles/spinful fermions, we show that phonon-mediated interactions are attractive and result in strongly bound and mobile bipolarons in a wide region of parameter space. This illustrates that, depending on the strength of the phonon-mediated interactions and the statistics of the bare particles, the coupling to phonons may completely suppress or strongly enhance quantum transport of correlated particles. This work was supported by NSERC of Canada and the Stewart Blusson Quantum Matter Institute.

  6. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  7. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  8. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
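
    The correction principle described above can be sketched in a few lines: estimate the mean of T_ML from data simulated under the fitted model and rescale the observed statistic so that its mean matches the nominal degrees of freedom. This is only the idea, not the authors' exact formulation.

```python
import numpy as np

def empirical_bartlett_correction(t_ml_observed, t_ml_simulated, df):
    """Rescale T_ML so that its empirical mean equals the nominal degrees of freedom.

    t_ml_simulated: T_ML values computed on datasets simulated under the fitted model.
    """
    c = df / np.mean(t_ml_simulated)      # empirical scaling factor
    return c * t_ml_observed              # refer to chi-square with df degrees of freedom
```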

  9. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

    While investigations into both theories and models have remained a major strand of engineering design research, the current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models, and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; current models of design, from a function-behaviour-structure model to an integrated model; important empirical research findings from studies into design; and philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  10. Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples

    Directory of Open Access Journals (Sweden)

    Tanongkankit Yardfon

    2016-01-01

    Full Text Available This research aimed to study the drying kinetics and determine empirical models for fresh pineapple and pineapple pre-treated with sucrose solutions of different concentrations during drying. Samples 3 mm thick were immersed in 30, 40 and 50 Brix sucrose solution before hot-air drying at temperatures of 60, 70 and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time. An increase in sucrose concentration led to a longer drying time. According to the statistical criteria of the highest coefficient of determination (R2), the lowest chi-square (χ2) and the lowest root mean square error (RMSE), the logarithmic model was the best model for describing the drying behaviour of samples soaked in 30, 40 and 50 Brix sucrose solution.
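
    The logarithmic thin-layer model named above is commonly written as MR = a*exp(-k*t) + c; the sketch below fits that form with nonlinear least squares and reports R2, chi-square and RMSE. The drying-curve points are invented for illustration, and the exact parameterization used in the paper is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical drying curve: time [min] vs moisture ratio MR.
t = np.array([0, 30, 60, 120, 180, 240, 300], dtype=float)
mr = np.array([1.0, 0.78, 0.61, 0.40, 0.27, 0.19, 0.14])

def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c

(a, k, c), _ = curve_fit(logarithmic, t, mr, p0=[1.0, 0.01, 0.0])
pred = logarithmic(t, a, k, c)
rmse = np.sqrt(np.mean((mr - pred) ** 2))
chi2 = np.sum((mr - pred) ** 2) / (len(mr) - 3)        # reduced chi-square (3 parameters)
r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
print(f"a={a:.3f}, k={k:.4f} 1/min, c={c:.3f}; R2={r2:.3f}, chi2={chi2:.5f}, RMSE={rmse:.4f}")
```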

  11. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between the bond strength and the reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement-a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, later modified by Coronelli [Coronelli, D. 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in this analytical model has been proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are proposed by considering a wide range of published experimental investigations related to bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models are capable of providing estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with the other reported analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  12. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
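
    For readers unfamiliar with MMA, the weights are typically chosen by minimizing a Mallows-type criterion over the simplex; the sketch below follows that standard recipe, where the candidate-model fitted values, parameter counts and error-variance estimate are assumed inputs and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def mma_weights(y, mu_hat, k, sigma2):
    """Mallows Model Averaging weights.

    y: (n,) observations; mu_hat: (M, n) fitted values of M candidate models;
    k: (M,) parameter counts; sigma2: estimate of the error variance.
    """
    M = mu_hat.shape[0]

    def criterion(w):
        resid = y - w @ mu_hat
        return resid @ resid + 2.0 * sigma2 * (w @ k)   # Mallows-type criterion

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * M
    res = minimize(criterion, np.full(M, 1.0 / M), bounds=bounds, constraints=cons)
    return res.x
```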

  13. Autonomous e-coaching in the wild: Empirical validation of a model-based reasoning system

    OpenAIRE

    Kamphorst, B.A.; Klein, M.C.A.; van Wissen, A.

    2014-01-01

    Autonomous e-coaching systems have the potential to improve people's health behaviors on a large scale. The intelligent behavior change support system eMate exploits a model of the human agent to support individuals in adopting a healthy lifestyle. The system attempts to identify the causes of a person's non-adherence by reasoning over a computational model (COMBI) that is based on established psychological theories of behavior change. The present work presents an extensive, monthlong empiric...

  14. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated by neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative to train an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model
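
    The nonlinear state-space structure behind an RMLP can be sketched compactly: a hidden state updated by a nonlinear function of the previous state and the input, with a read-out layer producing the plant outputs. The toy class below illustrates only that structure; the layer sizes, random weights and the absence of any training loop are deliberate simplifications, not the plant model described above.

```python
import numpy as np

class RecurrentMLP:
    """Toy recurrent state-space model: x_k = tanh(Wx x_{k-1} + Wu u_k), y_k = Wy x_k."""

    def __init__(self, n_in, n_state, n_out, rng=np.random.default_rng(0)):
        self.Wx = rng.normal(scale=0.1, size=(n_state, n_state))
        self.Wu = rng.normal(scale=0.1, size=(n_state, n_in))
        self.Wy = rng.normal(scale=0.1, size=(n_out, n_state))

    def simulate(self, U):
        x = np.zeros(self.Wx.shape[0])
        Y = []
        for u in U:                                   # one step per input sample
            x = np.tanh(self.Wx @ x + self.Wu @ u)    # nonlinear state update
            Y.append(self.Wy @ x)                     # linear read-out
        return np.array(Y)
```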

  15. Improved OCV Model of a Li-Ion NMC Battery for Online SOC Estimation Using the Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ines Baccouche

    2017-05-01

    Full Text Available Accurate modeling of the nonlinear relationship between the open circuit voltage (OCV) and the state of charge (SOC) is required for adaptive SOC estimation during lithium-ion (Li-ion) battery operation. Online SOC estimation should meet several constraints, such as the computational cost, the number of parameters, as well as the accuracy of the model. In this paper, these challenges are considered by proposing an improved, simplified and accurate OCV model of a nickel manganese cobalt (NMC) Li-ion battery, based on an empirical analytical characterization approach. In fact, composed of double exponential and simple quadratic functions containing only five parameters, the proposed model accurately follows the experimental curve with a minor fitting error of 1 mV. The model is also valid over a wide temperature range and takes into account the voltage hysteresis of the OCV. Using this model in SOC estimation by the extended Kalman filter (EKF) contributes to minimizing the execution time and to reducing the SOC estimation error to only 3% compared to other existing models where the estimation error is about 5%. Experiments are also performed to prove that the proposed OCV model incorporated in the EKF estimator exhibits good reliability and precision under various loading profiles and temperatures.
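
    To make the estimation step concrete, the sketch below runs a scalar extended Kalman filter with coulomb counting as the process model and V = OCV(SOC) - R0*i as the measurement model; the quadratic OCV curve and all constants are placeholders rather than the paper's five-parameter fit.

```python
import numpy as np

# Placeholder OCV(SOC) curve (SOC in [0, 1]) and its derivative.
ocv = np.poly1d([-0.7, 1.5, 3.2])
docv = ocv.deriv()

Q_Ah, R0, dt = 2.0, 0.05, 1.0              # capacity [Ah], resistance [ohm], step [s]
Qp, Rm = 1e-7, 1e-3                        # process / measurement noise variances

def ekf_step(soc, P, i_amp, v_meas):
    """One EKF update for SOC given the measured current i_amp [A] and voltage v_meas [V]."""
    # predict (coulomb counting); the state-transition Jacobian is 1
    soc_pred = soc - i_amp * dt / (Q_Ah * 3600.0)
    P_pred = P + Qp
    # update with the voltage measurement
    H = docv(soc_pred)                     # d(V)/d(SOC)
    K = P_pred * H / (H * P_pred * H + Rm)
    v_pred = ocv(soc_pred) - R0 * i_amp
    soc_new = soc_pred + K * (v_meas - v_pred)
    return soc_new, (1 - K * H) * P_pred
```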

  16. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    Science.gov (United States)

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer used only for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting the development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real-world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  17. Empirical Modeling of Lithium-ion Batteries Based on Electrochemical Impedance Spectroscopy Tests

    International Nuclear Information System (INIS)

    Samadani, Ehsan; Farhad, Siamak; Scott, William; Mastali, Mehrdad; Gimenez, Leonardo E.; Fowler, Michael; Fraser, Roydon A.

    2015-01-01

    Highlights: • Two commercial lithium-ion batteries are studied through HPPC and EIS tests. • An equivalent circuit model is developed for a range of operating conditions. • This model improves on current battery empirical models for vehicle applications. • This model is proved to be efficient in predicting HPPC test resistances. Abstract: An empirical model for commercial lithium-ion batteries is developed based on electrochemical impedance spectroscopy (EIS) tests. An equivalent circuit is established according to EIS test observations at various battery states of charge and temperatures. A Laplace transfer time based model is developed based on the circuit which can predict the battery operating output potential difference in battery electric and plug-in hybrid vehicles at various operating conditions. This model demonstrates up to 6% improvement compared to simple resistance and Thevenin models and is suitable for modeling and on-board controller purposes. Results also show that this model can be used to predict the battery internal resistance obtained from hybrid pulse power characterization (HPPC) tests to within 20 percent, making it suitable for low- to medium-fidelity powertrain design purposes. In total, this simple battery model can be employed as a real-time model in electrified vehicle battery management systems

  18. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  19. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  20. Empirical research on decoupling relationship between energy-related carbon emission and economic growth in Guangdong province based on extended Kaya identity.

    Science.gov (United States)

    Wang, Wenxiu; Kuang, Yaoqiu; Huang, Ningsheng; Zhao, Daiqing

    2014-01-01

    The decoupling elasticity decomposition quantitative model of energy-related carbon emission in Guangdong is established based on the extended Kaya identity and Tapio decoupling model for the first time, to explore the decoupling relationship and its internal mechanism between energy-related carbon emission and economic growth in Guangdong. Main results are as follows. (1) Total production energy-related carbon emissions in Guangdong increase from 4128 × 10⁴ tC in 1995 to 14396 × 10⁴ tC in 2011. Decoupling elasticity values of energy-related carbon emission and economic growth increase from 0.53 in 1996 to 0.85 in 2011, and its decoupling state turns from weak decoupling in 1996-2004 to expansive coupling in 2005-2011. (2) Land economic output and energy intensity are the first inhibiting factor and the first promoting factor to energy-related carbon emission decoupling from economic growth, respectively. The development speeds of land urbanization and population urbanization, especially land urbanization, play decisive roles in the change of total decoupling elasticity values. (3) Guangdong can realize decoupling of energy-related carbon emission from economic growth effectively by adjusting the energy mix and industrial structure, coordinating the development speed of land urbanization and population urbanization effectively, and strengthening the construction of carbon sink.
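
    For reference, a Tapio-type decoupling elasticity relates the relative change in emissions to the relative change in GDP; the band boundaries shown (0.8 and 1.2) are the commonly cited Tapio thresholds consistent with the states named in the abstract, stated here as an assumption rather than taken from the paper.

    ```latex
    % Tapio-type decoupling elasticity between carbon emissions C and GDP G
    % (band boundaries are the commonly cited Tapio thresholds, assumed here)
    \[
      e_{t} \;=\; \frac{\Delta C_t / C_{t-1}}{\Delta G_t / G_{t-1}},
      \qquad
      \begin{cases}
        0 \le e_t < 0.8 & \text{weak decoupling},\\[2pt]
        0.8 \le e_t \le 1.2 & \text{expansive coupling}.
      \end{cases}
    \]
    ```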

  1. Prediction of early summer rainfall over South China by a physical-empirical model

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2014-10-01

    In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, defined as the MJ mean precipitation averaged over 72 stations in SC (south of 28°N and east of 110°E), which closely represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. To predict SC rainfall, we establish a physical-empirical model. Analysis of 34 years of observations (1979-2012) reveals three physically consequential predictors. Plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical High and Okhotsk High in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than that of a four-dynamical-model ensemble prediction for the 1979-2010 period (0.15). The results suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.
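
    The cross-validated correlation skill quoted above is typically obtained by leave-one-out prediction; the sketch below shows that procedure with a plain linear regression on three predictors. The data are synthetic placeholders, not the study's predictors.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    # Illustrative only: X would hold the three winter predictors (34 years x 3),
    # y the observed May-June South China rainfall index; values here are random.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((34, 3))
    y = X @ np.array([0.5, 0.3, 0.2]) + 0.5 * rng.standard_normal(34)

    pred = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        pred[test] = model.predict(X[test])

    skill = np.corrcoef(pred, y)[0, 1]   # cross-validated correlation skill
    print(f"cross-validated skill: {skill:.2f}")
    ```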

  2. Application of GIS to Empirical Windthrow Risk Model in Mountain Forested Landscapes

    Directory of Open Access Journals (Sweden)

    Lukas Krejci

    2018-02-01

    Full Text Available Norway spruce dominates mountain forests in Europe. Natural variations in the mountainous coniferous forests are strongly influenced by all the main components of forest and landscape dynamics: species diversity, the structure of forest stands, nutrient cycling, carbon storage, and other ecosystem services. This paper deals with an empirical windthrow risk model based on the integration of logistic regression into GIS to assess forest vulnerability to wind-disturbance in the mountain spruce forests of Šumava National Park (Czech Republic. It is an area where forest management has been the focus of international discussions by conservationists, forest managers, and stakeholders. The authors developed the empirical windthrow risk model, which involves designing an optimized data structure containing dependent and independent variables entering logistic regression. The results from the model, visualized in the form of map outputs, outline the probability of risk to forest stands from wind in the examined territory of the national park. Such an application of the empirical windthrow risk model could be used as a decision support tool for the mountain spruce forests in a study area. Future development of these models could be useful for other protected European mountain forests dominated by Norway spruce.
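
    A minimal sketch of the statistical core of such a model, assuming scikit-learn and synthetic stand/terrain predictors (the study's actual predictor variables and fitted coefficients are not reproduced here): a logistic regression fitted to damaged/undamaged observations yields a per-pixel windthrow probability that can be written back to a GIS raster layer.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative sketch: columns stand in for hypothetical stand/terrain
    # predictors (e.g. stand height, age, slope, exposure) per forest pixel;
    # y indicates whether the stand was wind-damaged. Values are synthetic.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 4))
    y = (X @ np.array([1.2, 0.4, 0.8, -0.3]) + rng.standard_normal(500) > 0).astype(int)

    clf = LogisticRegression().fit(X, y)
    p_windthrow = clf.predict_proba(X)[:, 1]   # per-pixel damage probability
    # In a GIS workflow these probabilities would be rasterized as a map layer.
    ```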

  3. Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure

    International Nuclear Information System (INIS)

    Sorin Zaharia; Cheng, C.Z.

    2002-01-01

    In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇ · (J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
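
    Solving the Poisson-type equation above on a grid reduces to a standard sparse linear solve. The sketch below does this for a synthetic source term with a five-point Laplacian and zero Dirichlet boundaries, purely to illustrate the numerical step; the actual T96-derived source and boundary conditions are not reproduced here.

    ```python
    import numpy as np
    from scipy.sparse import diags, eye, kron
    from scipy.sparse.linalg import spsolve

    # Minimal 2D Poisson solve  laplacian(P) = f  with P = 0 on the boundary,
    # standing in for the equatorial-plane equation grad^2 P = div(J x B).
    # The source term f is a synthetic placeholder, not the T96 field.
    n, h = 64, 1.0 / 65
    main = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    lap2d = kron(eye(n), main) + kron(main, eye(n))          # 5-point Laplacian

    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = np.sin(np.pi * X) * np.sin(np.pi * Y)                # placeholder for div(J x B)

    P = spsolve(lap2d.tocsr(), f.ravel()).reshape(n, n)      # pressure on interior nodes
    ```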

  4. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both the agency cost and the road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model was described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and to identify the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index.

  5. Modeling microbial diversity in anaerobic digestion through an extended ADM1 model.

    Science.gov (United States)

    Ramirez, Ivan; Volcke, Eveline I P; Rajinikanth, Rajagopal; Steyer, Jean-Philippe

    2009-06-01

    The anaerobic digestion process comprises a whole network of sequential and parallel reactions, of both biochemical and physicochemical nature. Mathematical models, aiming at understanding and optimization of the anaerobic digestion process, describe these reactions in a structured way, the IWA Anaerobic Digestion Model No. 1 (ADM1) being the most well established example. While these models distinguish between different microorganisms involved in different reactions, to our knowledge they all neglect species diversity between organisms with the same function, i.e. performing the same reaction. Nevertheless, available experimental evidence suggests that the structure and properties of a microbial community may be influenced by process operation and in turn also determine the reactor functioning. In order to adequately describe these phenomena, mathematical models need to consider the underlying microbial diversity. This is demonstrated in this contribution by extending the ADM1 to describe microbial diversity between organisms of the same functional group. The resulting model has been compared with the traditional ADM1 in describing experimental data of a pilot-scale hybrid Upflow Anaerobic Sludge Filter Bed (UASFB) reactor, as well as in a more detailed simulation study. The presented model is further shown to be useful in assessing the relationship between reactor performance and microbial community structure in mesophilic CSTRs seeded with slaughterhouse wastewater when facing increasing levels of ammonia.

  6. Model-based safety analysis of a control system using Simulink and Simscape extended models

    Directory of Open Access Journals (Sweden)

    Shao Nian

    2017-01-01

    Full Text Available The aircraft or system safety assessment process is an integral part of the overall aircraft development cycle. It usually demands a very high expenditure of time and money and can become a critical design driver in certain cases. Therefore, an increasing demand for effective methods to assist the safety assessment process arises within the aerospace community. One approach is the utilization of model-based technology, which is already well established in system development, for safety assessment purposes. This paper mainly describes a new tool for Model-Based Safety Analysis. A formal model for an example system is generated and enriched with extended models. Then, system safety analyses are performed on the model with the assistance of automation tools and compared to the results of a manual analysis. The objective of this paper is to improve the increasingly complex aircraft systems development process. This paper develops a new model-based analysis tool in the Simulink/Simscape environment.

  7. Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter

    Science.gov (United States)

    Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.

    1993-01-01

    A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
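
    To make the coupling concrete, the sketch below integrates a plunge/pitch structural model together with second-order ODEs for the lift and moment coefficients driven by an effective angle of attack, in the spirit of the semi-empirical approach; all coefficients are invented for illustration and are not the model's identified parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Sketch of the coupling idea only: lift and moment coefficients follow
    # second-order ODEs driven by the effective angle of attack, and feed back
    # into a plunge/pitch structural model. All coefficients are assumed values.
    m, I_a = 10.0, 0.5          # mass, pitch inertia
    k_h, k_a = 4000.0, 300.0    # plunge, pitch stiffness
    q, S, c = 500.0, 1.0, 0.3   # dynamic pressure, area, chord
    a1, a0, b0 = 8.0, 40.0, 40.0 * 2 * np.pi   # aerodynamic lag dynamics (assumed)
    U = 30.0                    # airspeed

    def rhs(t, x):
        h, hd, al, ald, cl, cld, cm, cmd = x
        alpha_eff = al + hd / U                  # quasi-steady effective incidence
        L, M = q * S * cl, q * S * c * cm
        return [hd, (-k_h * h - L) / m,
                ald, (-k_a * al + M) / I_a,
                cld, -a1 * cld - a0 * cl + b0 * alpha_eff,
                cmd, -a1 * cmd - a0 * cm + 0.1 * b0 * alpha_eff]

    sol = solve_ivp(rhs, (0.0, 5.0), [0.01, 0, 0.02, 0, 0, 0, 0, 0], max_step=1e-3)
    # Growing oscillations in sol.y[0] (plunge) or sol.y[2] (pitch) would indicate flutter onset.
    ```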

  8. Using a tag team of undergraduate researchers to construct an empirical model of auroral Poynting flux, from satellite data

    Science.gov (United States)

    Cosgrove, R. B.; Bahcivan, H.; Klein, A.; Ortega, J.; Alhassan, M.; Xu, Y.; Chen, S.; Van Welie, M.; Rehberger, J.; Musielak, S.; Cahill, N.

    2012-12-01

    Empirical models of the incident Poynting flux and particle kinetic energy flux, associated with auroral processes, have been constructed using data from the FAST satellite. The models were constructed over a three-year period by a tag-team of three groups of undergraduate researchers from Worcester Polytechnic Institute (WPI), working under the supervision of researchers at SRI International, a nonprofit research institute. Each group spent one academic quarter in residence at SRI, in fulfillment of WPI's Major Qualifying Project (MQP), required for graduation from the Department of Electrical and Computer Engineering. The MQP requires a written group report, which was used to transition from one group to the next. The students' research involved accessing and processing a data set of 20,000 satellite orbits, replete with flaws associated with instrument failures, which had to be removed. The data had to be transformed from the satellite reference frame into solar coordinates, projected to a reference altitude, sorted according to geophysical conditions, and so on. The group visits were chaperoned by WPI, and were jointly funded. Researchers at SRI were supported by a grant from the National Science Foundation, which was tailored to accommodate the undergraduate tag-team approach. The NSF grant extended one year beyond the student visits, with increased funding in the final year, permitting the researchers at SRI to exercise quality control, and to produce publications. It is expected that the empirical models will be used as inputs to large-scale general circulation models (GCMs), to specify the atmospheric heating rate at high altitudes. (Figure panels: Poynting flux with northward IMF; Poynting flux with southward IMF.)

  9. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2013-01-01

    The stochastic volatility model is one of the volatility models that infer the latent volatility of asset returns. The Bayesian inference of the stochastic volatility (SV) model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov Chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. Then we calculate the accuracy of the volatility measurement using the realized volatility as a proxy of the true volatility and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
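
    For readers unfamiliar with the model class, a standard discrete-time stochastic volatility specification (generic notation; not necessarily the exact parameterization used in the study) is:

    ```latex
    % Canonical discrete-time stochastic volatility model (generic notation)
    \[
      y_t = \sigma_t \,\epsilon_t, \qquad
      \log \sigma_t^2 = \mu + \phi\left(\log \sigma_{t-1}^2 - \mu\right) + \eta_t,
    \]
    \[
      \epsilon_t \sim \mathcal{N}(0,1), \qquad \eta_t \sim \mathcal{N}(0,\sigma_\eta^2).
    \]
    ```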

  10. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    Science.gov (United States)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.

  11. An empirically based model for knowledge management in health care organizations.

    Science.gov (United States)

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of

  12. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process

  13. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  14. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs based on the apparent heat capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt®, coupled to the building simulation code, made it possible to determine an adequate set of parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.

  15. Should the patients colonized with extended-spectrum beta-lactamase-producing Gram-negative bacilli (E-GNB) coming to hospital from the community with pneumonia get anti-E-GNB active empirical treatment?

    Science.gov (United States)

    Peterlin, Lara; Žagar, Mateja; Lejko Zupanc, Tatjana; Paladin, Marija; Beović, Bojana

    2017-10-01

    Extended-spectrum beta-lactamases are responsible for resistance of Gram-negative bacilli to several beta-lactam antibiotics, including those prescribed for the treatment of pneumonia. To evaluate the importance of colonization with E-GNB for the choice of empirical treatment, we performed a retrospective case-control study including 156 patients hospitalized for treatment of pneumonia from 2009 through 2013. Empirical treatment success and in-hospital survival were significantly lower in patients colonized with E-GNB compared to non-colonized patients (p = 0.002, p = 0.035). When comparing subgroups of colonized patients, treatment success was significantly lower in patients who were colonized with E-GNB resistant to the empirical antibiotic (p = 0.010), but not in those colonized by E-GNB susceptible to the empirically given antibiotic (p = 0.104). The difference in in-hospital mortality was insignificant in both subgroups (p = 0.056, p = 0.331). The results of the study suggest that an anti-E-GNB active antibiotic should be used for empirical treatment of pneumonia in E-GNB colonized patients.

  16. Space evolution model and empirical analysis of an urban public transport network

    Science.gov (United States)

    Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing

    2012-07-01

    This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on those data. Public transport patterns primarily depend on the spatial distribution of traffic, the demands of passengers and the expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors based on a given traffic spatial distribution. The temporal change of the urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses of topological and spatial attributes suggest that an evolved network with traffic demands whose magnitudes follow a power law and are spatially distributed in concentric circles tallies well with these nine cities.

  17. A singular evolutive extended Kalman filter to assimilate real in situ data in a 1-D marine ecosystem model

    Directory of Open Access Journals (Sweden)

    I. Hoteit

    2003-01-01

    Full Text Available A singular evolutive extended Kalman (SEEK) filter is used to assimilate real in situ data in a water column marine ecosystem model. The biogeochemistry of the ecosystem is described by the European Regional Sea Ecosystem Model (ERSEM), while the physical forcing is described by the Princeton Ocean Model (POM). In the SEEK filter, the error statistics are parameterized by means of a suitable basis of empirical orthogonal functions (EOFs). The purpose of this contribution is to track the possibility of using data assimilation techniques for state estimation in marine ecosystem models. In the experiments, real oxygen and nitrate data are used and the results evaluated against independent chlorophyll data. These data were collected from an offshore station at three different depths for the needs of the MFSPP project. The assimilation results show a continuous decrease in the estimation error and a clear improvement in the model behavior. Key words. Oceanography: general (ocean prediction; numerical modelling) – Oceanography: biological and chemical (ecosystems and ecology)

  18. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    The Land Evaluation (LE) comprises the evaluation procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From local to regional and national scales, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. According to the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure specifically built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome of this situation is the great difficulty in spatially extrapolating the LE results and the rigidity of the system. Modern techniques instead rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other hand, these modern approaches require high quality and quantity of input data, which causes a significant increase in costs. In this scenario, the LE expert is nowadays asked to choose the best LE methodology considering costs, complexity of the procedure and benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy on the Lodi plain (Po valley). The 9 methods employed ranged from standard LE approaches to

  19. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Vol. 34, No. 9 (2004), pp. 1998-2003 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Institutional research plan: CEZ:AV0Z3042911 Keywords : plasma temperatures * topside ionosphere * empirical models Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.548, year: 2004

  20. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Vol. 17, No. 3 (2008), pp. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  1. Establishment of Grain Farmers' Supply Response Model and Empirical Analysis under Minimum Grain Purchase Price Policy

    OpenAIRE

    Zhang, Shuang

    2012-01-01

    Based on farmers' supply behavior theory and price expectations theory, this paper establishes a grain farmers' supply response model for two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy can have a price-oriented effect on grain production and supply in the major producing areas. Empirical analysis shows that the minimum purchase price published annually by the government has significant positive imp...

  2. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    Science.gov (United States)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

    We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
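
    Schematically, the distribution described in the abstract behaves as a Laplace ("tent-shaped") density in the body and as a power law with exponent ζ = 3 in the tails; the piecewise form below is an editorial paraphrase, not the paper's exact closed-form solution.

    ```latex
    % Schematic form implied by the abstract (not the paper's exact closed form):
    % Laplace body with power-law tails of exponent zeta = 3
    \[
      P(g) \;\sim\;
      \begin{cases}
        \exp\!\left(-\dfrac{\sqrt{2}\,\lvert g-\bar g\rvert}{\sigma}\right), & \lvert g-\bar g\rvert \ \text{small},\\[10pt]
        \lvert g-\bar g\rvert^{-\zeta}, \quad \zeta = 3, & \lvert g-\bar g\rvert \ \text{large}.
      \end{cases}
    \]
    ```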

  3. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...... and hence to reduce the Sharpe Ratio, a lower elasticity of substitution generates a more reasonable level for the equity risk premium and for the volatility of the government bond returns without compromising the ability of the price-dividend ratio to predict excess returns....

  4. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
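
    A minimal sketch of the hybrid idea, assuming the PyEMD package for the decomposition and scikit-learn's SVR for the per-component regressions (both tooling choices are assumptions, and the load series is synthetic): each intrinsic mode function is forecast one step ahead from its own AR-style lags, and the component forecasts are summed.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from PyEMD import EMD   # pip install EMD-signal (assumed dependency)

    def lagged(x, p):
        """Design matrix of p lags (AR-style inputs) and aligned targets."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
        return X, x[p:]

    rng = np.random.default_rng(2)
    t = np.arange(1000)
    load = 100 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 2, t.size)  # synthetic load

    imfs = EMD()(load)                     # rows = decomposed components (IMFs)
    forecast = 0.0
    for imf in imfs:
        X, y = lagged(imf, p=24)
        model = SVR(C=10.0, epsilon=0.1).fit(X, y)
        # one-step-ahead forecast of this component from its last 24 values
        forecast += model.predict(imf[-24:].reshape(1, -1))[0]
    print(f"one-step-ahead load forecast: {forecast:.1f}")
    ```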

  5. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

    The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) to explain individuals' continued use of mobile data services. After reviewing the ECM and technology readiness, an integrated model was demonstrated using empirical data. Compared with the original ECM, the findings of this study show that the integrated model may offer an improved way to clarify which factors influence continuance intention toward mobile services, and how they do so. Finally, the major findings are summarized, and future research directions are suggested.

  6. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data...... established at the Laboratory of Structural Engineering at Aalborg University, the AUC data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by a discrete-space Markov theory, is applicable to constant as well as random loading. It is shown...

  7. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates...... and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported together with cell bandwidth, screen and CPU power consumption. The transmit power level and to some extent the receive data rate constitute the overall power consumption, while DRX proves...

  8. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

    Full Text Available The aim of a price index for road freight transportation (RFT) is to reflect changes in prices in the road transport market. Firstly, a price index model for RFT based on sample data from the Alibaba logistics platform is built. This model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
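
    The Laspeyres calculation at the individual-index level amounts to weighting current prices by fixed base-period quantities; a small sketch with made-up freight-lane figures:

    ```python
    import numpy as np

    def laspeyres_index(p0, pt, q0):
        """Laspeyres price index (base = 100): fixed base-period quantities q0
        weight current prices pt against base prices p0."""
        p0, pt, q0 = map(np.asarray, (p0, pt, q0))
        return 100.0 * np.sum(pt * q0) / np.sum(p0 * q0)

    # Illustrative freight-lane example (prices per tonne-km, base-period volumes)
    base_prices = [0.45, 0.60, 0.38]
    curr_prices = [0.47, 0.58, 0.41]
    base_volumes = [1200, 800, 1500]
    print(f"RFT price index: {laspeyres_index(base_prices, curr_prices, base_volumes):.1f}")
    ```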

  9. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    Science.gov (United States)

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Default risk modeling beyond the first-passage approximation: Extended Black-Cox model

    Science.gov (United States)

    Katz, Yuri A.; Shokhirev, Nikolai V.

    2010-07-01

    We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even if the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with a radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for the derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The obtained closed-form expressions fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of a finite rate of default at the boundary improves the valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads for bonds with very short time to maturity.
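
    For context, the classical first-passage benchmark that the extended model generalizes can be written as follows, where b > 0 is the initial log-distance to the default barrier, μ the drift and σ the volatility of the log-leverage process (notation assumed here for illustration, not taken from the paper):

    ```latex
    % Classical first-passage (Black-Cox-type) default probability: with
    % X_t = b + mu t + sigma W_t and default at the first time X_t hits 0,
    \[
      P_{\mathrm{def}}(t)
      = \Phi\!\left(\frac{-b-\mu t}{\sigma\sqrt{t}}\right)
      + e^{-2\mu b/\sigma^{2}}\,
        \Phi\!\left(\frac{-b+\mu t}{\sigma\sqrt{t}}\right).
    \]
    ```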

  11. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  12. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    Science.gov (United States)

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycle ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.05); an empirical model for hindered settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.

  13. An Extended System Frequency Response Model Considering Wind Power Participation in Frequency Regulation

    Directory of Open Access Journals (Sweden)

    Yi Tang

    2017-11-01

    Full Text Available With increasing penetration of wind power into the power system, wind power participation in frequency regulation is regarded as a beneficial strategy to improve the dynamic frequency response characteristics of power systems. The traditional power system frequency response (SFR) model, which only includes synchronous generators, is no longer suitable for power systems with highly penetrated wind power. An extended SFR model, based on a reduced-order model of the wind turbine generator (WTG) and the traditional SFR model, is presented in this paper. In the extended SFR model, the reduced-order model of the WTG with combined frequency control is deduced by employing small-signal analysis theory. Afterwards, a stability analysis of the closed-loop control system for the extended SFR model is carried out. Time-domain simulations using a test system are performed to validate the effectiveness of the extended SFR model; this model provides a simpler, clearer and faster way to analyze the dynamic frequency response characteristics of power systems with high wind penetration. The impact of additional frequency control parameters and wind speed disturbances on the system dynamic frequency response characteristics is investigated.
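
    A heavily reduced sketch of the system-frequency-response idea, not the paper's extended model: an aggregate swing equation with a droop governor plus a simple droop-like wind support term; all parameter values are assumed.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Aggregate swing equation with a first-order governor/turbine lag and an
    # optional wind-power droop term (set k_wind = 0 to disable wind support).
    H, D, R, Tg = 5.0, 1.0, 0.05, 0.5     # inertia, damping, droop, governor time const.
    k_wind = 10.0                          # wind droop gain (assumed)
    dP_load = 0.1                          # step load increase [p.u.]

    def rhs(t, x):
        df, dPm = x                                    # frequency deviation, mech. power
        dP_wind = -k_wind * df                         # wind frequency support (assumed)
        ddf = (dPm + dP_wind - dP_load - D * df) / (2 * H)
        ddPm = (-df / R - dPm) / Tg                    # governor/turbine first-order lag
        return [ddf, ddPm]

    sol = solve_ivp(rhs, (0, 30), [0.0, 0.0], max_step=0.01)
    print(f"frequency nadir: {sol.y[0].min():.4f} p.u.")
    ```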

  14. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    Energy Technology Data Exchange (ETDEWEB)

    Hegde, Ganesh, E-mail: ghegde@purdue.edu; Povolotskyi, Michael; Kubis, Tillmann; Klimeck, Gerhard, E-mail: gekco@purdue.edu [Network for Computational Nanotechnology (NCN), Department of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States); Boykin, Timothy [Department of Electrical and Computer Engineering, University of Alabama, Huntsville, Alabama (United States)

    2014-03-28

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport for realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  15. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    International Nuclear Information System (INIS)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Klimeck, Gerhard; Boykin, Timothy

    2014-01-01

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport for realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  16. An environment-dependent semi-empirical tight binding model suitable for electron transport in bulk metals, metal alloys, metallic interfaces, and metallic nanostructures. I. Model and validation

    Science.gov (United States)

    Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard

    2014-03-01

    Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport for realistically extended nano-scaled semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.

  17. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wide project for the determination of a methodology for the determination of the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: the empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKBs rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass such as the deformation modulus, the friction angle and cohesion for a certain stress interval and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR) but also the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved by the

  18. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wide project for the determination of a methodology for the determination of the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: the empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All Project's parts were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This Report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKBs rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass such as the deformation modulus, the friction angle and cohesion for a certain stress interval and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR) but also the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data, ii) identifying homogeneous rock volumes, iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved

  19. Modeling the acceptance of clinical information systems among hospital medical staff: an extended TAM model.

    Science.gov (United States)

    Melas, Christos D; Zampetakis, Leonidas A; Dimopoulou, Anastasia; Moustakis, Vassilis

    2011-08-01

    Recent empirical research has utilized the Technology Acceptance Model (TAM) to advance the understanding of doctors' and nurses' technology acceptance in the workplace. However, the majority of the reported studies are either qualitative in nature or use small convenience samples of medical staff. Additionally, in very few studies moderators are either used or assessed despite their importance in TAM based research. The present study focuses on the application of TAM in order to explain the intention to use clinical information systems, in a random sample of 604 medical staff (534 physicians) working in 14 hospitals in Greece. We introduce physicians' specialty as a moderator in TAM and test medical staff's information and communication technology (ICT) knowledge and ICT feature demands, as external variables. The results show that TAM predicts a substantial proportion of the intention to use clinical information systems. Findings make a contribution to the literature by replicating, explaining and advancing the TAM, whereas theory is benefited by the addition of external variables and medical specialty as a moderator. Recommendations for further research are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...

  1. Inference and testing on the boundary in extended constant conditional correlation GARCH models

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    2017-01-01

    We consider inference and testing in extended constant conditional correlation GARCH models in the case where the true parameter vector is a boundary point of the parameter space. This is of particular importance when testing for volatility spillovers in the model. The large-sample properties...

  2. 3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups

    Science.gov (United States)

    Scalfani, Vincent F.; Vaid, Thomas P.

    2014-01-01

    Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…

  3. 2D Modeling and Classification of Extended Objects in a Network of HRR Radars

    NARCIS (Netherlands)

    Fasoula, A.

    2011-01-01

    In this thesis, the modeling of extended objects with low-dimensional representations of their 2D geometry is addressed. The ultimate objective is the classification of the objects using libraries of such compact 2D object models that are much smaller than in the state-of-the-art classification

  4. Determining the inventory impact of extended-shelf-life platelets with a network simulation model.

    Science.gov (United States)

    Blake, John T

    2017-12-01

    The regulatory shelf life for platelets (PLTs) in many jurisdictions is 5 days. PLT shelf life can be extended to 7 days with an enhanced bacterial detection algorithm. Enhanced testing, however, comes at a cost, which may be offset by reductions in wastage due to longer shelf life. This article describes a method for estimating systemwide reductions in PLT outdates after PLT shelf life is extended. A simulation was used to evaluate the impact of an extended PLT shelf life within a national blood network. A network model of the Canadian Blood Services PLT supply chain was built and validated. PLT shelf life was extended from 5 days to 6, 7, and 8 days and runs were completed to determine the impact on outdates. Results suggest that, in general, a 16.3% reduction in PLT wastage can be expected with each additional day that PLT shelf life is extended. Both suppliers and hospitals will experience fewer outdated units, but wastage will decrease at a faster rate at hospitals. No effect was seen by blood group, but there was some evidence that supplier site characteristics influence both the number of units wasted and the site's ability to benefit from extended-shelf-life PLTs. Extended-shelf-life PLTs will reduce wastage within a blood supply chain. At 7 days, a 38% reduction in wastage can be expected, with outdates being equally distributed between suppliers and hospital customers. © 2017 AABB.

  5. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone

    International Nuclear Information System (INIS)

    Lee, Kang ll; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

    The Biot and the modified Biot-Attenborough (MBA) models have been found useful to understand ultrasonic wave propagation in cancellous bone. However, neither of the models, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone, by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical way of using the parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of phase velocity of the fast wave with direction. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s⁻¹ (angle-dependent Biot model) and 36.1 m s⁻¹ (angle-dependent MBA model). They also predicted that the slow wave is nearly independent of propagation angle for angles up to about 50°, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s⁻¹ (angle-dependent Biot model) and 240.8 m s⁻¹ (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone.

  6. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    Double-skin facade (DSF) buildings are being built as an attractive, innovative and energy efficient solution. Nowadays, several design tools are used for assessment of thermal and energy performance of DSF buildings. Existing design tools are well-suited for performance assessment of conventional buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF, performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114, TRNSYS-TUD and BSim), was carried out in the framework of IEA SHC Task 34 / ECBCS Annex 43 "Testing and Validation of Building Energy Simulation Tools". The experimental data for the validation was gathered in a full-scale outdoor test facility. The empirical data sets comprise the key-functioning modes...

  7. Integrating social science into empirical models of coupled human and natural systems

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Kline

    2017-09-01

    Full Text Available Coupled human and natural systems (CHANS) research highlights reciprocal interactions (or feedbacks) between biophysical and socioeconomic variables to explain system dynamics and resilience. Empirical models often are used to test hypotheses and apply theory that represent human behavior. Parameterizing reciprocal interactions presents two challenges for social scientists: (1) how to represent human behavior as influenced by biophysical factors and integrate this into CHANS empirical models; (2) how to organize and function as a multidisciplinary social science team to accomplish that task. We reflect on these challenges regarding our CHANS research that investigated human adaptation to fire-prone landscapes. Our project sought to characterize the forest management activities of land managers and landowners (or "actors") and their influence on wildfire behavior and landscape outcomes by focusing on biophysical and socioeconomic feedbacks in central Oregon (USA). We used an agent-based model (ABM) to compile biophysical and social information pertaining to actor behavior, and to project future landscape conditions under alternative management scenarios. Project social scientists were tasked with identifying actors' forest management activities and biophysical and socioeconomic factors that influence them, and with developing decision rules for incorporation into the ABM to represent actor behavior. We (1) briefly summarize what we learned about actor behavior on this fire-prone landscape and how we represented it in an ABM, and (2) more significantly, report our observations about how we organized and functioned as a diverse team of social scientists to fulfill these CHANS research tasks. We highlight several challenges we experienced, involving quantitative versus qualitative data and methods, distilling complex behavior into empirical models, varying sensitivity of biophysical models to social factors, synchronization of research tasks, and the need to

  8. Fermion Masses and Mixing in SUSY Grand Unified Gauge Models with Extended Gut Gauge Groups

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Chih-Lung

    2005-04-05

    The authors discuss a class of supersymmetric (SUSY) grand unified gauge (GUT) models based on the GUT symmetry G x G or G x G x G, where G denotes the GUT group that has the Standard Model symmetry (SU(3)_c x SU(2)_L x U(1)_Y) embedded as a subgroup. As motivated from string theory, these models are constructed without introducing any Higgs field of rank two or higher. Thus all the Higgs fields are in the fundamental representations of the extended GUT symmetry or, when G = SO(10), in the spinorial representation. These Higgs fields, when acquiring their vacuum expectation values, would break the extended GUT symmetry down to the Standard Model symmetry. In this dissertation, they argue that the features required of unified models, such as the Higgs doublet-triplet splitting, proton stability, and the hierarchy of fermion masses and mixing angles, could have natural explanations in the framework of the extended SUSY GUTs. Furthermore, they argue that the frameworks used previously to construct SO(10) GUT models using adjoint Higgs fields can naturally arise from the SO(10) x SO(10) and SO(10) x SO(10) x SO(10) models by integrating out heavy fermions. This observation thus suggests that the traditional SUSY GUT SO(10) theories can be viewed as the low energy effective theories generated by breaking the extended GUT symmetry down to the SO(10) symmetry.

  9. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity of the system dynamics, the model parameters are first obtained based on experimental data. In the study, the dynamic characteristics of indoor temperature and superheat are assumed to follow a first-order model with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat exhibit mutually interfering characteristics in the VSRS. Thus, a decoupling model has been proposed for each channel to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available
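
    As an illustration of the kind of structure described above, the sketch below simulates a 2x2 first-order-plus-dead-time process (indoor temperature and superheat responding to compressor frequency and expansion-valve opening) and shows how a static decoupler removes the steady-state interference between the two loops. All gains, time constants and dead times are made-up placeholders, not the parameters identified in the paper.

    ```python
    import numpy as np

    # Illustrative 2x2 FOPDT process: rows = outputs (temperature, superheat),
    # columns = inputs (compressor frequency, EEV opening). Values are assumed.
    DT = 1.0                                     # sample time (s)
    K   = np.array([[-0.8, 0.3],                 # steady-state gain matrix
                    [ 0.4, -1.2]])
    TAU = np.array([[120., 150.],                # time constants (s)
                    [ 90., 100.]])
    DEL = np.array([[20, 30],                    # dead times (samples)
                    [15, 10]])

    def simulate(u_seq):
        """Simulate each first-order channel and superpose the four contributions."""
        n = len(u_seq)
        x = np.zeros((2, 2))                     # one state per (output, input) channel
        y = np.zeros((n, 2))
        for k in range(n):
            for i in range(2):
                for j in range(2):
                    kd = k - DEL[i, j]
                    u = u_seq[kd][j] if kd >= 0 else 0.0
                    x[i, j] += DT / TAU[i, j] * (K[i, j] * u - x[i, j])
                y[k, i] = x[i].sum()
        return y

    # Static decoupler: pre-multiply the requested move by inv(K) so that, at
    # steady state, each manipulated variable affects only "its" output.
    decoupler = np.linalg.inv(K)
    target = np.array([1.0, 0.0])                # want to move temperature only
    u_plain = [target] * 1200
    u_dec   = [decoupler @ target] * 1200
    print("without decoupler, final outputs:", np.round(simulate(u_plain)[-1], 3))
    print("with static decoupler, final:   ", np.round(simulate(u_dec)[-1], 3))
    ```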

  10. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  11. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  12. Antecedents of employee electricity saving behavior in organizations: An empirical study based on norm activation model

    International Nuclear Information System (INIS)

    Zhang, Yixiang; Wang, Zhaohua; Zhou, Guanghui

    2013-01-01

    China is one of the major energy-consuming countries, and is under great pressure to promote energy saving and reduce domestic energy consumption. Employees constitute an important target group for energy saving. However, little research effort has been devoted to studying what drives employee energy saving behavior in organizations. To fill this gap, drawing on the norm activation model (NAM), we built a research model to study antecedents of employee electricity saving behavior in organizations. The model was empirically tested using survey data collected from office workers in Beijing, China. Results show that personal norm positively influences employee electricity saving behavior. Organizational electricity saving climate negatively moderates the effect of personal norm on electricity saving behavior. Awareness of consequences, ascription of responsibility, and organizational electricity saving climate positively influence personal norm. Furthermore, awareness of consequences positively influences ascription of responsibility. This paper contributes to the energy saving behavior literature by building a theoretical model of employee electricity saving behavior, which is understudied in the current literature. Based on the empirical results, implications on how to promote employee electricity saving are discussed. - Highlights: • We studied employee electricity saving behavior based on the norm activation model. • The model was tested using survey data collected from office workers in China. • Personal norm positively influences employees' electricity saving behavior. • Electricity saving climate negatively moderates personal norm's effect. • This research enhances our understanding of employee electricity saving behavior

  13. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected return models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, the 3-factor and the 4-factor models using a predictive methodology, considering two steps – time-series and cross-section regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model compared to the 3-factor model, and the superiority of the 3-factor model compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital market, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly because of the originality of the methodology in the local market and the fact that this subject is still incipient and controversial in the Brazilian academic environment.
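
    A minimal sketch of the two-step (Fama-MacBeth) predictive methodology mentioned above, run on synthetic data: first-pass time-series regressions estimate factor loadings, second-pass monthly cross-section regressions estimate risk premia, and the standard errors come from the time-series variation of the cross-sectional coefficients. The factor count, sample sizes and data are illustrative, not the Brazilian dataset of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic panel: T months, N stocks, K factors (illustrative data only).
    T, N, K = 120, 50, 3
    factors = rng.normal(size=(T, K))             # factor realizations (e.g. MKT, SMB, HML)
    betas_true = rng.normal(1.0, 0.5, size=(N, K))
    returns = factors @ betas_true.T + rng.normal(0.0, 0.05, size=(T, N))

    # Step 1: time-series regressions, one per stock, to estimate factor loadings.
    X_ts = np.column_stack([np.ones(T), factors])
    betas_hat = np.linalg.lstsq(X_ts, returns, rcond=None)[0][1:].T   # (N, K)

    # Step 2: cross-sectional regression of returns on estimated betas, month by month.
    X_cs = np.column_stack([np.ones(N), betas_hat])
    lambdas = np.array([np.linalg.lstsq(X_cs, returns[t], rcond=None)[0] for t in range(T)])

    # Fama-MacBeth estimates: time-series mean of the cross-sectional coefficients,
    # with standard errors from the time-series variation of those coefficients.
    lambda_mean = lambdas.mean(axis=0)
    lambda_se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
    print("risk premia:", np.round(lambda_mean, 4))
    print("t-stats:   ", np.round(lambda_mean / lambda_se, 2))
    ```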

  14. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    Science.gov (United States)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  15. EMPIRICAL RECONSTRUCTION AND NUMERICAL MODELING OF THE FIRST GEOEFFECTIVE CORONAL MASS EJECTION OF SOLAR CYCLE 24

    International Nuclear Information System (INIS)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-01-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80 deg. from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10 deg. (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  16. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

    Full Text Available The model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model; our method is to build a truncated model of the infinite one. To obtain a truncated model sufficient for model checking system properties expressed in Continuous Stochastic Logic, we propose a multistep extending advanced truncation method for model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.

  17. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocities back to the corona. Using simple mass flux conservation, we show a very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations which arise from these empirical observational models.
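
    The core of such an extrapolation is spherical mass-flux conservation, n(r) v(r) r^2 = const. The sketch below applies it with made-up observation values and an assumed coronal speed; it illustrates only the bookkeeping, not the calibrated HELIOS/Ulysses extrapolation of the abstract.

    ```python
    import numpy as np

    # Spherical mass-flux conservation: n(r) * v(r) * r^2 = constant.
    # Given proton density and bulk speed measured at radius r_obs, and an assumed
    # speed at a coronal radius, estimate the coronal density. All numbers below
    # are illustrative placeholders, not mission data.

    R_SUN_AU = 1.0 / 215.0          # one solar radius in AU (approximate)

    def density_at(r_target_au, n_obs_cm3, v_obs_kms, r_obs_au, v_target_kms):
        """Extrapolate proton density using n*v*r^2 = const."""
        flux_const = n_obs_cm3 * v_obs_kms * r_obs_au**2
        return flux_const / (v_target_kms * r_target_au**2)

    # Example: slow wind observed at 1 AU, extrapolated to 5 solar radii,
    # assuming the wind has only reached about a third of its asymptotic speed there.
    n_corona = density_at(r_target_au=5 * R_SUN_AU,
                          n_obs_cm3=7.0, v_obs_kms=400.0, r_obs_au=1.0,
                          v_target_kms=130.0)
    print(f"estimated proton density at 5 Rs: {n_corona:.3e} cm^-3")
    ```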

  18. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...

  19. Semi-empirical modelization of charge funneling in a NP diode

    International Nuclear Information System (INIS)

    Musseau, O.

    1991-01-01

    Heavy ion interaction with a semiconductor generates a high density of electron-hole pairs along the trajectory, and in a space charge zone the collected charge is considerably increased. The chronology of this charge funneling is described in a semi-empirical model. From initial conditions characterizing the incident ion and the studied structure, it is possible to directly evaluate the transient current, the collected charge and the funneling length, with good agreement. The model can be extrapolated to more complex structures

  20. The renormalizability and the asymptotically free behaviour of the extended Wess-Zumino models

    International Nuclear Information System (INIS)

    Ha Huy Bang; Hoang Ngoc Long.

    1989-09-01

    By using the path integral method for superfields, the Ward identities and the Callan-Symanzik equations for the extended Wess-Zumino models are derived. From these, the renormalizability and the asymptotic behaviour of all the extended Wess-Zumino models in d = 2,4 (mod 8)-dimensional space-time are studied. In particular, we come to the conclusion that the supersymmetric Ward identities together with the broken chiral Ward identities imply that a single wave function renormalization is sufficient to renormalize the theory and that the theory is not asymptotically free. (author). 16 refs

  1. One-dimensional extended Bose-Hubbard model with a confining potential: a DMRG analysis

    Energy Technology Data Exchange (ETDEWEB)

    Urba, Laura; Lundh, Emil; Rosengren, Anders [Condensed Matter Theory, Department of Theoretical Physics, KTH, AlbaNova University Center, SE-106 91 Stockholm (Sweden)

    2006-12-28

    The extended Bose-Hubbard model in a quadratic trap potential is studied using a finite-size density-matrix renormalization group method (DMRG). We compute the boson density profiles, the local compressibility and the hopping correlation functions. We observe the phase separation induced by the trap in all the quantities studied and conclude that the local density approximation is valid in the extended Bose-Hubbard model. From the plateaus obtained in the local compressibility it was possible to obtain the phase diagram of the homogeneous system which is in agreement with previous results.

  2. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Background: A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
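
    The convergence criterion mentioned above, the Gelman-Rubin potential scale reduction factor, is easy to state in code. The sketch below computes it for one scalar quantity from several independent chains; the demonstration chains are synthetic and stand in for real MCMC output.

    ```python
    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor (R-hat) for one scalar quantity.

        chains: array of shape (m, n) holding m independent MCMC chains of length n.
        Values close to 1 indicate approximate convergence.
        """
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        B = n * chain_means.var(ddof=1)              # between-chain variance
        W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
        var_plus = (n - 1) / n * W + B / n           # pooled variance estimate
        return np.sqrt(var_plus / W)

    # Illustrative check on four well-mixed chains sampling the same target.
    rng = np.random.default_rng(1)
    demo = rng.normal(size=(4, 2000))
    print("R-hat:", round(gelman_rubin(demo), 4))    # should be close to 1.0
    ```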

  3. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using the analysis of variance. The validity of the developed empirical model is demonstrated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
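
    A minimal sketch of the response-surface idea described above: fit a second-order polynomial model of power consumption to a few hypothetical turning experiments and search the factor space for the settings that minimize the predicted power. The grid search stands in for the desirability-function optimization, and all data and ranges are placeholders rather than the paper's experiments.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Hypothetical turning experiments: cutting speed (m/min), feed (mm/rev),
    # depth of cut (mm); response is measured power (kW).
    X = np.array([[90, 0.10, 0.5], [90, 0.10, 1.0], [90, 0.20, 0.5], [90, 0.20, 1.0],
                  [150, 0.10, 0.5], [150, 0.10, 1.0], [150, 0.20, 0.5], [150, 0.20, 1.0],
                  [90, 0.15, 0.75], [150, 0.15, 0.75], [120, 0.10, 0.75], [120, 0.20, 0.75],
                  [120, 0.15, 0.5], [120, 0.15, 1.0], [120, 0.15, 0.75]])
    y = np.array([1.17, 1.30, 1.42, 1.55, 1.41, 1.56, 1.64, 1.82,
                  1.35, 1.61, 1.36, 1.60, 1.41, 1.56, 1.47])

    # Second-order response surface: linear, interaction and squared terms.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)

    # Coarse grid search for the minimum predicted power within the factor ranges.
    grid = np.array([[v, f, d] for v in np.linspace(90, 150, 13)
                                for f in np.linspace(0.10, 0.20, 11)
                                for d in np.linspace(0.5, 1.0, 11)])
    pred = model.predict(poly.transform(grid))
    best = grid[np.argmin(pred)]
    print("predicted minimum power %.2f kW at v=%.0f, f=%.2f, d=%.2f"
          % (pred.min(), *best))
    ```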

  4. An Empirical Based Proposal for Mass Customization Business Model in Footwear Industry

    OpenAIRE

    Pourabdollahian , Golboo; Corti , Donatella; Galbusera , Chiara; Silva , Julio ,

    2012-01-01

    Part 2: Design, Manufacturing and Production Management; International audience; This research aims at developing a business model for companies in the footwear industry interested in implementing Mass Customization, with the goal of offering to the market products which perfectly match customers' needs. Studies on mass customization have so far mostly focused on product development and production system aspects. This study extends the business modelling to also include Supply Chain aspects...

  5. Modeling of VSC-Based Power Systems in The Extended Harmonic Domain

    DEFF Research Database (Denmark)

    Esparza, Miguel; Segundo-Ramirez, Juan; Kwon, Jun Bum

    2017-01-01

    Averaged modeling is a commonly used approach to obtain mathematical representations of VSC-based systems. However, essential characteristics, mainly related to the modulation process and the harmonic distortion of the signals, cannot be accurately captured and analyzed. The extended ...... on simulations and experimental case studies. The obtained results show that the resulting EHD models are accurate and reliable, while the memory and computation time are improved with the proposed model order reductions....

  6. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

    Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (Kd) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
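
    A sketch of how such empirical partitioning functions can be used in practice: fit an assumed log-linear form for Kd against salinity and SPM, then derive the dissolved fraction of the total metal. The functional form, the observation values and the fitted coefficients are all illustrative; they are not the relations fitted for the Scheldt.

    ```python
    import numpy as np

    # Hypothetical observations (not Scheldt data): salinity (PSU), suspended
    # particulate matter SPM (mg/L), and measured partition coefficient Kd (L/kg).
    salinity = np.array([0.5, 2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
    spm      = np.array([150., 120., 90., 60., 45., 30., 20., 15.])
    kd_obs   = np.array([2.0e5, 1.6e5, 1.2e5, 9.0e4, 7.5e4, 6.0e4, 5.5e4, 5.0e4])

    # Assumed empirical form: log10(Kd) = a + b*log10(SPM) + c*salinity,
    # fitted by ordinary least squares.
    A = np.column_stack([np.ones_like(spm), np.log10(spm), salinity])
    coeffs, *_ = np.linalg.lstsq(A, np.log10(kd_obs), rcond=None)

    def kd_model(salinity_psu, spm_mg_l):
        a, b, c = coeffs
        return 10.0 ** (a + b * np.log10(spm_mg_l) + c * salinity_psu)

    def dissolved_fraction(salinity_psu, spm_mg_l):
        """Fraction of the total metal in the dissolved phase for given conditions."""
        kd = kd_model(salinity_psu, spm_mg_l)          # L/kg
        spm_kg_l = spm_mg_l * 1e-6                     # mg/L -> kg/L
        return 1.0 / (1.0 + kd * spm_kg_l)

    print("dissolved fraction at S=12, SPM=50 mg/L:",
          round(dissolved_fraction(12.0, 50.0), 3))
    ```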

  7. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on the Web portals where the user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's action. The simulations are performed for the case of constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, that are comparable with the ones in the empirical system of popular posts. In view of purely emotion-driven agents' actions, this type of comparison provides a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative

  8. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  9. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. Screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of change in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  10. An Empirical Model and Ethnic Differences in Cultural Meanings Via Motives for Suicide.

    Science.gov (United States)

    Chu, Joyce; Khoury, Oula; Ma, Johnson; Bahn, Francesca; Bongar, Bruce; Goldblum, Peter

    2017-10-01

    The importance of cultural meanings via motives for suicide - what is considered acceptable to motivate suicide - has been advocated as a key step in understanding and preventing development of suicidal behaviors. There have been limited systematic empirical attempts to establish different cultural motives ascribed to suicide across ethnic groups. We used a mixed methods approach and grounded theory methodology to guide the analysis of qualitative data querying for meanings via motives for suicide among 232 Caucasians, Asian Americans, and Latino/a Americans with a history of suicide attempts, ideation, intent, or plan. We used subsequent logistic regression analyses to examine ethnic differences in suicide motive themes. This inductive approach of generating theory from data yielded an empirical model of 6 cultural meanings via motives for suicide themes: intrapersonal perceptions, intrapersonal emotions, intrapersonal behavior, interpersonal, mental health/medical, and external environment. Logistic regressions showed ethnic differences in intrapersonal perceptions (low endorsement by Latino/a Americans) and external environment (high endorsement by Latino/a Americans) categories. Results advance suicide research and practice by establishing 6 empirically based cultural motives for suicide themes that may represent a key intermediary step in the pathway toward suicidal behaviors. Clinicians can use these suicide meanings via motives to guide their assessment and determination of suicide risk. Emphasis on environmental stressors rather than negative perceptions like hopelessness should be considered with Latino/a clients. © 2017 Wiley Periodicals, Inc.

  11. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at seed shattering and seed shattered maturity stage. Therefore, drying switchgrass in wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during seed development stage whereas, vapor pressure deficit was weakly correlated during seed shattering and seed shattered stage. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on the forecasted weather conditions so that the appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on maturity stage of switchgrass. • Designed environmental chamber can be used to evaluate the effect of other parameters that affect drying of crops
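
    As a toy illustration of how such an empirical drying model can be used operationally, the sketch below assumes a classical thin-layer form MR = exp(-k t), with the drying constant k written as a linear function of the weather variables and chosen per maturity stage. The functional form and every coefficient are placeholders, not the regression results of the study.

    ```python
    import numpy as np

    # Thin-layer drying: MR = exp(-k * t), where MR is the moisture ratio and
    # k (1/h) is a drying constant. In the spirit of the abstract, k is expressed
    # as an empirical function of weather variables, with a separate coefficient
    # set per maturity stage; all coefficients below are assumed placeholders.

    def drying_constant(solar_rad_w_m2, vpd_kpa, wind_m_s, maturity="seed_shattered"):
        coeff = {
            "seed_development": (0.010, 0.0002, 0.060, 0.010),
            "seed_shattered":   (0.015, 0.0004, 0.020, 0.030),
        }[maturity]
        a, b, c, d = coeff
        return a + b * solar_rad_w_m2 + c * vpd_kpa + d * wind_m_s

    def hours_to_target(m0, m_target, m_eq, k):
        """Time to dry from moisture m0 to m_target, given equilibrium moisture m_eq."""
        mr = (m_target - m_eq) / (m0 - m_eq)
        return -np.log(mr) / k

    k = drying_constant(solar_rad_w_m2=600.0, vpd_kpa=1.5, wind_m_s=2.0)
    print("k = %.3f 1/h, time to reach 20%% moisture: %.1f h"
          % (k, hours_to_target(m0=0.65, m_target=0.20, m_eq=0.10, k=k)))
    ```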

  12. Technology, Demographic Characteristics and E-Learning Acceptance: A Conceptual Model Based on Extended Technology Acceptance Model

    Science.gov (United States)

    Tarhini, Ali; Elyas, Tariq; Akour, Mohammad Ali; Al-Salti, Zahran

    2016-01-01

    The main aim of this paper is to develop an amalgamated conceptual model of technology acceptance that explains how individual, social, cultural and organizational factors affect the students' acceptance and usage behaviour of the Web-based learning systems. More specifically, the proposed model extends the Technology Acceptance Model (TAM) to…

  13. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we choose a suitable indoor-outdoor propagation model from the existing models by considering path loss and distance as parameters. The path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC) that receives beacon messages from the indoor nodes. As per our analysis, the International Telecommunication Union (ITU) model, the Stanford University Interim (SUI) model, the COST-231 Hata model, the Green-Obaidat model, the Free Space model, the Log-Distance Path Loss model and the Electronic Communication Committee 33 (ECC-33) model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
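
    The selection criterion described above (smallest deviation between model predictions and measurements) can be sketched in a few lines. Only two simple candidate models are shown (free-space and log-distance), and the measured path-loss samples are invented placeholders rather than the paper's dataset.

    ```python
    import numpy as np

    # Compare candidate path-loss models against empirical samples by the bias and
    # standard deviation of their prediction errors; illustrative numbers only.
    f_mhz = 2400.0
    d_m   = np.array([5., 10., 20., 40., 80.])
    pl_meas = np.array([62., 72., 83., 95., 108.])     # "measured" path loss (dB)

    def free_space(d, f=f_mhz):
        # Free-space path loss with d in metres and f in MHz.
        return 20*np.log10(d) + 20*np.log10(f) - 27.55

    def log_distance(d, n=3.2, d0=1.0, pl0=48.0):
        # Log-distance model with an assumed exponent and reference loss.
        return pl0 + 10*n*np.log10(d/d0)

    models = {"free space": free_space, "log-distance (n=3.2)": log_distance}
    for name, fn in models.items():
        err = fn(d_m) - pl_meas
        print(f"{name:>22s}: bias {err.mean():6.2f} dB, std {err.std(ddof=1):5.2f} dB")
    ```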

  14. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators.

    Science.gov (United States)

    Dosdall, Derek J; Sweeney, James D

    2008-08-01

    Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters.
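
    A minimal sketch of the extension described above, under stated assumptions: each small myocardial section is an RC circuit driven by the shock waveform scaled by a per-section field-gradient weight, and the shock is judged by a critical-mass criterion on the captured sections. The time constant, weights, threshold and waveform are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    TAU_M = 3e-3                 # assumed membrane time constant (s)
    DT = 1e-5                    # integration step (s)

    def membrane_response(weights, waveform_v, dt=DT, tau=TAU_M):
        """Peak RC response of each section to the shock waveform (Euler integration)."""
        v = np.zeros_like(weights)
        peak = np.zeros_like(weights)
        for v_shock in waveform_v:
            v += dt * (weights * v_shock - v) / tau
            peak = np.maximum(peak, v)
        return peak

    # Truncated-exponential shock: 10 ms duration, 7 ms decay time constant.
    t = np.arange(0.0, 10e-3, DT)
    waveform = 1.0 * np.exp(-t / 7e-3)

    # Field-gradient weights for 200 sections (stronger near electrodes, weaker far away).
    rng = np.random.default_rng(0)
    weights = rng.lognormal(mean=0.0, sigma=0.6, size=200)
    weights /= weights.max()

    captured = membrane_response(weights, waveform) >= 0.35   # assumed capture threshold
    print("fraction of sections captured: %.2f" % captured.mean())
    print("critical mass (>=0.90) reached:", bool(captured.mean() >= 0.90))
    ```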

  15. Adaptation of the concept of varying time of concentration within flood modelling: Theoretical and empirical investigations across the Mediterranean

    Science.gov (United States)

    Michailidi, Eleni Maria; Antoniadi, Sylvia; Koukouvinos, Antonis; Bacchi, Baldassare; Efstratiadis, Andreas

    2017-04-01

    The time of concentration, tc, is a key hydrological concept and often an essential parameter of rainfall-runoff modelling; it has traditionally been treated as a characteristic property of the river basin. However, both theoretical proof and empirical evidence imply that tc is a hydraulic quantity that depends on flow, and thus it should be considered as a variable rather than a constant parameter. Using a kinematic method approach, easily implemented in a GIS environment, we first illustrate that the relationship between tc and the effective rainfall produced over the catchment is well approximated by a power-type law, the exponent of which is associated with the slope of the longest flow path of the river basin. Next, we take advantage of this relationship to adapt the concept of a varying time of concentration within flood modelling, and particularly the well-known SCS-CN approach. In this context, the initial abstraction ratio is also considered variable, while the propagation of the effective rainfall is employed through a parametric unit hydrograph, the shape of which is dynamically adjusted according to the runoff produced during the flood event. The above framework is tested in a number of Mediterranean river basins in Greece, Italy and Cyprus, ensuring faithful representation of most of the observed flood events. Based on the outcomes of this extended analysis, we provide guidance for employing this methodology in flood design studies in ungauged basins.
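
    The power-type law mentioned above can be recovered from paired estimates of effective rainfall intensity and concentration time by a log-log linear fit, as sketched below with invented numbers (the exponent and coefficient are not those of any of the studied basins).

    ```python
    import numpy as np

    # Hypothetical pairs of effective rainfall intensity i_e (mm/h) and time of
    # concentration t_c (h) for one basin, e.g. from a kinematic GIS analysis.
    i_e = np.array([2., 5., 10., 20., 40., 80.])       # effective rainfall intensity
    t_c = np.array([9.5, 6.6, 5.1, 3.9, 3.0, 2.3])     # time of concentration

    # Power law t_c = a * i_e**(-b), recovered by linear regression in log-log space.
    slope, intercept = np.polyfit(np.log(i_e), np.log(t_c), 1)
    b = -slope
    a = np.exp(intercept)
    print(f"t_c ~ {a:.2f} * i_e^(-{b:.2f})")

    # Within an event-based model the relation is applied dynamically: as the
    # effective rainfall of the event grows, the concentration time shrinks.
    print("t_c for a 30 mm/h event: %.2f h" % (a * 30.0 ** (-b)))
    ```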

  16. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  17. Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data

    Science.gov (United States)

    Hedin, A. E.; Spencer, N. W.; Killeen, T. L.

    1988-01-01

    Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to above approximately 220 km where the data coverage is best and wind variations with height are reduced by viscosity. The data base is not adequate to detect solar cycle (F10.7) effects at this time but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not to full detail.

  18. An empirical model of the Earth's bow shock based on an artificial neural network

    Science.gov (United States)

    Pallocchia, Giuseppe; Ambrosino, Danila; Trenchi, Lorenzo

    2014-05-01

    All of the past empirical models of the Earth's bow shock shape were obtained by best-fitting some given surfaces to sets of observed crossings. However, the issue of bow shock modeling can be addressed by means of artificial neural networks (ANN) as well. In this regard, we present a perceptron, a simple feedforward network, which computes the bow shock distance along a given direction using the two angular coordinates of that direction, the predicted bow shock distance RF79 (provided by Formisano's model (F79)) and the upstream Alfvénic Mach number Ma. After a brief description of the ANN architecture and training method, we discuss the results of the statistical comparison, performed over a test set of 1140 IMP8 crossings, between the prediction accuracies of the ANN and F79 models.
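
    The sketch below reproduces the structure of such a network (inputs: two angles, the F79 predicted distance and the Alfvénic Mach number; output: bow-shock distance) with a small scikit-learn multilayer perceptron. The training crossings are synthetic, generated from a noisy distortion of the F79 term, since the IMP8 dataset is not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(42)
    n = 2000
    theta = rng.uniform(0, np.pi, n)            # polar angle of the crossing direction
    phi   = rng.uniform(-np.pi, np.pi, n)       # azimuthal angle
    r_f79 = rng.uniform(12., 30., n)            # stand-in for the F79 predicted distance (Re)
    ma    = rng.uniform(3., 15., n)             # Alfvenic Mach number

    # Synthetic "observed" distance: the F79 term, compressed at high Mach number
    # and flared on the flanks, plus noise (illustration only).
    r_obs = r_f79 * (1.0 - 0.02 * (ma - 8.0)) * (1.0 + 0.05 * np.sin(theta)) \
            + rng.normal(0.0, 0.4, n)

    X = np.column_stack([theta, phi, r_f79, ma])
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                                     random_state=0))
    net.fit(X, r_obs)
    print("training RMSE: %.2f Re"
          % np.sqrt(np.mean((net.predict(X) - r_obs) ** 2)))
    ```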

  19. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    Full Text Available This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed here for the multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used for the prediction of each individual subband of any time series data. Then all the predicted subband signals are summed up to obtain the overall prediction. The ARMA model works better for stationary signals. With the multiband representation, each subband becomes a band-limited (narrow band) signal and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and the full band ARMA model in terms of signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
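
    A runnable sketch of the subband idea, with standard univariate EMD (from the PyEMD package) standing in for the multivariate EMD of the paper and a synthetic price-like series in place of real financial data: each IMF is forecast with a low-order ARMA-type model and the subband forecasts are summed.

    ```python
    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal
    from statsmodels.tsa.arima.model import ARIMA

    # Toy "price" series: trend + cycle + random walk (not real financial data).
    rng = np.random.default_rng(0)
    t = np.arange(600)
    price = 100 + 0.02*t + 2*np.sin(2*np.pi*t/50) + np.cumsum(rng.normal(0, 0.3, t.size))

    horizon = 20
    train = price[:-horizon]

    # Decompose into intrinsic mode functions (the decomposition sums back to the signal).
    imfs = EMD().emd(train)

    # Forecast each narrow-band component with a low-order ARMA(2,1) and sum.
    forecast = np.zeros(horizon)
    for imf in imfs:
        fit = ARIMA(imf, order=(2, 0, 1)).fit()
        forecast += fit.forecast(steps=horizon)

    mse = np.mean((forecast - price[-horizon:]) ** 2)
    print(f"subband EMD-ARMA forecast MSE over {horizon} steps: {mse:.3f}")
    ```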

  20. An empirical model describing the postnatal growth of organs in ICRP reference humans: Pt. 1

    International Nuclear Information System (INIS)

    Walker, J.T.

    1991-01-01

    An empirical model is presented for describing the postnatal mass growth of lungs in ICRP reference humans. A combined exponential and logistic function containing six parameters is fitted to ICRP 23 lung data using a weighted non-linear least squares technique. The results indicate that the model delineates the data well. Further analysis shows that reference male lungs attain a higher pubertal peak velocity (PPV) and adult mass size than female lungs, although the latter reach their PPV and adult mass size first. Furthermore, the model shows that lung growth rates in infants are two to three orders of magnitude higher than those in mature adults. This finding is important because of the possible association between higher radiation risks in infants' organs that have faster cell turnover rates compared to mature adult organs. The significance of the model for ICRP dosimetric purposes will be discussed. (author)
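
    The abstract does not reproduce the exact six-parameter function, so the sketch below fits one plausible "exponential plus logistic" form to synthetic lung-mass points with scipy, and locates the pubertal peak velocity from the fitted curve. Both the functional form and the data points are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def growth(t, A, b, c, D, k, t0):
        """Assumed form: early exponential saturation plus a logistic pubertal component."""
        return A * (1.0 - b * np.exp(-c * t)) + D / (1.0 + np.exp(-k * (t - t0)))

    # Synthetic lung-mass points (age in years, mass in grams), not ICRP 23 data.
    age  = np.array([0.0, 0.5, 1, 2, 5, 8, 10, 12, 14, 16, 18, 25])
    mass = np.array([60, 110, 160, 230, 380, 500, 560, 650, 800, 930, 990, 1000.])

    p0 = (600, 0.9, 0.2, 450, 0.8, 13.0)                 # rough initial guesses
    params, _ = curve_fit(growth, age, mass, p0=p0, maxfev=20000)
    print("fitted parameters:", np.round(params, 3))

    # Growth velocity dm/dt highlights the pubertal peak velocity (PPV).
    t_fine = np.linspace(0, 25, 501)
    velocity = np.gradient(growth(t_fine, *params), t_fine)
    mask = t_fine > 5                                    # look past infancy for the peak
    print("pubertal peak velocity at age %.1f y" % t_fine[mask][np.argmax(velocity[mask])])
    ```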

  1. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for the wireless indoor short-range office environment in the 4.3–7.3 GHz band is presented. The model is developed based on experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes the path loss as a function of distance, with a Gaussian random variable X accounting for shadow fading, using linear regression. The path-loss exponent n is fitted as a power function of frequency and modeled as a frequency-dependent Gaussian variable, as is the standard deviation σ of X. The presented work should be useful for research on wireless channel characteristics in general indoor short-distance environments for the Internet of Things (IoT).
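
    A sketch of a model with this structure (log-distance path loss, frequency-dependent exponent and log-normal shadowing). The power-function coefficients and shadowing levels are invented placeholders, not the values fitted from the 30-room measurement campaign.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Log-distance path loss with log-normal shadowing:
    # PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, with n and sigma depending on frequency.
    D0 = 1.0   # reference distance (m)

    def exponent(f_ghz, los=True):
        # Assumed power-function dependence n = p * f^q (placeholder coefficients).
        p, q = (1.55, 0.12) if los else (2.30, 0.15)
        return p * f_ghz ** q

    def sigma(f_ghz, los=True):
        # Assumed mild increase of shadowing spread with frequency (dB).
        return (1.8 if los else 3.5) + 0.1 * f_ghz

    def path_loss_db(d_m, f_ghz, pl_d0=46.4, los=True, shadowing=True):
        n = exponent(f_ghz, los)
        pl = pl_d0 + 10.0 * n * np.log10(np.asarray(d_m) / D0)
        if shadowing:
            pl = pl + rng.normal(0.0, sigma(f_ghz, los), size=np.shape(d_m))
        return pl

    d = np.array([1.0, 3.0, 6.0, 10.0])
    print("NLOS path loss at 5.8 GHz (dB):", np.round(path_loss_db(d, 5.8, los=False), 1))
    ```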

  2. Empirical probability model of cold plasma environment in the Jovian magnetosphere

    Science.gov (United States)

    Futaana, Yoshifumi; Wang, Xiao-Dong; Barabash, Stas; Roussos, Elias; Truscott, Pete

    2015-04-01

    We analyzed the Galileo PLS dataset to produce a new cold plasma environment model for the Jovian magnetosphere. Although there exist many sophisticated radiation models treating energetic plasma (e.g. JOSE, GIRE, or Salammbo), only a limited number of simple models has been utilized for the cold plasma environment. By extending the existing cold plasma models toward the probability domain, we can predict the extreme periods of the Jovian environment by specifying the percentile of the environmental parameters. The new model was produced using the following procedure. We first referred to the existing cold plasma models of Divine and Garrett, 1983 (DG83) or Bagenal and Delamere 2011 (BD11). These models are scaled to fit the statistical median of the parameters obtained from Galileo PLS data. The scaled model (also called the "mean model") indicates the median environment of the Jovian magnetosphere. Then, assuming that the deviations in the Galileo PLS parameters are purely due to variations in the environment, we extended the mean model toward the percentile domain. The input parameters of the model are simply the position of the spacecraft (distance, magnetic longitude and latitude) and the specific percentile (e.g. 0.5 for the mean model). All the parameters in the model are described in mathematical forms; therefore the needed computational resources are quite low. The new model can be used for assessing the JUICE mission profile. The spatial extent of the model covers the main phase of the JUICE mission, namely from the Europa orbit to 40 Rj (where Rj is the radius of Jupiter). In addition, theoretical extensions toward the latitudinal direction are also included in the model to support the high latitude orbit of the JUICE spacecraft.

  3. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d^2 = (1/n) Σ_{i=1..n} (ln M_i − ln O_i)^2, where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of couplets 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this undesired effect, in some circumstances, may be corrected by means of simple formulae
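    The functional distance and a derived multiplicative confidence band can be computed directly from paired measurements and model outputs, as in the sketch below (synthetic data, not the radiocaesium validation set).

```python
# Hedged sketch: the 'functional distance' d of the EBUA method and a simple
# multiplicative confidence band for model predictions derived from the spread
# of ln(measured/predicted). The data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
predicted = np.logspace(0, 3, 50)                       # model output O_i
measured = predicted * rng.lognormal(0.0, 0.4, 50)      # experimental M_i

log_ratio = np.log(measured) - np.log(predicted)
d = np.sqrt(np.mean(log_ratio ** 2))                    # d^2 = (1/n) sum (ln M - ln O)^2

# 95% multiplicative confidence limits for the ratio measured/model.
lo, hi = np.exp(np.quantile(log_ratio, [0.025, 0.975]))
print(f"functional distance d = {d:.3f}")
print(f"95% band: model output x [{lo:.2f}, {hi:.2f}]")
```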

  4. A Comprehensive Comparison Study of Empirical Cutting Transport Models in Inclined and Horizontal Wells

    Directory of Open Access Journals (Sweden)

    Asep Mohamad Ishaq Shiddiq

    2017-07-01

    Full Text Available In deviated and horizontal drilling, hole-cleaning issues are a common and complex problem. This study explored the effect of various parameters in drilling operations and how they affect the flow rate required for effective cutting transport. Three models, developed following an empirical approach, were employed: Rudi-Shindu’s model, Hopkins’, and Tobenna’s model. Rudi-Shindu’s model needs iteration in the calculation. Firstly, the three models were compared using a sensitivity analysis of drilling parameters affecting cutting transport. The result shows that the models have similar trends but different values for minimum flow velocity. Analysis was conducted to examine the feasibility of using Rudi-Shindu’s, Hopkins’, and Tobenna’s models. The result showed that Hopkins’ model is limited by cutting size and revolution per minute (RPM. The minimum flow rate from Tobenna’s model is affected only by well inclination, drilling fluid weight and drilling fluid rheological property. Meanwhile, Rudi-Shindu’s model is limited by inclinations above 45°. The study showed that the investigated models are not suitable for horizontal wells because they do not include the effect of lateral section.

  5. A novel model for extending international co-operation in science and education

    NARCIS (Netherlands)

    de Boer, S.J.; Ji-zehn, Q.

    2004-01-01

    Journal of Zhejiang University SCIENCE (ISSN 1009-3095, Monthly), 2004, Vol. 5, No. 3, pp. 358-364. A novel model for extending international co-operation in science and education. De Boer, Sirp J.; Qiu, Ji-zhen.

  6. Acting in solidarity : Testing an extended dual pathway model of collective action by bystander group members

    NARCIS (Netherlands)

    Saab, Rim; Tausch, Nicole; Spears, Russell; Cheung, Wing-Yee

    We examined predictors of collective action among bystander group members in solidarity with a disadvantaged group by extending the dual pathway model of collective action, which proposes one efficacy-based and one emotion-based path to collective action (Van Zomeren, Spears, Fischer, & Leach,

  7. Stall Recovery in a Centrifuge-Based Flight Simulator With an Extended Aerodynamic Model

    NARCIS (Netherlands)

    Ledegang, W.D.; Groen, E.L.

    2015-01-01

    We investigated the performance of 12 airline pilots in recovering from an asymmetrical stall in a flight simulator featuring an extended aerodynamic model of a transport-category aircraft, and a centrifuge-based motion platform capable of generating enhanced buffet motion and g-cueing. All pilots

  8. Fear Control and Danger Control: A Test of the Extended Parallel Process Model (EPPM).

    Science.gov (United States)

    Witte, Kim

    1994-01-01

    Explores cognitive and emotional mechanisms underlying success and failure of fear appeals in context of AIDS prevention. Offers general support for Extended Parallel Process Model. Suggests that cognitions lead to fear appeal success (attitude, intention, or behavior changes) via danger control processes, whereas the emotion fear leads to fear…

  9. An Inconvenient Truth: An Application of the Extended Parallel Process Model

    Science.gov (United States)

    Goodall, Catherine E.; Roberto, Anthony J.

    2008-01-01

    "An Inconvenient Truth" is an Academy Award-winning documentary about global warming presented by Al Gore. This documentary is appropriate for a lesson on fear appeals and the extended parallel process model (EPPM). The EPPM is concerned with the effects of perceived threat and efficacy on behavior change. Perceived threat is composed of an…

  10. Competing recombinant technologies for environmental innovation: Extending Arthur's model of lock-in

    NARCIS (Netherlands)

    Zeppini, P.; van den Bergh, J.C.J.M.

    2011-01-01

    This article presents a model of sequential decisions about investments in environmentally dirty and clean technologies, which extends the path-dependence framework of B. Arthur (1989, Competing technologies, increasing returns, and lock-in by historical events, The Economic Journal, 99, pp.

  11. Competing recombinant technologies for environmental innovation: extending Arthur’s model of lock-in

    NARCIS (Netherlands)

    Zeppini, P.; van den Bergh, J.C.J.M.

    2010-01-01

    This article presents a model of sequential decisions about investments in environmentally dirty and clean technologies, which extends the path-dependence framework of Arthur (1989). This allows us to evaluate if and how an economy locked into a dirty technology can be unlocked and move towards the

  12. Competing recombinant technologies for environmental innovation : extending Arthur's model of lock-in

    NARCIS (Netherlands)

    Zeppini, P.; Bergh, van den J.C.J.M.

    2011-01-01

    This article presents a model of sequential decisions about investments in environmentally dirty and clean technologies, which extends the path-dependence framework of B. Arthur (1989, Competing technologies, increasing returns, and lock-in by historical events, The Economic Journal, 99, pp.

  13. The asymmetry in attenuation experiments and the Glashow-Salam-Weinberg model with extended Higgs sector

    International Nuclear Information System (INIS)

    Santangelo, E.M.

    1983-01-01

    The asymmetry seen in beam-dump experiments done at CERN, between ν_e/ν̄_e and ν_μ/ν̄_μ, is discussed using the Glashow-Salam-Weinberg model with an extended Higgs sector. (L.C.)

  14. Invariance of an Extended Technology Acceptance Model Across Gender and Age Group

    Science.gov (United States)

    Ahmad, Tunku Badariah Tunku; Madarsha, Kamal Basha; Zainuddin, Ahmad Marzuki; Ismail, Nik Ahmad Hisham; Khairani, Ahmad Zamri; Nordin, Mohamad Sahari

    2011-01-01

    In this study, we examined the likelihood of a TAME (extended technology acceptance model), in which the interrelationships among computer self-efficacy, perceived usefulness, intention to use and self-reported use of computer-mediated technology were tested. In addition, the gender- and age-invariant of its causal structure were evaluated. The…

  15. Sustainability Attitudes and Behavioral Motivations of College Students: Testing the Extended Parallel Process Model

    Science.gov (United States)

    Perrault, Evan K.; Clark, Scott K.

    2018-01-01

    Purpose: A planet that can no longer sustain life is a frightening thought--and one that is often present in mass media messages. Therefore, this study aims to test the components of a classic fear appeal theory, the extended parallel process model (EPPM) and to determine how well its constructs predict sustainability behavioral intentions. This…

  16. Social security wealth and aggregate consumption : An extended life-cycle model estimated for The Netherlands

    NARCIS (Netherlands)

    Zant, W.

    In this paper a method is developed to calculate a wealth variable accounting for the existence of the basic old-age provisions in The Netherlands (AOW). In line with Feldstein's extended life-cycle model, consumption functions with (gross) social security wealth are estimated for The Netherlands

  17. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images which can be rapidly generated. ... This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images. ...
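    The Letter's specific noise model is not reproduced here; the sketch below only illustrates the general recipe of turning an ideal depth map into a synthetic Kinect-like image with depth-dependent noise, quantization and dropouts, using placeholder coefficients.

```python
# Hedged sketch: turning an ideal (rendered) depth map into a noisy synthetic
# Kinect-like image by adding depth-dependent Gaussian noise, disparity-style
# quantization and random dropouts. The noise coefficients are placeholders,
# not the empirical model used in the Letter.
import numpy as np

def simulate_kinect_depth(depth_m, rng=None):
    rng = rng or np.random.default_rng()
    z = depth_m.astype(float)

    # Axial noise growing roughly quadratically with depth (placeholder coefficients).
    sigma = 0.0012 + 0.0019 * (z - 0.4) ** 2
    z_noisy = z + rng.normal(0.0, sigma)

    # Disparity quantization typical of structured-light sensors (placeholder constant).
    k = 0.0028
    z_quant = 1.0 / (np.round(1.0 / (k * z_noisy)) * k)

    # Random invalid pixels (dropouts) encoded as 0.
    dropout = rng.random(z.shape) < 0.02
    z_quant[dropout] = 0.0
    return z_quant

ideal = np.full((480, 640), 1.5)          # flat wall at 1.5 m
noisy = simulate_kinect_depth(ideal)
print("mean/std of valid depths:", noisy[noisy > 0].mean(), noisy[noisy > 0].std())
```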

  18. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Full Text Available Few studies have investigated the influence of “information capital,” through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in and benefits for corporate performance.

  19. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating ... their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling

  20. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    Science.gov (United States)

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  1. Monthly and Fortnightly Tidal Variations of the Earth's Rotation Rate Predicted by a TOPEX/POSEIDON Empirical Ocean Tide Model

    Science.gov (United States)

    Desai, S.; Wahr, J.

    1998-01-01

    Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.

  2. Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).

  3. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. A new model of Social Support in Bereavement (SSB): An empirical investigation with a Chinese sample.

    Science.gov (United States)

    Li, Jie; Chen, Sheying

    2016-01-01

    Bereavement can be an extremely stressful experience while the protective effect of social support is expected to facilitate the adjustment after loss. The ingredients or elements of social support as illustrated by a new model of Social Support in Bereavement (SSB), however, require empirical evidence. Who might be the most effective providers of social support in bereavement has also been understudied, particularly within specific cultural contexts. The present study uses both qualitative and quantitative analyses to explore these two important issues among bereaved Chinese families and individuals. The results show that the three major types of social support described by the SSB model were frequently acknowledged by the participants in this study. Aside from relevant books, family and friends were the primary sources of social support, who in turn received support from their workplaces. Helping professionals turned out to be the least significant source of social support in the Chinese cultural context. Differences by gender, age, and bereavement time were also found. The findings render empirical evidence to the conceptual model of Social Support in Bereavement and also offer culturally relevant guidance for providing effective support to the bereaved.

  5. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    International Nuclear Information System (INIS)

    Li, Longqiu; Xu, Ming; Song, Wenping; Ovcharenko, Andrey; Zhang, Guangyu; Jia, Ding

    2013-01-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of carbon atoms were analyzed as a function of the three potentials mentioned above. The ReaxFF potential is capable of modeling the change of the structure of amorphous carbon, and MD results are in good agreement with experimental results and density functional theory (DFT) at a low density of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended to model amorphous carbon at a high density of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.

  6. Online Cancer Information Seeking: Applying and Extending the Comprehensive Model of Information Seeking.

    Science.gov (United States)

    Van Stee, Stephanie K; Yang, Qinghua

    2017-10-30

    This study applied the comprehensive model of information seeking (CMIS) to online cancer information and extended the model by incorporating an exogenous variable: interest in online health information exchange with health providers. A nationally representative sample from the Health Information National Trends Survey 4 Cycle 4 was analyzed to examine the extended CMIS in predicting online cancer information seeking. Findings from a structural equation model supported most of the hypotheses derived from the CMIS, as well as the extension of the model related to interest in online health information exchange. In particular, socioeconomic status, beliefs, and interest in online health information exchange predicted utility. Utility, in turn, predicted online cancer information seeking, as did information-carrier characteristics. An unexpected but important finding from the study was the significant, direct relationship between cancer worry and online cancer information seeking. Theoretical and practical implications are discussed.

  7. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the vehicle’s acceleration derivative. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models during starting, stopping and sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the vehicle’s acceleration derivative increases, as shown by the space-time evolution. The results confirm that the vehicle’s acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.

  8. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

    As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases

  9. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care.

  10. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care.

  11. Analytic Quasi-Periodic Cocycles with Singularities and the Lyapunov Exponent of Extended Harper's Model

    Science.gov (United States)

    Jitomirskaya, S.; Marx, C. A.

    2012-11-01

    We show how to extend (and with what limitations) Avila's global theory of analytic SL(2,C) cocycles to families of cocycles with singularities. This allows us to develop a strategy to determine the Lyapunov exponent for the extended Harper's model, for all values of parameters and all irrational frequencies. In particular, this includes the self-dual regime for which even heuristic results did not previously exist in physics literature. The extension of Avila's global theory is also shown to imply continuous behavior of the LE on the space of analytic M_2(C)-cocycles. This includes rational approximation of the frequency, which so far has not been available.

  12. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and therefore, can be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.

  13. Formal Analysis of Functional Behaviour for Model Transformations Based on Triple Graph Grammars - Extended Version

    OpenAIRE

    Hermann, Frank; Ehrig, Hartmut; Orejas, Fernando; Golas, Ulrike

    2010-01-01

    Triple Graph Grammars (TGGs) are a well-established concept for the specification of model transformations. In previous work we have already formalized and analyzed crucial properties of model transformations like termination, correctness and completeness, but functional behaviour - especially local confluence - has been missing up to now. In order to close this gap we generate forward translation rules, which extend standard forward rules by translation attributes keeping track of the elements whi

  14. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first order differential equations. The main objective of this study is to review the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)

  15. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first order differential equations. The main objective of this study is to review the classic point kinetics equations in order to approximate their results to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to the real scenario. (author)
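    For context, a minimal finite-difference integration of the classic point kinetics equations with one delayed-neutron group is sketched below; the empirical adjustment factor studied in the paper is not reproduced, and the kinetics parameters are illustrative.

```python
# Hedged sketch: classic point reactor kinetics with one delayed-neutron group,
# integrated by explicit finite differences, as a baseline for the kind of
# comparison described (the paper's adjusted model is not reproduced here).
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                                # step reactivity insertion (dk/k)

dt, t_end = 1.0e-6, 0.5
steps = int(t_end / dt)
n, C = 1.0, beta / (lam * Lambda)          # equilibrium precursor concentration at n = 1

for _ in range(steps):
    dn = ((rho - beta) / Lambda * n + lam * C) * dt
    dC = (beta / Lambda * n - lam * C) * dt
    n, C = n + dn, C + dC

print(f"relative power after {t_end} s for rho = {rho}: {n:.3f}")
```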

  16. An empirically tractable model of optimal oil spills prevention in Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Tsirlin, A. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems

    2001-07-01

    Based on previous theoretical work by Gottinger (1997, 1998), we propose a simple model of optimal monitoring of oil-related activities in harbour areas that is suitable for empirical estimation within the Russian-Ukrainian context, in spite of the poor availability of data in these countries. Specifically, the model indicates how to best allocate at the steady state a given monitoring budget between different monitoring activities. An approximate analytical solution to the optimization problem is derived, and a simple procedure for estimating the model on the basis of the actually available data is suggested. An application using data obtained for several harbours of the Black and Baltic Seas is given. It suggests that the current Russian monitoring practice could be much improved by better allocating the available monitoring resources. (Author)

  17. Relaxation model for extended magnetohydrodynamics: Comparison to magnetohydrodynamics for dense Z-pinches

    International Nuclear Information System (INIS)

    Seyler, C. E.; Martin, M. R.

    2011-01-01

    It is shown that the two-fluid model under a generalized Ohm's law formulation and the resistive magnetohydrodynamics (MHD) can both be described as relaxation systems. In the relaxation model, the under-resolved stiff source terms constrain the dynamics of a set of hyperbolic equations to give the correct asymptotic solution. When applied to the collisional two-fluid model, the relaxation of fast time scales associated with displacement current and finite electron mass allows for a natural transition from a system where Ohm's law determines the current density to a system where Ohm's law determines the electric field. This result is used to derive novel algorithms, which allow for multiscale simulation of low and high frequency extended-MHD physics. This relaxation formulation offers an efficient way to implicitly advance the Hall term and naturally simulate a plasma-vacuum interface without invoking phenomenological models. The relaxation model is implemented as an extended-MHD code, which is used to analyze pulsed power loads such as wire arrays and ablating foils. Two-dimensional simulations of pulsed power loads are compared for extended-MHD and MHD. For these simulations, it is also shown that the relaxation model properly recovers the resistive-MHD limit.

  18. An extended diffusive model for calculating thermal diffusivity from single monopole tokamak heat pulse propagation

    International Nuclear Information System (INIS)

    Marinak, M.

    1990-02-01

    The problem of deducing χ_e from measurements of the propagation of a monopole heat pulse is considered. An extended diffusive model, which takes into account perturbed sources and sinks, is extended to the case of a monopole heat input. χ_e is expressed as a function of two observables, the heat pulse velocity and the radial damping rate. Two simple expressions valid for two different ranges of the radius of the poloidal waist of the beam power profile are given. The expressions are valid in the heat pulse measurement region, extending radially 0.05a beyond the beam power waist to near 0.6a. The inferred χ_e is a local value, not an average value of the radial χ_e profile. 7 refs., 6 figs., 1 tab

  19. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 to 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ˜1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  20. Multiscale empirical modeling of the geomagnetic field: From storms to substorms

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Korth, H.; Gkioulidou, M.; Ukhorskiy, A. Y.; Merkin, V. G.

    2017-12-01

    An advanced version of the TS07D empirical geomagnetic field model, herein called SST17, is used to model the global picture of the geomagnetic field and its characteristic variations on both storm and substorm scales. The new SST17 model uses two regular expansions describing the equatorial currents, each having distinctly different scales, one corresponding to a thick and one to a thin current sheet relative to the thermal ion gyroradius. These expansions have an arbitrary distribution of currents in the equatorial plane that is constrained only by magnetometer data. This multi-scale description allows one to reproduce the current sheet thinning during the growth phase. Additionally, the model uses a flexible description of field-aligned currents that reproduces their spiral structure at low altitudes and provides a continuous transition from region 1 to region 2 current systems. The empirical picture of substorms is obtained by combining magnetometer data from Geotail, THEMIS, Van Allen Probes, Cluster II, Polar, IMP-8, GOES 8, 9, 10 and 12 and then binning these data based on similar values of the auroral index AL, its time derivative and the integral of the solar wind electric field parameter (from ACE, Wind, and IMP-8) in time over substorm scales. The performance of the model is demonstrated for several events, including the 3 July 2012 substorm, which had multi-probe coverage, and a series of substorms during the March 2008 storm. It is shown that the AL binning helps reproduce dipolarization signatures in the northward magnetic field Bz, while the solar wind electric field integral allows one to capture the current sheet thinning during the growth phase. The model allows one to trace the substorm dipolarization from the tail to the inner magnetosphere, where the dipolarization of strongly stretched tail field lines causes a redistribution of the tail current, resulting in an enhancement of the partial ring current in the premidnight sector.

  1. Experimental validation of new empirical models of the thermal properties of food products for safe shipping

    Science.gov (United States)

    Hamid, Hanan H.; Mitchell, Mark; Jahangiri, Amirreza; Thiel, David V.

    2018-04-01

    Temperature-controlled food transport is essential for human safety and to minimise food waste. The thermal properties of food are important for determining the heat transfer during the transient stages of transportation (door opening during loading and unloading processes). For example, the temperature of most dairy products must be confined to a very narrow range (3-7 °C). If a predefined critical temperature is exceeded, the food is defined as spoiled and unfit for human consumption. An improved empirical model for the thermal conductivity and specific heat capacity of a wide range of food products was derived based on the food composition (moisture, fat, protein, carbohydrate and ash). The models, developed using linear regression analysis, were compared with published measured parameters as well as with previously published theoretical and empirical models. It was found that the maximum variation in the predicted thermal properties leads to less than a 0.3 °C temperature change. The correlation coefficient for these models was 0.96. The t-Stat test (P-value > 0.99) demonstrated that the model results are an improvement on previous works. The transient heat transfer based on the food composition and the temperature boundary conditions was computed for a Camembert cheese (short cylindrical shape) using a multi-dimensional finite difference code. The result was verified using the Heat Transfer Today (HTT) educational software, which is based on the finite volume method. The rise of the core temperature from the initial temperature (2.7 °C) to the maximum safe temperature in ambient air (20.24 °C) was predicted to occur within about 35.4 ± 0.5 min. The simulation results agree very well (+0.2 °C) with the measured temperature data. This improved model has an impact on temperature estimation during loading and unloading of trucks and provides a clear direction for temperature control in all refrigerated transport applications.
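    A composition-based linear model of the kind described can be written as a one-line predictor per property; the coefficients below are rough placeholder magnitudes (Choi-Okos-like), not the regressions fitted in the paper, and the example composition is hypothetical.

```python
# Hedged sketch: predicting thermal conductivity k (W/m.K) and specific heat
# cp (J/kg.K) from food composition with a linear-in-composition model. The
# coefficients are illustrative placeholders, not the paper's fitted values.
def thermal_properties(moisture, fat, protein, carbohydrate, ash):
    """Mass fractions must sum to ~1.0; returns (k, cp)."""
    total = moisture + fat + protein + carbohydrate + ash
    if abs(total - 1.0) > 0.02:
        raise ValueError(f"composition fractions sum to {total:.3f}, expected ~1.0")

    # Placeholder per-component coefficients (roughly Choi-Okos-like magnitudes).
    k = (0.60 * moisture + 0.18 * fat + 0.20 * protein
         + 0.22 * carbohydrate + 0.33 * ash)
    cp = (4180 * moisture + 1900 * fat + 1550 * protein
          + 1420 * carbohydrate + 1090 * ash)
    return k, cp

# Example: an approximate Camembert-like composition (hypothetical values).
k, cp = thermal_properties(moisture=0.52, fat=0.24, protein=0.20,
                           carbohydrate=0.01, ash=0.03)
print(f"k ≈ {k:.3f} W/m.K, cp ≈ {cp:.0f} J/kg.K")
```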

  2. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Full Text Available Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  3. A Semi-Empirical SNR Model for Soil Moisture Retrieval Using GNSS SNR Data

    Directory of Open Access Journals (Sweden)

    Mutian Han

    2018-02-01

    Full Text Available The Global Navigation Satellite System-Interferometry and Reflectometry (GNSS-IR) technique for soil moisture remote sensing was studied. A semi-empirical Signal-to-Noise Ratio (SNR) model was proposed as a curve-fitting model for SNR data routinely collected by a GNSS receiver. This model aims at reconstructing the direct and reflected signal from SNR data and at the same time extracting frequency and phase information that is affected by soil moisture, as proposed by K. M. Larson et al. This is achieved empirically by approximating the direct and reflected signal by a second-order and fourth-order polynomial, respectively, based on the well-established SNR model. Compared with other models (K. M. Larson et al., T. Yang et al.), this model can improve the Quality of Fit (QoF) with little prior knowledge needed and can allow soil permittivity to be estimated from the reconstructed signals. In developing this model, we showed how noise affects the receiver SNR estimation and thus the model performance through simulations under the bare soil assumption. Results showed that the reconstructed signals with a grazing angle of 5°–15° were better for soil moisture retrieval. The QoF was improved by around 45%, which resulted in better estimation of the frequency and phase information. However, we found that the improvement in phase estimation could be neglected. Experimental data collected at Lamasquère, France, were also used to validate the proposed model. The results were compared with the simulation and previous works. It was found that the model could ensure good fitting quality even in the case of irregular SNR variation. Additionally, the soil moisture calculated from the reconstructed signals was about 15% closer to the ground truth measurements. A deeper insight into the Larson model and the proposed model was given at this stage, which forms a possible explanation of this fact. Furthermore, frequency and phase information
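    The sketch below illustrates the general SNR decomposition underlying such models: remove the direct-signal trend with a low-order polynomial in sin(elevation), then fit amplitude, frequency and phase of the residual oscillation. It uses synthetic data and does not reproduce the paper's exact second-/fourth-order formulation.

```python
# Hedged sketch of SNR-based reflectometry: remove the direct-signal trend with
# a low-order polynomial in sin(elevation), then fit amplitude/frequency/phase
# of the interference oscillation. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

# Synthetic SNR arc: elevation 5-15 deg, antenna 1.5 m above a reflecting surface.
elev = np.linspace(5, 15, 400)
x = np.sin(np.radians(elev))                 # independent variable sin(e)
wavelength = 0.1903                          # GPS L1 wavelength (m)
h_true = 1.5
f_true = 2.0 * h_true / wavelength           # oscillation frequency in sin(e)
rng = np.random.default_rng(2)
snr = (30 + 200 * x - 80 * x**2              # direct-signal trend (arbitrary)
       + 4.0 * np.cos(2 * np.pi * f_true * x + 1.0)
       + rng.normal(0, 0.5, x.size))

# 1) Direct signal: second-order polynomial trend in sin(e).
trend = np.polyval(np.polyfit(x, snr, 2), x)
resid = snr - trend

# 2) Reflected signal: fit A*cos(2*pi*f*x + phi) to the residual.
def osc(x, A, f, phi):
    return A * np.cos(2 * np.pi * f * x + phi)

# Coarse periodogram to get a robust initial frequency guess.
freqs = np.linspace(2, 60, 400)
power = [np.abs(np.sum(resid * np.exp(-2j * np.pi * fq * x))) for fq in freqs]
f0 = freqs[int(np.argmax(power))]

(A, f, phi), _ = curve_fit(osc, x, resid, p0=[resid.std() * np.sqrt(2), f0, 0.0])
print(f"estimated reflector height ≈ {abs(f) * wavelength / 2:.2f} m "
      f"(true {h_true} m), phase ≈ {phi:.2f} rad")
```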

  4. Extended Lipkin-type models with residual proton-neutron interaction

    International Nuclear Information System (INIS)

    Stoica, S.

    1999-01-01

    Extended Lipkin-Meshkov-Glick (LMG) models for testing the Random Phase Approximation (RPA) and proton-neutron Random Phase Approximation (pnRPA) methods are developed, taking explicitly into account the proton and neutron degrees of freedom. First, an extended LMG model for testing RPA is developed. The proton and neutron Hamiltonians are taken to be of the LMG form and, in addition, a residual proton-neutron interaction is included. Exact solutions in an SU(2) x SU(2) basis as well as the RPA solutions for the energy spectrum of the model Hamiltonian are obtained. Then, the behaviour of the first collective excited state is studied as a function of the interaction parameters of the model using the exact and RPA methods. Secondly, an extended LMG model for testing the pnRPA method is developed. Besides the proton and neutron single-particle terms, two types of residual proton-neutron interactions, one simulating a particle-particle and the other a particle-hole interaction, are included in the model Hamiltonian, so that the model is exactly solvable in an isospin SU(2) x SU(2) basis. The exact and pnRPA spectra of the model Hamiltonian are calculated as a function of the model parameters and compared to each other. Furthermore, charge-changing operators simulating a nuclear beta decay and their action on eigenfunctions of the model Hamiltonian are defined, and their transition amplitudes are calculated using exact and pnRPA wave functions. The best agreement between the exact and RPA-type calculations for spectra and transitions was obtained when the correlated RPA ground state, instead of the uncorrelated HF ground state, was employed and when both kinds of residual interactions (i.e. like- and unlike-particle two-body interactions) were included in the model Hamiltonians. (author)

  5. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Natalia MOROIANU-DUMITRESCU

    2015-06-01

    Full Text Available During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the New Member States from Central and Eastern Europe. The most common empirical approaches use β- and σ-convergence, originally developed in a series of neo-classical models. To date, the EU integration process has proven to be accompanied by an increase in regional inequalities. In order to determine the existence of a similar increase of the inequalities between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides an empirical modelling of economic convergence allowing evaluation of the level and evolution of the inter-regional inequalities over a period of more than a decade, from 1995 to 2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representation, including non-linear regression, and the associated tables summarizing numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasised convergence in the middle time subinterval can be correlated with the drastic pre-accession changes at the economic, political and social levels, and with the opening of the Schengen borders to the Romanian labor force in 2002.
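    A population-weighted coefficient of variation, one common σ-convergence style measure, can be computed as sketched below; the county figures are invented for illustration, and a falling value across years would indicate convergence.

```python
# Hedged sketch: population-weighted coefficient of variation of county GDP per
# capita (a common sigma-convergence / Williamson-index style measure). The
# county figures are made up for illustration.
import numpy as np

def weighted_cv(gdp_per_capita, population):
    w = population / population.sum()
    mean = np.sum(w * gdp_per_capita)
    var = np.sum(w * (gdp_per_capita - mean) ** 2)
    return np.sqrt(var) / mean

years = {                      # hypothetical NUTS3-level data (EUR per capita, persons)
    1995: (np.array([1800, 2500, 3100, 4000]), np.array([400e3, 350e3, 600e3, 900e3])),
    2011: (np.array([5200, 6800, 8100, 9900]), np.array([380e3, 340e3, 620e3, 950e3])),
}
for year, (gdp, pop) in years.items():
    print(year, f"weighted CV = {weighted_cv(gdp, pop):.3f}")
```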

  6. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than parametric prior distributions, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  7. An extended continuum model considering optimal velocity change with memory and numerical tests

    Science.gov (United States)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed with the consideration of optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation considering optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow efficiently. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantage of historical information, which increases the stability of traffic flow on the road, and so it improves traffic flow stability and minimizes cars' energy consumption.

  8. Compact extended model for doppler broadening of neutron absorption resonances in solids

    International Nuclear Information System (INIS)

    Villanueva, A. J; Granada, J.R

    2009-01-01

    We present a simplified compact model for calculating the Doppler broadening of neutron absorption resonances in an incoherent Debye solid. Our model extends the effective temperature gas model to cover the whole range of energies and temperatures, and reduces the information of the dynamical system to a minimum content compatible with a much better accuracy of the calculation. This model is thus capable of replacing the existing algorithm in standard codes for resonance cross-section preparation aimed at neutron and reactor physics calculations. The model is applied to the 238U 6.671 eV effective broadened cross section. We also show how this model can be used for thermometry in an improved fashion compared to the effective temperature gas model. Experimental data for the same resonance at low and high temperatures are also shown and the performance of each model is put to the test on this basis.
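    For reference, the baseline free-gas (effective-temperature) broadening that the compact model extends can be sketched as a convolution of a single-level Breit-Wigner resonance with a Gaussian of Doppler width Δ = sqrt(4·E0·kT_eff/A); the resonance parameters below are only approximate illustrative values.

```python
# Hedged sketch of free-gas (effective-temperature) Doppler broadening: a
# single-level Breit-Wigner shape convolved with a Gaussian of width
# Delta = sqrt(4*E0*kT_eff/A). Resonance parameters are illustrative,
# roughly in the range of the 238U 6.671 eV resonance.
import numpy as np

E0, Gamma, sigma0, A = 6.671, 0.026, 2.2e4, 238.0   # eV, eV, barns, mass number
kT_eff = 300.0 * 8.617e-5                           # effective temperature (~300 K) in eV

E = np.linspace(6.0, 7.3, 4001)
dE = E[1] - E[0]
bw0 = sigma0 * (Gamma / 2) ** 2 / ((E - E0) ** 2 + (Gamma / 2) ** 2)  # 0 K shape

delta = np.sqrt(4.0 * E0 * kT_eff / A)              # Doppler width (eV)
half = int(np.ceil(6 * delta / dE))
kernel = np.exp(-(np.arange(-half, half + 1) * dE / delta) ** 2) / (delta * np.sqrt(np.pi))

broadened = np.convolve(bw0, kernel, mode="same") * dE
print(f"Doppler width ~ {delta * 1e3:.1f} meV; "
      f"peak sigma: 0 K = {bw0.max():.0f} b, broadened = {broadened.max():.0f} b")
```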

  9. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is lack of empirical validation of competency concepts in organizations and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmation factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposal for organizations is discussed.

  10. Extended causal modeling to assess Partial Directed Coherence in multiple time series with significant instantaneous interactions.

    Science.gov (United States)

    Faes, Luca; Nollo, Giandomenico

    2010-11-01

    The Partial Directed Coherence (PDC) and its generalized formulation (gPDC) are popular tools for investigating, in the frequency domain, the concept of Granger causality among multivariate (MV) time series. PDC and gPDC are formalized in terms of the coefficients of an MV autoregressive (MVAR) model which describes only the lagged effects among the time series and forsakes instantaneous effects. However, instantaneous effects are known to affect linear parametric modeling, and are likely to occur in experimental time series. In this study, we investigate the impact on the assessment of frequency domain causality of excluding instantaneous effects from the model underlying PDC evaluation. Moreover, we propose the utilization of an extended MVAR model including both instantaneous and lagged effects. This model is used to assess PDC either in accordance with the definition of Granger causality when considering only lagged effects (iPDC), or with an extended form of causality, when we consider both instantaneous and lagged effects (ePDC). The approach is first evaluated on three theoretical examples of MVAR processes, which show that the presence of instantaneous correlations may produce misleading profiles of PDC and gPDC, while ePDC and iPDC derived from the extended model provide here a correct interpretation of extended and lagged causality. It is then applied to representative examples of cardiorespiratory and EEG MV time series. They suggest that ePDC and iPDC are better interpretable than PDC and gPDC in terms of the known cardiovascular and neural physiologies.
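    As background for the comparison, the sketch below estimates PDC from a strictly causal (lagged-only) MVAR model fitted by least squares; the extended model with instantaneous effects proposed in the paper is not reproduced.

```python
# Hedged sketch: Partial Directed Coherence from a strictly causal MVAR model
# fitted by ordinary least squares (lagged-only baseline; no instantaneous
# effects). Data are simulated with a known y1 -> y2 coupling.
import numpy as np

rng = np.random.default_rng(3)
N, p = 2000, 2
y = np.zeros((N, 2))
for t in range(p, N):
    y[t, 0] = 0.5 * y[t - 1, 0] + rng.normal(0, 1)
    y[t, 1] = 0.4 * y[t - 1, 1] + 0.6 * y[t - 1, 0] + rng.normal(0, 1)

# Fit MVAR(p): y_t = sum_k A_k y_{t-k} + e_t.
X = np.hstack([y[p - k:N - k] for k in range(1, p + 1)])   # (N-p, 2p)
Y = y[p:]
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (2p, 2)
A = [coefs[2 * (k - 1):2 * k].T for k in range(1, p + 1)]  # list of (2, 2) A_k

def pdc(A, f):
    """Column-normalized PDC matrix at normalized frequency f (0..0.5)."""
    Af = np.eye(2, dtype=complex)
    for k, Ak in enumerate(A, start=1):
        Af -= Ak * np.exp(-2j * np.pi * f * k)
    denom = np.sqrt((np.abs(Af) ** 2).sum(axis=0))
    return np.abs(Af) / denom

P = pdc(A, 0.1)
print("PDC(1->2) =", round(P[1, 0], 3), " PDC(2->1) =", round(P[0, 1], 3))
```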

  11. Extended spider cognition.

    Science.gov (United States)

    Japyassú, Hilton F; Laland, Kevin N

    2017-05-01

    There is a tension between the conception of cognition as a central nervous system (CNS) process and a view of cognition as extending towards the body or the contiguous environment. The centralised conception requires large or complex nervous systems to cope with complex environments. Conversely, the extended conception involves the outsourcing of information processing to the body or environment, thus making fewer demands on the processing power of the CNS. The evolution of extended cognition should be particularly favoured among small, generalist predators such as spiders, and here, we review the literature to evaluate the fit of empirical data with these contrasting models of cognition. Spiders do not seem to be cognitively limited, displaying a large diversity of learning processes, from habituation to contextual learning, including a sense of numerosity. To tease apart the central from the extended cognition, we apply the mutual manipulability criterion, testing the existence of reciprocal causal links between the putative elements of the system. We conclude that the web threads and configurations are integral parts of the cognitive systems. The extension of cognition to the web helps to explain some puzzling features of spider behaviour and seems to promote evolvability within the group, enhancing innovation through cognitive connectivity to variable habitat features. Graded changes in relative brain size could also be explained by outsourcing information processing to environmental features. More generally, niche-constructed structures emerge as prime candidates for extending animal cognition, generating the selective pressures that help to shape the evolving cognitive system.

  12. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

    Full Text Available Abstract Background Microarray data enables the high-throughput survey of mRNA expression profiles at the genomic level; however, the data presents a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene-by-gene models. Results As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure which can be regarded as an 'evidence-based' weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, f_β(y_t|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10^-5) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model.
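
    To make the weighting idea concrete, here is a deliberately simplified sketch for a one-dimensional Gaussian model: each observation receives a weight proportional to a power β of its likelihood, so low-likelihood outliers barely influence the fitted parameters. This illustrates only the robustness mechanism, not the paper's hierarchical β-EB procedure; the full method uses β-likelihood estimating equations (whose correction terms are omitted here) and selects β by cross-validation. All names and the toy data are assumptions.

```python
# Minimal sketch of beta-likelihood weighting for a Gaussian model (illustrative only).
import numpy as np

def gaussian_pdf(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def beta_weighted_gaussian_fit(y, beta, n_iter=50):
    """Iteratively re-weighted estimates of (mean, sd); weight_t is proportional to f(y_t|theta)^beta."""
    mu, sd = np.median(y), np.std(y)
    for _ in range(n_iter):
        w = gaussian_pdf(y, mu, sd) ** beta
        w /= w.sum()
        mu = np.sum(w * y)
        # Simplified re-weighted scale update; the full beta-likelihood estimating
        # equations include a correction factor that is omitted in this sketch.
        sd = np.sqrt(np.sum(w * (y - mu) ** 2))
    return mu, sd, w

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 0.5, 5)])   # 5% outliers
mu_b, sd_b, w = beta_weighted_gaussian_fit(y, beta=0.3)
print(round(mu_b, 2), round(sd_b, 2))   # location stays near 0 despite the contamination;
                                        # the scale is slightly shrunk by the simplification above
print(w[-5:])                           # the outliers receive near-zero weight
```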

  13. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    Science.gov (United States)

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217
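
    The comparison of the two health-dynamics measures can be sketched as follows, assuming a toy longitudinal panel with illustrative column names (not the actual HRS variable names): the "computed" change is the signed difference between consecutive self-reports of health status, and it is contrasted with the directly reported change.

```python
# Minimal sketch (toy data, assumed column names) of comparing reported versus
# computed health changes and tabulating the share of inconsistent reports.
import numpy as np
import pandas as pd

# Toy panel: health status coded 1 (poor) .. 5 (excellent); reported_change coded
# -1 (worse), 0 (same), +1 (better) relative to the previous wave.
df = pd.DataFrame({
    "person": [1, 1, 1, 2, 2, 2],
    "wave":   [1, 2, 3, 1, 2, 3],
    "health": [4, 4, 3, 2, 3, 3],
    "reported_change": [np.nan, 0, -1, np.nan, 0, 1],  # person 2's reports disagree with the levels
})

df = df.sort_values(["person", "wave"])
df["computed_change"] = np.sign(df.groupby("person")["health"].diff())

valid = df.dropna(subset=["reported_change", "computed_change"])
inconsistent = (valid["reported_change"] != valid["computed_change"]).mean()
print(f"share of inconsistent reports: {inconsistent:.0%}")
```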

  14. An empirical model for estimating solar radiation in the Algerian Sahara

    Science.gov (United States)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work aims to apply the empirical R.sun model to estimate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N, 0°11 W), and to compare them with measurements taken at the site. The expected results of this comparison are of importance for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), the root mean square error (RMSE) and the coefficient of determination. The results show that, for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44 %, the daily mean bias error is -7.94 % and the daily root mean square error is 12.31 %.
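
    The validation statistics quoted above can be reproduced with a few lines of code; the sketch below (helper name and placeholder data are assumptions, not from the paper) computes the relative MBE, relative RMSE and the coefficient of determination for modelled versus measured daily irradiation.

```python
# Minimal sketch of the validation statistics: relative MBE (%), relative RMSE (%)
# and coefficient of determination. Arrays are illustrative placeholders.
import numpy as np

def validation_stats(measured, modelled):
    measured, modelled = np.asarray(measured, float), np.asarray(modelled, float)
    diff = modelled - measured
    mbe_pct = 100.0 * diff.mean() / measured.mean()                   # relative mean bias error
    rmse_pct = 100.0 * np.sqrt((diff ** 2).mean()) / measured.mean()  # relative root mean square error
    ss_res = np.sum(diff ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                                        # coefficient of determination
    return mbe_pct, rmse_pct, r2

measured = np.array([6.1, 6.4, 6.8, 7.2, 7.0, 6.6])   # kWh/m^2/day, placeholder values
modelled = np.array([5.8, 6.0, 6.3, 6.7, 6.5, 6.2])
print(validation_stats(measured, modelled))
```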

  15. A semi-empirical molecular orbital model of silica, application to radiation compaction

    International Nuclear Information System (INIS)

    Tasker, P.W.

    1978-11-01

    Semi-empirical molecular-orbital theory is used to calculate the bonding in a cluster of two SiO4 tetrahedra, with the outer bonds saturated with pseudo-hydrogen atoms. The basic properties of the cluster, bond energies and band gap, are calculated using a very simple parameterisation scheme. The resulting cluster is used to study the rebonding that occurs when an oxygen vacancy is created. It is suggested that a vacancy model is capable of producing the observed differences between quartz and vitreous silica, and the calculations show that the compaction effect observed in the glass is of a magnitude compatible with the relaxations around the vacancy. More detailed lattice models will be needed to examine this mechanism further. (author)

  16. Use of empirically based corrosion model to aid steam generator life management

    Energy Technology Data Exchange (ETDEWEB)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W.

    2000-07-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both the susceptibility to and the rate of corrosion, and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combining such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that, under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  17. Use of empirically based corrosion model to aid steam generator life management

    International Nuclear Information System (INIS)

    Angell, P.; Balakrishnan, P.V.; Turner, C.W.

    2000-01-01

    Alloy 800 (N08800) tubes used in CANDU 6 steam generators have shown a low incidence of corrosion damage because of the good corrosion resistance of N08800 and successful water chemistry control strategies. However, N08800 is not immune to corrosion, especially pitting, under plausible SG conditions. Electrochemical potentials are critical in determining both the susceptibility to and the rate of corrosion, and are known to be a function of water chemistry. Using laboratory data, an empirical model for pitting and crevice corrosion has been developed for N08800. Combining such a model with chemistry monitoring and diagnostic software makes it possible to assess the impact of plant operating conditions on SG tube corrosion for plant life management (PLIM). Possible transient chemistry regimes that could significantly shorten expected tube lifetimes have been identified, and predictions continue to support the position that, under normal, low dissolved oxygen conditions, pitting of N08800 will not initiate. (author)

  18. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    CERN Document Server

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain results in a significant spread in the single-bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch's position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios that result in different peak and integrated luminosities. The most important options for the periods after Long Shutdowns (LS) 1 and 2 are evaluated and compared.

  19. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Full Text Available Modeling forest fire behavior is an important task that can be used to assist in fire prevention and suppression operations. However, according to previous studies, the fire behavior models in common use worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were performed, in which the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured. Dependent and independent variables were fitted through multiple regression analysis. The proposed rate of spread model is based on the wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r² = 0.86); the flame length model is based on the fuel bed depth, 1-h dead fuel moisture content and wind speed (r² = 0.72); the proposed fuel consumption model has the 1-h dead fuel moisture content, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r² = 0.80). These models were used to develop a new fire behavior software package, the “Eucalyptus Fire Safety System”.
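
    As an illustration of the fitting procedure described (a sketch only -- the placeholder data and the assumed linear functional form are not from the study), the rate-of-spread regression on wind speed, fuel bed bulk density and 1-h dead fuel moisture content could be set up as follows.

```python
# Minimal sketch: multiple regression of fire rate of spread on the predictors
# named in the abstract. Data are synthetic placeholders; the study's actual
# functional form and coefficients are not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
n = 105                                     # the study used 105 experimental burns
wind = rng.uniform(0.0, 4.0, n)             # wind speed, m/s
bulk_density = rng.uniform(10.0, 40.0, n)   # fuel bed bulk density, kg/m^3
moisture = rng.uniform(5.0, 15.0, n)        # 1-h dead fuel moisture content, %
# Synthetic response just to make the example runnable
ros = 0.8 + 0.5 * wind - 0.02 * bulk_density - 0.03 * moisture + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), wind, bulk_density, moisture])
coef, *_ = np.linalg.lstsq(X, ros, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((ros - pred) ** 2) / np.sum((ros - ros.mean()) ** 2)
print("coefficients:", coef.round(3), "r^2:", round(r2, 3))
```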

  20. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    Science.gov (United States)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors, using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling of a transistor are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effects found in some variants of HEMT devices. A hybrid Newton's-genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is extracted from multi-bias s-parameter measurements, using a hybrid of a multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
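
    A hybrid global-plus-local extraction of this kind can be sketched as follows; here differential evolution stands in for the genetic stage, a gradient-based local minimiser for the Newton stage, and a generic tanh-based drain-current expression for the Fager-based model, so every name, bound and data point is an assumption made for the example.

```python
# Minimal sketch of hybrid global + local parameter extraction for an empirical
# drain-current model. Illustrative only; not the paper's model or algorithm.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def ids_model(params, vgs, vds):
    ipk, a, vt, b = params
    return ipk * (1.0 + np.tanh(a * (vgs - vt))) * np.tanh(b * vds)

def sse(params, vgs, vds, ids_meas):
    return np.sum((ids_model(params, vgs, vds) - ids_meas) ** 2)

# Synthetic "measured" I-V surface standing in for DC measurements
vgs, vds = np.meshgrid(np.linspace(-2.0, 0.0, 11), np.linspace(0.0, 10.0, 21))
true_params = (0.25, 2.0, -1.0, 0.6)
ids_meas = ids_model(true_params, vgs, vds) + np.random.default_rng(3).normal(0, 1e-3, vgs.shape)

bounds = [(0.01, 1.0), (0.1, 5.0), (-3.0, 0.0), (0.1, 2.0)]
# Stage 1: global (genetic-style) search over the parameter bounds
global_fit = differential_evolution(sse, bounds, args=(vgs, vds, ids_meas), seed=3, tol=1e-8)
# Stage 2: local refinement from the global optimum (BFGS in place of Newton's method)
local_fit = minimize(sse, global_fit.x, args=(vgs, vds, ids_meas), method="BFGS")
print("extracted parameters:", np.round(local_fit.x, 3))
```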