University staff adoption of iPads: An empirical study using an extended TAM model
Michael Steven Lane
2014-11-01
This research examined key factors influencing adoption of iPads by university staff. An online survey collected quantitative data to test hypothesised relationships in an extended TAM model. The findings show that university staff consider iPads easy to use and useful, with a high level of compatibility with their work. Social status had no influence on their attitude to using an iPad. However, older university staff and university staff with no previous experience in using a similar technology, such as an iPhone or smartphone, found iPads less easy to use. Furthermore, a lack of formal end-user ICT support impacted negatively on the use of iPads.
Brax, Philippe; Tamanini, Nicola
2016-05-01
We extend the chameleon models by considering scalar-fluid theories where the coupling between matter and the scalar field can be represented by a quadratic effective potential with density-dependent minimum and mass. In this context, we study the effects of the scalar field on Solar System tests of gravity and show that models passing these stringent constraints can still induce large modifications of Newton's law on galactic scales. On these scales we analyze models which could lead to a percent deviation of Newton's law outside the virial radius. We then model the dark matter halo as a Navarro-Frenk-White profile and explicitly find that the fifth force can give large contributions around the galactic core in a particular model where the scalar field mass is constant and the minimum of its potential varies linearly with the matter density. At cosmological distances, we find that this model does not alter the growth of large scale structures and therefore would be best tested on galactic scales, where interesting signatures might arise in the galaxy rotation curves.
Fender, R P; Belloni, T M
2009-01-01
In this paper we study the relation of radio emission to X-ray spectral and variability properties for a large sample of black hole X-ray binary systems. This is done to test, refine and extend -- notably into the timing properties -- the previously published `unified model' for the coupling of accretion and ejection in such sources. In 14 outbursts from 11 different sources we find that in every case the peak radio flux, on occasion directly resolved into discrete relativistic ejections, is associated with the bright hard to soft state transition near the peak of the outburst. We also note the association of the radio flaring with periods of X-ray flaring during this transition in most, but not all, of the systems. In the soft state, radio emission is in nearly all cases either undetectable or optically thin, consistent with the suppression of the core jet in these states and `relic' radio emission from interactions of previously ejected material and the ambient medium. However, these data cannot rule out an...
Integrable extended van der Waals model
Giglio, Francesco; Landolfi, Giulio; Moro, Antonio
2016-10-01
Inspired by the recent developments in the study of the thermodynamics of van der Waals fluids via the theory of nonlinear conservation laws and the description of phase transitions in terms of classical (dissipative) shock waves, we propose a novel approach to the construction of multi-parameter generalisations of the van der Waals model. The theory of integrable nonlinear conservation laws still represents the inspiring framework. Starting from a macroscopic approach, a four-parameter family of integrable extended van der Waals models is constructed in such a way that the equation of state is a solution to an integrable nonlinear conservation law linearisable by a Cole-Hopf transformation. This family is further specified by the requirement that, in the regime of high temperature, far from the critical region, the extended model asymptotically reproduces the standard van der Waals equation of state. We provide a detailed comparison of our extended model with two notable empirical models, the Peng-Robinson equation of state and Soave's modification of the Redlich-Kwong equation of state. We show that our extended van der Waals equation of state is compatible with both empirical models for a suitable choice of the free parameters and can be viewed as a master interpolating equation. The present approach also suggests that further generalisations can be obtained by including the class of dispersive and viscous-dispersive nonlinear conservation laws and could lead to new types of thermodynamic phase transitions associated with nonclassical and dispersive shock waves.
Empirical Vector Autoregressive Modeling
M. Ooms (Marius)
1993-01-01
Chapter 2 introduces the baseline version of the VAR model, with its basic statistical assumptions that we examine in the sequel. We first check whether the variables in the VAR can be transformed to meet these assumptions. We analyze the univariate characteristics of the series. Import
Extending reference assembly models
Church, Deanna M.; Schneider, Valerie A.; Steinberg, Karyn Meltz
2015-01-01
The human genome reference assembly is crucial for aligning and analyzing sequence data, and for genome annotation, among other roles. However, the models and analysis assumptions that underlie the current assembly need revising to fully represent human sequence diversity. Improved analysis tools...
Empirical Model Building Data, Models, and Reality
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
Extended UML with Role Modeling
(author not listed)
2001-01-01
UML is widely accepted and applied by the international software industry. UML is a powerful language for object-oriented modeling, designing, and implementing software systems, but its Use-Case method for requirement analysis and modeling software patterns has some explicit drawbacks. For a more complete UML, this paper proposes Role Use-Case modeling and its glyphs, and provides an instance of requirement analysis using the Role Use-Case method. It uses the Role Model to model software patterns at the knowledge level. This paper also extends the UML Meta Model and accentuates "RM before UML's class modeling".
Empirically Based, Agent-based models
Elinor Ostrom
2006-12-01
There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.
Dicyanometallates as Model Extended Frameworks
2016-01-01
We report the structures of eight new dicyanometallate frameworks containing molecular extra-framework cations. These systems include a number of hybrid inorganic–organic analogues of conventional ceramics, such as Ruddlesden–Popper phases and perovskites. The structure types adopted are rationalized in the broader context of all known dicyanometallate framework structures. We show that the structural diversity of this family can be understood in terms of (i) the charge and coordination preferences of the particular metal cation acting as framework node, and (ii) the size, shape, and extent of incorporation of extra-framework cations. In this way, we suggest that dicyanometallates form a particularly attractive model family of extended frameworks in which to explore the interplay between molecular degrees of freedom, framework topology, and supramolecular interactions. PMID:27057759
Empirical data validation for model building
Kazarian, Aram
2008-03-01
Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g., illumination source, wavelength, lens characteristics, etc.) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining
MIRROR EXTENDING AND CIRCULAR SPLINE FUNCTION FOR EMPIRICAL MODE DECOMPOSITION METHOD
(author not listed)
2001-01-01
The Mirror Extending (ME) approach is proposed in this paper to solve the end-extension issue in the Empirical Mode Decomposition (EMD) method. With this approach, the data are extended into a closed circuit without ends, so the derivatives at the ends are no longer necessary for spline fitting. The approach eliminates the possible problems in reliability and uniqueness in the original extending approach of the EMD method. In the ME approach only one extension is necessary before the data analysis. A theoretical criterion is proposed here for checking the extending approach, and the ME approach has been proved to satisfy it automatically and permanently. This approach makes the EMD method reliable and easy to follow.
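The mirror-extension idea can be sketched numerically: reflecting the signal about both end points yields an extended record on which a spline can be fitted without end-point derivative conditions. The fragment below is an illustrative sketch only, not the authors' implementation; the helper name `mirror_extend` is hypothetical and NumPy is assumed.

```python
import numpy as np

def mirror_extend(x):
    """Reflect the signal about its first and last samples (excluding the
    end points themselves, to avoid duplicating them), as one simple
    reading of the ME extension."""
    x = np.asarray(x, dtype=float)
    left = x[1:][::-1]    # mirror image placed before the first sample
    right = x[:-1][::-1]  # mirror image placed after the last sample
    return np.concatenate([left, x, right])

sig = np.array([0.0, 1.0, 0.5, -0.3])
ext = mirror_extend(sig)
```

The extended record is symmetric about each original end point, which is what removes the need for boundary derivatives when fitting a spline envelope in EMD.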
Extended Analysis of Empirical Citations with Skinner's "Verbal Behavior": 1984-2004
Dixon, Mark R.; Small, Stacey L.; Rosales, Rocio
2007-01-01
The present paper comments on and extends the citation analysis of verbal operant publications based on Skinner's "Verbal Behavior" (1957) by Dymond, O'Hora, Whelan, and O'Donovan (2006). Variations in population parameters were evaluated for only those studies that Dymond et al. categorized as empirical. Preliminary results indicate that the…
Developing Empirically Based Models of Practice.
Blythe, Betty J.; Briar, Scott
1985-01-01
Over the last decade emphasis has shifted from theoretically based models of practice to empirically based models whose elements are derived from clinical research. These models are defined and a developing model of practice through the use of single-case methodology is examined. Potential impediments to this new role are identified. (Author/BL)
5-Dimensional Extended Space Model
Tsipenyuk, D. Yu.; Andreev, V. A.
2006-01-01
We put forward an idea that physical phenomena have to be treated in 5-dimensional space where the fifth coordinate is the interval S. Thus, we considered the (1+4) extended space G(T;X,Y,Z,S). In addition to the Lorentz transformations (T;X), (T;Y), (T;Z) of (1+3)-dimensional Minkowski space, in the proposed (1+4)d extended space two other types of transformations exist in the planes (T,S), (X,S), (Y,S), (Z,S) that convert massive particles into massless ones and vice versa. We also consider e...
Model uncertainty in growth empirics
Prüfer, P.
2008-01-01
This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high gro
Extending models for two-dimensional constraints
Forchhammer, Søren
2009-01-01
Random fields in two dimensions may be specified on 2 × 2 elements such that the probabilities of finite configurations and the entropy may be calculated explicitly. The Pickard random field is one example, where the probability of a new (non-boundary) element is conditioned on three previous elements. To extend the concept, we consider extending such a field so that a vector or block of elements is conditioned on a larger set of previous elements. Given a stationary model defined on 2 × 2 elements, iterative scaling is used to define the extended model. The extended model may be used...
An Extended Analysis of Requirements Traceability Model
Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu
2004-01-01
A new extended meta model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study that comes from a key project of 863 Program is given.
STUDY OF NEUROSES: III AN EMPIRICAL MODEL*
Bhatti, Ranbir S.; Channabasavanna, S.M.
1986-01-01
The empirical model presented in this paper is based on observations made on 60 neurotics and 60 normals matched at the individual level. Efforts are made to use the systems approach to present this paradigm, synthesising both individual and environmental resources. We are of the opinion that this model is not only useful in understanding the genesis of neuroses but also has utility at the intervention level.
Empirical intrinsic geometry for nonlinear modeling and time series filtering.
Talmon, Ronen; Coifman, Ronald R
2013-07-30
In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.
Empirical Analysis of Xinjiang's Bilateral Trade: Gravity Model Approach
CHEN Xuegang; YANG Zhaoping; LIU Xuling
2008-01-01
Based on the basic trade gravity model and Xinjiang's practical situation, new explanatory variables (GDP, GDPpc and SCO) are introduced to build an extended trade gravity model fitting Xinjiang's bilateral trade. From the empirical analysis of this model, it is proposed that those three variables affect Xinjiang's bilateral trade positively, whereas geographic distance is found to be a significant factor influencing Xinjiang's bilateral trade negatively. Then, using the extended trade gravity model, this article quantitatively analyzes the trade situation between Xinjiang and its main trade partners in 2004. The results indicate that Xinjiang cooperates successfully with most of its trade partners in terms of present economic scale and development level. Xinjiang has successfully established trade partnerships with Central Asia, Central and Eastern Europe, Western Europe, East Asia and South Asia. However, foreign trade development with West Asia is much slower. Finally, some suggestions on developing Xinjiang's foreign trade are put forward.
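An extended gravity model of the kind the abstract describes is typically estimated as a log-linear regression. The sketch below is a generic illustration on synthetic numbers, not data from the study; the variable names and the SCO-membership dummy are assumptions about the specification.

```python
import numpy as np

# Synthetic partner-country data (illustrative only):
# trade ~ exp(b0) * GDP^b1 * GDPpc^b2 * exp(b3*SCO) * dist^b4
gdp   = np.array([1.2e12, 4.0e11, 2.3e12, 9.0e11, 5.5e11, 1.8e12])
gdppc = np.array([9e3, 3e3, 1.2e4, 6e3, 4e3, 1.0e4])
sco   = np.array([1, 1, 0, 0, 1, 0])      # SCO membership dummy (assumed)
dist  = np.array([1.5e3, 9.0e2, 3.2e3, 2.1e3, 1.1e3, 2.8e3])
trade = np.array([5.1e8, 2.2e8, 3.9e8, 1.4e8, 2.8e8, 4.5e8])

# Taking logs linearises the gravity equation, so ordinary least squares applies.
X = np.column_stack([np.ones(6), np.log(gdp), np.log(gdppc), sco, np.log(dist)])
beta, *_ = np.linalg.lstsq(X, np.log(trade), rcond=None)
```

A positive sign on the GDP, GDPpc and SCO coefficients and a negative sign on log-distance would correspond to the findings reported above.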
Characterising and modelling extended conducted electromagnetic emission
Grobler, Inus
2013-06-01
Presented at the 2013 IEEE Energy Conversion Congress and Exposition Asia (ECCE Downunder), Melbourne, Australia, 3-6 June 2013, by I. Grobler and M.N. Gitau, Department of Electrical...
An empirical model of tropical ocean dynamics
Newman, Matthew; Scott, James D. [University of Colorado, CIRES Climate Diagnostics Center, Boulder, CO (United States); NOAA Earth System Research Laboratory, Physical Sciences Division, Boulder, CO (United States); Alexander, Michael A. [NOAA Earth System Research Laboratory, Physical Sciences Division, Boulder, CO (United States)
2011-11-15
To extend the linear stochastically forced paradigm of tropical sea surface temperature (SST) variability to the subsurface ocean, a linear inverse model (LIM) is constructed from the simultaneous and 3-month lag covariances of observed 3-month running mean anomalies of SST, thermocline depth, and zonal wind stress. This LIM is then used to identify the empirically-determined linear dynamics with physical processes to gauge their relative importance to ENSO evolution. Optimal growth of SST anomalies over several months is triggered by both an initial SST anomaly and a central equatorial Pacific thermocline anomaly that propagates slowly eastward while leading the amplifying SST anomaly. The initial SST and thermocline anomalies each produce roughly half the SST amplification. If interactions between the sea surface and the thermocline are removed in the linear dynamical operator, the SST anomaly undergoes less optimal growth but is also more persistent, and its location shifts from the eastern to central Pacific. Optimal growth is also found to be essentially the result of two stable eigenmodes with similar structure but differing 2- and 4-year periods evolving from initial destructive to constructive interference. Variations among ENSO events could then be a consequence not of changing stability characteristics but of random excitation of these two eigenmodes, which represent different balances between surface and subsurface coupled dynamics. As found in previous studies, the impact of the additional variables on LIM SST forecasts is relatively small for short time scales. Over time intervals greater than about 9 months, however, the additional variables both significantly enhance forecast skill and predict lag covariances and associated power spectra whose closer agreement with observations enhances the validation of the linear model. Moreover, a secondary type of optimal growth exists that is not present in a LIM constructed from SST alone, in which initial SST
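The core linear inverse model (LIM) construction, estimating a lag-τ propagator from simultaneous and lagged covariances, can be sketched on toy data. This is a generic illustration of the LIM recipe, not the authors' code; the anomaly series here is random noise standing in for observed SST, thermocline-depth, and wind-stress anomalies, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 3))  # toy anomaly time series: (time, variables)
tau = 3                            # lag in time steps (3-month lag in the paper)

n = len(x) - tau
C0   = x[:-tau].T @ x[:-tau] / n   # simultaneous covariance matrix
Ctau = x[tau:].T @ x[:-tau] / n    # tau-lag covariance matrix

# Lag-tau propagator of the LIM: G = C(tau) C(0)^{-1}
G = Ctau @ np.linalg.inv(C0)
forecast = x[-1] @ G.T             # linear forecast tau steps ahead of the last state
```

With real anomalies, the eigenmodes of G (or of the corresponding dynamical operator) are what the study interrogates for optimal growth and ENSO-like periods; with white noise, G is simply close to zero.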
Hyland, Michael E
2003-12-01
Extended Network Generalized Entanglement Theory (Entanglement Theory for short) combines two earlier theories based on complexity theory and quantum mechanics. The theory's assumptions are: the body is a complex, self-organizing system (the extended network) that self-organizes so as to achieve genetically defined patterns (where patterns include morphologic as well as lifestyle patterns). These pattern-specifying genes require feedback that is provided by generalized quantum entanglement. Additionally, generalized entanglement has evolved as a form of communication between people (and animals) and can be used in healing. Entanglement Theory suggests that several processes are involved in complementary and alternative medicine (CAM). Direct subtle therapy creates network change either through lifestyle management, some manual therapies, and psychologically mediated effects of therapy. Indirect subtle therapy is a process of entanglement with other people or physical entities (e.g., remedies, healing sites). Both types of subtle therapy create two kinds of information within the network--either that the network is more dysregulated than it is, and the network then compensates for this error, or as a guide for network change leading to healing. Most CAM therapies involve a combination of indirect and direct therapies, making empirical evaluation complex. Empirical predictions from this theory are contrasted with those from two other possible mechanisms of healing: (1) psychologic processes and (2) mechanisms involving electromagnetic influence between people (biofield/energy medicine). Topics for empirical study include a hyperfast communication system, the phenomenology of entanglement, predictors of outcome in naturally occurring clinical settings, and the importance of therapist and patient characteristics to outcome.
Empirical generalization assessment of neural network models
Larsen, Jan; Hansen, Lars Kai
1995-01-01
This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest using the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model among competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1...
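The N = Nγ + N(1-γ) split described above can be sketched concretely: hold out a fraction γ for validation, fit on the remainder, and use the held-out error as the empirical generalization estimate. This is a minimal illustration on synthetic linear data, not the paper's neural-network setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma = 200, 0.3
X = rng.standard_normal((N, 2))
y = X @ np.array([1.5, -0.7]) + 0.1 * rng.standard_normal(N)

n_val = int(gamma * N)                 # cross-validation set of size N*gamma
X_val, y_val = X[:n_val], y[:n_val]
X_tr,  y_tr  = X[n_val:], y[n_val:]    # training set of size N*(1-gamma)

# Fit on the training split only, then estimate generalization on the held-out split.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
val_mse = np.mean((X_val @ w - y_val) ** 2)
```

Repeating the split under resampling, as the abstract suggests, would turn the single `val_mse` value into an estimate of the generalization performance distribution.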
An Extended Chiral SU(3) Quark Model
ZHANG Zong-Ye; YU You-Wen; WANG Ping; DAI Lian-Rong
2003-01-01
The chiral SU(3) quark model is extended by including vector meson exchanges to describe the short-range interactions. The phase shifts of NN scattering are studied in this model. Compared with the results of the chiral SU(3) quark model in which only the pseudo-scalar and scalar chiral fields are considered, the phase shifts of the 1S0 wave are obviously improved.
A review of wildland fire spread modelling, 1990-present 2: Empirical and quasi-empirical models
Sullivan, A L
2007-01-01
In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of an empirical or quasi-empirical nature. These models are based solely on the statistical analysis of experimentally obtained data with or without some physical framework for the basis of the relations. Other papers in the series review models of a physical or quasi-physical nature, and mathematical analogues and simulation models. The main relations of empirical models are that of wind speed and fuel moisture content with rate of forward spread. Comparisons are made of the different functional relationships selected by various authors for these variables.
Empirical correction of a toy climate model
Allgaier, Nicholas A; Danforth, Christopher M
2011-01-01
Improving the accuracy of forecast models for physical systems such as the atmosphere is a crucial ongoing effort. Errors in state estimation for these often highly nonlinear systems have been the primary focus of recent research, but as that error has been successfully diminished, the role of model error in forecast uncertainty has duly increased. The present study is an investigation of a particular empirical correction procedure that is of special interest because it considers the model a "black box", and therefore can be applied widely with little modification. The procedure involves the comparison of short model forecasts with a reference "truth" system during a training period in order to calculate systematic (1) state-independent model bias and (2) state-dependent error patterns. An estimate of the likelihood of the latter error component is computed from the current state at every timestep of model integration. The effectiveness of this technique is explored in two experiments: (1) a perfect model scen...
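The state-independent part of the procedure amounts to estimating the mean forecast bias against the reference "truth" during training and subtracting it thereafter. A minimal sketch, with synthetic data standing in for both the black-box model and the truth system:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.standard_normal(100)                 # reference "truth" during training
model_forecast = truth + 0.5 + 0.1 * rng.standard_normal(100)  # biased black-box output

# State-independent bias: mean discrepancy between truth and forecast.
bias = np.mean(truth - model_forecast)
corrected = model_forecast + bias                # bias-corrected forecast
```

The state-dependent component in the paper goes further, fitting patterns in the residual error as a function of the current state, but the same train-then-correct structure applies.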
Analysis of Empirical Software Effort Estimation Models
Basha, Saleem
2010-01-01
Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction, and, as the term indicates, a prediction never becomes an actual. This work follows the basics of empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...
Selfish mothers? An empirical test of parent-offspring conflict over extended parental care.
Paul, Manabi; Sen Majumder, Sreejani; Bhadra, Anindita
2014-03-01
Parent-offspring conflict (POC) theory is an interesting conceptual framework for understanding the dynamics of parental care. However, this theory is not easy to test empirically, as exact measures of parental investment in an experimental set-up are difficult to obtain. We have used free-ranging dogs Canis familiaris in India, to study POC in the context of extended parental care. We observed females and their pups in their natural habitat for the mother's tendency to share food given by humans with her pups in the weaning and post-weaning stages. Since these dogs are scavengers, and depend largely on human provided food for their sustenance, voluntary sharing of food by the mother with her pups is a good surrogate for extended parental care. Our behavioural observations convincingly demonstrate an increase of conflict and decrease of cooperation by the mother with her offspring over given food within a span of 4-6 weeks. We also demonstrate that the competition among the pups in a litter scales with litter size, an indicator of sib-sib competition.
An empirical behavioral model of price formation
Mike, S
2005-01-01
Although behavioral economics has demonstrated that there are many situations where rational choice is a poor empirical model, it has so far failed to provide quantitative models of economic problems such as price formation. We make a step in this direction by developing empirical models that capture behavioral regularities in trading order placement and cancellation using data from the London Stock Exchange. For order placement we show that the probability of placing an order at a given price is well approximated by a Student distribution with less than two degrees of freedom, centered on the best quoted price. This result is surprising because it implies that trading order placement is symmetric, independent of the bid-ask spread, and the same for buying and selling. We also develop a crude but simple cancellation model that depends on the position of an order relative to the best price and the imbalance between buying and selling orders in the limit order book. These results are combined to construct a sto...
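The reported order-placement regularity, prices drawn from a heavy-tailed Student distribution centred on the best quote, can be sketched as follows. The scale factor and the degrees-of-freedom value below are illustrative assumptions consistent with "less than two degrees of freedom", not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
best_price = 100.0
nu = 1.5           # fewer than two degrees of freedom, as the abstract reports

# Price offsets relative to the best quote, drawn from a Student-t distribution;
# the 0.5 scale is an arbitrary choice for illustration.
offsets = 0.5 * rng.standard_t(nu, size=10_000)
order_prices = best_price + offsets
```

With ν < 2 the distribution has no finite variance, so the simulated order book receives occasional placements very far from the best quote, which is exactly the heavy-tailed behaviour the empirical model captures.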
Exploring Social Structures in Extended Team Model
Zahedi, Mansooreh; Ali Babar, Muhammad
2013-01-01
Extended Team Model (ETM), a type of offshore outsourcing, is becoming an increasingly popular mode of Global Software Development (GSD). There is little knowledge about the social structures in ETM and their impact on collaboration. Within a large interdisciplinary project to develop the next gene...
Assessing change with the extended logistic model.
Cristante, Francesca; Robusto, Egidio
2007-11-01
The purpose of this article is to define a method for the assessment of change. A reinterpretation of the extended logistic model is proposed. The extended logistic model for the assessment of change (ELMAC) allows the definition of a time parameter which is supposed to identify whether change occurs during a period of time, given a specific event or phenomenon. The assessment of a trend of change through time, on the basis of the time parameter which is estimated at different successive occasions during a period of time, is also considered. In addition, a dispersion parameter is calculated which identifies whether change is consistent at each time point. The issue of independence is taken into account both in relation to the time parameter and the dispersion parameter. An application of the ELMAC in a learning process is presented. The interpretation of the model parameters and the model fit statistics is consistent with expectations.
An Extended Colored Zee-Babu Model
Nomura, Takaaki
2016-01-01
We study the extended colored Zee-Babu model, introducing a vector-like quark and a singlet scalar. The active neutrino mass matrix and the muon anomalous magnetic moment are analyzed, and can be fitted to experimental data while satisfying the constraints from flavor-changing neutral currents. Then we discuss the signature of our model via vector-like quark production. In addition, the diphoton excess can be explained with the contribution from the vector-like quark
Extended Higgs sectors in radiative neutrino models
Oleg Antipin
2017-05-01
Testable Higgs partners may be sought within the extensions of the SM Higgs sector aimed at generating neutrino masses at the loop level. We study the viability of extended Higgs sectors for two selected models of radiative neutrino masses: a one-loop mass model, providing the Higgs partner within a real triplet scalar representation, and a three-loop mass model, providing it within its two-Higgs-doublet sector. The Higgs sector in the one-loop model may remain stable and perturbative up to the Planck scale, whereas the three-loop model calls for a UV completion around 10^6 GeV. Additional vector-like lepton and exotic scalar fields, which are required to close the one- and three-loop neutrino-mass diagrams, play a decisive role in the testability of the respective models. We constrain the parameter space of these models using LHC bounds on diboson resonances.
Extended FMEA for Sustainable Manufacturing: An Empirical Study in the Non-Woven Fabrics Industry
Thanh-Lam Nguyen
2016-09-01
Failure modes and effects analysis (FMEA) substantially facilitates the efforts of industrial manufacturers in prioritizing failures that require corrective actions to continuously improve product quality. However, in some practical applications the conventional approach fails to satisfactorily capture the aggregate effect of a failure from different perspectives, such as technical severity, economic severity, and production capacity. To fill this gap in the FMEA literature, this paper proposes an extension that considers the associated quality cost and the capability of the failure detection system as additional determinants of the priority level of each failure mode. Quality cost and capacity are key factors for the sustainable survival and development of an industrial manufacturer in today's fiercely competitive market. The performance of the extended scheme was tested in an empirical case at a non-woven fabrics manufacturer. Analytical results indicate that the proposed approach outperforms the traditional one, reducing the percentage of defective fabrics from about 2.41% before the trial period to 1.13%, thus significantly reducing waste and increasing operational efficiency, thereby providing valuable advantages for improving organizational competitiveness and sustainable growth.
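A conventional FMEA ranks failures by a risk priority number, RPN = severity × occurrence × detection. The abstract describes folding in quality cost and detection-system capability as additional determinants; the exact aggregation scheme is not given there, so the multiplicative weighting below, and the example ratings, are illustrative assumptions:

```python
# Hedged sketch: extended FMEA priority score.
# The conventional RPN multiplies severity (S), occurrence (O) and
# detection (D) ratings. The extension sketched here treats a
# quality-cost rating (C) and a detection-capability rating (K) as
# additional multiplicative factors -- this aggregation is an
# assumption, not the paper's formula. All ratings are on a 1-10 scale.

def rpn(severity, occurrence, detection):
    """Conventional risk priority number."""
    return severity * occurrence * detection

def extended_rpn(severity, occurrence, detection, quality_cost, capability):
    """Extended priority: conventional RPN scaled by cost and capability."""
    return rpn(severity, occurrence, detection) * quality_cost * capability

# hypothetical failure modes for a non-woven fabrics line
failures = {
    "web breakage":   (8, 5, 4, 7, 3),
    "uneven density": (6, 7, 5, 4, 2),
}
ranked = sorted(failures, key=lambda f: extended_rpn(*failures[f]), reverse=True)
print(ranked)  # -> ['web breakage', 'uneven density']
```

Sorting by the extended score rather than the plain RPN is what lets a costly but otherwise mid-ranked failure rise in priority.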
MILES extended: Stellar population synthesis models from the optical to the infrared
Röck, B.; Vazdekis, A.; Ricciardelli, E.; Peletier, R. F.; Knapen, J. H.; Falcón-Barroso, J.
2016-01-01
We present the first single-burst stellar population models which cover the optical and infrared wavelength range between 3500 and 50 000 angstrom and which are exclusively based on empirical stellar spectra. To obtain these joint models, we combined the extended MILES models in the optical wi
Porto, Melina
2014-01-01
The work presented here is an empirical study of how advanced learners of English as a foreign language in Argentina access and understand the culture-specific dimensions of literary narrative texts. It has three purposes. First, to extend research into reading in a foreign language to take account of the culture-specific content of texts. Second,…
Neutrino Anomalies in an Extended Zee Model
Joshipura, Anjan S.; Rindani, Saurabh D.
1999-01-01
We discuss an extended SU(2) × U(1) model which naturally leads to mass scales and mixing angles relevant for understanding both the solar and atmospheric neutrino anomalies. No right-handed neutrinos are introduced in the model. The model uses a softly broken L_e − L_μ − L_τ symmetry. Neutrino masses arise only at the loop level. The one-loop neutrino masses, which arise as in the Zee model, solve the atmospheric neutrino anomaly, while breaking of L_e − L_μ − L_τ generates at two-loop order the mass splitting needed for the vacuum solution of the solar neutrino problem. A somewhat different model is possible which accommodates the large-angle MSW resolution of the solar neutrino problem.
Action principles for extended magnetohydrodynamic models
Keramidas Charidakos, I.; Lingam, M.; Morrison, P. J.; White, R. L. [Institute for Fusion Studies and Department of Physics, The University of Texas at Austin, Austin, Texas 78712 (United States); Wurm, A. [Department of Physical and Biological Sciences, Western New England University, Springfield, Massachusetts 01119 (United States)
2014-09-15
The general, non-dissipative, two-fluid model in plasma physics is Hamiltonian, but this property is sometimes lost or obscured in the process of deriving simplified (or reduced) two-fluid or one-fluid models from the two-fluid equations of motion. To ensure that the reduced models are Hamiltonian, we start with the general two-fluid action functional, and make all the approximations, changes of variables, and expansions directly within the action context. The resulting equations are then mapped to the Eulerian fluid variables using a novel nonlocal Lagrange-Euler map. Using this method, we recover Lüst's general two-fluid model, extended magnetohydrodynamic (MHD), Hall MHD, and electron MHD from a unified framework. The variational formulation allows us to use Noether's theorem to derive conserved quantities for each symmetry of the action.
Differential Poisson Sigma Models with Extended Supersymmetry
Arias, Cesar; Torres-Gomez, Alexander
2016-01-01
The induced two-dimensional topological N=1 supersymmetric sigma model on a differential Poisson manifold M presented in arXiv:1503.05625 is shown to be a special case of the induced Poisson sigma model on the bi-graded supermanifold T[0,1]M. The bi-degree comprises the standard N-valued target space degree, corresponding to the form degree on the worldsheet, and an additional Z-valued fermion number, corresponding to the degree in the differential graded algebra of forms on M. The N=1 supersymmetry stems from the compatibility between the (extended) differential Poisson bracket and the de Rham differential on M. The latter is mapped to a nilpotent vector field Q of bi-degree (0,1) on T*[1,0](T[0,1]M), and the covariant Hamiltonian action is Q-exact. New extended supersymmetries arise as inner derivatives along special bosonic Killing vectors on M that induce Killing supervector fields of bi-degree (0,-1) on T*[1,0](T[0,1]M).
Semi-empirical model of solar plages
FANG; Cheng
2001-01-01
[1] Zirin, H., Astrophysics of the Sun, Chapter 7, Cambridge: Cambridge University Press, 1988. [2] Shine, R. A., Linsky, J. L., Physical properties of solar chromospheric plages II. Chromospheric plage models, Solar Phys., 1974, 39: 49. [3] Kelch, W. L., Linsky, J. L., Physical properties of solar chromospheric plages III. Models based on Ca II and Mg II observations, Solar Phys., 1978, 58: 37. [4] Lemaire, P., Gouttebroze, P., Vial, J. C. et al., Physical properties of the solar chromosphere deduced from optically thick lines, A & A, 1981, 103: 160. [5] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region II. Effects of pressure and energy input on hydrostatic models, ApJ, 1991, 377: 712. [6] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region III. Helium emission in hydrostatic, constant-abundance models with diffusion, ApJ, 1993, 406: 319. [7] Pierce, A. K., Slaughter, C., Solar limb darkening I: λλ 3033–7297, Solar Phys., 1977, 51: 25. [8] Pierce, A. K., Slaughter, C., Weinberger, D., Solar limb darkening in the interval 7404–24018 Å, II, Solar Phys., 1977, 52: 179. [9] Neckel, H., Labs, D., The solar radiation between 3300 and 12500 Å, Solar Phys., 1984, 90: 205. [10] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere I. Basic computations and summary of the results, ApJ, 1973, 184: 605. [11] Mihalas, D., Stellar Atmospheres, San Francisco: W. H. Freeman and Company, 1978. [12] Fang, C., Hénoux, J.-C., Self-consistent model of flare heated solar chromosphere, A & A, 1983, 118: 139. [13] Ding, M. D., Fang, C., A semi-empirical model of sunspot penumbra, A & A, 1989, 225: 204. [14] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere III. Models of the EUV brightness components of the quiet Sun, ApJ Suppl., 1981, 45: 635. [15] Canfield, R. C., Athey, R
Wang, Rui; Chen, Lie-Wen
2017-10-01
We establish a relation between the equation of state of nuclear matter and the fourth-order symmetry energy a_sym,4(A) of finite nuclei in a semi-empirical nuclear mass formula by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass. Such a relation allows us to extract information on the nuclear matter fourth-order symmetry energy E_sym,4(ρ_0) at normal nuclear density ρ_0 from analyzing nuclear mass data. Based on the recent precise extraction of a_sym,4(A) via the double difference of the "experimental" symmetry energy extracted from nuclear masses, for the first time, we estimate a value of E_sym,4(ρ_0) = 20.0 ± 4.6 MeV. Such a value of E_sym,4(ρ_0) is significantly larger than the predictions from mean-field models and thus suggests the importance of considering effects beyond the mean-field approximation in nuclear matter calculations.
Empirical Modeling of Plant Gas Fluxes in Controlled Environments
Cornett, Jessie David
1994-01-01
As humans extend their reach beyond the earth, bioregenerative life support systems must replace the resupply and physical/chemical systems now used. The Controlled Ecological Life Support System (CELSS) will utilize plants to recycle the carbon dioxide (CO2) and excrement produced by humans and return oxygen (O2), purified water and food. CELSS design requires knowledge of gas flux levels for net photosynthesis (PS_n), dark respiration (R_d) and evapotranspiration (ET). Full season gas flux data regarding these processes for wheat (Triticum aestivum), soybean (Glycine max) and rice (Oryza sativa) from published sources were used to develop empirical models. Univariate models relating crop age (days after planting) and gas flux were fit by simple regression. Models are either high order (5th to 8th) or more complex polynomials whose curves describe crop development characteristics. The models provide good estimates of gas flux maxima, but are of limited utility. To broaden the applicability, data were transformed to dimensionless or correlation formats and, again, fit by regression. Polynomials, similar to those in the initial effort, were selected as the most appropriate models. These models indicate that, within a cultivar, gas flux patterns appear remarkably similar prior to maximum flux, but exhibit considerable variation beyond this point. This suggests that more broadly applicable models of plant gas flux are feasible, but univariate models defining gas flux as a function of crop age are too simplistic. Multivariate models using CO2 and crop age were fit for PS_n and R_d by multiple regression. In each case, the selected model is a subset of a full third order model with all possible interactions. These models are improvements over the univariate models because they incorporate more than the single factor, crop age, as the primary variable governing gas flux. They are still limited, however, by their reliance on the other environmental
Extending a context model for microphone forensics
Kraetzer, Christian; Qian, Kun; Dittmann, Jana
2012-03-01
In this paper, we extend an existing context model for statistical pattern recognition based microphone forensics by, first, generating a generalized model for this process and, second, using this general model to construct a complex new application scenario model for microphone forensic investigations on the detection of playback recordings (a.k.a. replays, re-recordings, double-recordings). Thereby, we build the theoretical basis for answering the question of whether an audio recording captured a playback or natural sound. The results of our investigations on the research question of playback detection imply that it is possible with our approach on our evaluation set of six microphones. If the recorded sound is not modified prior to playback, our tests correctly indicate the two microphones involved in 89.00% of cases. If the sound is post-processed (here, by normalization), this figure decreases (in our normalization example to 36.00%, while a further 50.67% of the tests still indicate two microphones, one of which was actually not involved in the recording and playback recording process).
Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement
Carl R. Sovinec
2008-02-15
The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers. Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large
Taylor, H. A., Jr.; Mayr, H. G.; Niemann, H. B.; Larson, J.
1985-01-01
In-situ measurements of positive ion composition of the ionosphere of Venus are combined in an empirical model which is a key element for the Venus International Reference Atmosphere (VIRA) model. The ion data are obtained from the Pioneer Venus Orbiter Ion Mass Spectrometer (OIMS) which obtained daily measurements beginning in December 1978 and extending to July 1980 when the uncontrolled rise of satellite periapsis height precluded further measurements in the main body of the ionosphere. For this period, measurements of 12 ion species are sorted into altitude and local time bins with altitude extending from 150 to 1000 km. The model results exhibit the appreciable nightside ionosphere found at Venus, the dominance of atomic oxygen ions in the dayside upper ionosphere and the increase in prominence of atomic oxygen and deuterium ions on the nightside. Short term variations, such as the abrupt changes observed in the ionopause, cannot be represented in the model.
EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS
Qin Yongsong; Jiang Bo; Li Yufang
2005-01-01
In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that blockwise empirical likelihood is an effective way to deal with dependent samples.
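The blockwise idea can be sketched briefly: under m-dependent errors the per-observation score terms z_i = x_i (y_i − x_i β) are dependent, but sums over blocks of length M > m behave approximately independently, so empirical likelihood is applied to the block sums rather than the raw terms. The block length and the toy data below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of blockwise score construction for empirical
# likelihood under m-dependent errors. Only the blocking step is
# shown; maximizing the empirical likelihood over the block sums
# would follow as usual.

def block_sums(scores, block_len):
    """Non-overlapping block sums of a sequence of score terms."""
    return [sum(scores[i:i + block_len])
            for i in range(0, len(scores) - block_len + 1, block_len)]

# toy scalar regression: score terms z_i = x_i * (y_i - x_i * beta)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]
beta = 1.0
scores = [xi * (yi - xi * beta) for xi, yi in zip(x, y)]

blocks = block_sums(scores, block_len=2)  # M = 2 absorbs 1-dependence
print(len(blocks))  # -> 3 block sums from 6 observations
```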
An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model
Schleimer, Stephanie; Pedersen, Torben
2011-01-01
and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us...
Bibliometric Modeling Processes and the Empirical Validity of Lotka's Law.
Nicholls, Paul Travis
1989-01-01
Examines the elements involved in fitting a bibliometric model to empirical data, proposes a consistent methodology for applying Lotka's law, and presents the results of an empirical test of the methodology. The results are discussed in terms of the validity of Lotka's law and the suitability of the proposed methodology. (49 references) (CLB)
EMPIRE: Nuclear Reaction Model Code System for Data Evaluation
Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.
2007-12-01
EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the
Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models
Brown, Clifford A.
2016-01-01
The effect of finite span on the jet-surface interaction noise source and the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed that all surfaces are semi-infinite. This extended abstract briefly describes the experimental setup and data leaving the empirical modeling aspects for the final paper.
Modeling Electrolyte Solutions with the extended universal quasichemical (UNIQUAC) Model
Thomsen, Kaj
2005-01-01
The extended universal quasichemical (UNIQUAC) model is a thermodynamic model for solutions containing electrolytes and non-electrolytes. The model is a Gibbs excess function consisting of a Debye-Hückel term and a standard UNIQUAC term. The model only requires binary, ion-specific interaction...... parameters. A unique choice of standard states makes the model able to reproduce solid-liquid, vapor-liquid, and liquid-liquid phase equilibria as well as thermal properties of electrolyte solutions using one set of parameters....
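The long-range electrostatic part of such a Gibbs excess model is the Debye-Hückel contribution to an ion's activity coefficient. A minimal sketch of that term is below; the parameter values (aqueous solution at 25 °C, fixed A and b) are assumptions for illustration, and in the full extended UNIQUAC model A is temperature dependent and the short-range UNIQUAC term is added on top:

```python
import math

# Hedged sketch: extended Debye-Hückel contribution to the activity
# coefficient of ion i with charge z at ionic strength I (molality
# basis). A and b are fixed here at typical 25 degC water values --
# an illustrative assumption, not the model's full parameterization.

A = 0.509   # (kg/mol)^0.5, Debye-Hückel parameter
b = 1.50    # (kg/mol)^0.5, closest-approach parameter

def ionic_strength(molalities, charges):
    """I = 1/2 * sum(m_i * z_i**2)."""
    return 0.5 * sum(m * z * z for m, z in zip(molalities, charges))

def ln_gamma_dh(z, I):
    """ln gamma_i = -A * z**2 * sqrt(I) / (1 + b * sqrt(I))."""
    s = math.sqrt(I)
    return -A * z * z * s / (1.0 + b * s)

# example: 0.1 molal NaCl, so I = 0.1 mol/kg
I = ionic_strength([0.1, 0.1], [1, -1])
print(round(ln_gamma_dh(1, I), 4))  # -> -0.1092
```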
Quality Management in Hospital Departments : Empirical Studies of Organisational Models
Kunkel, Stefan
2008-01-01
The general aim of this thesis was to empirically explore the organisational characteristics of quality systems of hospital departments, to develop and empirically test models for the organisation and implementation of quality systems, and to discuss the clinical implications of the findings. Data were collected from hospital departments through interviews (n=19) and a nation-wide survey (n=386). The interviews were analysed thematically and organisational models were developed. Relationships...
Van der Laan, Gerwin; Van Ees, Hans; Van Witteloostuijn, Arjen
2008-01-01
Although agreement on the positive sign of the relationship between corporate social and financial performance is observed in the literature, the mechanisms that constitute this relationship are not yet well-known. We address this issue by extending management's stakeholder theory by adding insights
Applying TAM in B2C E-Commerce Research: An Extended Model
QIU Lingyun; LI Dong
2008-01-01
As one of the most widely accepted adoption models in information systems research, the technology acceptance model (TAM) focuses exclusively on cognition-oriented constructs such as perceived usefulness and perceived ease of use. This perspective may have limited the explanatory power of TAM when it is utilized in studying consumers' adoption intentions of online shopping. Based on the contrasts between e-commerce systems and traditional workplace information systems as well as empirical findings from a variety of recent e-commerce research works, this paper analyzes an extended model which integrates three additional constructs: trust, social presence, and perceived enjoyment. The interrelationship between these constructs is also explained. Empirical validations of this extended model are expected in future research.
Tensor Models: extending the matrix models structures and methods
Dartois, Stephane
2016-01-01
In this text we review a few structural properties of matrix models that should at least partly generalize to random tensor models. We review some aspects of the loop equations for matrix models and their algebraic counterpart for tensor models. Despite the generic title of this review, we, in particular, invoke the Topological Recursion. We explain its appearance in matrix models. Then we state that a family of tensor models provides a natural example which satisfies a version of the most general form of the topological recursion, named the blobbed topological recursion. We discuss the difficulties of extending the technical solutions existing for matrix models to tensor models. Some proofs are not published yet but will be given in a coming paper, the rest of the results are well known in the literature.
Development of Solar Wind Model Driven by Empirical Heat Flux and Pressure Terms
Sittler, Edward C., Jr.; Ofman, L.; Selwa, M.; Kramar, M.
2008-01-01
We are developing a time stationary self-consistent 2D MHD model of the solar corona and solar wind as suggested by Sittler et al. (2003). Sittler & Guhathakurta (1999) developed a semi-empirical steady state model (SG model) of the solar wind in a multipole 3-streamer structure, with the model constrained by Skylab observations. Guhathakurta et al. (2006) presented a more recent version of their initial work. Sittler et al. (2003) modified the SG model by investigating time dependent MHD, an ad hoc heating term with heat conduction, and empirical heating solutions. The next step in the development of 2D MHD models was taken by Sittler & Ofman (2006). They derived an effective temperature and effective heat flux from the data-driven SG model and fit smooth analytical functions to be used in MHD calculations. Improvements of the Sittler & Ofman (2006) results now show a convergence of the 3-streamer topology into a single equatorial streamer at altitudes > 2 R_S. This is a new result and shows we are now able to reproduce observations of an equatorially confined streamer belt. In order to allow our solutions to be applied to more general applications, we extend that model by using magnetogram data and a PFSS model as a boundary condition. Initial results were presented by Selwa et al. (2008). We chose solar minimum magnetogram data, since during solar maximum the boundary conditions are more complex and the coronal magnetic field may not be described correctly by a PFSS model. As the first step we studied the simplest 2D MHD case with variable heat conduction, and with empirical heat input combined with empirical momentum addition for the fast solar wind. We use realistic magnetic field data based on NSO/GONG data, and plan to extend the study to 3D. This study represents the first attempt at a fully self-consistent, realistic model based on real data and including semi-empirical heat flux and semi-empirical effective pressure terms.
An Extended Enterprise Modeling Approach to Enterprise-based Integration
Anonymous
2002-01-01
The extended enterprise paradigm is focused on core competencies. An extended enterprise expands its scope from a single enterprise to include additional processes performed by other enterprises, so the integration of processes is enterprise based. This paper proposes a recursive interconnected-chain model of enterprises for the extended enterprise and presents an enterprise-based integration framework for it. The case study is based on a motorcycle group corporation.
Empirical agent-based modelling challenges and solutions
Barreteau, Olivier
2014-01-01
This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications. It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM. In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes' ready to be implemented. Agent-based modeling (AB...
Critical phenomena of nuclear matter in the extended Zimanyi-Moszkowski model
Miyazaki, K
2005-01-01
We have studied the thermodynamics of warm nuclear matter below the saturation density in the extended Zimanyi-Moszkowski model. The EOS behaves like a van der Waals equation of state and shows a liquid-gas phase transition, as do other microscopic EOSs. It predicts a critical temperature T_C = 16.36 MeV, in good agreement with the empirical value. We have further calculated the phase coexistence curve and obtained the critical exponents beta = 0.34 and gamma = 1.22, which also agree with their universal values and with the empirical values derived in recent experimental efforts.
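An exponent like beta is typically read off a coexistence curve from the scaling drho ~ C (T_C − T)^beta near T_C, i.e. as the slope of log(drho) against log(T_C − T). A minimal sketch of that extraction on synthetic data (the values beta = 0.34 and T_C = 16.36 MeV are taken from the abstract; the data points themselves are fabricated for illustration, not from the paper):

```python
import math

# Hedged sketch: estimating a critical exponent from a coexistence
# curve by log-log linear regression. Synthetic order-parameter data
# are generated from the assumed scaling law and then "fit back".

T_C, beta_true, C = 16.36, 0.34, 1.0
temps = [12.0, 13.0, 14.0, 15.0, 16.0]                 # MeV, below T_C
drho = [C * (T_C - T) ** beta_true for T in temps]     # liquid-gas density gap

xs = [math.log(T_C - T) for T in temps]
ys = [math.log(d) for d in drho]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # -> 0.34, recovering beta
```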
Empirical questions for collective-behaviour modelling
Nicholas T Ouellette
2015-03-01
The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, particularly for groups that do not display net motion. Inspired by recent measurements of swarming insects, which are not well described by the classical modelling paradigm, I pose a set of questions that must be answered by any collective-behaviour model. By explicitly stating the choices made in response to each of these questions, models can be more easily categorized and compared, and their expected range of validity can be clarified.
Empirical likelihood-based evaluations of Value at Risk models
2009-01-01
Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real-life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.
Empirical Bayes Model Comparisons for Differential Methylation Analysis
Mingxiang Teng
2012-01-01
A number of empirical Bayes models (each with different statistical distribution assumptions) have now been developed to analyze differential DNA methylation using high-density oligonucleotide tiling arrays. However, it remains unclear which model performs best. For example, for analysis of differentially methylated regions for conserved and functional sequence characteristics (e.g., enrichment of transcription factor-binding sites (TFBSs)), the sensitivity of such analyses using various empirical Bayes models remains unclear. In this paper, five empirical Bayes models were constructed, based on either a gamma distribution or a log-normal distribution, for the identification of differentially methylated loci and their cell-division- (1, 3, and 5) and drug-treatment- (cisplatin) dependent methylation patterns. While differential methylation patterns generated by the log-normal models were enriched with numerous TFBSs, we observed almost no TFBS-enriched sequences using the gamma-assumption models. Statistical and biological results suggest the log-normal, rather than the gamma, empirical Bayes model distribution to be a highly accurate and precise method for differential methylation microarray analysis. In addition, we present one of the log-normal models for differential methylation analysis and test its reproducibility by simulation study. We believe this research to be the first extensive comparison of statistical modeling approaches for the analysis of differential DNA methylation, an important biological phenomenon that precisely regulates gene transcription.
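The core of the comparison is whether positive-valued intensity ratios are better described by a log-normal or a gamma distribution. A minimal sketch of that check (toy data, log-normal fit by exact MLE, gamma fit by moment matching as a simplifying assumption; this is not the paper's hierarchical empirical Bayes machinery, only the distributional comparison it rests on):

```python
import math

# Hedged sketch: compare log-normal vs gamma fits to positive data
# by total log-likelihood. Gamma parameters are moment-matched, an
# approximation to full MLE.

def loglik_lognormal(data):
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return sum(-math.log(x * math.sqrt(2 * math.pi * var))
               - (math.log(x) - mu) ** 2 / (2 * var) for x in data)

def loglik_gamma(data):
    m = sum(data) / len(data)
    v = sum((x - m) ** 2 for x in data) / len(data)
    k, theta = m * m / v, v / m          # shape, scale by moments
    return sum((k - 1) * math.log(x) - x / theta
               - math.lgamma(k) - k * math.log(theta) for x in data)

# toy methylation ratios (fabricated for illustration)
data = [0.8, 1.1, 0.9, 2.5, 1.4, 0.7, 3.1, 1.0]
better = "log-normal" if loglik_lognormal(data) > loglik_gamma(data) else "gamma"
print(better)
```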
Extending Model Checking to Object Process Validation
Rein, van H.
2002-01-01
Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent models
An Empirically Grounded Model of the Adoption of Intellectual Technologies.
Wildemuth, Barbara M.
1992-01-01
Data on adoption of 43 user-developed computing applications in 3 large corporations were analyzed to develop an empirically grounded model of the adoption process for intellectual technologies. A five-stage model consisting of Resource Acquisition, Application Development, Adoption/Renewal, Routinization/Enhancement, and External Adoption was…
Learning-Testing Process in Classroom: An Empirical Simulation Model
Buda, Rodolphe
2009-01-01
This paper presents an empirical micro-simulation model of the teaching and the testing process in the classroom (Programs and sample data are available--the actual names of pupils have been hidden). It is a non-econometric micro-simulation model describing informational behaviors of the pupils, based on the observation of the pupils'…
Empirical model for mineralisation of manure nitrogen in soil
Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap
2017-01-01
A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements...
An Empirical-Mathematical Modelling Approach to Upper Secondary Physics
Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein
2008-01-01
In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…
Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation
M. Caporin (Massimiliano); M.J. McAleer (Michael)
2011-01-01
In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models, name
Continuity of the robustness of contextuality of empirical models
Meng, HuiXian; Cao, HuaiXin; Wang, WenHua; Chen, Liang; Fan, Yajing
2016-10-01
Recently, the robustness of contextuality (RoC) of an empirical model was discussed in [Sci. China-Phys. Mech. Astron. 59, 640303 (2016)]; many important properties of the RoC have been proved, except for its boundedness and continuity. The aim of this paper is to find an upper bound for the RoC over all empirical models and to prove that the RoC is a continuous function on the set of all empirical models. Lastly, a relationship between the RoC and the extent of violation of the noncontextual inequalities is established for an n-cycle contextual box. This relationship implies that the RoC can be used to quantify the contextuality of n-cycle boxes.
Multiphase model for transformation induced plasticity. Extended Leblond's model
Weisz-Patrault, Daniel
2017-09-01
Transformation induced plasticity (TRIP) classically refers to plastic strains observed during phase transitions that occur under mechanical loads (which can be lower than the yield stress). A theoretical approach based on homogenization is proposed to deal with multiphase changes and to extend the validity of the well-known and widely used model proposed by Leblond (1989). The approach is similar, but several product phases are considered instead of one and several assumptions have been relaxed. Thus, besides the generalization to several phases, one can mention three main improvements in the calculation of the local equivalent plastic strain: the deviatoric part of the phase transformation is taken into account, both parent and product phases are elastic-plastic with linear isotropic hardening, and the applied stress is considered. Results show that the classical issues of singularities arising in Leblond's model (corrected by ad hoc numerical functions or thresholding) are solved in this contribution, except when the applied equivalent stress reaches the yield stress. Indeed, in this situation the parent phase is entirely plastic as soon as the phase transformation begins, and the same singularity as in Leblond's model arises. A physical explanation of the cutoff function is introduced in order to regularize the singularity. Furthermore, experiments extracted from the literature dealing with multiphase transitions and multiaxial loads are compared with the original Leblond model and the proposed extended version. For the extended version, very good agreement is observed without any fitting procedures (i.e., material parameters are extracted from other dedicated experiments), whereas for the original version the results are more qualitative.
Low Order Empirical Galerkin Models for Feedback Flow Control
Tadmor, Gilead; Noack, Bernd
2005-11-01
In model-based feedback control, restrictions on model order and complexity stem from several generic considerations: real-time computation, the ability to either measure or reliably estimate the state in real time, and avoiding sensitivity to noise, uncertainty and numerical ill-conditioning are all high on that list. Empirical POD Galerkin models are attractive in the sense that they are simple and (optimally) efficient, but are notoriously fragile, and commonly fail to capture transients and control effects. In this talk we review recent efforts to enhance empirical Galerkin models and make them suitable for feedback design. Enablers include `subgrid' estimation of turbulence and pressure representations, tunable models using modes from multiple operating points, and actuation models. An invariant manifold defines the model's dynamic envelope. It must be respected and can be exploited in observer and control design. These ideas are benchmarked in the cylinder wake system and validated by a systematic DNS investigation of a 3-dimensional Galerkin model of the controlled wake.
An Extended SISa Model for Sentiment Contagion
Zhifeng Liu
2014-01-01
Full Text Available One of the main differences between sentiment and infectious diseases is that the former has two opposite infectious states: positive (optimistic) and negative (pessimistic), while the latter does not. In this paper, based on the SISa model, we consider this issue and propose a new model of sentiment contagion called the SOSa-SPSa model. The results of both numerical and agent-based simulations show that our model explains the process of sentiment contagion better than that of Hill et al. (2010). Further analysis shows that the numbers of both optimistic and pessimistic individuals increase with the probability of spontaneity or contagion and decrease with the probability of recovery. Potential applications of this model in financial markets are also discussed.
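A minimal mean-field sketch of a contagion model of this two-state type (optimistic O and pessimistic P compartments, each with spontaneity, contagion and recovery rates) can be integrated with Euler steps; the rate constants below are illustrative choices, not those of the paper.

```python
# Mean-field sketch of a SOSa/SPSa-style sentiment model: susceptible S,
# optimistic O, pessimistic P, with spontaneity (a), contagion (b) and
# recovery (g) rates. All rate values are illustrative, not the paper's.
N = 1.0
O, P = 0.01, 0.01
a_o, b_o, g_o = 0.01, 0.3, 0.1   # optimism: spontaneity, contagion, recovery
a_p, b_p, g_p = 0.01, 0.2, 0.1   # pessimism: spontaneity, contagion, recovery
dt = 0.01
for _ in range(100_000):         # integrate to t = 1000 with Euler steps
    S = N - O - P
    dO = a_o * S + b_o * S * O - g_o * O
    dP = a_p * S + b_p * S * P - g_p * P
    O += dO * dt
    P += dP * dt
print(round(O, 3), round(P, 3))  # optimism ends up larger: its contagion rate is higher
```

Raising either spontaneity or contagion rates pushes the corresponding equilibrium population up, while raising the recovery rate pushes it down, matching the qualitative behaviour described in the abstract.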
An extended empirical formula for inner-shell ionization of atoms
Haque, A K F; Shahjahan, M; Uddin, M A; Basak, A K [Department of Physics, University of Rajshahi, Rajshahi 6205 (Bangladesh); Talukder, M R [Department of Applied Physics and Electronic Engineering, University of Rajshahi, Rajshahi 6205 (Bangladesh); Saha, B C, E-mail: ahaque@ictp.i, E-mail: fhaque2001@yahoo.co [Department of Physics, Florida A and M University, Tallahassee, FL 32307 (United States)
2010-06-14
An extension of the analytical model of Campos et al (2007 J. Phys. B: At. Mol. Opt. Phys. 40 3835) is proposed to evaluate electron impact single inner-shell ionization cross sections up to the M-shell. The new model includes ionic and relativistic factors in its structure and describes neatly the K-shell ionization cross section data up to 2 GeV, and L- and M-shell ionization data up to 300 MeV. Comparison is also made with other theoretical calculations.
Liu, Xun
2010-01-01
This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…
A Novel Biped Pattern Generator Based on Extended ZMP and Extended Cart-table Model
Guangbin Sun
2015-07-01
Full Text Available This paper focuses on planning patterns for biped walking on complex terrains. Two problems are solved: ZMP (zero moment point) cannot be used on uneven terrain, and the conventional cart-table model does not allow vertical CM (centre of mass) motion. For the ZMP definition problem, we propose the extended ZMP (EZMP) concept as an extension of ZMP to uneven terrains. It can be used to judge dynamic balance on universal terrains. We achieve a deeper insight into the connection and difference between ZMP and EZMP by adding different constraints. For the model problem, we extend the cart-table model by using a dynamic constraint instead of a constant height constraint, which results in a mathematically symmetric set of three equations. In this way, vertical motion is enabled and the resulting equations are still linear. Based on the extended ZMP concept and the extended cart-table model, a biped pattern generator using triple preview controllers is constructed and applied simultaneously in three dimensions. Using the proposed pattern generator, the Atlas robot is simulated. The simulation results show the robot can walk stably on rather complex terrains by accurately tracking the extended ZMP.
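The cart-table relation behind such pattern generators links the CM trajectory to the ZMP. One generalized form that admits vertical CM motion (my reading of replacing the constant-height constraint with a dynamic one; not the paper's exact equations) can be sketched as:

```python
# ZMP from the CM state under a cart-table relation generalized to vertical motion:
#   p = x - z * ddx / (ddz + g)
# With ddz = 0 this reduces to the classic constant-height cart-table formula
#   p = x - (z / g) * ddx.
# The generalized form is an assumption for illustration, not the paper's equations.
g = 9.81  # gravitational acceleration, m/s^2

def zmp(x, z, ddx, ddz):
    """x, z: CM position; ddx, ddz: CM accelerations (SI units)."""
    return x - z * ddx / (ddz + g)

# A CM held at 0.8 m accelerating forward at 1 m/s^2: the ZMP trails the CM
print(round(zmp(0.0, 0.8, 1.0, 0.0), 3))
```

Keeping the relation linear in the horizontal CM trajectory is what makes preview control applicable per axis, which is presumably why the extended model is built to preserve linearity.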
Comparison of modelled and empirical atmospheric propagation data
Schott, J. R.; Biegel, J. D.
1983-01-01
The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.
Empirical Modeling of Metal Oxides Dissolution
Kim, Seon-Byeong; Won, Hui-Jun; Park, Sang-Yoon; Moon, Jei-Kwon; Choi, Wang-Kyu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
There have been numerous studies examining the dissolution of metal oxides in terms of dissolution kinetics, type of reactants, geometry, etc. However, most previous studies are limited to the observation of macroscopic dissolution characteristics and do not provide the atomic-scale characteristics of dissolution reactions. Even the analysis of the microscopic structure of metal oxides with SEM, XRD, etc. during dissolution does not reveal the microscopic characteristics of the dissolution mechanism. Computational analysis with a well-established dissolution model is one of the best approaches to understand, indirectly, the microscopic dissolution behaviour. Various designs of experimental conditions are applied to the in-vitro methods interpreting the dissolution characteristics controlled by each influencing parameter.
Extended Linear Models with Gaussian Priors
Quinonero, Joaquin
2002-01-01
on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....
Extending Social Cognition Models of Health Behaviour
Abraham, Charles; Sheeran, Paschal; Henderson, Marion
2011-01-01
A cross-sectional study assessed the extent to which indices of social structure, including family socio-economic status (SES), social deprivation, gender and educational/lifestyle aspirations correlated with adolescent condom use and added to the predictive utility of a theory of planned behaviour model. Analyses of survey data from 824 sexually…
Bankruptcy risk model and empirical tests.
Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M; Urosevic, Branko; Stanley, H Eugene
2010-10-26
We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor--the debt-to-asset ratio R--in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes's theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees--although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers.
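The Zipf scaling analysis described above rests on estimating a power-law (Pareto) tail exponent. A common sketch of this is a rank-size regression on a log-log scale; the sample below is synthetic, with an invented exponent, and is not the paper's firm data.

```python
import math
import random

random.seed(1)
# Synthetic Pareto-distributed debt-to-asset ratios R (inverse-CDF sampling);
# alpha is an invented tail exponent, not a value estimated in the paper
alpha = 1.5
R = sorted((random.random() ** (-1.0 / alpha) for _ in range(2000)), reverse=True)

# Zipf plot: regress log(rank) on log(value); the slope estimates -alpha,
# since rank ~ n * R^(-alpha) for a Pareto tail
xs = [math.log(r) for r in R]
ys = [math.log(rank + 1) for rank in range(len(R))]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(-slope, 2))  # estimated tail exponent, roughly the alpha used above
```

An increase of the fitted exponent over time, as the paper reports during crashes, would show up here as a steepening of the rank-size line.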
Extended Schema Mode conceptualizations for specific personality disorders: an empirical study.
Bamelis, Lotte L M; Renner, Fritz; Heidkamp, David; Arntz, Arnoud
2011-02-01
The aim of this study was to investigate newly formulated schema mode models for cluster-C, paranoid, histrionic and narcissistic personality disorders. In order to assess 18 hypothesized modes, the Schema Mode Inventory (SMI) was modified into the SMI-2. The SMI-2 was administered to a sample of 323 patients (with a main diagnosis on one of the PDs mentioned) and 121 nonpatients. The SMI-2 was successful in distinguishing patients and controls. Newly formulated modes proved to be appropriate for histrionic, avoidant, and dependent PD. The modification of the Overcontroller mode into the Perfectionistic and Suspicious Overcontroller mode was valuable for characterizing paranoid and obsessive-compulsive PD. The results support recent theoretical developments in Schema Therapy, and are useful for application in clinical practice.
Extended Weyl Invariance in a Bimetric Model
Hassan, S F; von Strauss, Mikael
2015-01-01
We revisit a particular ghost-free bimetric model which is related to both partial masslessness as well as conformal gravity. Its equations of motion can be recast in the form of a perturbative series in derivatives which exhibits a remarkable amount of structure. In a perturbative (but fully nonlinear) analysis, we demonstrate that the equations are invariant under scalar gauge transformations up to six orders in derivatives, the lowest-order term being a local Weyl scaling of the metrics. More specifically, we develop a procedure for constructing terms in the gauge transformations order by order in the perturbative framework. This allows us to derive sufficient conditions for the existence of a gauge symmetry at the nonlinear level. It is explicitly demonstrated that these conditions are satisfied at the first relevant order and, consequently, the equations are gauge invariant up to six orders in derivatives. We furthermore show that the model propagates six instead of seven degrees of freedom not only around ...
Extending and Refining the Propaganda Model
Sparks, Colin
2007-01-01
The ‘propaganda model’ of news production in capitalist democracies elaborated by Edward S. Herman and Noam Chomsky in 1988 was met with initial hostile criticism and then more or less complete neglect. In the last five years, there has been a renewal of interest, although opinion remains seriously divided. This article adopts a sympathetic stance towards the main ideas of the model, but suggests that there are a number of ways in which in its classical iteration it is insufficiently sensitiv...
Macroeconomic model of national economy development (extended)
M. Diaconova
1997-08-01
Full Text Available The macroeconomic model offered in this paper describes the complex functioning of a national economy and can be used for forecasting possible directions of its development depending on various economic policies. It is an extension of [2] and an adaptation of [3]. In order to determine the influence of state policies in the field of taxes and the exchange rate, the national economy is considered within the framework of three sectors: government, private and external world.
A Development of Empirical Models for Equipment Condition Monitoring System
Lee, Song Kyu; Baik, Se Jin [KEPCO Engineering and Construction Company, Daejeon (Korea, Republic of); An, Sang Ha [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2010-10-15
A great deal of effort has recently been put into on-line monitoring (OLM), especially the use of empirical models to detect component faults earlier or to reduce/extend the calibration interval of instruments. An empirical model is constructed from historical data obtained during operation and mainly relies on regression techniques. Various models are used in OLM, and their role is to describe the relations among the signals that have been collected. The ultimate goal of empirical models is to estimate a parameter as close as possible to its actual value, as soon as possible. Typically, some of the historical data are used for model training, and some are used for verification and assessment of model performance. Several different models for OLM of nuclear power systems are currently in use. Examples include the ANL Multivariate State Estimation Technique (MSET) used in the EPI center of SmartSignal, the expert state estimation engine (ESEE) used in the SureSense software of Expert Microsystems, Process Evaluation and Analysis by Neural Operators (PEANO) of the OECD Halden Reactor Project, and the linear regression model used in the RCP seal integrity monitoring system (SIMON) of KEPCO E and C.
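The simplest of the empirical models mentioned, a linear regression estimator for one signal from correlated signals, can be sketched as follows; the data and true coefficients are synthetic, for illustration only.

```python
import random

random.seed(3)
# Synthetic training set: estimate one plant signal y from two correlated
# signals x1, x2 with a linear regression model, the simplest kind of
# empirical OLM model. The true coefficients (2.0, -0.5) are invented.
train = []
for _ in range(200):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y = 2.0 * x1 - 0.5 * x2 + random.gauss(0, 0.05)
    train.append((x1, x2, y))

# Ordinary least squares via the normal equations (signals are zero-mean,
# so the intercept is omitted)
s11 = sum(x1 * x1 for x1, _, _ in train)
s22 = sum(x2 * x2 for _, x2, _ in train)
s12 = sum(x1 * x2 for x1, x2, _ in train)
s1y = sum(x1 * y for x1, _, y in train)
s2y = sum(x2 * y for _, x2, y in train)
det = s11 * s22 - s12 * s12
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det
print(round(b1, 2), round(b2, 2))  # recovered coefficients, close to 2.0 and -0.5
```

In practice, as the abstract notes, part of the historical data would be held out to verify that the trained model tracks the actual signal before it is used for fault detection or calibration monitoring.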
Extending Ansoff’s Strategic Diagnosis Model
Daniel Kipley
2012-01-01
Full Text Available Given the complex and disruptive open-ended dynamics in the current dynamic global environment, senior management recognizes the need for a formalized, consistent, and comprehensive framework to analyze the firm’s strategic posture. Modern assessment tools, such as H. Igor Ansoff’s seminal contributions to strategic diagnosis, primarily focused on identifying and enhancing the firm’s strategic performance potential through the analysis of the industry’s environmental turbulence level relative to the firm’s aggressiveness and responsiveness of capability. Other epistemic modeling techniques envisage Porter’s generic strategic positions, Strengths, Weaknesses, Opportunities, Threats (SWOT, and Resource-Based View as useful methodologies to aid in the planning process. All are complex and involve multiple managerial perspectives. Over the last two decades, attempts have been made to comprehensively classify the firm’s future competitive position. Most of these proposals utilized matrices to depict the position, such as the Boston Consulting Group, point positioning, and dispersed positioning. The GE/McKinsey later enhanced this typology by expanding to 3 × 3, contributing to management’s deeper understanding of the firm’s position. Both types of assessments, Ansoff’s strategic diagnosis and positional matrices, are invaluable strategic tools for firms. However, it could be argued that these positional analyses singularly reflect a blind spot in modeling the firm’s future strategic performance potential, as neither considers the interactions of the other. This article is conceptual and takes a different approach from earlier methodologies. Although conceptual, the article aims to present a robust model combining Ansoff’s strategic diagnosis with elements of the performance matrices to provide the management with an enriched capability to evaluate the firm’s current and future performance position.
Empirically derived neighbourhood rules for urban land-use modelling
Hansen, Henning Sten
2012-01-01
Interaction between neighbouring land uses is an important component in urban cellular automata. Nevertheless, this component is often calibrated through trial-and-error estimation. The aim of this project has been to develop an empirically derived landscape metric supporting cellular-automata-based land-use modelling. Through access to very detailed urban land-use data it has been possible to derive neighbourhood rules empirically, and test their sensitivity to the land-use classification applied, the regional variability of the rules, and their time variance. The developed methodology can be implemented...
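An empirically derived neighbourhood rule of this kind can be illustrated by measuring how often two land-use classes occur as neighbours in a grid, relative to all adjacent cell pairs; the grid below is random toy data, not the detailed land-use data of the project.

```python
import random

random.seed(2)
# Toy land-use grid (0 = residential, 1 = industry), ~25% industry cells;
# purely synthetic, unlike the detailed land-use data used in the project
W, H = 50, 50
grid = [[random.choice([0, 0, 0, 1]) for _ in range(W)] for _ in range(H)]

def adjacency_share(a, b):
    """Share of horizontally/vertically adjacent cell pairs with classes {a, b}."""
    pairs = hits = 0
    for y in range(H):
        for x in range(W):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < H and nx < W:
                    pairs += 1
                    if {grid[y][x], grid[ny][nx]} == {a, b}:
                        hits += 1
    return hits / pairs

# Compare the observed share with the expectation under spatial independence
e = adjacency_share(1, 1)
print(round(e, 3), 0.25 * 0.25)
```

In an empirically calibrated cellular automaton the observed-versus-expected ratio for each pair of classes would become the attraction/repulsion weight in the transition rules, rather than a trial-and-error guess.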
An Examination of Extended a-Rescaling Model
YAN Zhan-Yuan; DUAN Chun-Gui; HE Zhen-Min
2001-01-01
The extended x-rescaling model can explain the quark's nuclear effect very well. Whether it can also explain the gluon's nuclear effect should be investigated further. Associated J/ψ and γ production with large PT is a very clean channel to probe the gluon distribution in a proton or nucleus. In this paper, using the extended x-rescaling model, the PT distribution of the nuclear effect factors of the p + Fe → J/ψ + γ + X process is calculated and discussed. By comparing our theoretical results with future experimental data, the extended x-rescaling model can be examined.
An empirical investigation of two competing models of patient satisfaction.
Mishra, D P; Singh, J; Wood, V
1991-01-01
This paper empirically examines two competing models of patient satisfaction. Specifically, a five factor SERVQUAL model proposed by Parasuraman et al. (1988) and a tripartite model posited by Smith, Bloom, and Davis (1986) are examined. The two models are tested via factor analysis based on data collected from a field survey of hospital patients. The results of this study indicate that the five dimensional SERVQUAL model is not supported by data. On the other hand, there is general support for the tripartite model. Implications of our results for health care practitioners and researchers are discussed. Future directions for research are also outlined.
Extending the prevalent consumer loyalty modelling
Olsen, Svein Ottar; Tudoran, Ana Alina; Brunsø, Karen
2013-01-01
Purpose: This study addresses the role of habit strength in explaining loyalty behaviour. Design/methodology/approach: The study uses 2063 consumers’ data from a survey in Denmark and Spain, and multigroup structural equation modelling to analyse the data. The paper describes an approach employing...... the psychological meanings of the habit construct, such as automaticity, lack of awareness or very little conscious deliberation. Findings: The findings suggest that when habits start to develop and gain strength, less planning is involved, and that the loyalty behaviour sequence mainly occurs guided...... literature by providing an extension of the prevalent consumer loyalty theorizing by integrating the concept of habit strength and by generating new knowledge concerning the conscious/strategic and unconscious/automatic nature of consumer loyalty. The study derives managerial implications on how...
An empirical model for friction in cold forging
Bay, Niels; Eriksen, Morten; Tan, Xincai
2002-01-01
With a system of simulative tribology tests for cold forging the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature...... of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests....
A Trade Study of Thermosphere Empirical Neutral Density Models
2014-08-01
… into the ram direction, and m is the satellite mass. The velocity v equals the satellite velocity in the corotating Earth frame … drag force. In a trade study we have investigated a methodology to assess the performance of neutral density models in predicting orbits against a … assess overall errors in orbit prediction expected from empirical density models. They have also been adapted in an analysis tool, Satellite Orbital …
Empirical modelling for the conceptual design and use of products
Roe, Chris P.; Beynon, Meurig; Fischer, Carlos N
2001-01-01
The process of designing an engineering product usually involves only superficial interaction on the part of the user during the design. This often leads to the product being unsuitable for its target community. In this paper, we describe an approach called Empirical Modelling that emphasises interaction and experiment throughout the construction of a model, which we believe has benefits in respect of usability. We use a case study in digital watch design to illustrate our approach and our ideas.
Models of social entrepreneurship: empirical evidence from Mexico
Wulleman, Marine; Hudon, Marek
2015-01-01
This paper seeks to improve the understanding of social entrepreneurship models based on empirical evidence from Mexico, where social entrepreneurship is currently booming. It aims to supplement existing typologies of social entrepreneurship models. To that end, building on the typology of Zahra et al. (2009), it begins by providing a new framework classifying the three types of social entrepreneurship. A comparative case study of ten Mexican social enterprises is then elaborated using that framework...
Extended Quark Potential Model From Random Phase Approximation
DENG Wei-Zhen; CHEN Xiao-Lin; et al.
2002-01-01
The quark potential model is extended to include the sea quark excitation using the random phase approximation. The effective quark interaction preserves the important QCD properties, chiral symmetry and confinement, simultaneously. A primary qualitative analysis shows that the π meson, as a well-known typical Goldstone boson, and the other mesons made up of a valence qq quark pair, such as the ρ meson, can also be described in this extended quark potential model.
Extended Quark Potential Model from Random Phase Approximation
DENG Wei-Zhen; CHEN Xiao-Lin; LU Da-Hai; YANG Li-Ming
2002-01-01
The quark potential model is extended to include the sea quark excitation using the random phase approximation. The effective quark interaction preserves the important QCD properties, chiral symmetry and confinement, simultaneously. A primary qualitative analysis shows that the π meson as a well-known typical Goldstone boson and the other mesons made up of valence qq quark pair such as the ρ meson can also be described in this extended quark potential model.
Extended Goldstone-boson-exchange constituent quark model
Wagenbrunn, R F; Plessas, W; Varga, K
2000-01-01
We discuss an updated version of the Goldstone-boson-exchange chiral quark model extended to include in addition to pseudoscalar meson exchanges also vector and scalar meson exchanges. The latter ingredients are viewed as effective parametrizations of multiple Goldstone-boson exchanges in baryons. The extended model allows for an accurate description of all light and strange baryon spectra and at the same time produces the right properties for deducing baryon-baryon interactions.
Selection Bias in Educational Transition Models: Theory and Empirical Evidence
Holm, Anders; Jæger, Mads
Most studies using Mare's (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the "waning coefficients" in the Mare model are driven by selection on unobserved variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which...
Bayesian model reduction and empirical Bayes for group (DCM) studies.
Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter
2016-03-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.
Conceptual Model of IT Infrastructure Capability and Its Empirical Justification
QI Xianfeng; LAN Boxiong; GUO Zhenwei
2008-01-01
Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.
Transdiagnostic models of anxiety disorder: Theoretical and empirical underpinnings.
Norton, Peter J; Paulus, Daniel J
2017-08-01
Despite the increasing development, evaluation, and adoption of transdiagnostic cognitive behavioral therapies, relatively little has been written to detail the conceptual and empirical psychopathology framework underlying transdiagnostic models of anxiety and related disorders. In this review, the diagnostic, genetic, neurobiological, developmental, behavioral, cognitive, and interventional data underlying the model are described, with an emphasis on highlighting elements that both support and contradict transdiagnostic conceptualizations. Finally, a transdiagnostic model of anxiety disorder is presented and key areas of future evaluation and refinement are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Testing the gravity p-median model empirically
Kenneth Carling
2015-12-01
Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic in free markets, where the customer is presumed to gravitate to a facility according to both its distance and its attractiveness. The recently introduced gravity p-median model extends the p-median model to account for this. The model is therefore potentially interesting, although it had not yet been implemented and tested empirically. In this paper, we implement the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts, in order to investigate its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
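The difference between the two objectives can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a Huff-style exponential gravity rule (the attraction weights, the decay parameter `beta`, and the exhaustive search over sites are all assumptions for the toy example).

```python
import math
from itertools import combinations

def gravity_p_median(dist, attract, demand, p, beta=1.0):
    """Pick p facility sites minimising expected travel distance when each
    customer splits demand by a gravity rule:
    prob(i chooses j) proportional to attract[j] * exp(-beta * dist[i][j])."""
    n_cust = len(dist)
    sites = range(len(dist[0]))
    best_cost, best_set = float("inf"), None
    for chosen in combinations(sites, p):      # exhaustive search: toy sizes only
        cost = 0.0
        for i in range(n_cust):
            weights = [attract[j] * math.exp(-beta * dist[i][j]) for j in chosen]
            total = sum(weights)
            # expected distance travelled by customer i under the gravity rule
            cost += demand[i] * sum(w / total * dist[i][j]
                                    for w, j in zip(weights, chosen))
        if cost < best_cost:
            best_cost, best_set = cost, chosen
    return best_set, best_cost
```

With a single facility (p = 1) the gravity allocation collapses to full assignment, so the sketch reproduces the ordinary p-median choice; the non-concavity the abstract mentions only appears for p > 1, where the demand splits interact.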
An extended car-following model at signalized intersections
Yu, Shaowei; Shi, Zhongke
2014-08-01
To better simulate car-following behavior when the traffic light is red, following data for three successive cars were collected at a signalized intersection in Jinan, China, using a newly proposed data-acquisition method, and then analyzed to select the input variables of the extended car-following model. An extended car-following model considering the accelerations of the two leading cars was proposed on the basis of the full velocity difference model, then calibrated and verified with the field data; a comparative model was also proposed and calibrated in the light of the GM model. The results indicate that the extended car-following model fits the measured data well and that its fitting precision is superior to that of the comparative model, reducing the mean absolute error by 22.83%. Finally, a theoretical car-following model considering the accelerations of multiple leading cars was put forward, which is potentially applicable to vehicle automation and safety early-warning systems; linear stability analysis and numerical simulations were then conducted to analyze some physical features observed in real traffic.
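The structure of such an extension can be sketched as follows. This is a hedged illustration of the idea only: the optimal-velocity function is the standard Bando form, and the coefficients `kappa`, `lam`, `mu1`, `mu2` are placeholders, not the values calibrated in the paper.

```python
import math

def optimal_velocity(headway, v_max=30.0, hc=5.0):
    # standard optimal-velocity (Bando) function of the headway
    return v_max * (math.tanh(headway - hc) + math.tanh(hc)) / (1 + math.tanh(hc))

def fvd_extended_accel(v, dv, headway, a_lead1, a_lead2,
                       kappa=0.4, lam=0.5, mu1=0.2, mu2=0.1):
    """Follower acceleration in a full velocity difference model extended
    with the accelerations of the two cars ahead (a_lead1, a_lead2).
    dv is the velocity difference to the immediate leader."""
    return (kappa * (optimal_velocity(headway) - v)
            + lam * dv + mu1 * a_lead1 + mu2 * a_lead2)
```

At equilibrium (speed equal to the optimal velocity, zero velocity difference, leaders not accelerating) the model returns zero acceleration, as any car-following model of this family should.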
Decaying Domain Walls in an Extended Gravity Model and Cosmology
Shiraishi, Kiyoshi
2013-01-01
We investigate the cosmological consequences of an extended gravity model belonging to the same class studied by Accetta and Steinhardt in an extended inflationary scenario. We do not, however, address inflation in our model; instead, we focus on a topological object formed during cosmological phase transitions. Although domain walls appear during first-order phase transitions such as the QCD transition, they decay at the end of the phase transition. Therefore the "domain wall problem" does not exist in a suitable range of parameters and, on the contrary, the "fragments" of walls may become seeds of dark matter. A possible connection to the "oscillating universe" model of Morikawa et al. is also discussed.
Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review
Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal
2016-06-01
Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models that should be scrutinized and implemented to obtain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool are reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be selected according to user requirements in hard turning.
An empirical model to estimate ultraviolet erythemal transmissivity
Antón, M.; Serrano, A.; Cancillo, M. L.; García, J. A.
2009-04-01
An empirical model to estimate solar ultraviolet erythemal irradiance (UVER) under all-weather conditions is presented. The model is a power expression with the UV transmissivity as the dependent variable, and the slant ozone column and the clearness index as independent variables. UVER was measured at three stations in south-western Spain over a five-year period (2001-2005). The dataset for 2001-2004 was used to develop the model and an independent dataset (year 2005) was reserved for validation. For all three locations, the empirical model explains more than 95% of the UV transmissivity variability due to changes in the two independent variables. The model coefficients show that when the slant ozone amount decreases by 1%, UV transmissivity, and therefore UVER, increases by approximately 1.33%-1.35%; when the clearness index decreases by 1%, UV transmissivity increases by 0.75%-0.78%. Validation gave satisfactory results, with a low mean absolute bias error (MABE) of about 7%-8% for all stations. Finally, a one-day-ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence in the total ozone column. The percentage of days on which the forecast and experimental UVI differ by less than ±0.5 unit and ±1 unit lies within the ranges 28%-37% and 60%-75%, respectively. The empirical model proposed in this work therefore provides reliable cloud-free UVI forecasts that can inform the public about the possible harmful effects of UV radiation over-exposure.
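A power expression of this kind can be written down directly from the reported elasticities. The sketch below is an assumption-laden reconstruction: the exponents are set so that a 1% drop in slant ozone raises transmissivity by about 1.34% and a 1% drop in clearness index raises it by about 0.76%, as the abstract states, while the constant `c` stands in for the site-specific fit coefficient, which the abstract does not give.

```python
def uv_transmissivity(slant_ozone, clearness_index, c=1.0, a=-1.34, b=-0.76):
    """Power-law UVER transmissivity model: T = c * SO**a * kt**b.
    a and b approximate the elasticities reported in the abstract;
    c is a placeholder for the station-specific fitted constant."""
    return c * slant_ozone ** a * clearness_index ** b

def uver(extraterrestrial_uver, slant_ozone, clearness_index, c=1.0):
    # surface UVER = transmissivity times its extraterrestrial value
    return extraterrestrial_uver * uv_transmissivity(slant_ozone, clearness_index, c)
```

As a sanity check, reducing the slant ozone input by 1% should increase the modelled transmissivity by roughly 1.34%, matching the quoted sensitivity.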
Phenomenological study of extended seesaw model for light sterile neutrino
Nath, Newton; Goswami, Srubabati; Gupta, Shivani
2016-01-01
We study the zero textures of the Yukawa matrices in the minimal extended type-I seesaw (MES) model, which can give rise to $\sim$ eV scale sterile neutrinos. In this model, three right-handed neutrinos and one extra singlet $S$ are added to generate a light sterile neutrino. The light neutrino mass matrix for the active neutrinos, $m_\nu$, …
Extended Hubbard models for ultracold atoms in optical lattices
Juergensen, Ole
2015-06-05
In this thesis, the phase diagrams and dynamics of various extended Hubbard models for ultracold atoms in optical lattices are studied. Hubbard models are the primary description for many interacting particles in periodic potentials with the paramount example of the electrons in solids. The very same models describe the behavior of ultracold quantum gases trapped in the periodic potentials generated by interfering beams of laser light. These optical lattices provide an unprecedented access to the fundamentals of the many-particle physics that govern the properties of solid-state materials. They can be used to simulate solid-state systems and validate the approximations and simplifications made in theoretical models. This thesis revisits the numerous approximations underlying the standard Hubbard models with special regard to optical lattice experiments. The incorporation of the interaction between particles on adjacent lattice sites leads to extended Hubbard models. Offsite interactions have a strong influence on the phase boundaries and can give rise to novel correlated quantum phases. The extended models are studied with the numerical methods of exact diagonalization and time evolution, a cluster Gutzwiller approximation, as well as with the strong-coupling expansion approach. In total, this thesis demonstrates the high relevance of beyond-Hubbard processes for ultracold atoms in optical lattices. Extended Hubbard models can be employed to tackle unexplained problems of solid-state physics as well as enter previously inaccessible regimes.
Developing an Empirical Model for Jet-Surface Interaction Noise
Brown, Clifford A.
2014-01-01
The process of developing an empirical model for jet-surface interaction noise is described and the resulting model is evaluated. Jet-surface interaction noise is generated when the high-speed engine exhaust from modern tightly integrated or conventional high-bypass-ratio engines strikes or flows over the airframe surfaces. An empirical model based on an existing experimental database is developed for use in preliminary-design system-level studies, where computation speed and range of configurations are valued over absolute accuracy in order to select the most promising (or eliminate the worst) designs. The model assumes that the jet-surface interaction noise spectrum can be separated from the jet mixing noise and described as a parabolic function with three coefficients: peak amplitude, spectral width, and peak frequency. These coefficients are fit to functions of surface length and distance from the jet lipline to form a characteristic spectrum, which is then adjusted for changes in jet velocity and/or observer angle using scaling laws from published theoretical and experimental work. The resulting model is evaluated first for its ability to reproduce the characteristic spectrum and then for reproducing spectra measured at other jet velocities and observer angles; successes and limitations are discussed in light of the complexity of jet-surface interaction noise versus the desire for a model that is simple to implement and quick to execute.
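The three-coefficient parabolic spectrum can be sketched in one function. Note the hedges: the abstract only names the coefficients, so the choice of log-frequency as the parabolic variable, and the function name, are illustrative assumptions, not the paper's exact formulation.

```python
import math

def jsi_spectrum(freq, peak_amp, width, peak_freq):
    """Parabolic (in log-frequency) model spectrum for jet-surface
    interaction noise: the sound pressure level peaks at peak_amp (dB)
    at peak_freq (Hz) and falls off quadratically away from the peak,
    at a rate set by `width`."""
    return peak_amp - width * math.log10(freq / peak_freq) ** 2
```

By construction the sketch is symmetric about the peak on a log-frequency axis, so an octave below and an octave above the peak give the same level, which is the qualitative shape a three-parameter parabola imposes.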
Empirical Study and Model of User Acceptance for Personalized Recommendation
Zheng Hua
2013-02-01
Personalized recommendation technology plays an important role in current e-commerce systems, but users' willingness to accept personalized recommendations, and its influencing factors, need to be studied. In this study, the Theory of Reasoned Action (TRA) and the Technology Acceptance Model (TAM) are used to construct a user acceptance model for personalized recommendation, which is then tested empirically. The results show that perceived usefulness, perceived ease of use, subjective norms, and trust each had an impact on the acceptance of personalized recommendation.
Creating a Generic Extended Enterprise Management Model using GERAM
Larsen, Lars Bjørn; Kaas-Pedersen, Carsten; Vesterager, Johan
1998-01-01
The two main themes of the Globeman21 (Global Manufacturing in the 21st century) project are product life cycle management and extended enterprise management. This article focuses on the latter of these subjects, and an illustration of the concept is given together with a discussion of the concept of virtual enterprises. Through the introduction of GERAM (Generalised Enterprise Reference Architecture and Methodology) an initial version of a basic framework for extended enterprise management is introduced. This basic framework is the first step towards the creation of a generic extended enterprise management model. By working with GERAM in relation to extended enterprise management it has been found that it provides a useful background for organising knowledge, experience and the activities within the project.
Extending the Relational Model to Deal with Probabilistic Data
MA Zongmin; ZHANG W. J; MA W. Y.
2000-01-01
Based on the soundness and completeness of information in databases, the expressive form and the semantics of incomplete information are discussed in this paper. On that basis, current studies on incomplete data in relational databases are reviewed. In order to represent stochastic uncertainty in the most general sense in the real world, probabilistic data are introduced into relational databases. An extended relational data model is presented to express and manipulate probabilistic data, and the operations of relational algebra based on the extended model are defined.
Modeling Healthcare Processes Using Commitments: An Empirical Evaluation
2015-01-01
The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.
Extending product modeling methods for integrated product development
Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný
2013-01-01
Despite great efforts within the modeling domain, the majority of methods address the uncommon design situation of original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products. Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM and PVM methods, in a presented Product Requirement Development model, some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix …
Empirical modeling of the location of the Earth's magnetopause
Machková, Anna; Nemec, Frantisek; Nemecek, Zdenek; Safrankova, Jana
2016-04-01
We systematically examine the location of the magnetopause using a database of 16,800 magnetopause crossings registered by 8 different satellites. The analysis is limited to the best-sampled region near the subsolar point. We analyze the influence of the Dst and corrected Dst* indices, the solar wind flow speed, and the eccentricity of the terrestrial magnetic dipole, i.e., parameters typically not considered in earlier empirical models. Their effects on the magnetopause location are investigated by comparing observed and model magnetopause distances. We show that the magnetopause distance increases with decreasing Dst index, which can likely be linked to the increasing magnetic field magnitude at the magnetopause due to the enhanced ring current. The magnetopause distance is also larger at times of higher solar wind flow speed, in particular during high solar wind dynamic pressure. The eccentricity of the magnetic dipole also results in a statistically observable magnetopause displacement, as the magnetic field magnitude increases at the locations toward which the eccentric dipole is shifted (by about 2.5 percent). Finally, we employ the IGRF internal magnetic field model (accounting for the eccentricity of the terrestrial magnetic dipole) and the T96 external magnetic field model (accounting for the ring current and the Chapman-Ferraro current). We suggest a simple improvement of existing empirical magnetopause models based on the observed dependencies.
Testing a new Free Core Nutation empirical model
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
A Design for Composing and Extending Vehicle Models
Madden, Michael M.; Neuhaus, Jason R.
2003-01-01
The Systems Development Branch (SDB) at NASA Langley Research Center (LaRC) creates simulation software products for research. Each product consists of an aircraft model with experiment extensions. SDB treats its aircraft models as reusable components, upon which experiments can be built. SDB has evolved aircraft model design with the following goals: 1. Avoid polluting the aircraft model with experiment code. 2. Discourage the copy and tailor method of reuse. The current evolution of that architecture accomplishes these goals by reducing experiment creation to extend and compose. The architecture mechanizes the operational concerns of the model's subsystems and encapsulates them in an interface inherited by all subsystems. Generic operational code exercises the subsystems through the shared interface. An experiment is thus defined by the collection of subsystems that it creates ("compose"). Teams can modify the aircraft subsystems for the experiment using inheritance and polymorphism to create variants ("extend").
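The "extend and compose" design the abstract describes maps directly onto a shared subsystem interface plus inheritance. The sketch below illustrates the pattern only; the class names (`Subsystem`, `Engine`, `BoostedEngine`, `Experiment`) and numbers are invented for illustration and are not SDB's actual code.

```python
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Shared operational interface inherited by every model subsystem,
    so generic code can drive any experiment without knowing its parts."""
    @abstractmethod
    def step(self, dt: float) -> None: ...

class Engine(Subsystem):
    def __init__(self) -> None:
        self.thrust = 0.0
    def step(self, dt: float) -> None:
        self.thrust = 1000.0              # baseline aircraft-model behavior

class BoostedEngine(Engine):              # "extend": variant via inheritance
    def step(self, dt: float) -> None:
        super().step(dt)
        self.thrust *= 1.5                # experiment-specific modification

class Experiment:                         # "compose": the chosen subsystems
    def __init__(self, subsystems) -> None:
        self.subsystems = list(subsystems)
    def run(self, dt: float = 0.01) -> None:
        for s in self.subsystems:         # generic operational loop over the
            s.step(dt)                    # shared interface
```

The experiment code never touches the baseline `Engine` source, which is exactly the goal of avoiding both pollution of the aircraft model and copy-and-tailor reuse.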
Extended Range Hydrological Predictions: Uncertainty Associated with Model Parametrization
Joseph, J.; Ghosh, S.; Sahai, A. K.
2016-12-01
The better understanding of various atmospheric processes has led to improved predictions of meteorological conditions at various temporal scales, ranging from short term (up to 2 days) to long term (more than 10 days). Accurate prediction of hydrological variables can be made using these predicted meteorological conditions, which is helpful for proper management of water resources. Extended range hydrological simulation covers the prediction of hydrological variables for periods of more than 10 days. The main sources of uncertainty in hydrological predictions are the initial conditions, the meteorological forcing, and the model parametrization. In the present study, the Extended Range Prediction developed for the Indian monsoon by the Indian Institute of Tropical Meteorology (IITM), Pune is used as meteorological forcing for the Variable Infiltration Capacity (VIC) model. Sensitive hydrological parameters, identified from the literature, along with a few vegetation parameters, are assumed to be uncertain, and 1000 random values are generated within their prescribed ranges. Uncertainty bands are generated by performing Monte Carlo simulations (MCS) for the generated parameter sets and observed meteorological forcings. Basins with minimal human intervention within the Indian peninsular region are identified, and the results are validated against observed gauge discharge. Further, uncertainty bands are generated for the extended range hydrological predictions by performing MCS for the same parameter sets and the extended range meteorological predictions. The results demonstrate the uncertainty associated with model parametrization in extended range hydrological simulations. Keywords: Extended Range Prediction, Variable Infiltration Capacity model, Monte Carlo simulation.
An empirical firn-densification model comprising ice-lences
Reeh, Niels; Fisher, D.A.; Koerner, R.M.
2005-01-01
In the past, several empirical firn-densification models have been developed and fitted to measured density-depth profiles from Greenland and Antarctica. These models do not specifically deal with refreezing of meltwater in the firn; ice lenses are usually indirectly taken into account by choosing a suitable value of the surface snow density. In the present study, a simple densification model is developed that specifically accounts for the content of ice lenses in the snowpack. An annual layer is considered to be composed of an ice fraction and a firn fraction. It is assumed that all meltwater formed … Comparison with density-depth profiles from Canadian Arctic ice-core sites with large melting-refreezing percentages shows good agreement. The model is also used to estimate the long-term surface elevation change in interior Greenland that will result from temperature-driven changes of density-depth profiles. These surface elevation …
Comparison of blade-strike modeling results with empirical data
Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2004-03-01
This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.
Extended propagation model for interfacial crack in composite material structure
闫相桥; 冯希金
2002-01-01
An interfacial crack is a common form of damage in a composite material structure. An extended propagation model has been established for an interfacial crack to study the dependence of crack growth on the relative sizes of the energy release rates at the left and right crack tips and on the properties of the interfacial material, so as to better characterize the growth of an interfacial crack.
Extended FEM modeling of crack paths near inclusions
Nielsen, Chris Valentin; Legarth, Brian Nyvang; Niordson, Christian Frithiof
2012-01-01
The extended FEM is applied to model crack growth near inclusions. A procedure to handle different propagation rates at different crack tips is presented. The examples considered investigate uniform tension as well as equibiaxial tension under plane strain conditions. A parameter study analyzes …
The Extended Parallel Process Model: Illuminating the Gaps in Research
Popova, Lucy
2012-01-01
This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…
An Empirical Study of Smoothing Techniques for Language Modeling
Chen, Stanley F.; Goodman, Joshua T.
1996-01-01
We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
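The simple linear interpolation the abstract mentions is easy to make concrete. The sketch below is a toy Jelinek-Mercer bigram model with a single fixed interpolation weight; Chen and Goodman instead tune bucketed weights on held-out data, so `lam=0.7` and the tiny corpus are purely illustrative.

```python
from collections import Counter

def jelinek_mercer(bigrams, unigrams, lam=0.7):
    """Interpolated (Jelinek-Mercer) bigram model:
    P(w | h) = lam * ML_bigram(w | h) + (1 - lam) * ML_unigram(w)."""
    total = sum(unigrams.values())
    def prob(h, w):
        p_uni = unigrams[w] / total
        p_bi = bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0
        return lam * p_bi + (1 - lam) * p_uni
    return prob

corpus = "the cat sat on the mat".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
p = jelinek_mercer(bigrams, unigrams)
```

Because both component estimates are proper distributions, the interpolated probabilities over the vocabulary still sum to one for any history seen in training, while unseen bigrams such as ("the", "on") get nonzero mass from the unigram back-off.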
Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions
Jindrová Pavla
2017-01-01
Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using the Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models were developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered relevant. In this article we apply these models to real data on the number of catastrophic events and on the total economic and insured catastrophe losses in seven regions of the world over the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
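The classical Bühlmann premium can be sketched in a few lines. This is a minimal illustration for equal-length loss histories per region (the Bühlmann-Straub extension adds volume weights, which the sketch omits); the toy numbers are not the Sigma Swiss Re data.

```python
def buhlmann_premiums(losses_by_region):
    """Classical Bühlmann credibility premiums.
    losses_by_region: dict region -> equal-length list of yearly losses.
    Returns region -> Z * own_mean + (1 - Z) * overall_mean."""
    r = len(losses_by_region)
    n = len(next(iter(losses_by_region.values())))
    means = {k: sum(v) / n for k, v in losses_by_region.items()}
    overall = sum(means.values()) / r
    # expected process (within-region) variance
    s2 = sum(sum((x - means[k]) ** 2 for x in v) / (n - 1)
             for k, v in losses_by_region.items()) / r
    # variance of the hypothetical means (between regions), bias-corrected
    a = sum((m - overall) ** 2 for m in means.values()) / (r - 1) - s2 / n
    a = max(a, 0.0)
    z = n * a / (n * a + s2) if (n * a + s2) > 0 else 0.0
    return {k: z * means[k] + (1 - z) * overall for k in losses_by_region}
```

When regions differ a lot but each region's history is stable, the credibility factor Z goes to one and each region pays its own mean; when regions look alike and histories are noisy, Z shrinks toward zero and every region pays the collective mean.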
Empirical Reduced-Order Modeling for Boundary Feedback Flow Control
Seddik M. Djouadi
2008-01-01
This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency-domain identification methods are discussed for initial model construction. Projection onto the POD basis gives a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output used in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters so that the output is accurately reproduced. Balanced truncation is used to show that model reduction is still effective on the approximated systems produced by ERA. The method is applied to a prototype convective flow on an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
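The POD step in such pipelines is, computationally, a singular value decomposition of a snapshot matrix. The sketch below shows only that step, on invented rank-one toy data; the Galerkin projection, ERA, and balanced truncation stages of the paper are not reproduced here.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix (one state
    vector per column): the leading r left singular vectors form the
    energy-optimal linear basis onto which a Galerkin model projects."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r], s

snap = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0]])     # rank-1 toy snapshot data
basis, sv = pod_basis(snap, 1)
proj = basis @ (basis.T @ snap)        # reconstruction from the reduced basis
```

Because the toy data are exactly rank one, a single POD mode reconstructs the snapshots perfectly and the second singular value vanishes, which is the idealized version of the energy truncation used on real flow data.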
Empirical spatial econometric modelling of small scale neighbourhood
Gerkman, Linda
2012-07-01
The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small scale neighbourhood conditions are hard to find. If there are important explanatory variables missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki in Finland, we find the motivation for a spatial Durbin model, we estimate the model and interpret the estimates for the summary measures of impacts. By the analysis we show that the model structure makes it possible to model and find small scale neighbourhood effects, when we know that they exist, but we are lacking proper variables to measure them.
Regime switching model for financial data: Empirical risk analysis
Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas
2016-11-01
This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, the HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks traded on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable distributions, power laws and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-laws model while remaining practical to implement for VaR measurement.
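The EVT stage of such a hybrid can be sketched as a peaks-over-threshold fit: exceedances over a high threshold are modeled with a generalized Pareto distribution and its tail quantile gives the VaR. The sketch below uses method-of-moments GPD estimates for brevity and covers only the EVT stage, not the HMM classification or the authors' calibration:

```python
import numpy as np

def evt_var(losses, p=0.99, u_quantile=0.90):
    """Value-at-Risk at level p from a peaks-over-threshold GPD fit
    (method-of-moments shape/scale; EVT stage only)."""
    losses = np.asarray(losses, dtype=float)
    u = np.quantile(losses, u_quantile)       # high threshold
    exc = losses[losses > u] - u              # exceedances over u
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)              # MoM shape estimate
    sigma = 0.5 * m * (1.0 + m * m / v)       # MoM scale estimate
    n, nu = len(losses), len(exc)
    if abs(xi) < 1e-9:                        # exponential-tail limit
        return u + sigma * np.log(nu / (n * (1.0 - p)))
    # GPD tail quantile: VaR_p = u + (sigma/xi)*(((n/nu)*(1-p))**(-xi) - 1)
    return u + sigma / xi * ((n / nu * (1.0 - p)) ** (-xi) - 1.0)
```

For exponentially distributed losses the estimator should recover the known quantile -ln(1-p), which makes a simple check of the fit.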
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data from the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
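For the constant-coefficient Vasiček model dr = κ(θ − r)dt + σ dW the transition density is exactly Gaussian, so the discrete-data log-likelihood needs no expansion; the extended (time-dependent) model in the paper is what requires the density expansion. A minimal sketch of the exact constant-coefficient case:

```python
import math

def vasicek_loglik(r, dt, kappa, theta, sigma):
    """Exact log-likelihood of a discretely observed Vasicek process
    dr = kappa*(theta - r) dt + sigma dW. Constant-coefficient case only;
    the extended model needs an approximate transition density."""
    e = math.exp(-kappa * dt)
    # Exact conditional variance over one step dt
    var = sigma * sigma * (1.0 - e * e) / (2.0 * kappa)
    ll = 0.0
    for r_prev, r_next in zip(r[:-1], r[1:]):
        mean = theta + (r_prev - theta) * e    # exact conditional mean
        ll += -0.5 * math.log(2.0 * math.pi * var) \
              - (r_next - mean) ** 2 / (2.0 * var)
    return ll
```

Maximizing this function over (κ, θ, σ) yields the MLEs; on simulated data the likelihood is higher at the true parameters than at perturbed ones.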
Two Empirical Models for Land-falling Hurricane Gust Factors
Merceret, Francis J.
2008-01-01
Gaussian and lognormal models for gust factors as a function of height and mean wind speed in land-falling hurricanes are presented. The models were empirically derived using data from 2004 hurricanes Frances and Jeanne and independently verified using data from 2005 hurricane Wilma. The data were collected from three wind towers at Kennedy Space Center and Cape Canaveral Air Force Station with instrumentation at multiple levels from 12 to 500 feet above ground level. An additional 200-foot tower was available for the verification. Mean wind speeds from 15 to 60 knots were included in the data. The models provide formulas for the mean and standard deviation of the gust factor given the mean wind speed and height above ground. These statistics may then be used to assess the probability of exceeding a specified peak wind threshold of operational significance given a specified mean wind speed.
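The exceedance calculation in the Gaussian case reduces to one normal-CDF evaluation: peak wind = mean wind × gust factor, with the gust factor's mean and standard deviation supplied by the empirical formulas. The paper's actual coefficient formulas are not reproduced here; gf_mean and gf_std are simply inputs:

```python
import math

def peak_exceedance_prob(mean_wind, gf_mean, gf_std, threshold):
    """P(peak wind > threshold) under a Gaussian gust-factor model:
    peak = mean_wind * GF with GF ~ N(gf_mean, gf_std).
    gf_mean/gf_std come from the empirical model for a given height
    and mean wind speed (formulas not reproduced here)."""
    z = (threshold / mean_wind - gf_mean) / gf_std
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - Phi(z)
```

A sanity check: when the threshold equals mean_wind × gf_mean, the exceedance probability is exactly 0.5.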
Layered Workflow Process Model Based on Extended Synchronizer
Gang Ni
2014-07-01
The layered workflow process model provides a modeling approach and analysis for key processes with Petri nets. It not only describes the relation between business flow processes and transition nodes clearly, but also limits the rapid growth in the number of places, transitions and directed arcs. This paper studies processes such as reservation and complaint handling in information management systems, especially the multi-merge and discriminator patterns, which cannot be modeled directly with existing synchronizers. Petri nets are adopted to provide a formal description of the workflow patterns, and the relations between arcs and weight classes are also analyzed. We use the number of incoming and outgoing arcs to generalize the workflow into three synchronization modes: fully synchronous mode, competition synchronous mode and asynchronous mode. Types and parameters for synchronization are added to extend the modeling ability of the synchronizers, and the synchronous distance is also expanded. The extended synchronizers can terminate branches automatically or activate the next link multiple times, in addition to the abilities of the original synchronizers. Case analyses of the key business processes verify that the original synchronizers cannot model these patterns directly, while the extended synchronizers based on Petri nets can model the multi-merge and discriminator modes.
EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.
CAPOTE, R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.
2007-04-22
Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by the Monte Carlo method, fitting of optical model parameters, an extended set of optical model potentials including new dispersive coupled-channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils, make the current EMPIRE release a complete and well-validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron-induced reaction files for 232Th and 231,233Pa nuclei in the fast neutron region at the IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross sections, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.
Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling.
Ye, Hao; Beamish, Richard J; Glaser, Sarah M; Grant, Sue C H; Hsieh, Chih-Hao; Richards, Laura J; Schnute, Jon T; Sugihara, George
2015-03-31
It is well known that current equilibrium-based models fall short as predictive descriptions of natural ecosystems, and particularly of fisheries systems that exhibit nonlinear dynamics. For example, model parameters assumed to be fixed constants may actually vary in time, models may fit well to existing data but lack out-of-sample predictive skill, and key driving variables may be misidentified due to transient (mirage) correlations that are common in nonlinear systems. With these frailties, it is somewhat surprising that static equilibrium models continue to be widely used. Here, we examine empirical dynamic modeling (EDM) as an alternative to imposed model equations, one that accommodates both nonequilibrium dynamics and nonlinearity. Using time series from nine stocks of sockeye salmon (Oncorhynchus nerka) from the Fraser River system in British Columbia, Canada, we perform, for the first time to our knowledge, a real-data comparison of contemporary fisheries models with equivalent EDM formulations that explicitly use spawning stock and environmental variables to forecast recruitment. We find that EDM models produce more accurate and precise forecasts, and unlike extensions of the classic Ricker spawner-recruit equation, they show significant improvements when environmental factors are included. Our analysis demonstrates the strategic utility of EDM for incorporating environmental influences into fisheries forecasts and, more generally, for providing insight into how environmental factors can operate in forecast models, thus paving the way for equation-free mechanistic forecasting to be applied in management contexts.
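One core EDM method, simplex projection, is simple enough to sketch: forecast each point from the weighted futures of its nearest neighbours in a lag embedding of the series. This is a generic illustration (embedding dimension, weighting and leave-one-out scheme are conventional choices, not the paper's fisheries setup):

```python
import numpy as np

def simplex_forecast(series, E=2, tp=1):
    """Simplex-projection forecasts: predict x[t+tp] from the
    exponentially weighted futures of the E+1 nearest neighbours of
    each point in an E-dimensional lag embedding (leave-one-out)."""
    x = np.asarray(series, dtype=float)
    idx = np.arange(E - 1, len(x) - tp)
    # Embedding vectors (x[t], x[t-1], ..., x[t-E+1])
    emb = np.column_stack([x[idx - k] for k in range(E)])
    preds, targets = [], []
    for i in range(len(idx)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        nn = np.argsort(d)[:E + 1]        # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))
        preds.append(np.sum(w * x[idx[nn] + tp]) / w.sum())
        targets.append(x[idx[i] + tp])
    return np.array(preds), np.array(targets)
```

On a deterministic nonlinear series such as the chaotic logistic map, one-step simplex forecasts correlate very strongly with the realized values, which is the kind of predictive skill EDM exploits.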
Creating a Generic Extended Enterprise Management Model using GERAM
Larsen, Lars Bjørn; Kaas-Pedersen, Carsten; Vesterager, Johan
1998-01-01
The two main themes of the Globeman21 (Global Manufacturing in the 21st century) project are product life cycle management and extended enterprise management. This article focuses on the latter of these subjects; an illustration of the concept is given together with a discussion of the concept of virtual enterprises. Through the introduction of GERAM (Generalised Enterprise Reference Architecture and Methodology) an initial version of a basic framework for extended enterprise management is introduced. This basic framework is the first step towards the creation of a generic extended enterprise management model. By working with GERAM in relation to extended enterprise management it has been found that it provides a useful background for organising knowledge, experience and the activities within the project.
Empirical likelihood ratio tests for multivariate regression models
WU Jianhong; ZHU Lixing
2007-01-01
This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy the Wilks phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by the advantages of both the EL and the score test method, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^{-1/2}, the fastest possible rate for lack-of-fit testing; and they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.
Center for Extended Magnetohydrodynamics Modeling - Final Technical Report
Parker, Scott [Univ. of Colorado, Boulder, CO (United States)
2016-02-14
This project funding supported approximately 74 percent of a Ph.D. graduate student, not including costs of travel and supplies. We had a highly successful research project, including the development of a second-order implicit electromagnetic kinetic-ion hybrid model [Cheng 2013, Sturdevant 2016], direct comparisons with the extended MHD NIMROD code and kinetic simulation [Schnack 2013], modeling of slab tearing modes using the fully kinetic ion hybrid model and, finally, modeling global tearing modes in cylindrical geometry using gyrokinetic simulation [Chen 2015, Chen 2016]. We developed an electromagnetic second-order implicit kinetic-ion fluid-electron hybrid model [Cheng 2013]. As a first step, we assumed isothermal electrons, but have included drift-kinetic electrons in similar models [Chen 2011]. We used this simulation to study the nonlinear evolution of the tearing mode in slab geometry, including nonlinear evolution and saturation [Cheng 2013]. Later, we compared this model directly to extended MHD calculations using the NIMROD code [Schnack 2013]. In this study, we investigated the ion-temperature-gradient instability with an extended MHD code for the first time and obtained reasonable agreement with the kinetic calculation in terms of linear frequency, growth rate and mode structure. We then extended this model to include orbit averaging and sub-cycling of the ions and compared directly to gyrokinetic theory [Sturdevant 2016]. This work was highlighted in an invited talk at the International Conference on the Numerical Simulation of Plasmas in 2015. The orbit-averaging sub-cycling multi-scale algorithm is amenable to hybrid architectures with GPUs or math co-processors. Additionally, our participation in the Center for Extended Magnetohydrodynamics motivated our research on developing the capability for gyrokinetic simulation to model a global tearing mode. We did this in cylindrical geometry, where the results could be benchmarked with existing eigenmode
Empirical model of atomic nitrogen in the upper thermosphere
Engebretson, M. J.; Mauersberger, K.; Kayser, D. C.; Potter, W. E.; Nier, A. O.
1977-01-01
Atomic nitrogen number densities in the upper thermosphere measured by the open source neutral mass spectrometer (OSS) on Atmosphere Explorer-C during 1974 and part of 1975 have been used to construct a global empirical model at an altitude of 375 km based on a spherical harmonic expansion. The most evident features of the model are large diurnal and seasonal variations of atomic nitrogen and only a moderate and latitude-dependent density increase during periods of geomagnetic activity. Maximum and minimum N number densities at 375 km for periods of low solar activity are 3.6 × 10^6 cm^-3 at 1500 LST (local solar time) and low latitude in the summer hemisphere and 1.5 × 10^5 cm^-3 at 0200 LST at mid-latitudes in the winter hemisphere.
EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION
André Carlos Silva
2012-12-01
Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of one cylindrical and one conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper shows a modification of the Plitt's model constant, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation equal to 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
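The "exponential regression" used to recalibrate the model constant can be sketched generically: fit y = A·exp(b·x) by ordinary least squares on the log-linear form. This is an illustration of the fitting step only, not Plitt's cut-size formula or the paper's data:

```python
import numpy as np

def fit_exponential(x, y):
    """Least-squares fit of y = A * exp(b * x) via the log-linear
    form ln(y) = ln(A) + b*x. Returns (A, b)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, lnA = np.polyfit(x, np.log(y), 1)   # slope, intercept in log space
    return np.exp(lnA), b
```

On exact exponential data the fit recovers the generating constants, a quick check that the log-linear transformation is set up correctly.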
Empirical testing of earthquake recurrence models at source and site
Albarello, D.; Mucciarelli, M.
2012-04-01
Several probabilistic procedures are presently available for seismic hazard assessment (PSHA), based on time-dependent or time-independent models. The result is a number of different outcomes (hazard maps); to take the inherent (epistemic) uncertainty into account, the outcomes of alternative procedures are combined in the frame of logic-tree approaches by scoring each procedure as a function of its reliability, which is deduced by evaluating ex-ante (by expert judgement) each element entering the relevant PSH computational procedure. This approach is unsatisfactory because the value of each procedure depends both on the reliability of each constituent element and on that of their combination: checking the correctness of single elements does not allow one to evaluate the correctness of the procedure as a whole. Alternative approaches should be based 1) on ex-post empirical testing of the considered PSH computational models and 2) on validation of the assumptions underlying the competing models. The first goal can be achieved by comparing the probabilistic forecasts provided by each model with empirical evidence on seismic occurrences (e.g., strong-motion data or macroseismic intensity evaluations) during selected control periods of duration comparable with the relevant exposure time. Regarding the validation of assumptions, critical issues are the size of the minimum data set necessary to distinguish processes with or without memory, the reliability of mixed data on seismic sources (i.e. historical and palaeoseismological), and the completeness of fault catalogues. Some results obtained by the application of these testing procedures in Italy are briefly outlined.
Evaluation of empirical models and competition indices in ranking canola
A. S Safahani
2012-06-01
In order to evaluate the competitive ability (CA) of canola cultivars against wild mustard, two experiments were conducted at the Gorgan Institute in Iran during the 2005-2007 cropping seasons. The experimental factors were canola cultivars (1st year: Zarfam, Option500, Hayola330, Hayola401, Talayh, RGS003 and Sarigol; 2nd year: Zarfam, Hayola330, RGS003 and Option500) and weed density (1st year: control and 30 plants m-2; 2nd year: control, 4, 8 and 16 plants m-2). The results of the first-year experiment indicated that grain yield and competitive indices differed significantly between the cultivars. Cultivar Zarfam showed a high ability to withstand competition (AWC = 47%), high competitive indices (CI = 1.79 and CI2 = 1.83) and low grain yield in the weed-free plots (1729 kg ha-1). The cultivar Option500, a less competitive cultivar, had the lowest ability to withstand competition (AWC = 4%) and the lowest competitive indices (CI = 0.09 and CI2 = 0.11) amongst the cultivars. However, cultivar Option500 showed higher grain yield in the weed-free plots (2333 kg ha-1) than cultivar Zarfam. In the second year of the experiment, the yield loss models showed that the lowest and highest yield losses belonged to cultivars Zarfam and Option500 (50 and 95%, respectively). A comparison of different empirical models revealed that the empirical yield loss model based on weed relative leaf area was the most reliable for predicting canola yield loss, with a high coefficient of determination (R2 = 0.99). The relative damage coefficient (q) of the weed relative leaf area model showed that wild mustard was more competitive than canola (q > 1).
The extended RBAC model based on grid computing
CHEN Jian-gang; WANG Ru-chuan; WANG Hai-yan
2006-01-01
This article proposes the extended role-based access control (RBAC) model for solving dynamic and multidomain problems in grid computing. A formulated description of the model is provided. The introduction of context and the mapping relations of context-to-role and context-to-permission help the model adapt to the dynamic properties of the grid environment. A multidomain role inheritance relation maintained by the authorization agent service realizes multidomain authorization amongst autonomous domains. A function is proposed for resolving role inheritance conflicts during the establishment of the multidomain role inheritance relation.
Constructing Multidatabase Collections Using Extended ODMG Object Model
Adrian Skehill; Mark Roantree
1999-11-01
Collections are an important feature in database systems. They provide us with the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model as a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources.
Ng, Tat Ming; Khong, Wendy X; Harris, Patrick N A; De, Partha P; Chow, Angela; Tambyah, Paul A; Lye, David C
2016-01-01
Extended-spectrum beta-lactamase (ESBL)-producing Enterobacteriaceae are a common cause of bacteraemia in endemic countries and may be associated with high mortality; carbapenems are considered the drug of choice. Limited data suggest piperacillin-tazobactam could be equally effective. We aimed to compare 30-day mortality of patients treated empirically with piperacillin-tazobactam versus a carbapenem in a multi-centre retrospective cohort study in Singapore. Only patients with active empiric monotherapy with piperacillin-tazobactam or a carbapenem were included. A propensity score for empiric carbapenem therapy was derived and an adjusted multivariate analysis of mortality was conducted. A total of 394 patients had ESBL-Escherichia coli and ESBL-Klebsiella pneumoniae bacteraemia, of which 23.1% were community-acquired cases. One hundred and fifty-one received initial active monotherapy comprising piperacillin-tazobactam (n = 94) or a carbapenem (n = 57). Patients who received carbapenems were less likely to have health-care-associated risk factors and an unknown source of bacteraemia, but were more likely to have a urinary source. Thirty-day mortality was comparable between those who received empiric piperacillin-tazobactam and a carbapenem (29 [30.9%] vs. 17 [29.8%], P = 0.89). Those who received empiric piperacillin-tazobactam had a lower 30-day acquisition of multi-drug-resistant and fungal infections (7 [7.4%] vs. 14 [24.6%]) than those who received an empiric carbapenem.
Peralta Galo
2012-10-01
Background: The objective of this study is to analyze the factors that are associated with the adequacy of empirical antibiotic therapy and its impact on mortality in a large cohort of patients with extended-spectrum β-lactamase (ESBL)-producing Escherichia coli and Klebsiella spp. bacteremia. Methods: Cases of ESBL-producing Enterobacteriaceae (ESBL-E) bacteremia were collected from 2003 through 2008 in 19 hospitals in Spain. Statistical analysis was performed using multivariate logistic regression. Results: We analyzed 387 cases of ESBL-E bloodstream infection. The main sources of bacteremia were the urinary tract (55.3%), biliary tract (12.7%), intra-abdominal (8.8%) and unknown origin (9.6%). Among all 387 episodes, E. coli was isolated from blood cultures in 343, and in 45.71% of cases the ESBL-E was multidrug resistant. Empirical antibiotic treatment was adequate in 48.8% of the cases and in-hospital mortality was 20.9%. In a multivariate analysis, adequacy was a protective factor against death [adjusted OR (95% CI): 0.39 (0.31-0.97); P = 0.04], but not in patients without severe sepsis or shock. The class of antibiotic used empirically was not associated with prognosis in adequately treated patients. Conclusion: ESBL-E bacteremia has a relatively high mortality that is partly related to the low adequacy of empirical antibiotic treatment. In selected subgroups the relevance of the adequacy of empirical therapy is limited.
Extended unified SEM approach for modeling event-related fMRI data.
Gates, Kathleen M; Molenaar, Peter C M; Hillary, Frank G; Slobounov, Semyon
2011-01-15
There has been increasing emphasis in fMRI research on the examination of how regions covary in a distributed neural network. Event-related data designs present a unique challenge to modeling how couplings among regions change in the presence of experimental manipulations. The present paper presents the extended unified SEM (euSEM), a novel approach for acquiring effective connectivity maps with event-related data. The euSEM adds to the unified SEM, which models both lagged and contemporaneous effects, by estimating the direct effects that experimental manipulations have on blood-oxygen-level-dependent activity as well as the modulating effects the manipulations have on couplings among regions. Monte Carlo simulations included in this paper offer support for the model's ability to recover the covariance patterns used to generate the data. Next, we apply the model to empirical data to demonstrate feasibility. Finally, the results from the empirical data are compared to those found using dynamic causal modeling. The euSEM provides a flexible approach for modeling event-related data, as it may be employed in an exploratory, partially exploratory, or entirely confirmatory manner.
Extended hard-sphere model and collisions of cohesive particles.
Kosinski, Pawel; Hoffmann, Alex C
2011-09-01
In two earlier papers the present authors modified a standard hard-sphere particle-wall and particle-particle collision model to account for the presence of adhesive or cohesive interaction between the colliding particles; the problem is of importance for modeling particle-fluid flow using the Lagrangian approach. This technique, which involves a direct numerical simulation of such flows, is gaining increasing popularity for simulating, e.g., dust transport, flows of nanofluids and grains in planetary rings. The main objective of the previous papers was to formally extend the impulse-based hard-sphere model, while suggestions for quantifications of the adhesive or cohesive interaction were made. This present paper gives an improved quantification of the adhesive and cohesive interactions for use in the extended hard-sphere model for cases where the surfaces of the colliding bodies are "dry," i.e., there is no liquid-bridge formation between the colliding bodies. This quantification is based on the Johnson-Kendall-Roberts (JKR) analysis of collision dynamics but includes, in addition, dissipative forces using a soft-sphere modeling technique. In this way the cohesive impulse, required for the hard-sphere model, is calculated together with other parameters, namely the collision duration and the restitution coefficient. Finally a dimensional analysis technique is applied to fit an analytical expression to the results for the cohesive impulse that can be used in the extended hard-sphere model. At the end of the paper we show some simulation results in order to illustrate the model.
The dialogically extended mind
Fusaroli, Riccardo; Gangopadhyay, Nivedita; Tylén, Kristian
2014-01-01
A growing conceptual and empirical literature is advancing the idea that language extends our cognitive skills. One of the most influential positions holds that language – qua material symbols – facilitates individual thought processes by virtue of its material properties. Extending upon this model, we argue that language enhances our cognitive capabilities in a much more radical way: the skilful engagement of public material symbols facilitates evolutionarily unprecedented modes of collective perception, action and reasoning (interpersonal synergies), creating dialogically extended minds. We relate our approach to other ideas about collective minds and review a number of empirical studies to identify the mechanisms enabling the constitution of interpersonal cognitive systems.
Adaptation of an empirical model for erythemal ultraviolet irradiance
I. Foyo-Moreno
2007-07-01
In this work we adapt an empirical model to estimate ultraviolet erythemal irradiance (UVER) using experimental measurements carried out at seven stations in Spain during four years (2000-2003). The measurements were taken in the framework of the Spanish UVB radiometric network operated and maintained by the Spanish Meteorological Institute. The UVER observations are recorded as half-hour average values. The model is valid for all-sky conditions, estimating UVER from the ozone columnar content and parameters usually registered in radiometric networks, such as global broadband hemispherical transmittance and optical air mass. One data set was used to develop the model and another independent set was used to validate it. The model provides satisfactory results, with low mean bias error (MBE) for all stations. In fact, MBEs are less than 4% and root mean square errors (RMSE) are below 18% (except for one location). The model has also been evaluated to estimate the UV index. The percentage of cases with differences of 0 UVI units is in the range of 61.1% to 72.0%, while the percentage of cases with differences of ±1 UVI unit covers the range of 95.6% to 99.2%. This result confirms the applicability of the model to estimate UVER irradiance and the UV index at those locations in the Iberian Peninsula where there are no UV radiation measurements.
Negativity in the Extended Hubbard Model under External Magnetic Field
YANG Zhen; NING Wen-Qiang
2008-01-01
We exactly calculate the negativity, a measure of entanglement, in the two-site extended Hubbard model with an external magnetic field. Its behaviour at different temperatures is presented. The negativity decreases with increasing temperature or with an increasing uniform external magnetic field. It is also found that a non-uniform external magnetic field can be used to modulate or to increase the negativity.
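The negativity itself is straightforward to compute numerically: sum the absolute values of the negative eigenvalues of the partially transposed density matrix. The sketch below demonstrates the measure on a two-qubit Heisenberg dimer thermal state, a smaller stand-in for the 16-dimensional two-site extended Hubbard calculation:

```python
import numpy as np

def negativity(rho, dims=(2, 2)):
    """Negativity N = sum of |negative eigenvalues| of the partial
    transpose of rho over the second subsystem."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # T on B
    eig = np.linalg.eigvalsh(rho_pt)
    return -eig[eig < 0].sum()

def heisenberg_thermal_negativity(J=1.0, T=0.1):
    """Negativity of the thermal state of a two-spin dimer H = J*S1.S2
    (an illustration of the measure, not the extended Hubbard model)."""
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2
    H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)).real
    w, V = np.linalg.eigh(H)
    p = np.exp(-w / T)
    p /= p.sum()                       # Boltzmann weights
    rho = (V * p) @ V.conj().T         # thermal density matrix
    return negativity(rho)
```

At low temperature the thermal state approaches the singlet ground state, whose negativity is 1/2, while the maximally mixed state has negativity zero.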
The one-dimensional extended Bose-Hubbard model
Ramesh V Pai; Rahul Pandit
2003-10-01
We use the finite-size, density-matrix renormalization-group (DMRG) method to obtain the zero-temperature phase diagram of the one-dimensional, extended Bose-Hubbard model, for mean boson density ρ = 1, in the U-V plane (U and V are, respectively, the onsite and nearest-neighbour repulsive interactions between bosons). The phase diagram includes superfluid (SF), bosonic-Mott-insulator (MI), and mass-density-wave (MDW) phases. We determine the natures of the quantum phase transitions between these phases.
Extended gauge models at e+e- colliders
Djouadi, Abdelhak
1995-01-01
We summarize the potential of high-energy e+e- linear colliders for discovering, and in case of discovery, for studying the signals of extended gauge models. We will mainly focus on the virtual signals of new neutral gauge bosons and on the production of new heavy leptons. [Invited talk given at the Workshop on Physics and Experiments with Linear Colliders, Morioka-Appi, Japan, September 8-12, 1995.]
Hu, Caihong
2013-04-01
The Xiaolandi-Huayuankou region is an important rainstorm centre in the middle Yellow River, with a drainage area of 35,883 km2. A set of forecasting methods applied in this region was formed through years of practice. The Xiaohuajian flood forecasting model and an empirical model are introduced in this paper. The processes simulated by the Xiaohuajian flood forecasting model include evapotranspiration, infiltration, runoff and river flow. Infiltration and surface runoff are calculated using the Horton model for infiltration into multilayered soil profiles. Overland flow is routed by the Nash instantaneous unit hydrograph and the section Muskingum method. The empirical model simulates runoff generation and concentration using the P~Pa~R empirical relation approach. The structures of these two models are analyzed and compared in detail. The Yihe river basin, located in the Xiaolandi-Huayuankou region, was selected for the purpose of the study. The results show that the accuracy of the two methods is similar; however, the accuracy of the Xiaohuajian flood forecasting model is relatively higher, especially for the flood process, while the accuracy of the empirical method is lower but still acceptable. Since both models are practicable, they can be applied in combination: the result of the Xiaohuajian flood forecasting model can be used to guide reservoirs for flood control, and the result of the empirical method can serve as a reference.
A Tool for Sharing Empirical Models of Climate Impacts
Rising, J.; Kopp, R. E.; Hsiang, S. M.
2013-12-01
Scientists, policy advisors, and the public struggle to synthesize the quickly evolving empirical work on climate change impacts. The Integrated Assessment Models (IAMs) used to estimate the impacts of climate change and the effects of adaptation and mitigation policies can also benefit greatly from recent empirical results (Kopp, Hsiang & Oppenheimer, Impacts World 2013 discussion paper). This paper details a new online tool for exploring, analyzing, combining, and communicating a wide range of impact results, and supporting their integration into IAMs. The tool uses a new database of statistical results, which researchers can expand both in depth (by providing additional results that describe existing relationships) and breadth (by adding new relationships). Scientists can use the tool to quickly perform meta-analyses of related results, using Bayesian techniques to produce pooled and partially-pooled posterior distributions. Policy advisors can apply the statistical results to particular contexts, and combine different kinds of results in a cost-benefit framework. For example, models of the impact of temperature changes on agricultural yields can be first aggregated to build a best estimate of the effect under given assumptions, then compared across countries using different temperature scenarios, and finally combined to estimate a social cost of carbon. The general public can better understand the many estimates of climate impacts and their range of uncertainty by exploring these results dynamically, with maps, bar charts, and dose-response-style plots. [Figure captions: the front page of the climate impacts tool website, with sample "collections" of models, within which all results are estimates of the same fundamental relationship; a simple pooled result for Gelman's "8 schools" example, where pooled results are calculated analytically while partial pooling (Bayesian hierarchical estimation) uses posterior simulations.]
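The analytically calculated "pooled" result mentioned above is, in the complete-pooling case, just an inverse-variance-weighted average of the study estimates. A minimal sketch on the well-known 8-schools data (this mirrors the complete-pooling step only; partial pooling requires the hierarchical posterior simulation):

```python
import numpy as np

def pool_estimates(y, sigma):
    """Complete pooling of study estimates by inverse-variance
    weighting; returns the pooled mean and its standard error."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mean = np.sum(w * np.asarray(y, dtype=float)) / w.sum()
    se = w.sum() ** -0.5
    return mean, se

# Gelman's 8-schools data: estimated effects and standard errors
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
mu, se = pool_estimates(y, sigma)
```

For these data the pooled mean comes out near 7.7 with a standard error near 4.1, the usual complete-pooling summary.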
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
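The recursive prediction-correction idea can be illustrated on a toy system. The sketch below is a minimal EKF calibrating one unknown gain of a hypothetical first-order plant, not the WECC exciter/PSS models; the plant, noise levels, and all numbers are invented for illustration.

```python
import numpy as np

# EKF-based parameter calibration sketch: the unknown gain `a` in
# x' = -a*x + u is appended to the state, and the recursive
# prediction-correction loop adjusts it from the simulation/measurement
# mismatch at every time step.
rng = np.random.default_rng(0)
dt, a_true, u = 0.01, 2.0, 1.0
x_true, meas = 0.0, []
for _ in range(2000):                        # simulated "field test" record
    x_true += dt * (-a_true * x_true + u)
    meas.append(x_true + rng.normal(0.0, 0.01))

z = np.array([0.0, 0.5])                     # augmented state [x, a]; poor guess for a
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])                    # let the parameter drift slowly
R = 1e-4
H = np.array([[1.0, 0.0]])
for y in meas:
    x, a = z
    z = np.array([x + dt * (-a * x + u), a])             # predict
    F = np.array([[1.0 - dt * a, -dt * x], [0.0, 1.0]])  # Jacobian of prediction
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T + R).item()                         # innovation variance
    K = (P @ H.T / S).ravel()                            # Kalman gain
    z = z + K * (y - z[0])                               # correct with the mismatch
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

a_hat = z[1]                                 # calibrated parameter estimate
```

The same loop structure carries over to multi-parameter exciter/PSS models, with the Jacobian taken over all calibrated parameters.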
Empirical classification of resources in a business model concept
Marko Seppänen
2009-04-01
Full Text Available The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids in a firm’s effort to create a superior ‘innovation capability’. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts as well as poses significant challenges for management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations for the composition and the structures of the resource compositions have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose in assisting the decision-making that must consider the resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorized list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to gain a model to assess the performance of different business model configurations. Furthermore, different applications for the developed resource configuration may be envisioned.
Testing the Empirical Shock Arrival Model using Quadrature Observations
Gopalswamy, N; Xie, H; Yashiro, S
2013-01-01
The empirical shock arrival (ESA) model was developed based on quadrature data from Helios (in-situ) and P-78 (remote-sensing) to predict the Sun-Earth travel time of coronal mass ejections (CMEs) [Gopalswamy et al. 2005a]. The ESA model requires earthward CME speed as input, which is not directly measurable from coronagraphs along the Sun-Earth line. The Solar Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory (SOHO) were in quadrature during 2010 - 2012, so the speeds of Earth-directed CMEs were observed with minimal projection effects. We identified a set of 20 full halo CMEs in the field of view of SOHO that were also observed in quadrature by STEREO. We used the earthward speed from STEREO measurements as input to the ESA model and compared the resulting travel times with the observed ones from L1 monitors. We find that the model predicts the CME travel time within about 7.3 hours, which is similar to the predictions by the ENLIL model. We also find that CME-CME and CME...
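To illustrate how an empirical travel-time model of this kind maps an earthward CME speed to a Sun-Earth transit time, the sketch below uses the empirical interplanetary deceleration a ≈ -0.0054(u - 406) m/s² reported in earlier statistical work by the same group, applied until the CME slows to the ambient wind speed. This is an assumption for illustration, not the exact ESA coefficients.

```python
import math

AU_KM = 1.496e8        # Sun-Earth distance, km
V_AMB = 406.0          # assumed ambient solar wind speed, km/s

def travel_time_h(u):
    """Hedged sketch of an empirical CME travel-time estimate.

    u: earthward CME speed in km/s. Deceleration a = -0.0054*(u - 406) m/s^2
    (an assumption borrowed from earlier statistical work) acts until the CME
    slows to the ambient speed, after which it coasts.
    """
    if u <= V_AMB:
        return AU_KM / u / 3600.0                  # slow CME: constant speed
    a = 0.0054 * (u - V_AMB) / 1000.0              # |deceleration|, km/s^2
    d_dec = (u**2 - V_AMB**2) / (2.0 * a)          # distance covered while decelerating
    if d_dec >= AU_KM:                             # decelerating all the way to 1 AU
        t = (u - math.sqrt(u**2 - 2.0 * a * AU_KM)) / a
        return t / 3600.0
    t_dec = (u - V_AMB) / a                        # deceleration phase
    t_coast = (AU_KM - d_dec) / V_AMB              # then coast at ambient speed
    return (t_dec + t_coast) / 3600.0
```

Comparing such predictions against observed L1 arrival times gives the mean-error statistic (about 7.3 hours) quoted above.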
An Empirical Analysis on Credit Risk Models and its Application
Joocheol Kim
2014-08-01
Full Text Available This study focuses on credit default risk as captured by widely used credit risk models, in an effort to empirically test whether the models hold their validity, to apply them to financial institutions, which usually are highly levered with various types of debts, and finally to reinterpret the results in computing an adequate collateral level in the over-the-counter derivatives market. By calculating the distance-to-default values using historical market data for South Korean banks and brokerage firms, as suggested in the Merton model and KMV's EDF model, we find that the introduced models reflect well the credit quality of the sampled financial institutions. Moreover, we suggest that, in addition to the given credit ratings of different financial institutions, their distance-to-default values can be utilized in determining a sufficient level of credit support. Our suggested "smoothened" collateral level allows both contractual parties to minimize the costs of providing collateral without undertaking additional credit risk, and to achieve efficient collateral management.
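The Merton-style distance-to-default underlying the analysis can be sketched directly; all input values below are illustrative, not the paper's data.

```python
import math

# Distance to default in the Merton framework: V is the market value of
# assets, D the default point (face value of debt), mu the asset drift,
# sigma the asset volatility, T the horizon in years.
def distance_to_default(V, D, mu, sigma, T=1.0):
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

dd = distance_to_default(V=120.0, D=100.0, mu=0.05, sigma=0.25, T=1.0)
# Under the model's normality assumption, default probability is N(-DD):
pd_ = 0.5 * math.erfc(dd / math.sqrt(2.0))
```

KMV's EDF replaces the normal mapping with an empirically calibrated one, but the distance-to-default input is computed the same way.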
EMPIRICAL MODEL FOR FORMULATION OF CRYSTAL-TOLERANT HLW GLASSES
KRUGER AA; MATYAS J; HUCKLEBERRY AR; VIENNA JD; RODRIGUEZ CA
2012-03-07
Historically, high-level waste (HLW) glasses have been formulated with a low liquidus temperature (T_L), or temperature at which the equilibrium fraction of spinel crystals in the melt is below 1 vol% (T_0.01), nominally below 1050 °C. These constraints cannot prevent the accumulation of large spinel crystals in considerably cooler regions (~850 °C) of the glass discharge riser during melter idling, and they significantly limit the waste loading, which is reflected in a high volume of waste glass and would result in high capital, production, and disposal costs. The developed empirical model predicts crystal accumulation in the riser of the melter as a function of the concentration of spinel-forming components in glass, and thereby provides guidance in formulating crystal-tolerant glasses that would allow high waste loadings by keeping the spinel crystals small and therefore suspended in the glass.
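An empirical model of this kind can be sketched as a regression of crystal accumulation on the concentrations of spinel-forming components. The components, sensitivities, and data below are invented for illustration only.

```python
import numpy as np

# Hypothetical sketch: accumulated spinel layer thickness regressed on the
# concentrations of spinel-forming components (e.g. NiO, Fe2O3, Cr2O3).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 3.0, size=(30, 3))            # wt% of three components
true_b = np.array([0.8, 0.3, 1.5])                 # assumed sensitivities
y = X @ true_b + rng.normal(0.0, 0.05, size=30)    # accumulated thickness, mm

# Least-squares fit with an intercept column appended
b, *_ = np.linalg.lstsq(np.c_[X, np.ones(30)], y, rcond=None)
```

A fitted model like this can then be inverted to bound component concentrations so that predicted accumulation stays below an acceptance limit.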
Extended superconformal symmetry, Freudenthal triple systems and gauged WZW models
Günaydin, M
1995-01-01
We review the construction of extended (N=2 and N=4) superconformal algebras over triple systems and the gauged WZW models invariant under them. The N=2 superconformal algebras (SCA) realized over Freudenthal triple systems (FTS) admit extension to "maximal" N=4 SCA's with SU(2)×SU(2)×U(1) symmetry. A detailed study of the construction and classification of N=2 and N=4 SCA's over Freudenthal triple systems is given. We conclude with a study and classification of gauged WZW models with N=4 superconformal symmetry.
Extended Cox regression model: The choice of time function
Isik, Hatice; Tutkun, Nihal Ata; Karasoy, Durdu
2017-07-01
The Cox regression model (CRM), which takes into account the effect of censored observations, is one of the most widely applied models in survival analysis for evaluating the effects of covariates. Proportional hazards (PH), which requires a constant hazard ratio over time, is the key assumption of the CRM. The extended CRM provides a test of the PH assumption by including a time-dependent covariate, and serves as an alternative model in case of non-proportional hazards. In this study, different types of real data sets are used to choose the time function, and the differences between time functions are analyzed and discussed.
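The extended model's time-dependent effect can be sketched numerically. Under non-proportional hazards the covariate effect becomes b1 + b2·g(t), with common choices of the time function g being t, log(t), and sqrt(t); the coefficients below are illustrative, not estimates from the paper's data sets.

```python
import math

# Hazard ratio for covariate x versus baseline at time t in the extended
# Cox model: HR(t) = exp((b1 + b2*g(t)) * x). With b2 = 0 this reduces to
# the ordinary proportional-hazards model.
def hazard_ratio(x, t, b1=0.7, b2=-0.2, g=math.log):
    return math.exp((b1 + b2 * g(t)) * x)

hr_early = hazard_ratio(x=1.0, t=1.0)    # g(1) = 0, so HR = exp(b1)
hr_late = hazard_ratio(x=1.0, t=20.0)    # effect attenuates over time
```

A significance test on b2 (the coefficient of x·g(t)) is exactly the PH-assumption check described above.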
An Extended Hierarchical Trusted Model for Wireless Sensor Networks
DU Ruiying; XU Mingdi; ZHANG Huanguo
2006-01-01
Cryptography and authentication are traditional approaches for providing network security. However, they are not sufficient to address the problem of malicious nodes compromising a whole wireless sensor network, leading to invalid data transmission and wasted resources through malicious behaviors. This paper puts forward an extended hierarchical trusted architecture for wireless sensor networks and establishes trusted congregations through a three-tier framework. The method combines statistics and economics with encryption mechanisms to develop two trusted models, which evaluate cluster-head nodes and common sensor nodes respectively. The models form a logical trusted link from the command node to the common sensor nodes and guarantee that the network can run in a secure and reliable environment.
Empirical fitness models for hepatitis C virus immunogen design
Hart, Gregory R.; Ferguson, Andrew L.
2015-12-01
Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost-effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for an HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.
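The spin-glass form of such a fitness landscape can be sketched in miniature: fields h_i and pairwise couplings J_ij inferred from a sequence alignment define an energy E(s), with lower energy corresponding to higher inferred replicative fitness. The binary sequences and parameter values below are toy values, not NS5B parameters.

```python
import itertools

# Toy Ising-like fitness landscape: E(s) = -sum_i h_i s_i - sum_ij J_ij s_i s_j.
h = [0.5, -0.2, 0.1]                          # single-site fields
J = {(0, 1): 0.3, (1, 2): -0.4}               # pairwise couplings

def energy(s):
    e = -sum(h[i] * s[i] for i in range(len(s)))
    e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Exhaustively rank all 2^3 sequences by inferred fitness (lowest energy first);
# the paper's screening does the analogous ranking over immunogen candidates.
ranked = sorted(itertools.product([0, 1], repeat=3), key=energy)
```

For realistic protein lengths the exhaustive enumeration is replaced by targeted search, but the genotype-to-fitness mapping has this same form.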
Extending the Clapper-Yule model to rough printing supports.
Hébert, Mathieu; Hersch, Roger David
2005-09-01
The Clapper-Yule model is the only classical spectral reflection model for halftone prints that takes explicitly into account both the multiple internal reflections between the print-air interface and the paper substrate and the lateral propagation of light within the paper bulk. However, the Clapper-Yule model assumes a planar interface and does not take into account the roughness of the print surface. In order to extend the Clapper-Yule model to rough printing supports (e.g., matte coated papers or calendered papers), we model the print surface as a set of randomly oriented microfacets. The influence of the shadowing effect is evaluated and incorporated into the model. By integrating over all incident angles and facet orientations, we are able to express the internal reflectance of the rough interface as a function of the rms facet slope. By considering also the rough interface transmittances both for the incident light and for the emerging light, we obtain a generalization of the Clapper-Yule model for rough interfaces. The comparison between the classical Clapper-Yule model and the model extended to rough surfaces shows that the influence of the surface roughness on the predicted reflectance factor is small. For high-quality papers such as coated and calendered papers, as well as for low-quality papers such as newsprint or copy papers, the influence of surface roughness is negligible, and the classical Clapper-Yule model can be used to predict the halftone-print reflectance factors. The influence of roughness becomes significant only for very rough and thick nondiffusing coatings.
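For reference, one common statement of the classical (planar-interface) Clapper-Yule reflectance for a single-ink halftone can be sketched as follows; the roughness correction described above modifies the interface terms. Parameter values are illustrative.

```python
# Classical Clapper-Yule reflectance sketch (planar interface).
# a:  ink surface coverage (0..1)
# t:  ink-layer transmittance
# rg: reflectance of the paper substrate
# rs: specular surface reflectance reaching the detector (geometry-dependent)
# ri: internal print-air interface reflectance
def clapper_yule_reflectance(a, t, rg, rs=0.04, ri=0.6):
    attenuation = (1.0 - a) + a * t                  # one pass through the halftone
    body = (1.0 - rs) * (1.0 - ri) * rg * attenuation**2
    # Geometric series over multiple internal reflections between the
    # interface and the substrate, each double pass attenuated by (1-a)+a*t^2:
    return rs + body / (1.0 - rg * ri * ((1.0 - a) + a * t**2))
```

The rough-surface extension replaces rs and ri by integrals over the microfacet orientation distribution, parameterized by the rms facet slope.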
An empirical conceptual gully evolution model for channelled sea cliffs
Leyland, Julian; Darby, Stephen E.
2008-12-01
Incised coastal channels are a specific form of incised channel that are found in locations where stream channels flowing to cliffed coasts have the excess energy required to cut down through the cliff to reach the outlet water body. The southern coast of the Isle of Wight, southern England, comprises soft cliffs that vary in height between 15 and 100 m and which are retreating at rates ≤ 1.5 m a⁻¹, due to a combination of wave erosion and landslides. In several locations, river channels have cut through the cliffs to create deeply (≤ 45 m) incised gullies, known locally as 'Chines'. The Chines are unusual in that their formation is associated with dynamic shoreline encroachment during a period of rising sea-level, whereas existing models of incised channel evolution emphasise the significance of base level lowering. This paper develops a conceptual model of Chine evolution by applying space for time substitution methods using empirical data gathered from Chine channel surveys and remotely sensed data. The model identifies a sequence of evolutionary stages, which are classified based on a suite of morphometric indices and associated processes. The extent to which individual Chines are in a state of growth or decay is estimated by determining the relative rates of shoreline retreat and knickpoint recession, the former via analysis of historical aerial images and the latter through the use of a stream power erosion model.
Parameter Estimation of the Extended Vasiček Model
Sanae RUJIVAN
2010-01-01
Full Text Available In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
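In the constant-parameter Vasiček special case the transition density is exactly Gaussian, so maximum likelihood reduces to estimating the exact AR(1) form of the discretely sampled process; the paper's expansion generalizes this to time-dependent parameters. The sketch below simulates dr = κ(θ - r)dt + σ dW and recovers κ and θ, with all parameter values chosen for illustration.

```python
import math
import random

# Simulate a discretely observed Vasicek (Ornstein-Uhlenbeck) short rate.
random.seed(0)
kappa, theta, sigma = 2.0, 0.05, 0.02
dt, n = 1.0 / 252.0, 60000
r, xs = 0.05, []
for _ in range(n):
    xs.append(r)
    r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

# Exact discrete-time form: r_{t+1} = a + b*r_t + eps with b = exp(-kappa*dt),
# a = theta*(1 - b). OLS on this regression is the (conditional) MLE here.
x, y = xs[:-1], xs[1:]
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
kappa_hat = -math.log(b) / dt
theta_hat = a / (1.0 - b)
```

For the extended model, θ (and possibly κ, σ) become functions of time, and the closed-form density is replaced by the paper's expansion inside the same log-likelihood maximization.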
Global Empirical Model of the TEC Response to Geomagnetic Activity and Forcing from Below
2014-04-01
AFRL-AFOSR-UK-TR-2014-0025, April 2014. Report title: Global empirical model of the TEC response to geomagnetic activity and forcing from below. The reported work includes development of the global background TEC model, development of a global empirical model of the TEC response to geomagnetic activity, and on-line implementation of both.
Wave speeds in the macroscopic extended model for ultrarelativistic gases
Borghero, F., E-mail: borghero@unica.it [Dip. Matematica e Informatica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy); Demontis, F., E-mail: fdemontis@unica.it [Dip. Matematica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); Pennisi, S., E-mail: spennisi@unica.it [Dip. Matematica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy)
2013-11-15
Equations determining wave speeds for a model of ultrarelativistic gases are investigated. This model is already present in literature; it deals with an arbitrary number of moments and it was proposed in the context of exact macroscopic approaches in Extended Thermodynamics. We find these results: the whole system for the determination of the wave speeds can be divided into independent subsystems which are expressed by linear combinations, through scalar coefficients, of tensors all of the same order; some wave speeds, but not all of them, are expressed by square roots of rational numbers; finally, we prove that these wave speeds for the macroscopic model are the same of those furnished by the kinetic model.
General Friction Model Extended by the Effect of Strain Hardening
Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels
2016-01-01
An extension of the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady-state sliding. The original model for the real contact area was derived for a rigid-ideally plastic material; the present work extends the solution to include the influence of material strain hardening, which is essential for modeling friction in metal forming, where the material generally strain hardens. This corresponds to adding a new variable and, therefore, a new axis to the general friction model. The resulting model is presented in a combined function suitable for e.g. finite element modeling. The extension of the model to cover strain-hardening materials is validated by comparison with previously published experimental data.
A Design and Implementation of the Extended Andorra Model
Lopes, Ricardo; Silva, Fernando
2011-01-01
Logic programming provides a high-level view of programming, giving implementers wide latitude in which techniques to explore to achieve the best performance for logic programs. Towards obtaining maximum performance, one of the holy grails of logic programming has been to design computational models that can be executed efficiently and that allow both for a reduction of the search space and for exploiting all the available parallelism in the application. These goals motivated the design of the Extended Andorra Model, a model where goals that do not constrain non-deterministic goals can execute first. In this work we present and evaluate the Basic design for the Extended Andorra Model (BEAM), a system that builds upon David H. D. Warren's original EAM with Implicit Control. We provide a complete description and implementation of the BEAM system as a set of rewrite and control rules. We present the major data structures and execution algorithms that are required for efficient execution, and evaluate...
Hybrid empirical--theoretical approach to modeling uranium adsorption
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W
2004-05-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
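Fitting the Freundlich isotherm S = K_f·C^n is conventionally done by linear regression in log space, as sketched below. The concentration/sorbed-mass pairs are invented for illustration, not the INEEL data.

```python
import math

# Freundlich isotherm fit: S = Kf * C**n becomes linear after taking logs,
# log10(S) = log10(Kf) + n*log10(C).
C = [0.1, 0.5, 1.0, 5.0, 10.0]            # aqueous concentration (illustrative)
S = [0.40, 1.05, 1.60, 4.20, 6.40]        # sorbed concentration (illustrative)

lx = [math.log10(c) for c in C]
ly = [math.log10(s) for s in S]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
n = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
Kf = 10 ** (my - n * mx)                   # intercept gives log10(Kf)
```

With n shared across samples, only K_f varies site to site, which is what makes the surface-area correlation reported above so useful.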
The Chromospheric Solar Millimeter-wave Cavity; a Common Property in the Semi-empirical Models
De la Luz, Victor; Bertone, Emanuele
2014-01-01
The semi-empirical models of the solar chromosphere are useful in the study of the solar radio emission at millimeter to infrared wavelengths. However, current models do not reproduce the observations of the quiet Sun. In this work we present a theoretical study of the radiative transfer equation for four semi-empirical models at these wavelengths. We found that the Chromospheric Solar Millimeter-wave Cavity (CSMC), a region where the atmosphere becomes locally optically thin at millimeter wavelengths, is present in the semi-empirical models under study. We conclude that the CSMC is a general property of the solar chromosphere where the semi-empirical models show a temperature minimum.
Critical phenomena of strange hadronic matter in the extended Zimanyi-Moszkowski model
Miyazaki, K
2005-01-01
We have studied the liquid-gas phase transition of warm strange hadronic matter (SHM) in the extended Zimanyi-Moszkowski model. We implement the Nijmegen soft-core potential model NSC97f of hyperon-hyperon interactions in terms of the (hidden) strange mesons. The saturation properties of pure Λ and Ξ matter given by the potential essentially determine the dependence of the critical temperature on the strangeness fraction of SHM. We treat the liquid-gas phase transition of SHM as first order and employ the Maxwell construction to calculate the phase coexistence curves. The derived critical exponents β ≈ 1/3 and γ = 1.22 are almost independent of the strangeness fraction of SHM and almost agree with the empirical values derived from recent multifragmentation reactions. Consequently, we have confirmed the universality of the critical phenomena in the liquid-gas phase transition of hadronic systems.
OSeMOSYS Energy Modeling Using an Extended UTOPIA Model
Lavigne, Denis
2017-01-01
The OSeMOSYS project offers open-access energy modeling to a wide audience. Its relative simplicity makes it appealing for academic research and governmental organizations to study the impacts of policy decisions on an energy system in the context of possibly severe greenhouse gases emissions limitations. OSeMOSYS is a tool that enhances the…
Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling
Miguel Aguilera
2016-09-01
Full Text Available The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioural metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioural preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed of the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioural flexibility with an equivalent model from the point of view of 'internalist neuroscience'. A statistical characterization of our model and tools from information theory allows us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioural patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling
Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling
Aguilera, Miguel; Bedia, Manuel G.; Barandiaran, Xabier E.
2016-01-01
The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of “internalist neuroscience.” A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We
Extending the transdiagnostic model of attachment and psychopathology
Tsachi eEin-Dor
2016-03-01
Full Text Available Research has suggested that high levels of attachment insecurities, which are formed through interactions with significant others, are associated with a general vulnerability to mental disorders. In the present paper, we extend Ein-Dor and Doron's (2015) transdiagnostic model linking attachment orientations with internalizing and externalizing symptoms, to include thought disorder spectrum symptoms. Specifically, we speculate on the processes that mediate the linkage between attachment insecurities and psychosis and obsessive-compulsive disorder (OCD) symptoms, and indicate the different contexts that might set one individual on a trajectory toward one set of symptoms and another individual toward a different set.
Extended Bayesian Information Criteria for Gaussian Graphical Models
Foygel, Rina
2010-01-01
Gaussian graphical models with sparsity in the inverse covariance matrix are of significant interest in many modern applications. For the problem of recovering the graphical structure, information criteria provide useful optimization objectives for algorithms searching through sets of graphs or for selection of tuning parameters of other methods such as the graphical lasso, which is a likelihood penalization technique. In this paper we establish the consistency of an extended Bayesian information criterion for Gaussian graphical models in a scenario where both the number of variables p and the sample size n grow. Compared to earlier work on the regression case, our treatment allows for growth in the number of non-zero parameters in the true model, which is necessary in order to cover connected graphs. We demonstrate the performance of this criterion on simulated data when used in conjunction with the graphical lasso, and verify that the criterion indeed performs better than either cross-validation or the ordi...
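The extended BIC for a Gaussian graphical model can be sketched directly from its definition: for an estimated precision matrix Θ with E non-zero off-diagonal pairs (edges), EBIC_γ = -2·loglik + E·log(n) + 4·E·γ·log(p), where the profile Gaussian log-likelihood is (n/2)(logdet Θ - tr(SΘ)) up to an additive constant. Setting γ = 0 recovers the ordinary BIC.

```python
import numpy as np

# Extended BIC for Gaussian graphical model selection: lower is better.
# theta: estimated precision matrix (e.g. from the graphical lasso at one
# tuning parameter), S: sample covariance, n: sample size, gamma: the EBIC
# parameter penalizing large graphs when p grows with n.
def ebic(theta, S, n, gamma=0.5):
    p = theta.shape[0]
    loglik = 0.5 * n * (np.linalg.slogdet(theta)[1] - np.trace(S @ theta))
    edges = np.count_nonzero(np.triu(theta, k=1))
    return -2.0 * loglik + edges * np.log(n) + 4.0 * edges * gamma * np.log(p)
```

In practice one runs the graphical lasso over a grid of penalty values and keeps the estimate minimizing this criterion.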
Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation
Derry FitzGerald
2008-01-01
Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
Cahalane, Diarmuid J; Clancy, Barbara; Kingsbury, Marcy A; Graf, Ethan; Sporns, Olaf; Finlay, Barbara L
2011-01-11
The developmental mechanisms by which the network organization of the adult cortex is established are incompletely understood. Here we report on empirical data on the development of connections in hamster isocortex and use these data to parameterize a network model of early cortical connectivity. Using anterograde tracers at a series of postnatal ages, we investigate the growth of connections in the early cortical sheet and systematically map initial axon extension from sites in anterior (motor), middle (somatosensory) and posterior (visual) cortex. As a general rule, developing axons extend from all sites to cover relatively large portions of the cortical field that include multiple cortical areas. From all sites, outgrowth is anisotropic, covering a greater distance along the medial/lateral axis than along the anterior/posterior axis. These observations are summarized as 2-dimensional probability distributions of axon terminal sites over the cortical sheet. Our network model consists of nodes, representing parcels of cortex, embedded in 2-dimensional space. Network nodes are connected via directed edges, representing axons, drawn according to the empirically derived anisotropic probability distribution. The networks generated are described by a number of graph theoretic measurements including graph efficiency, node betweenness centrality and average shortest path length. To determine if connectional anisotropy helps reduce the total volume occupied by axons, we define and measure a simple metric for the extra volume required by axons crossing. We investigate the impact of different levels of anisotropy on network structure and volume. The empirically observed level of anisotropy suggests a good trade-off between volume reduction and maintenance of both network efficiency and robustness. Future work will test the model's predictions for connectivity in larger cortices to gain insight into how the regulation of axonal outgrowth may have evolved to achieve efficient
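The network-growth idea can be sketched in miniature: nodes on a 2-D cortical sheet send directed edges whose targets are drawn from an anisotropic 2-D Gaussian, wider along the medial/lateral axis than the anterior/posterior axis. Grid size, spreads, and out-degree below are invented, not the hamster-derived parameters.

```python
import random
from collections import deque

# Toy anisotropic network model on a 10x10 sheet of cortical parcels.
random.seed(2)
W, H, OUT_DEG = 10, 10, 8
SIG_ML, SIG_AP = 3.0, 1.0                    # anisotropic axonal spread

nodes = [(x, y) for x in range(W) for y in range(H)]
edges = {v: set() for v in nodes}
for (x, y) in nodes:
    while len(edges[(x, y)]) < OUT_DEG:      # draw distinct in-grid targets
        tx = round(x + random.gauss(0.0, SIG_ML))
        ty = round(y + random.gauss(0.0, SIG_AP))
        if (tx, ty) in edges and (tx, ty) != (x, y):
            edges[(x, y)].add((tx, ty))

def mean_shortest_path(src):
    """BFS mean distance from src to every node it can reach."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in edges[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return sum(dist.values()) / max(len(dist) - 1, 1)

avg_l = sum(mean_shortest_path(v) for v in nodes) / len(nodes)
```

Varying SIG_ML/SIG_AP and re-measuring path length, efficiency, and an axon-volume proxy reproduces the kind of anisotropy trade-off analysis described above.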
An empirical model of the quiet daily geomagnetic field variation
Yamazaki, Y.; Yumoto, K.; Cardinal, M.G.; Fraser, B.J.; Hattori, P.; Kakinami, Y.; Liu, J.Y.; Lynn, K.J.W.; Marshall, R.; McNamara, D.; Nagatsuma, T.; Nikiforov, V.M.; Otadoy, R.E.; Ruhimat, M.; Shevtsov, B.M.; Shiokawa, K.; Abe, S.; Uozumi, T.; Yoshikawa, A.
2011-01-01
An empirical model of the quiet daily geomagnetic field variation has been constructed based on geomagnetic data obtained from 21 stations along the 210 Magnetic Meridian of the Circum-pan Pacific Magnetometer Network (CPMN) from 1996 to 2007. Using the least squares fitting method for geomagnetically quiet days (Kp ≤ 2+), the quiet daily geomagnetic field variation at each station was described as a function of solar activity SA, day of year DOY, lunar age LA, and local time LT. After interpolation in latitude, the model can describe the solar-activity dependence and seasonal dependence of solar quiet daily variations (S) and lunar quiet daily variations (L). We performed a spherical harmonic analysis (SHA) on these S and L variations to examine average characteristics of the equivalent external current systems. We found three particularly noteworthy results. First, the total current intensity of the S current system is largely controlled by solar activity, while its focus position is not significantly affected by solar activity. Second, seasonal variations of the S current intensity exhibit north-south asymmetry; the current intensity of the northern vortex shows a prominent annual variation, while the southern vortex shows a clear semi-annual variation as well as an annual variation. Third, the total intensity of the L current system changes depending on solar activity and season; seasonal variations of the L current intensity show an enhancement during the December solstice, independent of the level of solar activity. Copyright 2011 by the American Geophysical Union.
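The local-time part of such a least-squares description can be sketched with diurnal harmonics. Over a uniform hourly grid the harmonics are orthogonal, so the fit reduces to discrete Fourier projections; the synthetic signal and amplitudes below are illustrative, not the CPMN model:

```python
import math

# Synthetic quiet-day record sampled hourly in local time (illustrative).
N = 24
LT = [h / N for h in range(N)]                       # local time, fraction of day
obs = [15 * math.sin(2 * math.pi * t) + 5 * math.cos(4 * math.pi * t)
       for t in LT]

def harmonic(k):
    """Least-squares amplitudes of the k-th diurnal harmonic; on a
    uniform grid these equal the discrete Fourier projections
    a_k = (2/N) sum obs*cos, b_k = (2/N) sum obs*sin."""
    a = 2.0 / N * sum(o * math.cos(2 * math.pi * k * t) for o, t in zip(obs, LT))
    b = 2.0 / N * sum(o * math.sin(2 * math.pi * k * t) for o, t in zip(obs, LT))
    return a, b

a1, b1 = harmonic(1)   # 24-hour component
a2, b2 = harmonic(2)   # 12-hour component
print("24 h: a=%.2f b=%.2f   12 h: a=%.2f b=%.2f" % (a1, b1, a2, b2))
```

The full model additionally makes these coefficients functions of SA, DOY, and LA, but the per-station fitting step has this shape.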
Williams, Richard J; Purves, Drew W
2011-09-01
The structure of food webs, complex networks of interspecies feeding interactions, plays a crucial role in ecosystem resilience and function, and understanding food web structure remains a central problem in ecology. Previous studies have shown that key features of empirical food webs can be reproduced by low-dimensional "niche" models. Here we examine the form and variability of food web niche structure by fitting a probabilistic niche model to 37 empirical food webs, a much larger number of food webs than used in previous studies. The model relaxes previous assumptions about parameter distributions and hierarchy and returns parameter estimates for each species in each web. The model significantly outperforms previous niche model variants and also performs well for several webs where a body-size-based niche model performs poorly, implying that traits other than body size are important in structuring these webs' niche space. Parameter estimates frequently violate previous models' assumptions: in 19 of 37 webs, parameter values are not significantly hierarchical, 32 of 37 webs have nonuniform niche value distributions, and 15 of 37 webs lack a correlation between niche width and niche position. Extending the model to a two-dimensional niche space yields networks with a mixture of one- and two-dimensional niches and provides a significantly better fit for webs with a large number of species and links. These results confirm that food webs are strongly niche-structured but reveal substantial variation in the form of the niche structuring, a result with fundamental implications for ecosystem resilience and function.
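A one-dimensional niche model of the kind being fitted can be sketched as follows: each species gets a niche position, a feeding-range width, and a range centre below its own position, and species i eats species j when j's niche value falls inside i's range. All parameter distributions here are invented for illustration, not the fitted probabilistic model:

```python
import random

random.seed(1)

S = 20                                                  # number of species
niche = sorted(random.random() for _ in range(S))       # niche positions
width = [0.3 * n * random.random() for n in niche]      # feeding-range widths
centre = [random.uniform(w / 2, n) for n, w in zip(niche, width)]

# Directed feeding links: i eats j if niche[j] lies in i's feeding range.
links = set()
for i in range(S):
    lo, hi = centre[i] - width[i] / 2, centre[i] + width[i] / 2
    for j in range(S):
        if lo <= niche[j] <= hi:
            links.add((i, j))

connectance = len(links) / S ** 2
print("species: %d  links: %d  connectance: %.3f" % (S, len(links), connectance))
```

The probabilistic variant in the paper replaces the hard inclusion test with a probability that decays with distance from the range centre, which is what lets it return per-species parameter estimates.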
Polarizable six-point water models from computational and empirical optimization.
Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul
2014-02-13
Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to
Empirically tuned model for a precooled MGJT cryoprobe
Skye, H. M.; Passow, K. L.; Nellis, G. F.; Klein, S. A.
Cryosurgery is a medical technique that uses a freezing process to destroy undesirable tissues such as cancerous tumors. The handheld portion of the cryoprobe must be compact and powerful in order to serve as an effective surgical instrument; the next generation of cryoprobes utilizes precooled Mixed Gas Joule-Thomson (pMGJT) cycles to meet these design criteria. The increased refrigeration power available with this more complex cycle improves probe effectiveness by reducing the number of probes and the time required to treat large tissue masses. Selecting mixtures and precooling cycle parameters to meet a cryogenic cooling load in a size-limited application is a challenging design problem. Modeling the precooler and recuperator performance is critical for cycle design, yet existing techniques in the literature typically use highly idealized models of the heat exchangers that neglect pressure drop and assume infinite conductance. These assumptions are questionable for cycles that are required to use compact components. The focus of this research project is to understand how the cycle performance is impacted by transport processes in the heat exchangers and to integrate these findings into an empirically tuned model that can be used for mixture optimization. This effort is carried out through a series of modeling, experimental, and optimization studies. While these results have been applied to the design of a cryosurgical probe, they are also more generally useful in understanding the operation of other compact MGJT systems. A commercially available pMGJT cryoprobe system has been modified in order to integrate a suite of measurement instrumentation that can completely characterize the performance of the individual components as well as the overall system. Measurements include sufficient temperature and pressure sensors to resolve thermodynamic states, as well as flow meters in order to compute the heat and work transfer rates. Temperature sensors are also
Improving the desolvation penalty in empirical protein pK_{a} modeling
Olsson, Mats Henrik Mikael
2012-01-01
Unlike atomistic and continuum models, empirical pKa prediction methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...
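The Born model underlying such a desolvation term gives the solvation free energy of a charge q in a spherical cavity of radius a as ΔG = -(q²/8πε₀a)(1 - 1/ε). A minimal numeric sketch (the 2 Å radius and ε = 80 are illustrative values, not parameters from the study):

```python
import math

E0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
NA = 6.02214076e23           # Avogadro constant, 1/mol

def born_energy_kj_mol(q_e, radius_ang, eps):
    """Born solvation free energy (kJ/mol) for a charge of q_e elementary
    charges in a spherical cavity of the given radius (angstrom),
    transferred from vacuum into a dielectric of relative permittivity eps."""
    q = q_e * E_CHARGE
    a = radius_ang * 1e-10
    dg_joule = -(q * q) / (8 * math.pi * E0 * a) * (1 - 1 / eps)
    return dg_joule * NA / 1000.0

# A unit charge of 2 A radius in water (eps ~ 80): roughly -343 kJ/mol,
# which is the scale of the desolvation penalty a buried charge pays.
dg = born_energy_kj_mol(1.0, 2.0, 80.0)
print("Born solvation energy: %.1f kJ/mol" % dg)
```

An empirical desolvation term then reduces this gain as the charge is buried in low-dielectric protein interior, which is the penalty being modelled.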
Empirical agent-based land market: Integrating adaptive economic behavior in urban land-use models
Filatova, Tatiana
2015-01-01
This paper introduces an economic agent-based model of an urban housing market. The RHEA (Risks and Hedonics in Empirical Agent-based land market) model captures natural hazard risks and environmental amenities through hedonic analysis, facilitating empirical agent-based land market modeling. RHEA i
F.N.J. Frakking (Florine N.); W.C. Rottier (Wouter); J.W. Dorigo-Zetsma; J.M. van Hattem (Jarne); B.C. van Hees (Babette); J.A.J.W. Kluytmans (Jan); S.P.M. Lutgens (Suzanne P.); J.M. Prins (Jan); S.F. Thijsen (Steven); A. Verbon (Annelies); B.J.M. Vlaminckx (Bart J.); J.W.C. Stuart (James W. Cohen); M.A. Leverstein-Van Hall (Maurine); M.J.M. Bonten (Marc)
2013-01-01
textabstractWe studied clinical characteristics, appropriateness of initial antibiotic treatment, and other factors associated with day 30 mortality in patients with bacteremia caused by extended-spectrum-β-lactamase (ESBL)-producing bacteria in eight Dutch hospitals. Retrospectively, information wa
Empirical likelihood-based inference in a partially linear model for longitudinal data
(no author listed)
2008-01-01
A partially linear model with longitudinal data is considered. Empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. From the empirical likelihood ratio functions we also obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical studies are conducted to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.
An Extended Clustering Algorithm for Statistical Language Models
Ueberla, J P
1994-01-01
Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is ``enough'' data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with exi...
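The parameter reduction that clustering buys can be illustrated with a class-based bigram model, p(w_i | w_{i-1}) = p(c(w_i) | c(w_{i-1})) · p(w_i | c(w_i)): transition probabilities are estimated over classes rather than over the full vocabulary. The toy corpus and the hand-picked clustering below are invented:

```python
import math
from collections import Counter

# Toy corpus and a hand-picked word-to-class mapping (both invented).
corpus = "the cat sat the dog sat the cat ran the dog ran".split()
cls = {"the": "DET", "cat": "N", "dog": "N", "sat": "V", "ran": "V"}

# Class-transition counts and word-emission counts.
cseq = [cls[w] for w in corpus]
cbig = Counter(zip(cseq, cseq[1:]))      # class bigram counts
cuni = Counter(cseq[:-1])                # class unigram (history) counts
emit = Counter(corpus)                   # word counts
ctot = Counter()                         # total count per class
for w, n in emit.items():
    ctot[cls[w]] += n

def p_bigram(prev, w):
    """Class-based bigram: p(w | prev) = p(c(w) | c(prev)) * p(w | c(w))."""
    p_class = cbig[(cls[prev], cls[w])] / cuni[cls[prev]]
    p_word = emit[w] / ctot[cls[w]]
    return p_class * p_word

logp = sum(math.log(p_bigram(a, b)) for a, b in zip(corpus, corpus[1:]))
ppl = math.exp(-logp / (len(corpus) - 1))
print("class-bigram perplexity: %.3f" % ppl)
```

With V words and C classes the transition table shrinks from V² to C² parameters plus V emission parameters, which is exactly what relieves sparse training data; the clustering algorithm in the paper searches for the class assignment that makes this trade pay off.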
Conformal standard model with an extended scalar sector
Latosiński, Adam [Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut),Mühlenberg 1, D-14476 Potsdam (Germany); Lewandowski, Adrian; Meissner, Krzysztof A. [Faculty of Physics, University of Warsaw,Pasteura 5, 02-093 Warsaw (Poland); Nicolai, Hermann [Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut),Mühlenberg 1, D-14476 Potsdam (Germany)
2015-10-26
We present an extended version of the Conformal Standard Model (characterized by the absence of any new intermediate scales between the electroweak scale and the Planck scale) with an enlarged scalar sector coupling to right-chiral neutrinos. The scalar potential and the Yukawa couplings involving only right-chiral neutrinos are invariant under a new global symmetry SU(3){sub N} that complements the standard U(1){sub B−L} symmetry, and is broken explicitly only by the Yukawa interaction, of order O(10{sup −6}), coupling right-chiral neutrinos and the electroweak lepton doublets. We point out four main advantages of this enlargement, namely: (1) the economy of the (non-supersymmetric) Standard Model, and thus its observational success, is preserved; (2) thanks to the enlarged scalar sector the RG improved one-loop effective potential is everywhere positive with a stable global minimum, thereby avoiding the notorious instability of the Standard Model vacuum; (3) the pseudo-Goldstone bosons resulting from spontaneous breaking of the SU(3){sub N} symmetry are natural Dark Matter candidates with calculable small masses and couplings; and (4) the Majorana Yukawa coupling matrix acquires a form naturally adapted to leptogenesis. The model is made perturbatively consistent up to the Planck scale by imposing the vanishing of quadratic divergences at the Planck scale (‘softly broken conformal symmetry’). Observable consequences of the model occur mainly via the mixing of the new scalars and the standard model Higgs boson.
Applying the Extended Parallel Process Model to workplace safety messages.
Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M
2013-01-01
The extended parallel process model (EPPM) proposes that fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries: Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.
Modeling the heterogeneous intestinal absorption of propiverine extended-release.
Weiss, Michael; Sermsappasuk, Pakawadee; Siegmund, Werner
2015-08-30
Propiverine is a widely used antimuscarinic drug whose bioavailability is limited by intestinal first-pass extraction. To study the apparent heterogeneity in intestinal first-pass extraction, we performed a population analysis of oral concentration-time data measured after administration of an extended-release formulation of propiverine in ten healthy subjects. Using an inverse Gaussian function as the input model, the assumption that the systemically available fraction increases as a sigmoidal function of time considerably improved the fit. The step-like increase in this fraction at time t = 3.7 h predicted by the model suggests that propiverine is predominantly absorbed in the colon. A nearly perfect correlation was found between the estimates of bioavailability and mean dissolution time. Copyright © 2015 Elsevier B.V. All rights reserved.
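The input model described can be sketched as an inverse Gaussian density scaled by a sigmoidally increasing available fraction. All parameter values below are illustrative, not the fitted population estimates; only the 3.7 h step location mirrors the abstract:

```python
import math

def inverse_gaussian(t, mdt, cv2):
    """Inverse Gaussian input density with mean dissolution time mdt (h)
    and relative dispersion cv2; zero for t <= 0."""
    if t <= 0:
        return 0.0
    return math.sqrt(mdt / (2 * math.pi * cv2 * t ** 3)) * \
        math.exp(-(t - mdt) ** 2 / (2 * cv2 * mdt * t))

def available_fraction(t, fmax, t50, steep):
    """Sigmoidal increase of the systemically available fraction,
    stepping up around t50 (hours)."""
    return fmax / (1 + math.exp(-steep * (t - t50)))

def input_rate(t):
    # Illustrative parameters; t50 = 3.7 h mirrors the reported step.
    return available_fraction(t, 0.5, 3.7, 4.0) * inverse_gaussian(t, 6.0, 0.4)

# Evaluate on a 0-24 h grid and locate the peak of the effective input.
rates = [(t / 10.0, input_rate(t / 10.0)) for t in range(1, 241)]
peak_t = max(rates, key=lambda p: p[1])[0]
print("effective input rate peaks near t = %.1f h" % peak_t)
```

Because the sigmoid suppresses input before ~3.7 h, the effective absorption is pushed toward later times, which is the signature interpreted as predominantly colonic absorption.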
Extended Holstein polaron model for charge transfer in dry DNA
Liu Tao; Wang Yi; Wang Ke-Lin
2007-01-01
The variational method is applied to the study of charge transfer in dry DNA by using an extended Holstein small-polaron model in two cases: the site-dependent finite-chain discrete case and the site-independent continuous one. The treatments in the two cases are proven to be consistent in theory and calculation. Both the discrete and continuous treatments of the Holstein model yield a nonlinear equation describing charge migration in an actual long-range DNA chain. Our theoretical results for the binding energy Eb, the probability amplitude of the charge carrier φ and the relation between energy and charge-lattice coupling strength are in accordance with the available experimental results and recent theoretical calculations.
Gutzwiller study of extended Hubbard models with fixed boson densities
Kimura, Takashi [Department of Information Sciences, Kanagawa University, 2946 Tsuchiya, Hiratsuka, Kanagawa 259-1293 (Japan)
2011-12-15
We studied all possible ground states, including supersolid (SS) phases and phase separations of hard-core- and soft-core-extended Bose-Hubbard models with fixed boson densities by using the Gutzwiller variational wave function and the linear programming method. We found that the phase diagram of the soft-core model depends strongly on its transfer integral. Furthermore, for a large transfer integral, we showed that an SS phase can be the ground state even below or at half filling against the phase separation. We also found that the density difference between nearest-neighbor sites, which indicates the density order of the SS phase, depends strongly on the boson density and transfer integral.
Extended Group Contribution Model for Polyfunctional Phase Equilibria
Abildskov, Jens
The UNIFAC group contribution method predicts vapour-liquid equilibria from data on binary mixtures composed of structurally simple molecules with a single functional group. More complex is the situation with mixtures composed of structurally more complicated molecules, or molecules with more than one functional group. The UNIFAC method is extended to handle polyfunctional group situations, based on additional information on molecular structure. The extension involves the addition of second-order correction terms to the existing equation; in this way the current first-order formulation is retained. The second-order concept is developed for mixture properties. In chapter 4, parameters are estimated for the first-order UNIFAC model, based on which parameters are estimated for one of the second-order models described in chapter 3. The parameter estimation is based on measured binary data on around 4000 systems, covering 11 C-, H- and O-containing functional groups.
MILES extended: Stellar population synthesis models from the optical to the infrared
Röck, B.; Vazdekis, A.; Ricciardelli, E.; Peletier, R. F.; Knapen, J. H.; Falcón-Barroso, J.
2016-05-01
We present the first single-burst stellar population models which cover the optical and infrared wavelength range between 3500 and 50 000 Å and which are exclusively based on empirical stellar spectra. To obtain these joint models, we combined the extended MILES models in the optical with our new infrared models that are based on the IRTF (Infrared Telescope Facility) library. The latter are available only for a limited range in terms of both age and metallicity. Our combined single-burst stellar population models were calculated for ages larger than 1 Gyr, for metallicities between [Fe/H] = -0.40 and 0.26, for initial mass functions of various types and slopes, and on the basis of two different sets of isochrones. They are available to the scientific community on the MILES web page. We checked the internal consistency of our models and compared their colour predictions to those of other models available in the literature. Optical and near-infrared colours measured from our models are found to reproduce well the colours observed for various samples of early-type galaxies. Our models will enable a detailed analysis of the stellar populations of observed galaxies.
Model Equilibrium and Empirical Study of Rural Labor Transfer
Qinghua HUANG; Xiuchuan XU; Ming ZHANG; Yue ZHAO
2013-01-01
We establish a two-sector economy model comprising the urban sector and the rural sector, derive the labor demand curves of the urban and rural sectors under balanced production decisions with benefit maximization, and analyze the labor flow in short-term and long-term two-sector economic equilibrium. The results show that rising wages caused by short-term internal and external shocks increase the pressure on employment in both sectors, and the urban sector finds it difficult to absorb the surplus labor of the rural sector. However, under conditions of free factor flow and a fully competitive market, the wage variation arising from long-term endogenous evolution leads to an inversely proportional relationship between labor demand in the urban and rural sectors, which is conducive to the transfer of the rural labor force. Based on microeconomic survey data on labor flow in urban-rural coordination experimental zones in Chongqing City, this paper makes an empirical study of the main factors having a short-term impact on labor transfer, and the results show that education level and the opportunity to participate in training are important factors.
Analysis of the phase structure in extended Higgs models
Seniuch, M.
2006-07-07
We study the generation of the baryon asymmetry in the context of electroweak baryogenesis in two different extensions of the Standard Model. First, we consider an effective theory, in which the Standard Model is augmented by an additional dimension-six Higgs operator. The effects of new physics beyond a cut-off scale are parameterized by this operator. The second model is the two-Higgs-doublet model, whose particle spectrum is extended by two further neutral and two charged heavy Higgs bosons. In both cases we focus on the properties of the electroweak phase transition, especially on its strength and the profile of the nucleating bubbles. After reviewing some general aspects of the electroweak phase transition and baryogenesis we derive the respective thermal effective potentials to one-loop order. We systematically study the parameter spaces, using numerical methods, and compute the strength of the phase transition and the wall thickness as a function of the Higgs masses. We find a strong first order transition for a light Higgs state with a mass up to about 200 GeV. In case of the dimension-six model the cut-off scale has to stay between 500 and 850 GeV, in the two-Higgs-doublet model one needs at least one heavy Higgs mass of 300 GeV. The wall thickness varies for both theories in the range roughly from two to fifteen, in units of the inverse critical temperature. We also estimate the size of the electron and neutron electric dipole moments, since new sources of CP violation give rise to them. In wide ranges of the parameter space we are not in conflict with the experimental bounds. Finally the baryon asymmetry, which is predicted by these models, is related to the Higgs mass and the other appropriate input parameters. In both models the measured baryon asymmetry can be achieved for natural values of the model parameters. (orig.)
Empirically modelled Pc3 activity based on solar wind parameters
T. Raita
2010-09-01
It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; 2. weakening of the bow shock, which implies less efficient reflection; 3. the SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream with the Alfvén speed; and 4. the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through
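The multiple-linear-regression half of such a model can be sketched as an ordinary least-squares fit of Pc3 activity to SW speed, density, and cone angle. The data are synthetic and the coefficients purely illustrative (the real density dependence is nonlinear, as the abstract stresses); this only shows the fitting machinery:

```python
import random

random.seed(2)

# Synthetic "truth": activity rises with SW speed and density and falls
# with cone angle (linear toy relation, invented for illustration).
def pc3(v, n, cone):
    return 0.02 * v + 0.3 * n - 0.05 * cone + random.gauss(0, 0.5)

data = [(random.uniform(300, 700),      # SW speed, km/s
         random.uniform(0.05, 10.0),    # proton density, cm^-3
         random.uniform(0.0, 90.0))     # IMF cone angle, deg
        for _ in range(500)]
y = [pc3(v, n, c) for v, n, c in data]
X = [[1.0, v, n, c] for v, n, c in data]

# Ordinary least squares via normal equations + Gauss-Jordan elimination.
m = len(X[0])
A = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(m)]
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    f = A[col][col]
    A[col] = [v / f for v in A[col]]
    b[col] /= f
    for r in range(m):
        if r != col and A[r][col] != 0.0:
            g = A[r][col]
            A[r] = [u - g * w for u, w in zip(A[r], A[col])]
            b[r] -= g * b[col]
coef = b
print("intercept=%.3f speed=%.4f density=%.3f cone=%.4f" % tuple(coef))
```

Dropping the density column from `X` and comparing residuals reproduces, in miniature, the paper's test of whether SW density improves the model.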
Extended nonlinear feedback model for describing episodes of high inflation
Szybisz, Martín A.; Szybisz, Leszek
2017-01-01
An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in economies is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite time singularity of the type 1/(tc - t)^[(1 - β)/β], with β > 0, predicting a blow-up of the economy at a critical time tc. However, this model fails to determine tc in the case of weak hyperinflation regimes like, e.g., the one that occurred in Israel. To overcome this problem, the NLF model is extended by introducing a parameter γ, which multiplies all terms with the past growth rate index (GRI). In this novel approach the solution for the CPI is also analytic, being proportional to the Gaussian hypergeometric function 2F1(1/β, 1/β, 1 + 1/β; z), where z is a function of β, γ, and tc. For z → 1 this hypergeometric function diverges, leading to a finite time singularity from which a value of tc can be determined. This singularity is also present in the GRI. It is shown that the interplay between the parameters β and γ may produce phenomena of multiple equilibria. An analysis of the severe hyperinflation that occurred in Hungary proves that the novel model is robust. When this model is used for examining data from Israel a reasonable tc is obtained. High-inflation regimes in Mexico and Iceland, which exhibit weaker inflation than that of Israel, are also successfully described.
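The hypergeometric function at the heart of the solution can be evaluated directly from its series, which makes the divergence as z → 1 easy to see. The β value below is illustrative; the series implementation is the textbook definition, valid for |z| < 1:

```python
import math

def hyp2f1(a, b, c, z, terms=1000):
    """Gauss hypergeometric series 2F1(a, b; c; z) =
    sum_n (a)_n (b)_n / (c)_n * z^n / n!, convergent for |z| < 1."""
    total = term = 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

# Sanity check against the closed form 2F1(1, 1; 2; z) = -ln(1 - z)/z.
assert abs(hyp2f1(1, 1, 2, 0.5) - (-math.log(0.5) / 0.5)) < 1e-12

# CPI ~ 2F1(1/beta, 1/beta; 1 + 1/beta; z): the value blows up as z -> 1,
# which is the finite-time singularity used to locate tc.
beta = 0.5
for z in (0.5, 0.9, 0.99):
    print("z=%.2f  2F1=%.3f" % (z, hyp2f1(1 / beta, 1 / beta, 1 + 1 / beta, z)))
```

Since a + b - c = 1/β - 1 > 0 for β < 1, the function diverges at z = 1, so scanning z(t) toward 1 pins down tc in the extended model.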
Extended Nambu models: Their relation to gauge theories
Escobar, C. A.; Urrutia, L. F.
2017-05-01
Yang-Mills theories supplemented by an additional coordinate constraint, which is solved and substituted in the original Lagrangian, provide examples of the so-called Nambu models, in the case where such constraints arise from spontaneous Lorentz symmetry breaking. Some explicit calculations have shown that, after additional conditions are imposed, Nambu models are capable of reproducing the original gauge theories, thus making Lorentz violation unobservable and allowing the interpretation of the corresponding massless gauge bosons as the Goldstone bosons arising from the spontaneous symmetry breaking. A natural question posed by this approach in the realm of gauge theories is to determine under which conditions the recovery of an arbitrary gauge theory from the corresponding Nambu model, defined by a general constraint over the coordinates, becomes possible. We refer to these theories as extended Nambu models (ENM) and emphasize the fact that the defining coordinate constraint is not treated as a standard gauge fixing term. At this level, the mechanism for generating the constraint is irrelevant and the case of spontaneous Lorentz symmetry breaking is taken only as a motivation, which naturally brings this problem under consideration. Using a nonperturbative Hamiltonian analysis we prove that the ENM yields the original gauge theory after we demand current conservation for all time, together with the imposition of the Gauss law constraints as initial conditions upon the dynamics of the ENM. The Nambu models yielding electrodynamics, Yang-Mills theories and linearized gravity are particular examples of our general approach.
Streamflow Data Assimilation in SWAT Model Using Extended Kalman Filter
Sun, L.; Nistor, I.; Seidou, O.
2014-12-01
Although the Extended Kalman Filter (EKF) is regarded as the de facto method for applying the Kalman Filter to non-linear systems, its application to complex distributed hydrological models faces many challenges. The Ensemble Kalman Filter (EnKF) is often preferred because it avoids the calculation of the linearization Jacobian matrix and the propagation of the estimation error covariance. EnKF is, however, difficult to apply to large models because of the huge computational demand needed for parallel propagation of ensemble members. This paper deals with the application of EKF to streamflow prediction using the SWAT model in the watershed of the Senegal River, West Africa. In the Jacobian matrix calculation, SWAT is regarded as a black-box model and the derivatives are calculated in the form of differential equations. The state vector is the combination of runoff, soil, shallow aquifer and deep aquifer water contents. As an initial attempt, only streamflow observations are assimilated. Despite the fact that EKF is a sub-optimal filter, the coupling of EKF significantly improves the estimation of daily streamflow. The results of SWAT+EKF are also compared to those of a simpler quasi-linear streamflow prediction model where both state and parameters are updated with the EKF.
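When the model is treated as a black box, the Jacobian can be obtained by finite differences, as in this one-dimensional EKF sketch. The toy storage-outflow model stands in for SWAT; all dynamics and noise values are invented for illustration:

```python
import math
import random

random.seed(3)

def outflow(s):
    """Nonlinear outflow (streamflow) as a function of storage s."""
    s = max(s, 0.0)
    return 0.1 * s ** 1.5 / (1 + 0.01 * s)

def model(s, rain):
    """Toy daily water-balance model standing in for SWAT."""
    return s + rain - outflow(s)

def jacobian_fd(f, s, rain, eps=1e-4):
    """Black-box Jacobian df/ds by central finite differences."""
    return (f(s + eps, rain) - f(s - eps, rain)) / (2 * eps)

Q, R = 0.05, 0.04                    # process / observation noise variances
s_true, s_est, P = 10.0, 5.0, 4.0    # true state, estimate, error variance
errors = []
for day in range(200):
    rain = max(0.0, random.gauss(1.0, 0.5))
    s_true = model(s_true, rain) + random.gauss(0, math.sqrt(Q))
    z = outflow(s_true) + random.gauss(0, math.sqrt(R))  # observed flow
    # EKF predict: propagate state and error variance with the Jacobian.
    F = jacobian_fd(model, s_est, rain)
    s_est = model(s_est, rain)
    P = F * P * F + Q
    # EKF update: linearized observation operator, scalar Kalman gain.
    H = jacobian_fd(lambda s, _: outflow(s), s_est, None)
    K = P * H / (H * P * H + R)
    s_est += K * (z - outflow(s_est))
    P *= (1 - K * H)
    errors.append(abs(s_true - s_est))
print("mean |state error| over last 50 days: %.3f" % (sum(errors[-50:]) / 50))
```

The finite-difference Jacobian is the expensive step that motivates the differential-equation derivatives in the paper: each evaluation costs extra model runs, which adds up quickly for high-dimensional state vectors.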
Kutílek, M; Jendele, L; Krejca, M
2009-02-16
The accelerated flow in soil pores is responsible for rapid transport of pollutants from the soil surface to deeper layers, up to groundwater. The term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain in bi-modal soils. We compared equations describing the soil water retention function h(theta) and the unsaturated hydraulic conductivity K(h), eventually K(theta), modified for bi-modal soils, where theta is the soil water content and h is the pressure head. The analytical description of a curve passing through experimental data sets of a soil hydraulic function is typical of empirical equations characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak of a physically based model. There exist several transitional subtypes between empirical and physically based models; they are denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets of sand, silt, silt loam and loam. All soils used are typified by the bi-modality of their porous systems. The model efficiency was estimated by RMSE (root mean square error) and by RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function. With this equation, the fit of the modelled data to experiments was the closest one. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data due to the rigidity and simplicity of the physical model when compared to the
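The goodness-of-fit measures used to rank the candidate equations can be computed directly. The synthetic retention points and the two candidate curves below are invented, only to show how RMSE and RSE separate a close fit from a poorer one:

```python
import math

def rmse(obs, mod):
    """Root mean square error between observed and modelled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs))

def rse(obs, mod):
    """Relative square error: residual sum of squares normalised by the
    spread of the observations around their mean."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - m) ** 2 for o, m in zip(obs, mod))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return ss_res / ss_tot

# Synthetic retention points theta(h) and two candidate model curves.
obs = [0.42, 0.40, 0.35, 0.27, 0.18, 0.12, 0.08]
model_a = [0.41, 0.40, 0.34, 0.28, 0.19, 0.11, 0.08]   # close fit
model_b = [0.44, 0.37, 0.30, 0.30, 0.22, 0.16, 0.05]   # poorer fit
for name, mod in (("A", model_a), ("B", model_b)):
    print("model %s  RMSE=%.4f  RSE=%.4f" % (name, rmse(obs, mod), rse(obs, mod)))
```

A model with many free fitting parameters (the semi-empirical case) can always drive both measures down; the physically based equation cannot, which is the trade-off the study quantifies.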
MILES extended: Stellar population synthesis models from the optical to the infrared
Röck, B; Ricciardelli, E; Peletier, R F; Knapen, J H; Falcon-Barroso, J
2016-01-01
We present the first single-burst stellar population models which cover the optical and infrared wavelength range between 3500 and 50000 Angstrom and which are exclusively based on empirical stellar spectra. To obtain these joint models, we combined the extended MILES models in the optical with our new infrared models that are based on the IRTF (Infrared Telescope Facility) library. The latter are available only for a limited range in terms of both age and metallicity. Our combined single-burst stellar population models were calculated for ages larger than 1 Gyr, for metallicities between [Fe/H] = -0.40 and 0.26, for initial mass functions of various types and slopes, and on the basis of two different sets of isochrones. They are available to the scientific community on the MILES web page. We checked the internal consistency of our models and compared their colour predictions to those of other models that are available in the literature. Optical and near infrared colours that are measured from our models...
Critical phenomena of asymmetric nuclear matter in the extended Zimanyi-Moszkowski model
Miyazaki, K
2005-01-01
We have studied the liquid-gas phase transition of warm asymmetric nuclear matter in the extended Zimanyi-Moszkowski model. Three sets of isovector-meson coupling constants are used. It is found that the critical temperature depends only on differences in the symmetry energy between the sets and not on differences in the individual isovector coupling constants. We treat the asymmetric nuclear matter as a one-component system and employ the Maxwell construction to calculate the liquid-gas phase coexistence curve. The derived critical exponents depend on neither the symmetry energy nor the asymmetry of the system. Their values beta=0.33 and gamma=1.21 agree with the empirical values derived from recent multifragmentation reactions. Consequently, we have confirmed the universality of the critical phenomena in the liquid-gas phase transition of nuclear matter.
Extended duration local anesthetic agent in a rat paw model.
Ickowicz, D E; Golovanevski, L; Domb, A J; Weiniger, C F
2014-07-01
Encapsulated local anesthetics extend the postoperative analgesic effect following site-directed nerve injection, potentially reducing postoperative complications. The aim of our study was to investigate the efficacy of our improved extended-duration formulation: 15% bupivacaine in poly(DL-lactic acid co castor oil) 3:7 synthesized by ring-opening polymerization. In vitro, around 70% of the bupivacaine was released from the p(DLLA-CO) 3:7 after 10 days. A single dose of the optimal formulation of 15% bupivacaine-polymer, or of plain (0.5%) bupivacaine as a control, was injected via a 22G needle beside the sciatic nerve of Sprague-Dawley rats under anesthesia, followed (in some animals) by a 1 cm longitudinal incision through the skin and fascia of the paw area. Behavioral tests for sensory and motor block assessment were performed using the Hargreaves hot plate score, von Frey filaments and rearing counts. The 15% bupivacaine formulation significantly prolonged sensory block duration up to at least 48 h. Following surgery, motor block was observed for 48 h after administration of the bupivacaine-polymer formulation, and rearing was reduced (returning to baseline after 48 h). No significant differences in mechanical nociceptive response were observed. The optimized bupivacaine-polymer formulation prolonged the duration of the local anesthetic effect in our animal model up to at least 48 h.
Generalized multiplicative error models: Asymptotic inference and empirical analysis
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures series with a nontrivial fraction of zero outcomes and combines a so-called zero-augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interactions between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time needed for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
A multifluid model extended for strong temperature nonequilibrium
Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2016-08-08
We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.
How Stueckelberg Extends the Standard Model and the MSSM
Körs, B; Kors, Boris; Nath, Pran
2005-01-01
Abelian vector bosons can get massive through the Stueckelberg mechanism without spontaneous symmetry breaking via condensation of Higgs scalar fields. This appears very naturally in models derived from string theory and supergravity. The simplest scenarios of this type consist of extensions of the Standard Model (SM) or the minimal supersymmetric standard model (MSSM) by an extra U(1)_X gauge group with Stueckelberg type couplings. For the SM, the physical spectrum is extended by a massive neutral gauge boson Z' only, while the extension of the MSSM contains a CP-even neutral scalar and two extra neutralinos. The new gauge boson Z' can be very light compared to other models with U(1)' extensions. Among the new features of the Stueckelberg extension of the MSSM, the most striking is the possibility of a new lightest supersymmetric particle (LSP) chi_{St}^0 which is mostly composed of Stueckelberg fermions. In this scenario the LSP of the MSSM chi_1^0 is unstable and decays into chi_{St}^0. Such decays alter t...
Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind
Y. I. Portnyagin
An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed from the fitting of monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of annual mean wind, amplitudes and phases of annual and semiannual harmonics of wind variations, are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of a comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of an annual mean systematic bias between the zonal winds provided by the ground-based and space-based models, good agreement between the models is observed. The possible origin of this bias is discussed.
Key words: Meteorology and atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)
"Let's Move" campaign: applying the extended parallel process model.
Batchelder, Alicia; Matusitz, Jonathan
2014-01-01
This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, the EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels a concern about the issue or situation, and (b) when he or she believes in his or her capability of dealing with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as postulated by the EPPM). For example, part of the campaign's strategy is to explain the severity of the diseases associated with obesity. By looking at the steps of the EPPM, readers can also understand the strengths and weaknesses of "Let's Move."
Postcorrection and mathematical model of life in Extended Everett's Concept
Mensky, Michael B
2007-01-01
Extended Everett's Concept (EEC), recently developed by the author to explain the phenomenon of consciousness, is considered. A mathematical model is proposed for the principal feature of consciousness assumed in EEC, namely its ability (in the state of sleep, trance or meditation, when the explicit consciousness is disabled) to obtain information from all alternative classical realities (Everett's worlds) and select the favorable realities. To represent this ability, a mathematical operation called postcorrection is introduced, which corrects the present state to guarantee certain characteristics of the future state. Evolution of living matter is thus determined by goals (first of all by the goal of survival) as well as by causes. The resulting theory, in a way symmetrical in time direction, follows from a sort of anthropic principle. Possible criteria for postcorrection and corresponding phenomena in the sphere of life are classified. Both individual and collective criteria of survival are considered as well a...
Wan, Rui; Durlach, Nathaniel I; Colburn, H Steven
2010-12-01
An extended version of the equalization-cancellation (EC) model of binaural processing is described and applied to speech intelligibility tasks in the presence of multiple maskers. The model incorporates time-varying jitters, both in time and amplitude, and implements the equalization and cancellation operations in each frequency band independently. The model is consistent with the original EC model in predicting tone-detection performance for a large set of configurations. When the model is applied to speech, the speech intelligibility index is used to predict speech intelligibility performance in a variety of conditions. Specific conditions addressed include different types of maskers, different numbers of maskers, and different spatial locations of maskers. Model predictions are compared with empirical measurements reported by Hawley et al. [J. Acoust. Soc. Am. 115, 833-843 (2004)] and by Marrone et al. [J. Acoust. Soc. Am. 124, 1146-1158 (2008)]. The model succeeds in predicting speech intelligibility performance when maskers are speech-shaped noise or broadband-modulated speech-shaped noise but fails when the maskers are speech or reversed speech.
Denitrification in the root zone using a simple empirical model SimDen
Vinther, Finn Pilgaard
2006-01-01
Only by knowing soil type and amount of nitrogen applied, an estimate of the annual denitrification can be obtained with the simple empirical model SimDen.
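SimDen's premise, that soil type and applied nitrogen suffice for an annual estimate, can be sketched as a simple lookup-and-scale calculation. The coefficients below are purely hypothetical placeholders, not the published SimDen values:

```python
# Hypothetical per-soil-type fractions of applied N lost to denitrification.
# These numbers are illustrative only; the real SimDen coefficients are not
# given in the abstract.
SOIL_FACTOR = {"sand": 0.05, "loam": 0.10, "clay": 0.20}

def annual_denitrification(soil_type, n_applied):
    """Estimate annual denitrification (kg N/ha) from soil type and the
    amount of nitrogen applied (kg N/ha), in the spirit of SimDen."""
    return SOIL_FACTOR[soil_type] * n_applied
```

The appeal of such a model is that both inputs are routinely available at farm scale, so no site-specific measurements are required.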
The Hannover Consultation Liaison model: some empirical findings.
Freyberger, H; Künsebeck, H W; Lempa, W; Avenarius, H J; Liedtke, R; Plassman, R; Nordmeyer, J
1985-01-01
Starting from definitions of the concepts 'liaison medicine' and 'consultative psychiatry', we begin with remarks on the consultation-liaison situation in West Germany under the headings 'brief history', 'independent university units for psychotherapy and psychosomatics and their organization' and 'teaching procedures'. The Hannover Consultation Liaison model is then presented, particularly with regard to the psychosomatic inpatient ward, including its functional organization and psychotherapeutic processes, and the so-called 'Innere Ambulanz', which comprises the consultation-liaison services in the clinical medical departments outside psychiatry and psychosomatics. Within the 'Innere Ambulanz', which is closely connected to our psychosomatic inpatient ward, the consultation-liaison activities and the resulting supportive psychotherapeutic strategies are carried out by student auxiliary therapists who choose to complete their 4-5 month internship in our department. We describe the three supportive psychotherapeutic steps, which may last months to years and include subsequent dynamic psychotherapeutic strategies, as well as the effects of the auxiliary-therapist role on the students. Furthermore, we would argue that few educational procedures for graduate students are as valuable as the student's confrontation with partial self-responsibility for a patient whom he or she treats supportive-psychotherapeutically. Specific empirical evidence for our patient-oriented consultation-liaison activities is presented on the basis of previous psychotherapeutic findings in Crohn patients. Here we are able to demonstrate the effectiveness of psychotherapy for patients treated with supplementary psychotherapy in comparison to patients who received medical therapy only. Finally we are able to present quantitative clinico
Empirical evaluation of scoring functions for Bayesian network model selection.
Liu, Zhifa; Malone, Brandon; Yuan, Changhe
2012-01-01
In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also
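The scoring functions compared in the study share a common penalized-likelihood form; the contrast between BIC/MDL and AIC comes down to the size of the per-parameter penalty. A minimal sketch (generic formulas, not the study's implementation):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: -2*logL + 2k (lower is better)."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion (equivalent to MDL up to sign
    conventions): -2*logL + k*ln(n), where n is the sample size."""
    return -2.0 * log_lik + k * math.log(n)
```

Since ln(n) > 2 whenever n > e^2 ≈ 7.4, BIC penalizes each extra parameter more heavily than AIC on all but tiny datasets, which is consistent with its tendency here to recover sparser structures closer to the true network.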
A Semi-empirical Model of the Stratosphere in the Climate System
Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.
2014-12-01
Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmospheric-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.
DCC&U: An Extended Digital Curation Lifecycle Model
Panos Constantopoulos
2009-06-01
The proliferation of Web, database and social networking technologies has enabled us to produce, publish and exchange digital assets at an enormous rate. This vast amount of information, either digitized or born-digital, needs to be collected, organized and preserved in a way that ensures that our digital assets and the information they carry remain available for future use. Digital curation has emerged as a new inter-disciplinary practice that seeks to set guidelines for disciplined management of information. In this paper we review two recent models for digital curation introduced by the Digital Curation Centre (DCC) and the Digital Curation Unit (DCU) of the Athena Research Centre. We then propose a fusion of the two models that highlights the need to extend the digital curation lifecycle by adding (a) provisions for the registration of usage experience, (b) a stage for knowledge enhancement and (c) controlled vocabularies used by convention to denote concepts, properties and relations. The objective of the proposed extensions is twofold: (i) to provide a more complete lifecycle model for the digital curation domain; and (ii) to provide a stimulus for a broader discussion on the research agenda.
New extended standard model, dark matters and relativity theory
Hwang, Jae-Kwang
2016-03-01
Three-dimensional quantized space model is newly introduced as the extended standard model. Four three-dimensional quantized spaces with total 12 dimensions are used to explain the universes including ours. Electric (EC), lepton (LC) and color (CC) charges are defined to be the charges of the x1x2x3, x4x5x6 and x7x8x9 warped spaces, respectively. Then, the lepton is the xi(EC) - xj(LC) correlated state which makes 3x3 = 9 leptons and the quark is the xi(EC) - xj(LC) - xk(CC) correlated state which makes 3x3x3 = 27 quarks. The new three bastons with the xi(EC) state are proposed as the dark matters seen in the x1x2x3 space, too. The matter universe question, three generations of the leptons and quarks, dark matter and dark energy, hadronization, the big bang, quantum entanglement, quantum mechanics and general relativity are briefly discussed in terms of this new model. The details can be found in the article titled as ``journey into the universe; three-dimensional quantized spaces, elementary particles and quantum mechanics at https://www.researchgate.net/profile/J_Hwang2''.
赵进平
2002-01-01
The mirror extending approach proposed by Zhao and Huang in the EMD method is improved in this paper. The mirror extending manner of the data is kept unchanged, but the approach for determining envelopes is changed. When the end of the data is obviously not an extremum, the envelope is determined by the first inner extremum and its image value in the mirror, ignoring the value at the end. This improvement eliminates the frequency compression near the end and decreases the error. Meanwhile, tridiagonal equations are used and the calculation speed is much increased. The temporal process curve is more important in reflecting the real physical process and in comparison with other phenomena; frequency mixing in IMFs makes this impossible. A high-frequency reconstruction (HFR) approach is proposed to eliminate common frequency mixing and reconstruct an IMF with all high-frequency portions. By this approach, IMFs without frequency mixing are obtained to express significant processes. The high-frequency information restored in the high-frequency IMF can be extracted by general spectrum methods. After obtaining IMFs by the EMD method, some theoretical and technological issues still exist when using the IMFs. The consistency of IMFs with the real physical process is discussed in detail. By virtue of the approach proposed in this paper, the EMD method can be widely used in various fields.
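The mirror extension itself, reflecting the data about its end points before fitting envelopes, can be sketched as follows (an illustrative reflection scheme, not the authors' exact algorithm):

```python
import numpy as np

def mirror_extend(x, n_ext):
    """Extend a 1-D signal by mirror reflection about both ends, a common
    remedy for the end effect when fitting EMD envelopes. n_ext samples are
    reflected on each side; the end points act as the mirrors, so they are
    not duplicated."""
    x = np.asarray(x, dtype=float)
    left = x[1:n_ext + 1][::-1]      # reflection of x[1..n_ext] about x[0]
    right = x[-n_ext - 1:-1][::-1]   # reflection of the tail about x[-1]
    return np.concatenate([left, x, right])
```

Envelopes fitted on the extended series behave smoothly across the original end points, which is what suppresses the spurious frequency compression there.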
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin
2016-04-01
The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications such as site-specific (e.g., critical facility) hazard analysis, ground motions obtained from the GMPEs need to be adjusted/corrected to the particular site/site-condition under investigation. This study presents a complete framework for developing a response spectral GMPE within which the issue of adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process in which the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvto) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to the one described in Edwards and Faeh (2013). Comparison of the median response spectra predicted by the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.
Baryon-Baryon Interactions ---Nijmegen Extended-Soft-Core Models---
Rijken, T. A.; Nagels, M. M.; Yamamoto, Y.
We review the Nijmegen extended-soft-core (ESC) models for the baryon-baryon (BB) interactions of the SU(3) flavor-octet of baryons (N, Lambda, Sigma, and Xi). The interactions are basically studied from the meson-exchange point of view, in the spirit of the Yukawa-approach to the nuclear force problem [H. Yukawa, ``On the Interaction of Elementary Particles I'', Proceedings of the Physico-Mathematical Society of Japan 17 (1935), 48], using generalized soft-core Yukawa-functions. These interactions are supplemented with (i) multiple-gluon-exchange, and (ii) structural effects due to the quark-core of the baryons. We present in some detail the most recent extended-soft-core model, henceforth referred to as ESC08, which is the most complete, sophisticated, and successful interaction-model. Furthermore, we discuss briefly its predecessor, the ESC04-model [Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044007; Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044008; Th. A. Rijken and Y. Yamamoto, nucl-th/0608074]. For the soft-core one-boson-exchange (OBE) models we refer to the literature [Th. A. Rijken, in Proceedings of the International Conference on Few-Body Problems in Nuclear and Particle Physics, Quebec, 1974, ed. R. J. Slobodrian, B. Cuec and R. Ramavataram (Presses Université Laval, Quebec, 1975), p. 136; Th. A. Rijken, Ph. D. thesis, University of Nijmegen, 1975; M. M. Nagels, Th. A. Rijken and J. J. de Swart, Phys. Rev. D 17 (1978), 768; P. M. M. Maessen, Th. A. Rijken and J. J. de Swart, Phys. Rev. C 40 (1989), 2226; Th. A. Rijken, V. G. J. Stoks and Y. Yamamoto, Phys. Rev. C 59 (1999), 21; V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C 59 (1999), 3009]. All ingredients of these latter models are also part of ESC08, and so a description of ESC08 comprises all models so far in principle. The extended-soft-core (ESC) interactions consist of local- and non-local-potentials due to (i) one-boson-exchanges (OBE), which are the members of nonets of
Extended MHD Modeling of Tearing-Driven Magnetic Relaxation
Sauppe, Joshua
2016-10-01
Driven plasma pinch configurations are characterized by the gradual accumulation and episodic release of free energy in discrete relaxation events. The hallmark of this relaxation in a reversed-field pinch (RFP) plasma is flattening of the parallel current density profile effected by a fluctuation-induced dynamo emf in Ohm's law. Nonlinear two-fluid modeling of macroscopic RFP dynamics has shown appreciable coupling of magnetic relaxation and the evolution of plasma flow. Accurate modeling of RFP dynamics requires the Hall effect in Ohm's law as well as first-order ion finite Larmor radius (FLR) effects, represented by the Braginskii ion gyroviscous stress tensor. New results find that the Hall dynamo effect from <δJ×δB>/(ne) can counter the MHD dynamo effect from -<δv×δB> in some of the relaxation events. The MHD effect dominates these events and relaxes the current profile toward the Taylor state, but the opposition of the two dynamos generates plasma flow in the direction of the equilibrium current density, consistent with experimental measurements. Detailed experimental measurements of the MHD and Hall emf terms are compared to these extended MHD predictions. Tracking the evolution of magnetic energy, helicity, and hybrid helicity during relaxation identifies the most important contributions in single-fluid and two-fluid models. Magnetic helicity is well conserved relative to the magnetic energy during relaxation. The hybrid helicity is dominated by magnetic helicity in realistic low-beta pinch conditions and is also well conserved. Differences of less than 1% between magnetic helicity and hybrid helicity are observed with two-fluid modeling and result from cross helicity evolution through ion FLR effects, which have not been included in contemporary relaxation theories. The kinetic energy driven by relaxation in the computations is dominated by velocity components perpendicular to the magnetic field, an effect that had not been predicted. Work performed at University of Wisconsin
Low-energy limit of the extended Linear Sigma Model
Divotgey, Florian; Giacosa, Francesco; Rischke, Dirk H
2016-01-01
The extended Linear Sigma Model (eLSM) is an effective hadronic model based on the linear realization of chiral symmetry $SU(N_f)_L \\times SU(N_f)_R$, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the eLSM for $N_f=2$ flavors by integrating out all fields except for the pions, the (pseudo-)Nambu--Goldstone bosons of chiral symmetry breaking. We only keep terms entering at tree level and up to fourth order in powers of derivatives of the pion fields. Up to this order, there are four low-energy coupling constants in the resulting low-energy effective action. We show that the latter is formally identical to Chiral Perturbation Theory (ChPT), after choosing a representative for the coset space generated by chiral symmetry breaking and expanding up to fourth order in powers of derivatives of the pion fields. Two of the low-energy coupling constants of the eLSM are uniquely determined by a fit to hadron masses and decay widths. We find that thei...
Foster, N.L.; Paris, C.B.; Kool, J.T.; Baums, I.B.; Stevens, J.R.; Sanchez, J.A.; Bastidas, C.; Agudelo, C.; Bush, P.; Day, O.; Ferrari, R.; Gonzalez, P.; Gore, S.; Guppy, R.; McCartney, M.A.; McCoy, C.; Mendes, J.; Srinivasan, A.; Steiner, S.; Vermeij, M.J.A.; Weil, E.; Mumby, P.J.
2012-01-01
Understanding patterns of connectivity among populations of marine organisms is essential for the development of realistic, spatially explicit models of population dynamics. Two approaches, empirical genetic patterns and oceanographic dispersal modelling, have been used to estimate levels of
An empirical mixing model for pressurized thermal shock applications
Chexal, V.K.; Chao, J.; Griesbach, T.J.; Nickell, R.E.
1985-04-01
Empirical correlations are developed for the local temperature and velocity distributions in the pressurized water reactor downcomer for pressurized thermal shock scenarios. The correlation is based on Creare test data and has been validated with Science Applications, Inc., experiments and COMMIX code calculations. It provides good agreement under pump flow and natural circulation conditions and gives a conservative estimate under stagnation conditions.
Empirical Bayes Estimation in the Rasch Model: A Simulation.
de Gruijter, Dato N. M.
In a situation where the population distribution of latent trait scores can be estimated, the ordinary maximum likelihood estimator of latent trait scores may be improved upon by taking the estimated population distribution into account. In this paper empirical Bayes estimators are compared with the likelihood estimator for three samples of 300…
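The shrinkage idea behind an empirical Bayes estimator can be illustrated for the normal-normal case, in which the estimate is a precision-weighted average of the individual ML score and the estimated population mean (an illustrative special case; the paper's Rasch-based estimators are more involved):

```python
def eb_estimate(ml_theta, se2, pop_mean, pop_var):
    """Empirical Bayes (normal-normal) point estimate: shrink the
    maximum-likelihood trait estimate toward the estimated population
    mean, with weight set by the relative precisions. se2 is the error
    variance of the ML estimate; pop_var the population variance."""
    w = pop_var / (pop_var + se2)   # weight on the individual ML estimate
    return w * ml_theta + (1.0 - w) * pop_mean
```

When the ML estimate is precise (small se2) it is barely changed; when it is noisy, the estimate is pulled strongly toward the population mean, which is exactly where the improvement over plain ML comes from.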
Raab, Markus
2014-01-01
Situation Model of Anticipated Response consequences in tactical decisions (SMART) describes the interaction of top-down and bottom-up processes in skill acquisition and thus the dynamic interaction of sensory and motor capacities in embodied cognition. The empirically validated, extended, and revised SMART-ER can now predict when specific dynamic interactions of top-down and bottom-up processes have a beneficial or detrimental effect on performance and learning depending on situational constraints. The model is empirically supported and proposes learning strategies for when situation complexity varies or time pressure is present. Experiments from expertise research in sports illustrate that neither bottom-up nor top-down processes are bad or good per se but their effects depend on personal and situational characteristics.
The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data
Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard
The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used, the Virkler data (aluminium alloy) and data...... that the FMF-model gives adequate description of the empirical data using model parameters characteristic of the material....
Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J
2014-01-01
The drying kinetics of thyme was analyzed by considering different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. Additionally, the dependence of the parameters of each empirical model on the drying temperature was determined, obtaining equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
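As a rough cross-check of the Arrhenius dependence quoted above, the two endpoint diffusivities alone fix a two-point estimate of the activation energy. This is a sketch, not the authors' fitting procedure: the reported 49.42 kJ/mol was fitted over all temperatures, so the endpoint estimate only needs to land in the same range.

```python
import math

# Arrhenius relationship: D = D0 * exp(-Ea / (R * T))
# Two-point estimate of Ea from the endpoint diffusivities in the abstract.
R = 8.314                        # gas constant, J/(mol*K)
T1, D1 = 40 + 273.15, 3.68e-5    # effective diffusivity (s^-1) at 40 degC
T2, D2 = 70 + 273.15, 2.12e-4    # effective diffusivity (s^-1) at 70 degC

# ln(D2/D1) = -(Ea/R) * (1/T2 - 1/T1)  =>  solve for Ea
Ea = R * math.log(D2 / D1) / (1 / T1 - 1 / T2)   # J/mol
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")
```

The two-point value comes out near 52 kJ/mol, the same order as the reported full-range fit of 49.42 kJ/mol.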
Polarons in semiconducting polymers: Study within an extended Holstein model
Meisel, K. D.; Vocks, H.; Bobbert, P. A.
2005-05-01
We present a study of electron- (hole-) phonon interaction and polaron formation in semiconducting polymers within an extended Holstein model. A minimization of the lowest electronic state of this Hamiltonian with respect to lattice degrees of freedom yields the polaronic ground state. Input parameters of this Hamiltonian are obtained from ab initio calculations based on the density-functional theory. We calculate optical phonon modes and the coupling constants of these modes to the highest occupied and lowest unoccupied molecular orbital bands, respectively. For the studied polymers [polythiophene, poly(phenylenevinylene), poly(para-phenylene)] the polaron binding energy, its size, and the lattice deformation as a function of conjugation length have been determined. Self-trapped polarons are found for long conjugation lengths. Energies of prominent PPV modes involved in polaron formation agree with infrared spectra. The polaron binding energies we find are much smaller than the width of the energy disorder in polymeric systems of practical importance, thus self-trapping effects can be ignored in practice.
Wette, Karl
2016-01-01
All-sky searches for gravitational-wave pulsars are generally limited in sensitivity by the finite availability of computing resources. Semicoherent searches are a common method of maximizing search sensitivity given a fixed computing budget. The work of Wette and Prix [Phys. Rev. D 88, 123005 (2013)] and Wette [Phys. Rev. D 92, 082003 (2015)] developed a semicoherent search method which uses metrics to construct the banks of pulsar signal templates needed to search the parameter space of interest. In this work we extend the range of validity of the parameter-space metrics using an empirically-derived relationship between the resolution (or mismatch) of the template banks and the mismatch of the overall search. This work has important consequences for the optimization of metric-based semicoherent searches at fixed computing cost.
TIME-IGGCAS model validation: Comparisons with empirical models and observations
2008-01-01
The TIME-IGGCAS (Theoretical Ionospheric Model of the Earth in Institute of Geology and Geophysics, Chinese Academy of Sciences) has been developed recently on the basis of previous works. To test its validity, we have made comparisons of model results with other typical empirical ionospheric models (IRI, NeQuick-ITUR, and Titheridge temperature models) and multiple observations (GPS, ionosondes, Topex, DMSP, FORMOSAT, and CHAMP) in this paper. Several conclusions are obtained from our comparisons. The modeled electron density and electron and ion temperatures are quantitatively in good agreement with those of empirical models and observations. TIME-IGGCAS can model the electron density variations versus several factors such as local time, latitude, and season very well, and can reproduce most anomalistic features of the ionosphere, including the equatorial anomaly, winter anomaly, and semiannual anomaly. These results imply a good basis for the development of an ionospheric data assimilation model in the future. TIME-IGGCAS underestimates electron temperature and overestimates ion temperature in comparison with either empirical models or observations. The model results have relatively large deviations near sunrise and sunset times and at low altitudes. These results give us a reference to improve the model and enhance its performance in the future.
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
Olta Milova
2014-02-01
Full Text Available According to Mankiw (2000), fiscal policy in major macroeconomic models adversely affects the behavior of private agents such as consumers and firms, and they in turn affect economic growth through investment and savings decisions. Increasing government spending will increase the aggregate demand for goods and services and the demand for money in the money market, leading to an increase in interest rates as markets tend towards equilibrium. The increased interest rates negatively affect the level of private investment. To assess the effect of fiscal policy on economic growth, endogenous growth models are generally used, which include technological progress as an integrated part of the model. These models were called endogenous because they took long-term economic growth into account and used endogenous mechanisms to explain its main source, technological progress. Endogenous growth models developed by Barro (1990), Mendosa, Milesi-Ferreti and Asea (1997), and other economists predict that fiscal policy can affect the level of product and long-run economic growth. This conclusion is analysed in the theory of Barro (1990), which extends the model by including fiscal policy. Barro's model is the model used in this paper to analyse the effect of fiscal policy on economic growth in the case of Albania. The empirical work shows that all the variables, except inflation, which according to theoretical expectations should have a negative effect, affect economic growth positively. This positive relation between these variables can be explained by the investments in infrastructure and other priority sectors that the government has made during all this period.
Carrara-Augustenborg, Claudia
2012-01-01
There is no consensus yet regarding a conceptualization of consciousness able to accommodate all the features of such a complex phenomenon. Different theoretical and empirical models lend strength both to the occurrence of a non-accessible informational broadcast, and to the mobilization of specific...... brain areas responsible for the emergence of the individual's explicit and variable access to given segments of such broadcast. Rather than advocating one model over others, this chapter proposes to broaden the conceptualization of consciousness by letting it embrace both mechanisms. Within...... such an extended framework, I propose conceptual and functional distinctions between consciousness (global broadcast of information), awareness (the individual's ability to access the content of such broadcast) and unconsciousness (focally isolated neural activations). My hypothesis is that a demarcation in terms...
Emergent lattices with geometrical frustration in doped extended Hubbard models
Kaneko, Ryui; Tocchio, Luca F.; Valentí, Roser; Gros, Claudius
2016-11-01
Spontaneous charge ordering occurring in correlated systems may be considered as a possible route to generate effective lattice structures with unconventional couplings. For this purpose we investigate the phase diagram of doped extended Hubbard models on two lattices: (i) the honeycomb lattice with on-site U and nearest-neighbor V Coulomb interactions at 3/4 filling (n = 3/2) and (ii) the triangular lattice with on-site U, nearest-neighbor V, and next-nearest-neighbor V' Coulomb interactions at 3/8 filling (n = 3/4). We consider various approaches including mean-field approximations, perturbation theory, and variational Monte Carlo. For the honeycomb case (i), charge order induces an effective triangular lattice at large values of U/t and V/t, where t is the nearest-neighbor hopping integral. The nearest-neighbor spin exchange interactions on this effective triangular lattice are antiferromagnetic in most of the phase diagram, while they become ferromagnetic when U is much larger than V. At U/t ~ (V/t)³, ferromagnetic and antiferromagnetic exchange interactions nearly cancel out, leading to a system with four-spin ring-exchange interactions. On the other hand, for the triangular case (ii) at large U and finite V', we find no charge order for small V, an effective kagome lattice for intermediate V, and one-dimensional charge order for large V. These results indicate that Coulomb interactions induce [case (i)] or enhance [case (ii)] emergent geometrical frustration of the spin degrees of freedom in the system, by forming charge order.
Modeling of heavy metal salt solubility using the Extended UNIQUAC model
Iliuta, Maria Cornelia; Thomsen, Kaj; Rasmussen, Peter
2002-01-01
Solid-liquid equilibria in complex aqueous systems involving a heavy metal cation (Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+) and one or more ions for which Extended UNIQUAC parameters have been published previously are modeled using the Extended UNIQUAC model. Model parameters are determined...... on the basis of a data bank with more than 4,000 experimental data points for binary and ternary aqueous systems. The parameters are generally valid in the temperature range from the freezing point to the boiling point of the respective solutions....
Thermodynamic modelling of acid gas removal from natural gas using the Extended UNIQUAC model
Sadegh, Negar; Stenby, Erling Halfdan; Thomsen, Kaj
2017-01-01
Thermodynamics of natural gas sweetening process needs to be known for proper design of natural gas treating plants. Absorption with aqueous N-Methyldiethanolamine is currently the most commonly used process for removal of acid gas (CO2 and H2S) impurities from natural gas. Model parameters...... for the Extended UNIQUAC model have already been determined by the same authors to calculate single acid gas solubility in aqueous MDEA. In this study, the model is further extended to estimate solubility of CO2 and H2S and their mixture in aqueous MDEA at high pressures with methane as a makeup gas....
The dynamics of value segments : modeling framework and empirical illustration
Brangule-Vlagsma, K; Pieters, RGM; Wedel, M
2002-01-01
Value systems are central to understanding consumer behavior and they are an important basis for market segmentation. This study addresses changes in individual value systems across time. First, we conceptualize main ways in which value systems may change over time. Next, we extend Kamakura and Mazz
Alaa H. Abed
2012-01-01
Full Text Available The objective of this research is to predict rut depth in local flexible pavements. Prediction modeling of pavement performance is the process used to estimate parameter values related to pavement structure, environmental conditions and traffic loading. Different local empirical models, which include environmental and traffic conditions, have been used to calculate permanent deformation. Finite element analysis with the ANSYS computer software is used to analyze a two-dimensional linear elastic plane strain problem using (Plane 82) elements. A Standard Axle Load (ESAL) of 18 kip (80 kN) on an axle with a dual set of tires, a wheel spacing of 13.5 in (343 mm) and a tire contact pressure of 87 psi (0.6 MPa) is used. The pavement system is assumed to be an elastic multi-layer system, with each layer being isotropic and homogeneous with a specified resilient modulus and Poisson ratio. Each layer extends to infinity in the horizontal direction and has a finite thickness, except the bottom layer. The analysis results show that, although the stress level decreases by 14% in the leveling course and 27% in the base course, the rut depth increases by 12% and 28% in those layers respectively, because the material properties change.
Semi-Empirical Models for Buoyancy-Driven Ventilation
Terpager Andersen, Karl
2015-01-01
A literature study is presented on the theories and models dealing with buoyancy-driven ventilation in rooms. The models are categorised into four types according to how the physical process is conceived: column model, fan model, neutral plane model and pressure model. These models are analysed...... and compared with a reference model. Discrepancies and differences are shown, and the deviations are discussed. It is concluded that a reliable buoyancy model based solely on the fundamental flow equations is desirable....
Matthews, A P; Garenne, M L
2013-09-01
A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries.
Hutchings, L.
1992-01-01
This report outlines a method of using empirical Green's functions in an earthquake simulation program EMPSYN that provides realistic seismograms from potential earthquakes. The theory for using empirical Green's functions is developed, implementation of the theory in EMPSYN is outlined, and an example is presented where EMPSYN is used to synthesize observed records from the 1971 San Fernando earthquake. To provide useful synthetic ground motion data from potential earthquakes, synthetic seismograms should model frequencies from 0.5 to 15.0 Hz, the full wave-train energy distribution, and absolute amplitudes. However, high-frequency arrivals are stochastically dependent upon the inhomogeneous geologic structure and irregular fault rupture. The fault rupture can be modeled, but the stochastic nature of faulting is largely an unknown factor in the earthquake process. The effect of inhomogeneous geology can readily be incorporated into synthetic seismograms by using small earthquakes to obtain empirical Green's functions. Small earthquakes with source corner frequencies higher than the site recording limit f_max, or much higher than the frequency of interest, effectively have impulsive point-fault dislocation sources, and their recordings are used as empirical Green's functions. Since empirical Green's functions are actual recordings at a site, they include the effects on seismic waves from all geologic inhomogeneities and include all recordable frequencies, absolute amplitudes, and all phases. They scale only in amplitude with differences in seismic moment. They can provide nearly the exact integrand to the representation relation. Furthermore, since their source events have spatial extent, they can be summed to simulate fault rupture without loss of information, thereby potentially computing the exact representation relation for an extended source earthquake.
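The summation idea above can be sketched in a few lines: copies of a small-event recording are delayed according to rupture propagation across subfaults and summed. This is a minimal illustration of the delay-and-sum principle, not EMPSYN itself; the stand-in waveform, rupture velocity, and subfault grid are all assumed values chosen for the sketch.

```python
import numpy as np

# Delay-and-sum sketch of empirical-Green's-function synthesis.
# All parameter values are illustrative assumptions, not EMPSYN settings.
fs = 100.0                                        # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
g = np.exp(-20 * t) * np.sin(2 * np.pi * 5 * t)   # stand-in small-event record

v_r = 2.5e3                                  # assumed rupture velocity, m/s
subfault_x = np.arange(0, 10e3, 1e3)         # subfault positions along strike, m
delays = subfault_x / v_r                    # rupture arrival time per subfault, s

n_out = len(t) + int(delays.max() * fs) + 1
synth = np.zeros(n_out)
for d in delays:                             # sum delayed Green's-function copies
    k = int(round(d * fs))
    synth[k:k + len(g)] += g                 # equal moment weights, for simplicity

# The synthesized record is longer and carries more energy than one copy,
# mimicking an extended rupture built from point-source recordings.
```

In a real application each copy would also be scaled by the moment ratio of the target subevent to the recorded small event, as the abstract notes.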
GDP model for Chinese energy modeling based on empirical production function
Hiroshi YAGITA; Baoren WEI; Atsushi INABA; Masayuki SAGISAKA; Keiko HIROTA; Kiyoyuki MINATO
2003-01-01
In many energy models, GDP is an exogenous variable, so that variables within the energy model are not able to change the value of GDP. Based on an empirical production function, a GDP model has been established in this paper using capital stock, urbanization rate and population size as independent variables. It has been found that the urbanization rate is a kind of integrated indicator of the quantity of labor and the education level of laborers in China, and that it also absorbs the labor surplus in rural areas of China. The forecasting results show that the model is robust. The results have the same tendency as the results from the well-known CGE model and from the responsible Chinese authorities, and the GDP growth rates are also similar over 50 years. It has been concluded that the model is a good candidate for use in energy models as an endogenous variable.
Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models
Tomasz Kajdanowicz
2016-09-01
Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Albert–Barabási preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of the theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
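The core quantity in the proposed method can be sketched with the standard library alone: compute the Shannon entropy of a centrality distribution for realizations of two candidate models. This sketch restricts itself to degree centrality (the paper also uses betweenness and closeness), and the generator implementations are simplified textbook versions, not the authors' code.

```python
import math
import random

def shannon_entropy(values):
    """Entropy (in bits) of the empirical distribution of `values`."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def er_degrees(n, p, rng):
    """Degree sequence of an Erdős–Rényi G(n, p) random graph."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

def ba_degrees(n, m, rng):
    """Degree sequence from simplified Barabási–Albert preferential attachment."""
    targets = list(range(m))   # m seed nodes
    repeated = []              # node list weighted by degree
    deg = [0] * n
    for v in range(m, n):
        for u in set(targets):             # attach new node v to chosen targets
            deg[v] += 1
            deg[u] += 1
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return deg

rng = random.Random(42)
h_er = shannon_entropy(er_degrees(300, 0.03, rng))
h_ba = shannon_entropy(ba_degrees(300, 3, rng))
# Comparing such entropies across candidate models is the paper's route to
# classifying which generator most plausibly produced an empirical graph.
```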
John Jack P. RIEGEL III; David DAVISON
2016-01-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full
熊伟; 宋锐
2011-01-01
As one of the most popular dining styles, the new-fashioned casual dining restaurant attracts large numbers of customers with its environment and atmosphere. The M-R model's "Stimulus-Organism-Response" structure can effectively explain how atmospherics leads to behavior. The main purpose of this study is to examine the measurement system of the casual restaurant consumption environment from the perspective of customers, and the role of the casual restaurant consumption environment in customers' emotional responses and behaviors. Data were collected using a questionnaire survey and were analyzed via SPSS 16.0. The findings are as follows: the restaurant environment comprises 19 factors in five dimensions, consisting of visual stimuli, auditory stimuli, intangibles, servers, and congestion and other customers. The restaurant environment significantly affects both customers' approach behavior and positive emotion. Emotional response also clearly influences customers' approach behavior, and it mediates between the restaurant environment and approach behavior. Finally, some suggestions for follow-up research are put forward.
Empirical Evaluation of a Mathematical Model of Ethnolinguistic Vitality: The Case of Voro
Ehala, Martin; Niglas, Katrin
2007-01-01
The paper presents the results of an empirical evaluation of a mathematical model of ethnolinguistic vitality. The model adds several new factors to the set used in previous models of ethnolinguistic vitality and operationalises it in a manner that would make it easier to compare the vitality of different groups. According to the model, the…
Oth, Adrien; Wenzel, Friedemann; Radulian, Mircea
2007-06-01
Several source parameters (source dimensions, slip, particle velocity, static and dynamic stress drop) are determined for the moderate-size October 27th, 2004 (MW = 5.8), and the large August 30th, 1986 (MW = 7.1) and March 4th, 1977 (MW = 7.4) Vrancea (Romania) intermediate-depth earthquakes. For this purpose, the empirical Green's functions method of Irikura [e.g. Irikura, K. (1983). Semi-Empirical Estimation of Strong Ground Motions during Large Earthquakes. Bull. Dis. Prev. Res. Inst., Kyoto Univ., 33, Part 2, No. 298, 63-104; Irikura, K. (1986). Prediction of strong acceleration motions using empirical Green's function, in Proceedings of the 7th Japan earthquake engineering symposium, 151-156; Irikura, K. (1999). Techniques for the simulation of strong ground motion and deterministic seismic hazard analysis, in Proceedings of the advanced study course seismotectonic and microzonation techniques in earthquake engineering: integrated training in earthquake risk reduction practices, Kefallinia, 453-554] is used to generate synthetic time series from recordings of smaller events (with 4 ≤ MW ≤ 5) in order to estimate several parameters characterizing the so-called strong motion generation area, which is defined as an extended area with homogeneous slip and rise time and, for crustal earthquakes, corresponds to an asperity of about 100 bar stress release [Miyake, H., T. Iwata and K. Irikura (2003). Source characterization for broadband ground-motion simulation: Kinematic heterogeneous source model and strong motion generation area. Bull. Seism. Soc. Am., 93, 2531-2545]. The parameters are obtained by acceleration envelope and displacement waveform inversion for the 2004 and 1986 events and MSK intensity pattern inversion for the 1977 event using a genetic algorithm. The strong motion recordings of the analyzed Vrancea earthquakes as well as the MSK intensity pattern of the 1977 earthquake can be well reproduced using relatively small strong motion
Temporal structure of neuronal population oscillations with empirical mode decomposition
Li, Xiaoli
2006-08-01
Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of disorders in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) is obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo. The results show that the neuronal oscillations exhibit different characteristics during the pre-ictal, seizure onset and ictal periods of the epileptic EEG at different frequency bands. This new method is very helpful for providing a view of the temporal structure of neural oscillations.
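The second step of the pipeline above, taking the Hilbert transform of an IMF to read off instantaneous frequency, can be sketched without an EMD library by using the FFT-based analytic signal. The pure tone standing in for an IMF is an assumption for the sketch; real IMFs come out of the sifting procedure, which is omitted here.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to adding i times the Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:            # double positive frequencies, zero negative ones
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0                                 # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
imf = np.cos(2 * np.pi * 10 * t)            # stand-in for one intrinsic mode function

z = analytic_signal(imf)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
# For this pure 10 Hz tone, the instantaneous frequency sits at ~10 Hz throughout.
```

Applied to each IMF of a neural recording, the same phase-derivative step yields the time-frequency structure the abstract describes.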
Empirical modeling and data analysis for engineers and applied scientists
Pardo, Scott A
2016-01-01
This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...
Jørgensen, Peter Løchte
Extended Nelson-Siegel models are widely used by e.g. practitioners and central banks to estimate current term structures of riskless zero-coupon interest rates, whereas other models such as the extended Vasicek model (a.k.a. the Hull-White model) are popular for pricing interest rate derivatives....... This paper establishes theoretical consistency between these two types of models by showing how to specify the extended Vasicek model such that its implied initial term structure curve precisely matches a given extended Nelson-Siegel specification. That is, we show how to reconcile the two classes of models...
Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
Naoya Sueishi
2013-07-01
Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Linsky, Jeffrey; Fontenla, Juan; France, Kevin
2016-05-01
We present a semi-empirical model of the photosphere, chromosphere, transition region, and corona for the M2 dwarf star GJ832, which hosts two exoplanets. The atmospheric model uses a modification of the Solar Radiation Physical Modeling tools developed by Fontenla and collaborators. These computer codes model non-LTE spectral line formation for 52 atoms and ions and include a large number of lines from 20 abundant diatomic molecules that are present in the much cooler photosphere and chromosphere of this star. We constructed the temperature distribution to fit Hubble Space Telescope observations of chromospheric lines (e.g., MgII), transition region lines (CII, CIV, SiIV, and NV), and the UV continuum. Temperatures in the coronal portion of the model are consistent with ROSAT and XMM-Newton X-ray observations and the FeXII 124.2 nm line. The excellent fit of the model to the data demonstrates that the highly developed model atmosphere code developed to explain regions of the solar atmosphere with different activity levels has wide applicability to stars, including this M star with an effective temperature 2200 K cooler than the Sun. We describe similarities and differences between the M star model and models of the quiet and active Sun.
Empirical Likelihood for Mixed-effects Error-in-variables Model
Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui
2009-01-01
This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with error-in-variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is χ²_{p+q}, where p and q are the dimensions of the fixed and random effects, respectively. Then we discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to χ²_{p+q}. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal-approximation-based confidence region.
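Empirical likelihood confidence regions of the kind described in this abstract are built by profiling observation weights subject to a moment constraint. A minimal, self-contained sketch for the simplest case, the mean of a scalar sample (where the log-likelihood-ratio limit is χ² with one degree of freedom, not the paper's mixed-effects χ²_{p+q} construction), is:

```python
import math

def el_log_ratio(x, mu, tol=1e-12):
    """-2 log empirical likelihood ratio for the mean of a sample x.

    Profiles out the weights via the Lagrange multiplier t solving
    sum(z_i / (1 + t*z_i)) = 0 with z_i = x_i - mu (found here by
    bisection, since the sum is strictly decreasing in t), then
    returns 2 * sum(log(1 + t*z_i)).  Asymptotically chi-square
    with 1 degree of freedom at the true mean."""
    z = [xi - mu for xi in x]
    if min(z) >= 0 or max(z) <= 0:
        return float("inf")  # mu outside the data range: likelihood 0
    lo = -1.0 / max(z) + 1e-10  # keep every 1 + t*z_i strictly positive
    hi = -1.0 / min(z) - 1e-10

    def g(t):
        return sum(zi / (1.0 + t * zi) for zi in z)

    while hi - lo > tol:  # g decreases from +inf to -inf on (lo, hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + t * zi) for zi in z)
```

At the sample mean the multiplier is t = 0 and the statistic vanishes; a 95% confidence region is the set of mu values with statistic below the χ²₁ quantile 3.84.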
Institutions and foreign direct investment (FDI) in Malaysia: empirical evidence using ARDL model
Abdul Karim, Zulkefly; Zaidi, Mohd Azlan Shah; Ismail, Mohd Adib; Abdul Karim, Bakri
2011-01-01
Since the 1990s, institutional factors have been regarded as playing an important role in stimulating foreign direct investment (FDI). However, empirical studies on their importance in affecting FDI are still lacking, especially for small open economies. This paper attempts to investigate the role of institutions in the inflow of foreign direct investment (FDI) in the small open economy of Malaysia. Using the bounds testing approach (ARDL model), the empirical findings reveal that there exists a long ru...
Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis
Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos
This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...
Zee, van der F.A.
1997-01-01
This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy
Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations
Lauridsen, Mads; Noël, Laurent; Mogensen, Preben
2013-01-01
An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates...
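A power model of this general shape, an active baseline plus linear receive/transmit-rate terms, time-shared with a DRX sleep state, can be sketched as follows. All coefficient values are invented placeholders, not the measured parameters from the paper:

```python
def phone_power_mw(rx_mbps=0.0, tx_mbps=0.0, drx_sleep_frac=0.0,
                   base_mw=1000.0, sleep_mw=25.0,
                   rx_mw_per_mbps=10.0, tx_mw_per_mbps=30.0):
    """Illustrative linear smartphone power model: active power grows
    with receive and transmit data rates, and DRX sleep replaces the
    active baseline for a fraction of the time.  Coefficients are
    made-up placeholders for demonstration only."""
    active_mw = base_mw + rx_mw_per_mbps * rx_mbps + tx_mw_per_mbps * tx_mbps
    return (1.0 - drx_sleep_frac) * active_mw + drx_sleep_frac * sleep_mw
```

Battery life on the system level then follows by dividing battery energy by the time-averaged power over a simulated traffic trace.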
Lucca Botturi
2006-06-01
Full Text Available This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs). Results indicate that the design and development model actually informs the activities of the group, but that it is interpreted and adapted by the team for the specific project. Thus, the actual practice model of each team can be regarded as an emergent feature. This analysis delivers insights concerning issues about team communication, shared understanding, individual perspectives and the implementation of prescriptive instructional design models.
Empirical Estimation of Hybrid Model: A Controlled Case Study
Sadaf Un Nisa; M. Rizwan Jameel Qureshi
2012-01-01
Scrum and Extreme Programming (XP) are frequently used models among all agile models, whereas the Rational Unified Process (RUP) is one of the widely used conventional plan-driven software development models. The agile and plan-driven approaches both have their own strengths and weaknesses. The RUP model has certain drawbacks, such as a tendency to be over budget, slow adaptation to rapidly changing requirements, and a reputation for being impractical for small and fast-paced projects. The XP mode...
An Empirical Comparison of Default Swap Pricing Models
P. Houweling (Patrick); A.C.F. Vorst (Ton)
2002-01-01
textabstractAbstract: In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced form model with a constant recovery rate outperforms the market practice of directly comparing bonds' credit spreads to default swap premiums. We find that the model
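In the simplest reduced-form setting of this kind, a flat default intensity λ and a constant recovery rate R, the par CDS spread is approximately λ(1 − R), the so-called credit triangle. A sketch, illustrative only and far cruder than the paper's calibrated model:

```python
def cds_par_spread(hazard_rate, recovery):
    """Credit-triangle approximation for a reduced-form model with a
    flat default intensity (hazard rate) and constant recovery rate:
    par spread ~ hazard * (1 - recovery)."""
    return hazard_rate * (1.0 - recovery)

def implied_hazard(spread, recovery):
    """Invert the triangle: flat default intensity implied by a
    quoted par spread and an assumed recovery rate."""
    return spread / (1.0 - recovery)
```

For example, a 2% annual default intensity with 40% recovery implies a running premium of roughly 120 basis points.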
Extending the Compensatory Model of Second Language Reading
McNeil, Levi
2012-01-01
Bernhardt (2005) proposed a compensatory model of second language reading. This model predicted that 50% of second language (L2) reading scores are attributed to second language knowledge and first-language (L1) reading ability. In this model, these two factors compensate for deficiencies in each other. Although this model explains a significant…
Roberts, M S; Anissimov, Y G
1999-08-01
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into or radial diffusion in hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the discrepancy between the predictions of MTT and CV2 for the extended and unweighted conventional convection-dispersion models decreases as hepatic extraction increases. A correspondence of more than 95% in F and Cl exists for all solute extractions. In addition, the analysis showed that the outflow concentration-time profiles for both the extended and conventional models are essentially identical irrespective of the magnitude of rate constants representing permeability, volume, and clearance parameters, providing that there is significant hepatic extraction. In conclusion, the application of a newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
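For reference, the unweighted conventional convection-dispersion model discussed above has a well-known closed form for availability under mixed boundary conditions, parameterized by the efficiency number R_N and dispersion number D_N; the paper's extended model is more elaborate and is not reproduced here. A sketch:

```python
import math

def hepatic_availability(RN, DN):
    """Steady-state availability F from the conventional axial
    convection-dispersion model (mixed boundary conditions), with
    efficiency number RN and dispersion number DN.  Sanity checks:
    F -> exp(-RN) as DN -> 0 (plug flow) and F -> 1/(1+RN) as
    DN -> inf (well stirred)."""
    a = math.sqrt(1.0 + 4.0 * RN * DN)
    den = ((1.0 + a) ** 2 * math.exp(-(1.0 - a) / (2.0 * DN))
           - (1.0 - a) ** 2 * math.exp(-(1.0 + a) / (2.0 * DN)))
    return 4.0 * a / den

def hepatic_clearance(Q, RN, DN):
    """Hepatic clearance from availability: Cl = Q * (1 - F),
    assuming elimination occurs only in the liver."""
    return Q * (1.0 - hepatic_availability(RN, DN))
```

The two limiting cases above are the standard consistency checks for this family of models.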
Latent Utility Shocks in a Structural Empirical Asset Pricing Model
Christensen, Bent Jesper; Raahauge, Peter
We consider a random utility extension of the fundamental Lucas (1978) equilibrium asset pricing model. The resulting structural model leads naturally to a likelihood function. We estimate the model using U.S. asset market data from 1871 to 2000, using both dividends and earnings as state variables. We find that current dividends do not forecast future utility shocks, whereas current utility shocks do forecast future dividends. The estimated structural model produces a sequence of predicted utility shocks which provide better forecasts of future long-horizon stock market returns than the classical dividend-price ratio. KEYWORDS: Random utility, asset pricing, maximum likelihood, structural model, return predictability...
Practice models and roles of physician extenders in dermatologic surgery.
Tierney, Emily P; Hanke, C William; Kimball, Alexa Boer
2011-05-01
The prevalence of physician extenders (PEs) has increased significantly in dermatologic surgery over the last decade. An analysis was performed of the staff in dermatologic surgery practices, roles of PEs, and level of supervision. Mohs fellowship-trained (MMSFT) dermatologic surgeons were more likely to employ registered nurses (n=85, 73.9%) than non-fellowship-trained (NMMSFT) surgeons (n=65, 50.0%, pdermatology patients, but NMMSFT surgeons were twice as likely as MMSFT surgeons to have their PEs involved in performing or assisting with cosmetic procedures. MMSFT surgeons (38.5%) were twice as likely to have direct supervision of their PEs as NMMSFT surgeons (16.1%, p=.01). PEs are highly prevalent in dermatologic surgery practices and are playing direct roles in the delivery of dermatologic care. Promoting patient safety through appropriate extender supervision and reporting of patient outcomes are highly needed as this sector of the dermatologic surgery workforce continues to expand. © 2011 by the American Society for Dermatologic Surgery, Inc.
Extending the Magic Formula and SWIFT tyre models for inflation pressure changes
Schmeitz, A.J.C.; Besselink, I.J.M.; Hoogh, J. de; Nijmeijer, H.
2005-01-01
The Magic Formula and SWIFT tyre models are well-known semi-empirical tyre models for vehicle dynamic simulations. Up to now, the only way to account for inflation pressure changes is to identify all model parameters for each inflation pressure that has to be considered. Since this is a time
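The base Magic Formula itself is the standard Pacejka curve; a minimal sketch follows. The inflation-pressure extension discussed in the paper would additionally make the shape parameters pressure dependent, which is not reproduced here:

```python
import math

def magic_formula(slip, B, C, D, E):
    """Pacejka's Magic Formula for a tyre force/moment characteristic:
    y = D * sin(C * atan(B*x - E*(B*x - atan(B*x)))), where B, C, D, E
    are the stiffness, shape, peak, and curvature factors."""
    bx = B * slip
    return D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx))))
```

With typical shape values (C around 1.3-1.9, E below 1) the curve passes through the origin, peaks near |y| = D, and relaxes toward an asymptote at large slip.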
Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis
Saban Celik
2012-01-01
Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models, emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as evaluating the present value of payoffs or cash flows discounted for risk and time lags. The difficulty arising from the discounting process is that the relevant factors affecting the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.
Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials
Guichard, Stéphane; Bigot, Dimitri; Malet-Damour, Bruno; Libelle, Teddy; Boyer, Harry
2015-01-01
This paper deals with the empirical validation of a building thermal model using a phase change material (PCM) in a complex roof. A mathematical model dedicated to phase change materials based on the heat apparent capacity method was implemented in a multi-zone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. To empirically validate the model, the methodology is based both on numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model have been identified for optimization. The use of a generic optimization program called GenOpt coupled to the building simulation code enabled to determine the set of adequate parameters. We first present the empirical validation methodology and main results of previous work. We then give an overview of GenOpt and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons o...
Empirical Estimation of Hybrid Model: A Controlled Case Study
Sadaf Un Nisa
2012-07-01
Full Text Available Scrum and Extreme Programming (XP) are frequently used models among all agile models, whereas the Rational Unified Process (RUP) is one of the widely used conventional plan-driven software development models. The agile and plan-driven approaches both have their own strengths and weaknesses. The RUP model has certain drawbacks, such as a tendency to be over budget, slow adaptation to rapidly changing requirements, and a reputation for being impractical for small and fast-paced projects. The XP model has certain drawbacks, such as weak documentation and poor performance for medium and large development projects. XP has a concrete set of engineering practices that emphasize teamwork, where managers, customers and developers are all equal partners in collaborative teams. Scrum is more concerned with project management. It has seven practices, namely Scrum Master, Scrum teams, Product Backlog, Sprint, Sprint Planning Meeting, Daily Scrum Meeting and Sprint Review. Keeping the above context in view, this paper proposes a hybrid model, named the SPRUP model, that combines the strengths of Scrum, XP and RUP while eliminating their weaknesses in order to produce high-quality software. The proposed SPRUP model is validated through a controlled case study.
Empirical Analysis of Farm Credit Risk under the Structure Model
Yan, Yan
2009-01-01
The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…
Empirical assessment of a threshold model for sylvatic plague
Davis, Stephen; Leirs, Herwig; Viljugrein, H.
2007-01-01
Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...
Drugs and Crime: An Empirically Based, Interdisciplinary Model
Quinn, James F.; Sneed, Zach
2008-01-01
This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…
Hybrid modeling and empirical analysis of automobile supply chain network
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes, which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model's soundness is demonstrated by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficient, and degree distribution, verifying that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. The model not only macroscopically illustrates the dynamic evolution of the complex network of an automobile supply chain but also microscopically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and experience for the supply chain analysis of auto companies.
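One of the characteristic parameters mentioned, the mean clustering coefficient used to check for small-world structure, can be computed directly from an adjacency representation. A stdlib-only sketch:

```python
def mean_clustering(adj):
    """Mean local clustering coefficient of an undirected graph given
    as {node: set_of_neighbours}.  For each node with degree k >= 2,
    the local coefficient is the fraction of the k*(k-1)/2 possible
    neighbour pairs that are themselves connected."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

A small-world network shows a mean clustering coefficient much higher than a random graph of the same size while keeping a comparably short mean path length.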
Extending temperature sum models to simulate onset of birch flowering on the regional scale
Klein, Christian; Biernath, Christian; Priesack, Eckart
2015-04-01
For human health purposes, a reliable forecast of the onset of flowering of plants that produce allergenic pollen is important. Numerous phenological models are available, with different degrees of complexity. All models consider the effect of air temperature on plant development, but only a few also include other environmental factors and/or the plant's internal water and nutrient status. However, the more complex models often use empirical relations without physiological meaning and are often tested against small datasets derived from a limited number of sites. Most models used to simulate plant phenology are based on the temporal integration of temperatures above a defined base temperature; a critical temperature sum then defines the onset of a new phenological stage. Using models based on temperature only is efficient, as temperature is the most frequently documented and available weather component on global, regional, and local scales. These models score through their robustness over a wide range of environmental conditions. However, the simulations sometimes deviate by more than 20 days from measurements and are thus not adequate for use in pollen forecasting. We tested the ability of temperature sum models to simulate the onset of flowering of wild (e.g. birch) and domestic plants in Bavaria. In a first step we determined a regionally averaged optimum base temperature and temperature sum for the examined plant species in Bavaria. In a second step, the base temperatures were optimized for each site for the simulation period 2001-2010. Our hypothesis is that domestic plants depend much less on regional weather conditions than wild plants do, owing to low and high genetic variability, respectively. If so, the observed base temperatures of wild plants are smaller for low annual average temperatures and higher for high annual average temperatures. In the cases of domestic plants the optimized base
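The temperature-sum (degree-day) scheme described in this abstract is straightforward to state in code. A minimal sketch; the base temperature and critical sum passed in the example are placeholder values, not the Bavarian estimates:

```python
def flowering_onset(daily_mean_temps, t_base, critical_sum):
    """Temperature-sum (degree-day) phenology model: accumulate the
    daily mean temperature excess above t_base; flowering onset is the
    first day on which the running sum reaches critical_sum.  Returns
    the 0-based day index, or None if the sum is never reached."""
    total = 0.0
    for day, temp in enumerate(daily_mean_temps):
        total += max(0.0, temp - t_base)
        if total >= critical_sum:
            return day
    return None
```

Site-specific calibration, as in the study, amounts to choosing t_base and critical_sum so that the predicted onset days best match the observations at each site.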
Modelling and analysis of Markov reward automata (extended version)
Guck, Dennis; Timmer, Mark; Hatefi, Hassan; Ruijters, Enno; Stoelinga, Mariëlle
2014-01-01
Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating
Cameron, Lindsey; Rutland, Adam; Brown, Rupert; Douch, Rebecca
2006-01-01
The present research evaluated an intervention, derived from the "extended contact hypothesis," which aimed to change children's intergroup attitudes toward refugees. The study (n=253) tested 3 models of extended contact among 5- to 11-year-old children: dual identity, common ingroup identity, and decategorization. Children read friendship stories based upon these models featuring in- and outgroup members. Outgroup attitudes were significantly more positive in the extended contact conditions,...
Bao, Yaodong; Cheng, Lin; Zhang, Jian
Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning, and exploitative learning plays a significant role in both radical and incremental business model innovation.
Muon anomalous magnetic moment in string inspired extended family models
Kephart, T W
2002-01-01
We propose a standard model minimal extension with two lepton weak SU(2) doublets and a scalar singlet to explain the deviation of the measured anomalous magnetic moment of the muon from the standard model expectation. This scheme can be naturally motivated in string inspired models such as E_6 and AdS/CFT.
Extending enterprise architecture modelling with business goals and requirements
Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; Sinderen, van Marten
2011-01-01
The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling te
An extended dual search space model of scientific discovery learning
Joolingen, van Wouter R.; Jong, de Ton
1997-01-01
This article describes a theory of scientific discovery learning which is an extension of Klahr and Dunbar's model of Scientific Discovery as Dual Search (SDDS). We present a model capable of describing and understanding scientific discovery learning in complex domains in terms of the SDDS fr
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
A cosmological dust model with extended f(χ) gravity
Carranza, D.A.; Mendoza, S.; Torres, L.A. [Universidad Nacional Autonoma de Mexico, Instituto de Astronomia, AP 70-264, Distrito Federal (Mexico)
2013-01-15
Introducing a fundamental constant of nature with dimensions of acceleration into the theory of gravity makes it possible to extend gravity in a very consistent manner. At the non-relativistic level a MOND-like theory with a modification in the force sector is obtained, which is the limit of a very general metric relativistic theory of gravity. Since the mass and length scales involved in the dynamics of the whole universe require small accelerations of the order of Milgrom's acceleration constant a₀, it turns out that the relativistic theory of gravity can be used to explain the expansion of the universe. In this work it is explained how to use that relativistic theory of gravity in such a way that the overall large-scale dynamics of the universe can be treated in a pure metric approach without the need to introduce dark matter and/or dark energy components. (orig.)
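The role of Milgrom's constant a₀ can be illustrated with a generic MOND-limit calculation. Note this uses the common "simple" interpolating function mu(x) = x/(1+x), a textbook choice for illustration, not the specific f(χ) metric theory of the paper:

```python
import math

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2 (approximate)

def mond_acceleration(g_newton):
    """Total acceleration under the 'simple' MOND interpolating
    function mu(x) = x/(1+x), solving g * mu(g/a0) = g_N in closed
    form: g = (g_N + sqrt(g_N^2 + 4*g_N*a0)) / 2.
    Limits: g -> g_N for g_N >> a0, and g -> sqrt(g_N * a0)
    (the deep-MOND regime) for g_N << a0."""
    return 0.5 * (g_newton + math.sqrt(g_newton ** 2 + 4.0 * g_newton * A0))
```

The deep-MOND limit g = sqrt(g_N a₀) is what produces flat rotation curves without dark matter, the regime where the paper's galactic-scale signatures would appear.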
Tsalmantza, P.; Karampelas, A.; Kontizas, M.; Bailer-Jones, C. A. L.; Rocca-Volmerange, B.; Livanou, E.; Bellas-Velidis, I.; Kontizas, E.; Vallenari, A.
2012-01-01
Aims: This paper is the third in a series implementing a classification system for Gaia observations of unresolved galaxies. The system makes use of template galaxy spectra in order to determine spectral classes and estimate intrinsic astrophysical parameters. In previous work we used synthetic galaxy spectra produced by PÉGASE.2 code to simulate Gaia observations and to test the performance of support vector machine (SVM) classifiers and parametrizers. Here we produce a semi-empirical library of galaxy spectra by fitting SDSS spectra with the previously produced synthetic libraries. We present (1) the semi-empirical library of galaxy spectra; (2) a comparison between the observed and synthetic spectra; and (3) first results of classification and parametrization experiments with simulated Gaia spectrophotometry of this library. Methods: We use χ²-fitting to fit SDSS galaxy spectra with the synthetic library in order to construct a semi-empirical library of galaxy spectra in which (1) the real spectra are extended by the synthetic ones in order to cover the full wavelength range of Gaia; and (2) astrophysical parameters are assigned to the SDSS spectra by the best fitting synthetic spectrum. The SVM models were trained with and applied to semi-empirical spectra. Tests were performed for the classification of spectral types and the estimation of the most significant galaxy parameters (in particular redshift, mass to light ratio and star formation history). Results: We produce a semi-empirical library of 33 670 galaxy spectra covering the wavelength range 250 to 1050 nm at a sampling of 1 nm or less. Using the results of the fitting of the SDSS spectra with our synthetic library, we investigate the range of the input model parameters that produces spectra which are in good agreement with observations. In general the results are very good for the majority of the synthetic spectra of early type, spiral and irregular galaxies, while they reveal problems in the models
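The χ²-fitting step, matching an observed spectrum against a synthetic library with a free amplitude per template, can be sketched as follows. Per-pixel errors and a single multiplicative scale per template are simplifying assumptions for illustration:

```python
import numpy as np

def best_template(obs, sigma, templates):
    """Chi-square template fitting with a free amplitude per template.
    For each template m the optimal scale has the closed form
    s = sum(obs*m/sigma^2) / sum(m^2/sigma^2), giving
    chi2 = sum(((obs - s*m)/sigma)^2).
    Returns (index_of_best_template, array_of_chi2_values)."""
    w = 1.0 / np.asarray(sigma) ** 2
    chi2s = []
    for m in templates:
        s = np.sum(w * obs * m) / np.sum(w * m * m)
        chi2s.append(float(np.sum(w * (obs - s * m) ** 2)))
    chi2s = np.array(chi2s)
    return int(np.argmin(chi2s)), chi2s
```

The best-fitting synthetic spectrum then supplies both the wavelength extension and the assigned astrophysical parameters for the observed spectrum.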
Extending enterprise architecture modelling with business goals and requirements
Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten
2011-02-01
The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.
Empirical Study on Deep Learning Models for Question Answering
Yu, Yang; Zhang, Wei; Hang, Chung-Wei; Xiang, Bing; Zhou, Bowen
2015-01-01
In this paper we explore deep learning models with memory component or attention mechanism for question answering task. We combine and compare three models, Neural Machine Translation, Neural Turing Machine, and Memory Networks for a simulated QA data set. This paper is the first one that uses Neural Machine Translation and Neural Turing Machines for solving QA tasks. Our results suggest that the combination of attention and memory have potential to solve certain QA problem.
Empirical modelling of NOₓ emissions
Pedersen, L.S.; Lans, R. van der; Glarborg, P.; Dam-Johansen, K. [Technical University of Denmark, Lyngby (Denmark). Dept. of Chemical Engineering]
1998-12-31
The applicability of predicting nitrogen oxide emissions and burnout from swirling pulverised coal flames using ideal chemical reactors was investigated. The flow pattern inside the furnace was modelled, as was the mixing between the combustion air and the fuel inside the reactors, using a first-order reaction for dissolution of air into the combustion zone. Devolatilisation is assumed to occur much faster than char combustion, with HCN as the primary volatile fuel-nitrogen product. Char oxidation is modelled by a single-film model with changing particle size and density. Oxidation of HCN is modelled with two reaction channels. The temperature is input from measurements. The model was verified against experimental data obtained from the cylindrical, 5 m long and 0.5 m diameter Mitsui Babcock Energy Ltd. test rig (160 kWth) for a Colombian, a Polish and a South African coal. The model was able to predict the NO concentration and carbon in ash reasonably well, and could predict relative differences in NO concentrations between the three coals. However, the simple reaction mechanism for the formation of NO from HCN fails at a primary stoichiometry below 0.9 for staged combustion. A short sensitivity analysis was performed for the most important parameters, which showed that the model is sensitive to the particle size distribution. Although the model has only been tested against the small-scale test rig, the data have been compared with full-scale tests conducted by ELSAM in Denmark with the same coals. In these tests NO emissions varied, but the relative differences between the coals were identical. This means that the model can indirectly predict the NO emissions, depending on coal type, from the full-scale power stations. 23 refs., 20 figs., 6 tabs.
An Empirical Comparison of Probability Models for Dependency Grammar
Eisner, J
1997-01-01
This technical report is an appendix to Eisner (1996): it gives superior experimental results that were reported only in the talk version of that paper. Eisner (1996) trained three probability models on a small set of about 4,000 conjunction-free, dependency-grammar parses derived from the Wall Street Journal section of the Penn Treebank, and then evaluated the models on a held-out test set, using a novel O(n^3) parsing algorithm. The present paper describes some details of the experiments and repeats them with a larger training set of 25,000 sentences. As reported at the talk, the more extensive training yields greatly improved performance. Nearly half the sentences are parsed with no misattachments; two-thirds are parsed with at most one misattachment. Of the models described in the original written paper, the best score is still obtained with the generative (top-down) "model C." However, slightly better models are also explored, in particular, two variants on the comprehension (bottom-up) "model B." The be...
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
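The percent-contribution check described above can be sketched in a few lines. This is an illustrative reading of the criterion, not the authors' exact formula: a term is assumed significant if its coefficient, evaluated at the balance load capacities, contributes more than 0.05% of the full-scale gage output. All numbers below are invented.

```python
# Hypothetical sketch of a percent-contribution significance criterion.
# Assumption: a term's contribution is its regression coefficient times the
# term evaluated at load capacity, as a percent of the full-scale output.

def percent_contribution(coef, term_at_capacity, full_scale_output):
    """Percent of full-scale gage output contributed by one regression term."""
    return abs(coef * term_at_capacity) / abs(full_scale_output) * 100.0

def significant_terms(coefs, terms_at_capacity, full_scale_output, threshold=0.05):
    """Indices of terms whose percent contribution exceeds the threshold."""
    return [i for i, (c, t) in enumerate(zip(coefs, terms_at_capacity))
            if percent_contribution(c, t, full_scale_output) > threshold]

# Example: contributions of 0.2 %, 0.08 % and ~1e-7 %; only the first two pass.
coefs = [2.0, 1.0, 1e-6]
terms = [1.0, 0.8, 0.9]      # term values at load capacity (made up)
fso = 1000.0                 # full-scale output (made up)
print(significant_terms(coefs, terms, fso))  # → [0, 1]
```

As the abstract notes, such a crude cutoff only approximates the statistical criterion; the threshold itself is the empirical part.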
Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data
Rompala, John T.
2005-01-01
A ground based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength and polarity. Determination of the location of the lightning strike uses algorithms based on long-used triangulation techniques. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers. Much of that area is covered by rain forests, so knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.
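The extrapolation step above can be illustrated with a toy propagation model. This is not the report's actual model: it simply assumes the received peak signal falls off through geometric dispersion (a power law) and environmental attenuation (an exponential decay), and inverts that to recover the source strength. All parameter values are hypothetical.

```python
import math

# Illustrative propagation sketch (hypothetical parameters, not from the report):
# geometric dispersion ~ r**(-p), environmental attenuation ~ exp(-r / L).

def received_signal(source_strength, r_km, p=1.0, atten_length_km=1000.0):
    """Signal strength seen by a detector at range r_km from the strike."""
    return source_strength * r_km ** (-p) * math.exp(-r_km / atten_length_km)

def inferred_source_strength(measured, r_km, p=1.0, atten_length_km=1000.0):
    """Invert the propagation model to estimate the original strike strength."""
    return measured / (r_km ** (-p) * math.exp(-r_km / atten_length_km))

# Round trip: extrapolating back from the received value recovers the source.
s0 = 50.0                                  # arbitrary source strength
m = received_signal(s0, 300.0)
print(round(inferred_source_strength(m, 300.0), 6))  # → 50.0
```

A detection-efficiency map then follows from asking, at each grid point, how many detectors would receive a signal above their trigger threshold under such a model.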
Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis
de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.
2000-01-01
In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a...
Empirical modeling of soot formation in shock-tube pyrolysis of aromatic hydrocarbons
Frenklach, M.; Clary, D. W.; Matula, R. A.
1986-01-01
A method for empirical modeling of soot formation during shock-tube pyrolysis of aromatic hydrocarbons is developed. The method is demonstrated using data obtained in pyrolysis of argon-diluted mixtures of toluene behind reflected shock waves. The developed model is in good agreement with experiment.
Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results
Mayer, Robert V.
2016-01-01
The use of the method of imitational modeling to study the forming of empirical knowledge in the pupil's consciousness is discussed. The offered model is based on division of the physical facts into three categories: 1) the facts established in everyday life; 2) the facts which the pupil can experimentally establish at a physics lesson; 3) the facts which…
An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications
de la Torre, Jimmy
2008-01-01
Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…
Faculty's Acceptance of Computer Based Technology: Cross-Validation of an Extended Model
Ahmad, Tunku Badariah Tunku; Madarsha, Kamal Basha; Zainuddin, Ahmad Marzuki; Ismail, Nik Ahmad Hisham; Nordin, Mohamad Sahari
2010-01-01
The first aim of the present study is to validate an extended technology acceptance model (TAME) on the data derived from the faculty members of a university in an ongoing, computer mediated work setting. The study extended the original TAM model by including an intrinsic motivation component--computer self efficacy. In so doing, the study…
Empirical slip and viscosity model performance for microscale gas flows.
Gallis, Michail A.; Boyd, Iain D. (University of Michigan, Ann Arbor, MI); McNenly, Matthew J. (University of Michigan, Ann Arbor, MI)
2004-07-01
For the simple geometries of Couette and Poiseuille flows, the velocity profile maintains a similar shape from continuum to free molecular flow. Therefore, modifications to the fluid viscosity and slip boundary conditions can improve the continuum based Navier-Stokes solution in the non-continuum non-equilibrium regime. In this investigation, the optimal modifications are found by a linear least-squares fit of the Navier-Stokes solution to the non-equilibrium solution obtained using the direct simulation Monte Carlo (DSMC) method. Models are then constructed for the Knudsen number dependence of the viscosity correction and the slip model from a database of DSMC solutions for Couette and Poiseuille flows of argon and nitrogen gas, with Knudsen numbers ranging from 0.01 to 10. Finally, the accuracy of the models is measured for non-equilibrium cases both in and outside the DSMC database. Flows outside the database include: combined Couette and Poiseuille flow, partial wall accommodation, helium gas, and non-zero convective acceleration. The models reproduce the velocity profiles in the DSMC database within an L{sub 2} error norm of 3% for Couette flows and 7% for Poiseuille flows. However, the errors in the model predictions outside the database are up to five times larger.
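The core fitting step above, matching a Navier-Stokes profile to non-equilibrium samples by linear least squares, can be sketched for the Couette case, where the profile is a straight line with a wall-slip offset. The sample data below are synthetic stand-ins, not DSMC output.

```python
# Minimal sketch of the least-squares step: fit u = slope*y + slip to sampled
# velocities, as one would fit a Couette-like Navier-Stokes profile to DSMC data.

def fit_line(ys, us):
    """Ordinary least-squares fit of u = slope*y + slip."""
    n = len(ys)
    my, mu = sum(ys) / n, sum(us) / n
    slope = sum((y - my) * (u - mu) for y, u in zip(ys, us)) / \
            sum((y - my) ** 2 for y in ys)
    slip = mu - slope * my
    return slope, slip

# Synthetic samples lying on u = 2y + 0.1 (a wall-slip offset of 0.1).
ys = [0.0, 0.25, 0.5, 0.75, 1.0]
us = [2 * y + 0.1 for y in ys]
slope, slip = fit_line(ys, us)
print(round(slope, 6), round(slip, 6))  # → 2.0 0.1
```

In the paper's setting the recovered slope and slip would then be tabulated against Knudsen number to build the viscosity-correction and slip models.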
J. G. Coen van Hasselt
2014-01-01
Full Text Available This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Secondly, analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for the gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.
Extending the generalized Chaplygin gas model by using geometrothermodynamics
Aviles, Alejandro; Campuzano, Lorena; Quevedo, Hernando
2012-01-01
We use the formalism of geometrothermodynamics (GTD) to derive fundamental thermodynamic equations that are used to construct general relativistic cosmological models. In particular, we show that the simplest possible fundamental equation, which corresponds in GTD to a system with no internal thermodynamic interaction, describes the different fluids of the standard model of cosmology. In addition, a particular fundamental equation with internal thermodynamic interaction is shown to generate a new cosmological model that correctly describes the dark sector of the Universe and contains as a special case the generalized Chaplygin gas model.
Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results
Robert V. Mayer
2016-06-01
Full Text Available The use of the method of imitational modeling to study the forming of empirical knowledge in the pupil's consciousness is discussed. The offered model is based on division of the physical facts into three categories: 1) the facts established in everyday life; 2) the facts which the pupil can experimentally establish at a physics lesson; 3) the facts which are studied only on the theoretical level (speculatively or ideally). The determination of the forgetting coefficients of the facts of the first, second and third categories, and the coordination of the imitating model with the distribution of empirical information in the school physics course and with testing results, is carried out. Graphs of the dependence of empirical knowledge on time for various physics sections and fact categories are given.
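The forgetting coefficients mentioned above can be illustrated with a standard exponential-decay sketch. This is a generic reading, not the paper's actual model, and the gamma values per fact category are invented for illustration.

```python
import math

# Illustrative sketch: each fact category decays with its own forgetting
# coefficient gamma (values below are hypothetical, not from the paper).

def remembered(initial, gamma, t):
    """Fraction of empirical knowledge remembered after time t."""
    return initial * math.exp(-gamma * t)

# Everyday facts are assumed to fade slowest, purely theoretical ones fastest.
gammas = {"everyday": 0.01, "school-experiment": 0.05, "theoretical": 0.10}
for category in sorted(gammas):
    print(category, round(remembered(1.0, gammas[category], t=10.0), 3))
```

Fitting such coefficients to test results per category is what "coordination with testing results" would amount to under this reading.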
Empirical and modeled synoptic cloud climatology of the Arctic Ocean
Barry, R. G.; Newell, J. P.; Schweiger, A.; Crane, R. G.
1986-01-01
A set of cloud cover data was developed for the Arctic during the climatically important spring/early summer transition months. Parallel with the determination of mean monthly cloud conditions, data for different synoptic pressure patterns were also composited as a means of evaluating the role of synoptic variability on Arctic cloud regimes. In order to carry out this analysis, a synoptic classification scheme was developed for the Arctic using an objective typing procedure. A second major objective was to analyze model output of pressure fields and cloud parameters from a control run of the Goddard Institute for Space Studies climate model for the same area, and to intercompare the synoptic climatology of the model with that based on the observational data.
Efficient Modelling and Generation of Markov Automata (extended version)
Timmer, Mark; Katoen, Joost-Pieter; van de Pol, Jaco; Stoelinga, Mariëlle
2012-01-01
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M
Hyperstate matrix models : extending demographic state spaces to higher dimensions
Roth, G.; Caswell, H.
2016-01-01
1. Demographic models describe population dynamics in terms of the movement of individuals among states (e.g. size, age, developmental stage, parity, frailty, physiological condition). Matrix population models originally classified individuals by a single characteristic. This was enlarged to two cha
Hazard identification by extended multilevel flow modelling with function roles
Wu, Jing; Zhang, Laibin; Jørgensen, Sten Bay
2014-01-01
HAZOP studies are widely accepted in chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of the systems. In this paper, a HAZOP reasoning method based on function-oriented modelling, multilevel flow modelling (MFM), i...
An Empirical Model of Wage Dispersion with Sorting
Bagger, Jesper; Lentz, Rasmus
This paper studies wage dispersion in an equilibrium on-the-job search model with endogenous search intensity. Workers differ in their permanent skill level and firms differ with respect to productivity. Positive (negative) sorting results if the match production function is supermodular...
Empirical validation data sets for double skin facade models
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate emp...
Neural networks in economic modelling : An empirical study
Verkooijen, W.J.H.
1996-01-01
This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a sta
An Empirical Study of a Solo Performance Assessment Model
Russell, Brian E.
2015-01-01
The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…
An Empirical Generative Framework for Computational Modeling of Language Acquisition
Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-01-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…
Extended master equation models for molecular communication networks
Chou, Chun Tung
2012-01-01
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signalling molecules, which are diffused over the medium, to the receiver to realise the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time sequences that specify the emission patterns of signalling molecules, while diffusion in the medium and chemical reactions at the receivers are modelled as Markov processes using the master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show ho...
Empirical genome evolution models root the tree of life.
Harish, Ajith; Kurland, Charles G
2017-07-01
A reliable phylogenetic reconstruction of the evolutionary history of contemporary species depends on a robust identification of the universal common ancestor (UCA) at the root of the Tree of Life (ToL). That root polarizes the tree so that the evolutionary succession of ancestors to descendants is discernible. In effect, the root determines the branching order and the direction of character evolution. Typically, conventional phylogenetic analyses implement time-reversible models of evolution, for which character evolution is unpolarized. Such practices leave the root and the direction of character evolution undefined by the data used to construct such trees. In such cases, rooting relies on theoretical assumptions and/or the use of external data to interpret unrooted trees. The most common rooting method, the outgroup method, is clearly inapplicable to the ToL, which has no outgroup. Both here and in the accompanying paper (Harish and Kurland, 2017) we have explored the theoretical and technical issues related to several rooting methods. We demonstrate (1) that genome-level characters and evolution models are necessary for species phylogeny reconstructions; by the same token, standard practices exploiting sequence-based methods that implement gene-scale substitution models do not root species trees; (2) that modeling the evolution of complex genomic characters and processes that are non-reversible and non-stationary is required to reconstruct the polarized evolution of the ToL; (3) that rooting experiments and Bayesian model selection tests overwhelmingly support the earlier finding that akaryotes and eukaryotes are sister clades that descend independently from the UCA (Harish and Kurland, 2013); (4) that consistent ancestral state reconstructions from independent genome samplings confirm the previous finding that the UCA features three fourths of the unique protein domain superfamilies encoded by extant genomes.
Faramarzi, Leila; Kontogeorgis, Georgios; Thomsen, Kaj
2009-01-01
The extended UNIQUAC model [K. Thomsen, P. Rasmussen, Chem. Eng. Sci. 54 (1999) 1787-1802] was applied to the thermodynamic representation of carbon dioxide absorption in aqueous monoethanolamine (MEA), methyldiethanolamine (MDEA) and varied strength mixtures of the two alkanolamines (MEA-MDEA). ... are included in the parameter estimation process. The previously unavailable standard state properties of the alkanolamine ions appearing in this work, i.e. MEA protonate, MEA carbamate and MDEA protonate, are determined. The concentration of the species in both MEA and MDEA solutions containing CO2...
Extended master equation models for molecular communication networks.
Chou, Chun Tung
2013-06-01
We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model.
Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering
Nicolas Baghdadi
2015-10-01
Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and IEM simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band, replacing the correlation length derived from field experiments by a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).
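The agreement metric quoted above, RMSE in dB between measured and simulated backscatter, is straightforward to compute. The values below are made-up placeholders, not data from the paper.

```python
import math

# Sketch of the RMSE (dB) metric used to compare SAR measurements against
# IEM simulations. The backscatter values below are invented for illustration.

def rmse(measured_db, simulated_db):
    """Root mean square error between two equal-length dB series."""
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured_db, simulated_db))
                     / len(measured_db))

measured = [-10.0, -12.5, -8.0, -11.0]    # hypothetical sigma0 in dB
simulated = [-8.5, -13.0, -9.5, -10.0]    # hypothetical IEM output in dB
print(round(rmse(measured, simulated), 3))
```

Calibrating the IEM then amounts to choosing the fitting parameter (in place of the measured correlation length) that minimises this RMSE over the dataset.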
Extending the Modelling Framework for Gas-Particle Systems
Rosendahl, Lasse Aistrup
...with very good results. Single particle combustion has been tested using a number of different particle combustion models applied to coal and straw particles. Comparing the results of these calculations to measurements on straw burnout, the results indicate that for straw, existing heterogeneous combustion models perform well, and may be used in high temperature ranges. Finally, the particle tracking and combustion model is applied to an existing coal and straw co-fuelled burner. The results indicate that again, the straw follows very different trajectories than the coal particles, and also that burnout...
Empirical study on entropy models of cellular manufacturing systems
Zhifeng Zhang; Renbin Xiao
2009-01-01
From the theoretical point of view, the states of manufacturing resources can be monitored and assessed through the amount of information needed to describe their technological structure and operational state. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: the structural entropy and the operational entropy. Based on the Shannon entropy, the models of the structural entropy and the operational entropy of cellular manufacturing systems are developed, and the cognizance of the states of manufacturing resources is also illustrated. Scheduling is introduced to measure the entropy models of cellular manufacturing systems, and the feasible concepts of maximum schedule horizon and schedule adherence are advanced to quantitatively evaluate the effectiveness of schedules. Finally, an example is used to demonstrate the validity of the proposed methodology.
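The Shannon entropy underlying both measures can be sketched directly. This is the textbook definition the abstract invokes, applied to illustrative state probabilities rather than to any real manufacturing system.

```python
import math

# Minimal sketch: Shannon entropy over the state probabilities of a
# manufacturing resource (the probabilities here are illustrative).

def shannon_entropy(probs):
    """Entropy in bits; zero-probability states contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A resource equally likely to be in any of 4 operational states: 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```

Structural and operational entropy then differ only in what the probabilities range over: configuration states versus operational states of the cells.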
An Empirical Model of Online Buying Continuance Intention
Orzan, Gheorghe; Iconaru, Claudia; Macovei, Octav-Ionut
2012-01-01
The aim of this paper is to propose, test and validate a model of consumers' continuance intention to buy online as a main function of affective attitude towards using the Internet for purchasing goods and services and the overall satisfaction with the decision to buy online. The confirmation of initial expectations regarding online buying is the main predictor of online consumers' satisfaction and online consumers' perceived usefulness of online buying. Affective attitude is mediating ...
Internet enabled modelling of extended manufacturing enterprises using the process based techniques
Cheng, K; Popov, Y
2004-01-01
The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...
Proposal of an Empirical Model for Suppliers Selection
Paulo Ávila
2015-03-01
Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was elaborated and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. Having interpreted the results and processed the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that represents decision-making support for the supplier/partner selection process.
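A SMART-style linear weighting over the five named criteria can be sketched as follows. The weights, supplier scores and sub-criteria aggregation are invented for illustration; the paper's survey-derived weights would replace them.

```python
# Hypothetical sketch of the linear weighting model: each supplier is scored
# on the five criteria named above and ranked by the weighted sum.
# All weights and scores below are made up.

CRITERIA = ["quality", "financial", "synergies", "cost", "production_system"]

def weighted_score(scores, weights):
    """SMART-style linear aggregation of criterion scores."""
    return sum(scores[c] * weights[c] for c in CRITERIA)

weights = {"quality": 0.30, "financial": 0.15, "synergies": 0.10,
           "cost": 0.25, "production_system": 0.20}   # must sum to 1
supplier_a = {"quality": 8, "financial": 6, "synergies": 7,
              "cost": 9, "production_system": 5}
supplier_b = {"quality": 7, "financial": 8, "synergies": 6,
              "cost": 8, "production_system": 8}

best = max([("A", supplier_a), ("B", supplier_b)],
           key=lambda kv: weighted_score(kv[1], weights))
print(best[0], round(weighted_score(best[1], weights), 2))
```

An AHP variant would derive the weights from pairwise comparison matrices instead of assigning them directly, but the final aggregation step is the same weighted sum.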
Modern elementary particle physics explaining and extending the standard model
Kane, Gordon
2017-01-01
This book is written for students and scientists wanting to learn about the Standard Model of particle physics. Only an introductory course knowledge about quantum theory is needed. The text provides a pedagogical description of the theory, and incorporates the recent Higgs boson and top quark discoveries. With its clear and engaging style, this new edition retains its essential simplicity. Long and detailed calculations are replaced by simple approximate ones. It includes introductions to accelerators, colliders, and detectors, and several main experimental tests of the Standard Model are explained. Descriptions of some well-motivated extensions of the Standard Model prepare the reader for new developments. It emphasizes the concepts of gauge theories and Higgs physics, electroweak unification and symmetry breaking, and how force strengths vary with energy, providing a solid foundation for those working in the field, and for those who simply want to learn about the Standard Model.
An extended multi-zone combustion model for PCI simulation
Kodavasal, Janardhan; Keum, SeungHwan; Babajimopoulos, Aristotelis
2011-12-01
Novel combustion modes are becoming an important area of research with emission regulations more stringent than ever before, and with fuel economy being assigned greater importance every day. Homogeneous Charge Compression Ignition (HCCI) and Premixed Compression Ignition (PCI) modes in particular promise better fuel economy and lower emissions in internal combustion engines. Multi-zone combustion models have been popular in modelling HCCI combustion. In this work, an improved multi-zone model is suggested for PCI combustion modelling. A new zoning scheme is suggested based on incorporating the internal energy of formation into an earlier conventional HCCI multi-zone approach, which considers a two-dimensional reaction space defined by equivalence ratio and temperature. It is shown that the added dimension improves zoning by creating more representative zones, and thus reducing errors compared to the conventional zoning approach, when applied to PCI simulation.
Overwinding in a stochastic model of an extended polymer
Bernido, Christopher C. [Research Center for Theoretical Physics, Central Visayan Institute Foundation, Jagna, Bohol 6308 (Philippines)], E-mail: cbernido@mozcom.com; Carpio-Bernido, M. Victoria [Research Center for Theoretical Physics, Central Visayan Institute Foundation, Jagna, Bohol 6308 (Philippines)
2007-09-10
We evaluate explicit expressions of length-dependent winding configuration probabilities for a biopolymer. The stochastic model incorporates several experimentally observed features. In particular, it exhibits overwinding under stretching forces until a critical length of the polymer is reached.
Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R
Patrick Mair
2007-02-01
Full Text Available Item response theory (IRT) models are increasingly becoming established in social science research, particularly in the analysis of performance or attitudinal data in psychology, education, medicine, marketing and other fields where testing is relevant. We propose the R package eRm (extended Rasch modeling) for computing Rasch models and several extensions. A main characteristic of some IRT models, the Rasch model being the most prominent, concerns the separation of two kinds of parameters: one that describes qualities of the subject under investigation, and the other relating to qualities of the situation under which the response of a subject is observed. Using conditional maximum likelihood (CML) estimation, both types of parameters may be estimated independently from each other. IRT models are well suited to cope with dichotomous and polytomous responses, where the response categories may be unordered as well as ordered. The incorporation of linear structures allows for modeling the effects of covariates and enables the analysis of repeated categorical measurements. The eRm package fits the following models: the Rasch model, the rating scale model (RSM), and the partial credit model (PCM), as well as linear reparameterizations through covariate structures like the linear logistic test model (LLTM), the linear rating scale model (LRSM), and the linear partial credit model (LPCM). We use a unitary, efficient CML approach to estimate the item parameters and their standard errors. Graphical and numeric tools for assessing goodness-of-fit are provided.
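The dichotomous Rasch model at the core of eRm has a one-line response function: the probability of a correct answer depends only on the difference between person ability and item difficulty. A Python sketch (eRm itself is an R package; this just illustrates the model, not the package's API):

```python
import math

# The dichotomous Rasch model: P(correct) is a logistic function of the
# difference between person ability (theta) and item difficulty (beta).

def rasch_prob(theta, beta):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# Equal ability and difficulty gives a 50 % chance of success.
print(rasch_prob(0.5, 0.5))  # → 0.5
```

The separation property the abstract describes follows from this form: in CML estimation the person parameters are conditioned out via the raw-score sufficient statistics, so item difficulties can be estimated without estimating abilities.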
Improved Nucleon Properties in the Extended Quark Sigma Model
Abu-Shady, M
2013-01-01
The quark sigma model describes quarks interacting via the exchange of pion and sigma meson fields. A new version of the mesonic potential is suggested in the framework of some aspects of quantum chromodynamics (QCD). The field equations have been solved in the mean-field approximation for the hedgehog baryon state. The obtained results are compared with previous works and other models. We conclude that the suggested mesonic potential successfully reproduces nucleon properties.
Extended model of restricted beam for FSO links
Poliak, Juraj; Wilfert, Otakar
2012-10-01
Modern wireless optical communication systems in many respects surpass wire or radio communications. Their advantages are licence-free operation and the broad bandwidth they offer. The medium in free-space optical (FSO) links is the atmosphere. Operation of outdoor FSO links struggles with many atmospheric phenomena that deteriorate the phase and amplitude of the transmitted optical beam. This beam originates in the transmitter and is affected by its individual parts, especially by the lens socket and the transmitter aperture, where attenuation and diffraction effects take place. Both of these phenomena unfavourably influence the beam and cause degradation of link availability, or its total malfunction. Therefore, both phenomena should be modelled and simulated so that one can judge the link function prior to the realization of the system. Not only link availability and reliability are concerned, but also economic aspects. In addition, the transmitted beam is generally not circularly symmetrical, which makes the link simulation more difficult. A comprehensive model must take into account the ellipticity of the beam, which is restricted by a circularly symmetrical aperture where the attenuation and diffraction then occur. The general model is too computationally extensive; therefore, simplification of the calculations by means of analytical and numerical approaches is discussed. The presented model is not only simulated on a computer but also experimentally verified. One can then judge the ability of the model to describe reality and estimate how far one can go with approximations, i.e., the limitations of the model are discussed.
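The attenuation part of this aperture problem has a simple numerical sketch: the power fraction of an elliptical Gaussian beam passing a circular aperture. The function below is our own illustration (parameter names and the integration grid are assumptions, not the authors' model), but it captures the truncation-loss term their comprehensive model must include:

```python
import numpy as np

def aperture_transmission(wx, wy, R, n=400):
    # fraction of an elliptical Gaussian beam's power (1/e^2 half-widths wx, wy)
    # passing through a centred circular aperture of radius R,
    # by direct numerical integration over the aperture plane
    x = np.linspace(-R, R, n)
    y = np.linspace(-R, R, n)
    X, Y = np.meshgrid(x, y)
    inside = X**2 + Y**2 <= R**2
    intensity = np.exp(-2.0 * (X / wx)**2 - 2.0 * (Y / wy)**2)
    dA = (x[1] - x[0]) * (y[1] - y[0])
    total_power = np.pi * wx * wy / 2.0    # analytic power of the unit-peak beam
    return float((intensity * inside).sum() * dA / total_power)
```

For a circular beam (wx = wy = w) this reproduces the closed form 1 - exp(-2R²/w²); for elliptical beams, where no simple closed form exists, the same integral applies unchanged.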
Empirically Grounded Agent-Based Models of Innovation Diffusion: A Critical Review
Zhang, Haifeng
2016-01-01
Innovation diffusion has been studied extensively in a variety of disciplines, including sociology, economics, marketing, ecology, and computer science. Traditional literature on innovation diffusion has been dominated by models of aggregate behavior and trends. However, the agent-based modeling (ABM) paradigm is gaining popularity as it captures agent heterogeneity and enables fine-grained modeling of interactions mediated by social and geographic networks. While most ABM work on innovation diffusion is theoretical, empirically grounded models are increasingly important, particularly in guiding policy decisions. We present a critical review of empirically grounded agent-based models of innovation diffusion, developing a categorization of this research based on types of agent models as well as applications. By connecting the modeling methodologies in the fields of information and innovation diffusion, we suggest that the maximum likelihood estimation framework widely used in the former is a promising paradigm...
Sahbi FARHANI
2012-01-01
This paper considers tests of parameter instability and structural change with known, unknown or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation by strong rules for detecting the number of breaks in a time series. For that, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis on two different models (an ARMA model and a simple linear regression model).
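Of the tests listed, the CUSUM family is the easiest to sketch. Below is a minimal OLS-based CUSUM statistic in Python (a sketch in the spirit of Ploberger and Kraemer's OLS-CUSUM, with our own function name and scaling; values well above the ~1.36 asymptotic 5% bound for the supremum of a Brownian bridge suggest instability):

```python
import numpy as np

def ols_cusum(y, X):
    # OLS-based CUSUM statistic: sup over t of
    # |cumulative sum of OLS residuals up to t| / (sigma * sqrt(n))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma = resid.std(ddof=X.shape[1])     # residual std, df-corrected
    S = np.cumsum(resid) / (sigma * np.sqrt(n))
    return float(np.abs(S).max())
```

A mean shift halfway through a constant-only regression drives the statistic far above the critical region, while a stable series keeps it near zero.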
A model of deep ecotourism development and its empirical study
无
2007-01-01
Ecotourism requires the harmony of all factors involved in the tourism system for multilateral benefit. Based on this understanding, a concept of deep ecotourism development is put forward which includes two connotations: on the one hand, it should give prominence to the display of the eco-culture of the tourist destination and the tourists' eco-experience, whereby development behavior at the tourist destination and the tourists' behavior will be regulated; on the other hand, it implies deep harmony among tourist entrepreneurs and tourists, the local governments and the local residents, as well as tourist activities and the ecological environment in tourism development, for the multilateral benefit of every element involved and for sustainable tourism development. The degree of ecotourism in a given destination will differ, and consequently four levels of ecotourism are distinguished: very shallow ecotourism, shallow ecotourism, deep ecotourism and very deep ecotourism. To move shallow ecotourism toward deep ecotourism, two models of the deep ecotourism development system, "four subjects and two wings" and "connecting the two wings", are introduced to make the ecotourism industry favorable to the display of eco-culture and the sustainable development of the destination community. With the two models, a case study of ecotourism development in Louguantai National Forest Park was made as a demonstration. The ultimate purpose is to build an ideal new Shangri-La.
Modeling Active Aging and Explicit Memory: An Empirical Study.
Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad
2015-08-01
The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.
Stability analysis of traffic flow with extended CACC control models
Ya-Zhou, Zheng; Rong-Jun, Cheng; Siu-Ming, Lo; Hong-Xia, Ge
2016-06-01
To further investigate car-following behaviors under the cooperative adaptive cruise control (CACC) strategy, a comprehensive control system which can handle three traffic conditions to guarantee driving efficiency and safety is designed by using three CACC models. In this control system, vital comprehensive information, such as multiple preceding cars' speed differences and headways, variable safety distance (VSD), and time-delay effects on the traffic current and the jamming transition, has been investigated via analytical or numerical methods. Local and string stability criteria for the velocity control (VC) model and gap control (GC) model are derived via linear stability theory. Numerical simulations are conducted to study the performance of the simulated traffic flow. The simulation results show that the VC model and GC model can improve driving efficiency and suppress traffic congestion. Project supported by the National Natural Science Foundation of China (Grant Nos. 71571107 and 11302110), the Scientific Research Fund of Zhejiang Province, China (Grant Nos. LY15A020007, LY15E080013, and LY16G010003), the Natural Science Foundation of Ningbo City (Grant Nos. 2014A610030 and 2015A610299), the Fund from the Government of the Hong Kong Special Administrative Region, China (Grant No. CityU11209614), and the K C Wong Magna Fund in Ningbo University, China.
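A gap-control law of the kind analyzed above can be illustrated with a toy platoon simulation. The gains, desired headway, and initial perturbation below are our own assumptions, not the paper's calibrated values; the sketch only shows a finite platoon relaxing to the desired gap under a GC-style law:

```python
import numpy as np

def simulate_platoon(n=5, steps=8000, dt=0.01, ks=0.45, kv=0.7, h=20.0):
    # toy gap-control (GC) car-following law (gains ks, kv and headway h assumed):
    #   accel_i = ks * (gap_i - h) + kv * (v_{i-1} - v_i)
    # the leader (index 0) cruises at constant speed
    x = np.arange(n)[::-1] * (h + 5.0)    # initial gaps 5 m larger than desired
    v = np.full(n, 10.0)
    for _ in range(steps):
        a = np.zeros(n)
        a[1:] = ks * ((x[:-1] - x[1:]) - h) + kv * (v[:-1] - v[1:])
        v += a * dt                        # explicit Euler integration
        x += v * dt
    gaps = x[:-1] - x[1:]
    return gaps, v
```

Linearizing each follower gives a damped oscillator in the gap error, so with these gains every gap converges to h and every speed to the leader's; string stability across a long platoon is a separate, stricter condition, as the abstract's linear stability analysis emphasizes.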
Extending the LCDM model through shear-free anisotropies
Pereira, Thiago S
2016-01-01
If the spacetime metric has anisotropic spatial curvature, the universe can still expand isotropically, provided that the energy-momentum tensor satisfies a certain constraint. This leads to the so-called shear-free metrics, which have the interesting property of violating the cosmological principle while still preserving the isotropy of the cosmic microwave background (CMB) radiation. In this work we show that shear-free cosmologies correspond to an attractor solution in the space of models with anisotropic spatial curvature. Through a rigorous definition of linear perturbation theory in these spacetimes, we show that shear-free models represent a viable alternative for describing the large-scale evolution of the universe, leading, in particular, to a kinematically equivalent Sachs-Wolfe effect. Finally, we discuss some specific signatures that shear-free models would imprint on the temperature spectrum of the CMB.
An extended model for electron spin polarization in photosynthetic bacteria
Morris, A.L.; Norris, J.R. (Argonne National Lab., IL (USA) Chicago Univ., IL (USA). Dept. of Chemistry); Thurnauer, M.C. (Argonne National Lab., IL (USA))
1990-01-01
We have developed a general model for electron spin polarization which includes contributions from both CIDEP (chemically induced dynamic electron polarization) and CRP (correlated radical polarization). In this paper, we apply this model to sequential electron transfer in photosynthetic bacteria. Our model calculates the density matrix for the P⁺I⁻ radical pair and transfers the polarization as it develops to the P⁺Q⁻ radical pair. We illustrate several possible cases. One case is equivalent to CIDEP; no interactions are included on the secondary radical pair, P⁺Q⁻. Another approximates CRP by either increasing the transfer rate from P⁺I⁻ to P⁺Q⁻ or restricting interactions to the secondary radical pair, P⁺Q⁻. Others allow interactions on both the primary and secondary radical pairs with various transfer rates. 15 refs., 4 figs.
Extending the dimensionality of flatland with attribute view probabilistic models
Neufeld, Eric; Bickis, Mikelis; Grant, Kevin
2008-01-01
In much of Bertin's Semiology of Graphics, marks representing individuals are arranged on paper according to their various attributes (components). Paper and computer monitors can conveniently map two attributes to width and height, and can map other attributes into nonspatial dimensions such as texture, or colour. Good visualizations exploit the human perceptual apparatus so that key relationships are quickly detected as interesting patterns. Graphical models take a somewhat dual approach with respect to the original information. Components, rather than individuals, are represented as marks. Links between marks represent conceptually simple, easily computable, and typically probabilistic relationships of possibly varying strength, and the viewer studies the diagram to discover deeper relationships. Although visually annotated graphical models have been around for almost a century, they have not been widely used. We argue that they have the potential to represent multivariate data as generically as pie charts represent univariate data. The present work suggests a semiology for graphical models, and discusses the consequences for information visualization.
Elementary particles, dark matter candidate and new extended standard model
Hwang, Jaekwang
2017-01-01
Elementary particle decays and reactions are discussed in terms of the three-dimensional quantized space model beyond the standard model. Three generations of the leptons and quarks correspond to the lepton charges. Three heavy leptons and three heavy quarks are introduced, and the bastons (new particles) are proposed as a possible candidate for dark matter. The dark matter force, weak force and strong force are explained consistently. Possible rest masses of the new particles are tentatively proposed for the experimental searches. For more details, see the conference paper at https://www.researchgate.net/publication/308723916.
Extending MBI Model using ITIL and COBIT Processes
Sona Karkoskova
2015-10-01
Most organizations today operate in a highly complex and competitive business environment and need to be able to react to rapidly changing market conditions. IT management frameworks are widely used to provide effective support for business objectives by aligning IT with business and optimizing the use of IT resources. In this paper we analyze three IT management frameworks (ITIL, COBIT and MBI) with the objective of identifying the relationships between these frameworks and mapping ITIL and COBIT processes to MBI tasks. As a result of this analysis we propose extensions to the MBI model to incorporate IT Performance Management and a Capability Maturity Model.
Extended nonlinear feedback model for describing episodes of high inflation
Szybisz, M A; Szybisz, L.
2016-01-01
An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in economies is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite-time singularity of the type $1/(t_c -t)^{(1- \beta)/\beta}$, with $\beta>0$, predicting a blow-up of the economy at a critical time $t_c$. However, this model fails to determine $t_c$ in the case of weak hyperinflation regimes like, e.g., the one that occurred in Israel. To overcome this...
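The power-law singularity above is linear in log space once $t_c$ is fixed, so a simple grid search recovers $t_c$ and the exponent $\alpha = (1-\beta)/\beta$ from a CPI series. This is our own illustrative fitting sketch (function name, grid, and synthetic data are assumptions), not the NLF estimation procedure of the paper:

```python
import numpy as np

def fit_singularity(t, cpi, tc_grid):
    # grid search for the critical time tc in cpi ~ A / (tc - t)**alpha:
    # for each candidate tc the model is linear in log space,
    #   log cpi = log A - alpha * log(tc - t),
    # so fit a line and keep the tc with the smallest squared residual
    best = None
    for tc in tc_grid:
        if tc <= t.max():
            continue                        # singularity must lie in the future
        z = np.log(tc - t)
        y = np.log(cpi)
        slope, intercept = np.polyfit(z, y, 1)
        resid = np.sum((y - (slope * z + intercept)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, tc, -slope, np.exp(intercept))
    _, tc, alpha, A = best
    return tc, alpha, A
```

On clean synthetic data the fit recovers the true blow-up time; the abstract's point is precisely that for weak hyperinflation regimes the residual surface becomes too flat in $t_c$ for this to work.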
Incommensurate Antiferromagnetism in the Extended t-J Model
LIANG Ying; MA Tian-Xing; FENG Shi-Ping; CHEN Wei-Yeu
2002-01-01
The effect of the extra second-neighbor hopping t' on the incommensurate spin correlation in the t-J model in the underdoped regime is studied within the fermion-spin theory. It is shown that although the extra second-neighbor hopping t' is systematically accompanied by an increase in the weight of the incommensurate peaks in the dynamical spin structure factor, for physically reasonable small values of t' the qualitative behavior of the incommensurate spin correlation in the t-t'-J model is the same as in the t-J model.
Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.
Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M
2016-06-24
Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
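The recognition half of this task reduces to computing, for each strategy's PFA, the likelihood it assigns to the observed action trace, and taking the argmax. Here is a minimal Python sketch (the dictionary encoding, function names, and toy "clean"/"spiral" strategies are our own illustration, not the paper's learned automata):

```python
import math

def sequence_loglik(pfa, seq, start):
    # log-likelihood of an action trace under one PFA;
    # pfa maps state -> {symbol: (next_state, probability)}
    state, ll = start, 0.0
    for sym in seq:
        if sym not in pfa[state]:
            return float('-inf')           # trace impossible under this strategy
        state, p = pfa[state][sym]
        ll += math.log(p)
    return ll

def recognize(strategies, seq):
    # Behavioral Recognition: strategy whose PFA makes the trace most likely
    return max(strategies,
               key=lambda name: sequence_loglik(strategies[name]['t'], seq,
                                                strategies[name]['s0']))
```

Behavioral Cloning then amounts to sampling from the recognized PFA instead of merely scoring with it.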
Cycle length maximization in PWRs using empirical core models
Okafor, K.C.; Aldemir, T.
1987-01-01
The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
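The reduction to linear programming can be sketched in a few lines. The coefficients below are entirely illustrative (a hypothetical two-zone core with an invented cycle-length correlation and one linearized power-peaking constraint), not the paper's diffusion-depletion correlations:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linearized core model (all coefficients are assumptions):
# cycle length = 300 + 40*e1 + 30*e2 days for zone enrichments e1, e2
c = [-40.0, -30.0]                  # maximize 40*e1 + 30*e2 -> minimize negative
A_ub = [[0.5, 0.2]]                 # linearized peaking constraint: 0.5*e1 + 0.2*e2 <= 2
b_ub = [2.0]
bounds = [(1.8, 3.2), (1.8, 3.2)]   # allowed enrichment range (wt% U-235)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
cycle_length = 300.0 + 40.0 * res.x[0] + 30.0 * res.x[1]
```

The optimum sits at a vertex of the feasible region, which is exactly why the validity range of the linear correlations (point b in the abstract) matters: the solver will always push the enrichments to the edge of wherever the correlations are trusted.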
Extending radiative transfer models by use of Bayes rule. [in atmospheric science
Whitney, C.
1977-01-01
This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.
Experimental validation of extended NO and soot model for advanced HD diesel engine combustion
Seykens, X.L.J.; Baert, R.S.G.; Somers, L.M.T.; Willems, F.P.T.
2009-01-01
A computationally efficient engine model is developed based on an extended NO emission model and state-of-the-art soot model. The model predicts exhaust NO and soot emission for both conventional and advanced, high-EGR (up to 50%), heavy-duty DI diesel combustion. Modeling activities have aimed at l
Extending UML-RT for Control System Modeling
Qimin Gao
2004-01-01
There is a growing interest in adopting object technologies for the development of real-time control systems. Several commercial tools, currently available, provide object-oriented modeling and design support for real-time control systems. While these products provide many useful facilities, such as visualization tools and automatic code generation, they are all weak in addressing the central characteristic of real-time control systems design, i.e., providing support for a designer to reason about timeliness properties. We believe an approach that integrates the advancements in both object modeling and design methods and real-time scheduling theory is the key to successful use of object technology for real-time software. Surprisingly, several past approaches to integrating the two either restrict the object models or do not allow sophisticated schedulability analysis techniques. This study shows how schedulability analysis can be integrated with UML for Real-Time (UML-RT) to deal with timing properties in real-time control systems. More specifically, we develop schedulability and feasibility analysis modeling for external messages that may suffer release jitter due to being dispatched by a tick-driven scheduler in a real-time control system, and we also develop schedulability modeling for sporadic activities, where messages arrive sporadically and then execute periodically for some bounded time. This method can be used to cope with timing constraints in realistic and complex real-time control systems. Using this method, a designer can quickly evaluate the impact of various implementation decisions on schedulability. In conjunction with automatic code generation, we believe that this will greatly streamline the design and development of real-time control systems software.
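The release-jitter analysis referred to above is, in fixed-priority scheduling theory, the classic response-time recurrence R_i = C_i + Σ_{j<i} ceil((R_i + J_j)/T_j)·C_j. A minimal sketch (the task set in the test is invented; the iteration bound parameter is our own safeguard):

```python
import math

def response_time(C, T, J, i, bound=10**6):
    # classic fixed-point response-time analysis with release jitter
    # (Audsley/Tindell style); tasks indexed by priority, 0 = highest.
    # C: worst-case execution times, T: periods, J: release jitters.
    R = C[i]
    while R <= bound:
        R_new = C[i] + sum(math.ceil((R + J[j]) / T[j]) * C[j] for j in range(i))
        if R_new == R:
            return R + J[i]    # worst-case response measured from arrival
        R = R_new
    return None                # fixed point not reached within the bound
```

Jitter on a higher-priority message (e.g. from tick-driven dispatching) inflates the interference term and hence the response times of everything below it, which is exactly the effect the schedulability model must capture.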
Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A
2015-04-21
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
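The envelope idea can be sketched directly: take a series representing unforced variability, compute the linear trend in every sliding window of the length of interest, and report a central percentile range. This is our own simplified illustration (function name, window handling, and the synthetic test series are assumptions, not the paper's reconstruction):

```python
import numpy as np

def unforced_envelope(unforced_gmt, window, lo=2.5, hi=97.5):
    # empirical Envelope of Unforced Noise (EUN), sketched as the central
    # (lo, hi) percentile range of linear trends over every sliding window
    # of an instrumental/reconstructed unforced temperature series
    t = np.arange(window)
    trends = [np.polyfit(t, unforced_gmt[s:s + window], 1)[0]
              for s in range(len(unforced_gmt) - window + 1)]
    return np.percentile(trends, [lo, hi])
```

An observed trend is then judged against the forced signal plus this envelope: only trends falling outside it demand a change in the forced signal's rate of increase.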
Puga-Gonzalez, Ivan; Butovskaya, Marina; Thierry, Bernard; Hemelrijk, Charlotte Korinna
2014-01-01
Post-conflict affiliation between former opponents and bystanders occurs in several species of non-human primates. It is classified in four categories of which affiliation received by the former victim, 'consolation', has received most attention. The hypotheses of cognitive constraint and social constraint are inadequate to explain its occurrence. The cognitive constraint hypothesis is contradicted by recent evidence of 'consolation' in monkeys and the social constraint hypothesis lacks information why 'consolation' actually happens. Here, we combine a computational model and an empirical study to investigate the minimum cognitive requirements for post-conflict affiliation. In the individual-based model, individuals are steered by cognitively simple behavioural rules. Individuals group and when nearby each other they fight if they are likely to win, otherwise, they may groom, especially when anxious. We parameterize the model after empirical data of a tolerant species, the Tonkean macaque (Macaca tonkeana). We find evidence for the four categories of post-conflict affiliation in the model and in the empirical data. We explain how in the model these patterns emerge from the combination of a weak hierarchy, social facilitation, risk-sensitive aggression, interactions with partners close-by and grooming as tension-reduction mechanism. We indicate how this may function as a new explanation for empirical data.
Modeling of carbon dioxide absorption by aqueous ammonia solutions using the Extended UNIQUAC model
Darde, Victor Camille Alfred; van Well, Willy J. M.; Stenby, Erling Halfdan
2010-01-01
and the concentration range up to 80 molal ammonia. In this work, the validity of this model was extended up to 150°C and the accuracy improved by increasing the number of experimental data points from 2000 to more than 3500. These experimental data consisting of vapor-liquid equilibrium data in various concentration...... ranges, enthalpy change from partial evaporation measurements, speciation data, heat capacity, enthalpy of solution and enthalpy of dilution data have been used to refit 43 model parameters and standard state properties. Henry’s law constant correlations have been used for extrapolating standard state...
Searches for Neutral Higgs Bosons in Extended Models
Abdallah, J; Adam, W; Adzic, P; Albrecht, T; Alderweireld, T; Alemany-Fernandez, R; Allmendinger, T; Allport, P P; Amaldi, Ugo; Amapane, N; Amato, S; Anashkin, E; Andreazza, A; Andringa, S; Anjos, N; Antilogus, P; Apel, W D; Arnoud, Y; Ask, S; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Ballestrero, A; Bambade, P; Barbier, R; Bardin, Dimitri Yuri; Barker, G J; Baroncelli, A; Battaglia, Marco; Baubillier, M; Becks, K H; Begalli, M; Behrmann, A; Ben-Haim, E; Benekos, N C; Benvenuti, Alberto C; Bérat, C; Berggren, M; Berntzon, L; Bertrand, D; Besançon, M; Besson, N; Bloch, D; Blom, M; Bluj, M; Bonesini, M; Boonekamp, M; Booth, P S L; Borisov, G; Botner, O; Bouquet, B; Bowcock, T J V; Boyko, I; Bracko, M; Brenner, R; Brodet, E; Brückman, P; Brunet, J M; Bugge, L; Buschmann, P; Calvi, M; Camporesi, T; Canale, V; Carena, F; Castro, N; Cavallo, F R; Chapkin, M M; Charpentier, P; Checchia, P; Chierici, R; Shlyapnikov, P; Chudoba, J; Chung, S U; Cieslik, K; Collins, P; Contri, R; Cosme, G; Cossutti, F; Costa, M J; Crennell, D J; Cuevas-Maestro, J; D'Hondt, J; Dalmau, J; Da Silva, T; Da Silva, W; Della Ricca, G; De Angelis, A; de Boer, Wim; De Clercq, C; De Lotto, B; De Maria, N; De Min, A; De Paula, L S; Di Ciaccio, L; Di Simone, A; Doroba, K; Drees, J; Dris, M; Eigen, G; Ekelöf, T J C; Ellert, M; Elsing, M; Espirito-Santo, M C; Fanourakis, G K; Fassouliotis, D; Feindt, M; Fernández, J; Ferrer, A; Ferro, F; Flagmeyer, U; Föth, H; Fokitis, E; Fulda-Quenzer, F; Fuster, J A; Gandelman, M; García, C; Gavillet, P; Gazis, E N; Gokieli, R; Golob, B; Gómez-Ceballos, G; Gonçalves, P; Graziani, E; Grosdidier, G; Grzelak, K; Guy, J; Haag, C; Hallgren, A; Hamacher, K; Hamilton, K; Haug, S; Hauler, F; Hedberg, V; Hennecke, M; Herr, H; Hoffman, J; Holmgren, S O; Holt, P J; Houlden, M A; Hultqvist, K; Jackson, J N; Jarlskog, G; Jarry, P; Jeans, D; Johansson, E K; Johansson, P D; Jonsson, P; Joram, C; Jungermann, L; Kapusta, F; Katsanevas, S; Katsoufis, E C; Kernel, G; Kersevan, 
B P; Kerzel, U; Kiiskinen, A P; King, B T; Kjaer, N J; Kluit, P; Kokkinias, P; Kourkoumelis, C; Kuznetsov, O; Krumshtein, Z; Kucharczyk, M; Lamsa, J; Leder, G; Ledroit, F; Leinonen, L; Leitner, R; Lemonne, J; Lepeltier, V; Lesiak, T; Liebig, W; Liko, D; Lipniacka, A; Lopes, J H; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J; Malek, A; Maltezos, S; Mandl, F; Marco, J; Marco, R; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Martínez-Rivero, C; Masik, J; Mastroyiannopoulos, N; Matorras, F; Matteuzzi, C; Mazzucato, F; Mazzucato, M; McNulty, R; Meroni, C; Migliore, E; Mitaroff, W A; Mjörnmark, U; Moa, T; Moch, M; Mönig, K; Monge, R; Montenegro, J; Moraes, D; Moreno, S; Morettini, P; Müller, U; Münich, K; Mulders, M; Mundim, L; Murray, W; Muryn, B; Myatt, G; Myklebust, T; Nassiakou, M; Navarria, Francesco Luigi; Nawrocki, K; Nicolaidou, R; Nikolenko, M; Oblakowska-Mucha, A; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, R; Österberg, K; Ouraou, A; Oyanguren, A; Paganoni, M; Paiano, S; Palacios, J P; Palka, H; Papadopoulou, T D; Pape, L; Parkes, C; Parodi, F; Parzefall, U; Passeri, A; Passon, O; Peralta, L; Perepelitsa, V F; Perrotta, A; Petrolini, A; Piedra, J; Pieri, L; Pierre, F; Pimenta, M; Piotto, E; Podobnik, T; Poireau, V; Pol, M E; Polok, G; Pozdnyakov, V; Pukhaeva, N; Pullia, A; Rames, J; Read, A; Rebecchi, P; Rehn, J; Reid, D; Reinhardt, R; Renton, P B; Richard, F; Rídky, J; Rivero, M; Rodríguez, D; Romero, A; Ronchese, P; Roudeau, P; Rovelli, T; Ruhlmann-Kleider, V; Ryabtchikov, D; Sadovskii, A; Salmi, L; Salt, J; Sander, C; Savoy-Navarro, A; Schwickerath, U; Segar, A; Sekulin, R L; Siebel, M; Sissakian, A N; Smadja, G; Smirnova, O G; Sokolov, A; Sopczak, A; Sosnowski, R; Spassoff, Tz; Stanitzki, M; Stocchi, A; Strauss, J; Stugu, B; Szczekowski, M; Szeptycka, M; Szumlak, T; Tabarelli de Fatis, T; Taffard, A C; Tegenfeldt, F; Timmermans, J; Tkatchev, L G; Tobin, M; Todorovova, S; Tomé, B; Tonazzo, A; Tortosa, P; Travnicek, P; 
Treille, D; Tristram, G; Trochimczuk, M; Troncon, C; Turluer, M L; Tyapkin, I A; Tyapkin, P; Tzamarias, S; Uvarov, V; Valenti, G; van Dam, P; Van Eldik, J; Van Lysebetten, A; Van Remortel, N; Van Vulpen, I; Vegni, G; Veloso, F; Venus, W; Verdier, P; Verzi, V; Vilanova, D; Vitale, L; Vrba, V; Wahlen, H; Washbrook, A J; Weiser, C; Wicke, D; Wickens, J; Wilkinson, G; Winter, M; Witek, M; Yushchenko, O P; Zalewska-Bak, A; Zalewski, P; Zavrtanik, D; Zhuravlov, V; Zimin, N I; Zintchenko, A; Zupan, M
2004-01-01
Searches for neutral Higgs bosons produced at LEP in association with Z bosons, in pairs and in the Yukawa process are presented in this paper. Higgs boson decays into b quarks, tau leptons, or other Higgs bosons are considered, giving rise to four-b, four-b+jets, six-b and four-tau final states, as well as mixed modes with b quarks and tau leptons. The whole mass domain kinematically accessible at LEP in these topologies is searched. The analysed data set covers both the LEP1 and LEP2 energy ranges and exploits most of the luminosity recorded by the DELPHI experiment. No convincing evidence for a signal is found, and results are presented in the form of mass-dependent upper bounds on coupling factors (in units of model-independent reference cross-sections) for all processes, allowing interpretation of the data in a large class of models.
An extended topological model for binary phosphate glasses
Hermansen, Christian [Section of Chemistry, Aalborg University, 9220 Aalborg (Denmark); Rodrigues, Bruno P.; Wondraczek, Lothar [Otto Schott Institute of Materials Research, University of Jena, 07743 Jena (Germany); Yue, Yuanzheng, E-mail: yy@bio.aau.dk [Section of Chemistry, Aalborg University, 9220 Aalborg (Denmark); State Key Laboratory of Silicate Materials for Architecture, Wuhan University of Technology, Wuhan 430070 (China)
2014-12-28
We present a topological model for binary phosphate glasses that builds on the previously introduced concepts of the modifying ion sub-network and the strength of modifier constraints. The validity of the model is confirmed by the correct prediction of T_g(x) for covalent polyphosphoric acids, where the model reduces to classical constraint counting. The constraints on the modifying cations are linear constraints to first-neighbor non-bridging oxygens, and all angular constraints are broken, as expected for ionic bonding. For small modifying cations, such as Li⁺, the linear constraints are almost fully intact, but for larger ions, a significant fraction is broken. By accounting for the fraction of intact modifying-ion-related constraints, q_γ, the T_g(x) of alkali phosphate glasses is predicted. By examining alkali, alkaline earth, and rare earth metaphosphate glasses, we find that the effective number of intact constraints per modifying cation is linearly related to the charge-to-distance ratio of the modifying cation to oxygen.
Non-leptonic decays in an extended chiral quark model
Eeg, J O
2012-01-01
We consider the color suppressed (nonfactorizable) amplitude for the decay mode $\\bar{B_{d}^0} \\rightarrow \\pi^0 \\pi^{0}$. We treat the $b$-quark in the heavy quark limit and the energetic light ($u,d,s$) quarks within a variant of Large Energy Effective Theory combined with an extension of chiral quark models. Our calculated amplitude for $\\bar{B_{d}^0} \\rightarrow \\pi^0 \\pi^{0}$ is suppressed by a factor of order $\\Lambda_{QCD}/m_b$ with respect to the factorized amplitude, as it should be according to QCD factorization. Further, for reasonable values of the (model dependent) gluon condensate and the constituent quark mass, the calculated nonfactorizable amplitude for $\\bar{B_{d}^0} \\rightarrow \\pi^0 \\pi^{0}$ can easily accommodate the experimental value. Unfortunately, the color suppressed amplitude is very sensitive to the values of these model dependent parameters. Therefore fine-tuning is necessary in order to obtain an amplitude compatible with the experimental result for $\\bar{B_{d}^0} \\rightarrow \\pi^...
WEI Yan-fang; GUO Si-ling; XUE Yu
2007-01-01
In this article, a traffic hydrodynamic model accounting for the driver's reaction time was applied to traffic analysis at intersections on real roads. In the numerical simulation with the model, a pinch effect of the right-turning vehicle flow was found, which mainly leads to traffic jamming on the straight lane. All of the results are in accordance with the empirical data and confirm the applicability of this model.
Empirical wind retrieval model based on SAR spectrum measurements
Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad
The present paper considers polarimetric SAR wind vector applications. Remote-sensing measurements of the near-surface wind over the ocean are of great importance for the understanding of atmosphere-ocean interaction. In recent years, investigations of wind vector retrieval using Synthetic Aperture Radar (SAR) data have been performed. In contrast with scatterometers, a SAR has a finer spatial resolution that makes it a more suitable microwave instrument to explore wind conditions in the marginal ice zones, coastal regions and lakes. The wind speed retrieval procedure from scatterometer data matches the measured radar backscattering signal with the geophysical model function (GMF). The GMF determines the radar cross section dependence on the wind speed and direction with respect to the azimuthal angle of the radar beam. Scatterometers provide information on wind speed and direction simultaneously due to the fact that each wind vector cell (WVC) is observed at several azimuth angles. However, SAR is not designed to be used as a high resolution scatterometer. In this case, each WVC is observed at only one single azimuth angle. That is why additional information, such as wind streak orientation over the sea surface, is required for wind vector determination. It is shown that the wind vector can be obtained using polarimetric SAR without additional information. The main idea is to analyze the spectrum of a homogeneous SAR image area instead of the backscattering normalized radar cross section. Preliminary numerical simulations revealed that SAR image spectral maxima positions depend on the wind vector. Thus the following method for wind speed retrieval is proposed. In the first stage of the algorithm, the SAR spectrum maxima are determined. This procedure is carried out to estimate the wind speed and direction with ambiguities separated by 180 degrees due to the SAR spectrum symmetry. The second stage of the algorithm allows us to select the correct wind direction.
Extended soft-wall model for the QCD phase diagram
Zöllner, Rico; Kampfer, Burkhard
2016-01-01
The soft-wall model, emerging as a bottom-up holographic scenario anchored in the AdS/CFT correspondence, displays the disappearance of normalisable modes referring to vector mesons at a temperature $T_{\rm dis}$ depending on the chemical potential $\mu$, $T_{\rm dis}(\mu)$. We explore options for making $T_{\rm dis}(\mu)$ consistent with the freeze-out curve $T_{\rm f.o.}(\mu)$ from relativistic heavy-ion collisions and the cross-over curve $T_{\rm c}(\mu)$ from QCD at small values of $\mu$.
Modeling Lolium perenne L. roots in the presence of empirical black holes
Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...
Satellite-based empirical models linking river plume dynamics with hypoxic area and volume
Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...
Mechanistic-empirical subgrade design model based on heavy vehicle simulator test results
Theyse, HL
2006-06-01
Full Text Available -empirical design models. This paper presents a study on subgrade permanent deformation based on the data generated from a series of Heavy Vehicle Simulator (HVS) tests done at the Richmond Field Station in California. The total subgrade deflection was found to be a...
THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL
Zhang Weiping; Wei Laisheng
2008-01-01
In this article, the empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in partitioned normal linear model. The superiorities of the EB estimators over ordinary least-squares (LS) estimator are investigated under mean square error matrix (MSEM) criterion.
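The partitioned normal linear model studied in the paper is not reproduced in the abstract, but the phenomenon it establishes, that empirical Bayes estimators can dominate ordinary least squares under a squared-error criterion, can be illustrated with the classic James-Stein shrinkage estimator. The following sketch is an illustration of that general EB-vs-LS comparison, not the paper's own estimator:

```python
import random

def james_stein(y, sigma2=1.0):
    """Empirical-Bayes (James-Stein, positive-part) shrinkage of a vector of
    observations toward zero; dominates the LS/MLE estimator in mean square
    error for dimension >= 3."""
    p = len(y)
    s = sum(v * v for v in y)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / s)
    return [shrink * v for v in y]

random.seed(42)
p, reps = 10, 2000
theta = [0.5] * p                      # true mean vector
mse_ls = mse_eb = 0.0
for _ in range(reps):
    # LS estimate of a normal mean from a single observation is y itself
    y = [t + random.gauss(0.0, 1.0) for t in theta]
    eb = james_stein(y)
    mse_ls += sum((a - b) ** 2 for a, b in zip(y, theta)) / reps
    mse_eb += sum((a - b) ** 2 for a, b in zip(eb, theta)) / reps

print(mse_ls, mse_eb)
```

With the true means close to the shrinkage target, the EB risk is markedly below the LS risk, which mirrors the kind of MSEM superiority the paper proves for its partitioned setting.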
Performance-Based Service Quality Model: An Empirical Study on Japanese Universities
Sultan, Parves; Wong, Ho
2010-01-01
Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…
Distribution of longshore sediment transport along the Indian coast based on empirical model
Chandramohan, P.; Nayak, B.U.
An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ m³ to 2.0 × 10⁶ m³) along the coasts...
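The abstract names only the longshore energy flux equation as the basis of the model. A standard CERC-type formulation is a reasonable stand-in for how such a calculation proceeds; the coefficient values and the shallow-water group-speed approximation below are assumptions, not the paper's calibrated model:

```python
import math

RHO_W = 1025.0    # seawater density (kg/m^3)
RHO_S = 2650.0    # sediment density (kg/m^3)
G = 9.81          # gravity (m/s^2)
GAMMA = 0.78      # breaker index H_b / h_b
POROSITY = 0.4
K_CERC = 0.39     # commonly cited empirical CERC coefficient

def longshore_flux(h_b, alpha_b_deg):
    """Longshore component of wave energy flux P_ls (W/m) at breaking, for
    breaker height h_b (m) and breaker angle alpha_b (deg), using the
    shallow-water group speed c_g = sqrt(g * H_b / gamma)."""
    a = math.radians(alpha_b_deg)
    c_g = math.sqrt(G * h_b / GAMMA)
    e = RHO_W * G * h_b ** 2 / 16.0          # wave energy density at breaking
    return e * c_g * math.sin(a) * math.cos(a)

def transport_rate(h_b, alpha_b_deg):
    """Volumetric longshore transport rate (m^3/s), CERC-type conversion
    from immersed-weight transport to volume."""
    i_l = K_CERC * longshore_flux(h_b, alpha_b_deg)
    return i_l / ((RHO_S - RHO_W) * G * (1.0 - POROSITY))

q = transport_rate(1.5, 10.0)          # 1.5 m breakers arriving at 10 degrees
annual = q * 3600 * 24 * 365           # rough annual gross volume (m^3)
print(q, annual)
```

For these illustrative wave parameters the annual volume comes out on the order of 10⁶ m³, the same order as the gross rates quoted in the abstract.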
Sarstedt, Marko; Henseler, Jörg; Ringle, Christian M.
2011-01-01
Purpose – Partial least squares (PLS) path modeling has become a pivotal empirical research method in international marketing. Owing to group comparisons' important role in research on international marketing, we provide researchers with recommendations on how to conduct multigroup analyses in PLS p
A stochastic empirical model for heavy-metal balances in agro-ecosystems
Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.
2001-01-01
Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,
Mandemaker, M.
2014-01-01
Quantifying relationships between governance, agriculture, and nature: empirical-statistical and pattern-oriented modeling. An improved understanding of complex processes of both socio-political and economic governance may help to abate neg
Interest groups: a survey of empirical models that try to assess their influence
Potters, J.J.M.; Sloof, R.
1996-01-01
Substantial political power is often attributed to interest groups. The origin of this power is not quite clear, though, and the mechanisms by which influence is effectuated are not yet fully understood. The last two decades have yielded a vast number of studies which use empirical models to assess
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
Brown, C. W.; Hood, Raleigh R.; Long, Wen; Jacobs, John M.; Ramers, D. L.; Wazniak, C.; Wiggert, J. D.; Wood, R.; Xu, J.
2013-09-01
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic empirical approach, whereby real-time output from the coupled physical biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic–empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
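The mechanistic-empirical coupling described above, physical model output feeding multivariate empirical habitat models, can be sketched with a toy logistic habitat model. The function name, coefficients, and grid values below are illustrative assumptions, not CBEPS's actual fitted models:

```python
import math

def habitat_likelihood(temp_c, salinity_psu, coef):
    """Hypothetical multivariate empirical habitat model: a logistic
    regression mapping forecast physical fields (temperature, salinity)
    to the probability of encountering a target species."""
    b0, b_t, b_s = coef
    z = b0 + b_t * temp_c + b_s * salinity_psu
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients for a warm-water, mid-salinity organism.
COEF = (-12.0, 0.45, 0.15)

# A tiny stand-in for the forecast grid produced by the physical model;
# each (temperature, salinity) cell is mapped to a likelihood.
grid = [(18.0, 10.0), (24.0, 12.0), (28.0, 15.0)]
probs = [habitat_likelihood(t, s, COEF) for t, s in grid]
print(probs)
```

In the operational system the same idea runs over every grid cell of the ROMS forecast fields, and the resulting probability maps are what is published as the species forecast.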
Peixin ZHAO
2013-01-01
In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and satisfies the sparsity property. The simulation studies show that the proposed variable selection method is workable.
Jones, Valerie M.; Rensink, Arend; Brinksma, Hendrik
2005-01-01
Mobile health systems can extend the enterprise computing system of the healthcare provider by bringing services to the patient any time and anywhere. We propose a model-driven design and development methodology for the development of the m-health components in such extended enterprise computing
The Extended Perturbation Method: New Insights on the New Keynesian Model
Andreasen, Martin Møller; Kronborg, Anders Farver
bound on inflation as implied by Calvo pricing. In contrast, extended perturbation generates stable dynamics as it enforces this bound. Extended perturbation also adds to existing evidence on downward nominal wage rigidities in the New Keynesian model, as we only find support for this friction when...
Chang, Chi-Cheng; Yan, Chi-Fang; Tseng, Ju-Shih
2012-01-01
Since convenience is one of the features of mobile learning, does it affect attitude and intention toward using mobile technology? The technology acceptance model (TAM), proposed by Davis (1989), was extended with perceived convenience in the present study. With regard to English language mobile learning, the variables in the extended TAM and its…
Floquet topological semimetal phases of an extended kicked Harper model
Bomantara, Raditya Weda; Raghava, Gudapati Naresh; Zhou, Longwen; Gong, Jiangbin
2016-02-01
Recent discoveries on topological characterization of gapless systems have attracted interest in both theoretical studies and experimental realizations. Examples of such gapless topological phases are Weyl semimetals, which exhibit three-dimensional (3D) Dirac cones (Weyl points), and nodal line semimetals, which are characterized by line nodes (two bands touching along a line). Inspired by our previous discoveries that the kicked Harper model exhibits many fascinating features of Floquet topological phases, in this paper we consider a generalization of the model, where two additional periodic system parameters are introduced into the Hamiltonian to serve as artificial dimensions, so as to simulate a 3D periodically driven system. We observe that by increasing the hopping strength and the kicking strength of the system, many new Floquet band touching points at Floquet quasienergies 0 and π will start to appear. Some of them are Weyl points, while the others form line nodes in the parameter space. By taking open boundary conditions along the physical dimension, edge states analogous to Fermi arcs in static Weyl semimetal systems are observed. Finally, by designing an adiabatic pumping scheme, the chirality of the Floquet-band Weyl points and the π Berry phase around Floquet-band line nodes can be manifested.
An extended Cellular Potts Model analyzing a wound healing assay.
Scianna, Marco
2015-07-01
A suitable Cellular Potts Model is developed to reproduce and analyze an in vitro wound-healing assay. The proposed approach is able both to quantify the invasive capacity of the overall cell population and to evaluate selected determinants of single cell movement (velocity, directional movement, and final displacement). In this respect, the present CPM allows us to capture differences and correlations in the migratory behavior of cells initially located at different distances from the wound edge. In the case of an undifferentiated extracellular matrix, the model then predicts that a maximal healing can be obtained by a chemically induced increment of cell elasticity and not by a chemically induced downregulation of intercellular adhesive contacts. Moreover, in the case of two-component substrates (formed by a mesh of collagenous-like threads and by a homogeneous medium), CPM simulations show that both fiber number and cell-fiber adhesiveness influence cell speed and wound closure rate in a biphasic fashion. On the contrary, the topology of the fibrous network affects the healing process by mediating the productive directional cell movement. The paper, also equipped with comments on the computational cost of the CPM algorithm, ends with a thorough discussion of the pertinent experimental and theoretical literature.
Extended models of nonlinear waves in liquid with gas bubbles
Kudryashov, Nikolay A
2016-01-01
In this work we generalize the models for nonlinear waves in a gas-liquid mixture by taking into account interphase heat transfer, surface tension, and weak liquid compressibility simultaneously in the derivation of the equations for nonlinear waves. We also take into consideration high order terms with respect to the small parameter. Two new nonlinear differential equations are derived for long weakly nonlinear waves in a liquid with gas bubbles by the reductive perturbation method, considering both high order terms with respect to the small parameter and the above mentioned physical properties. One of these equations is a perturbation of the Burgers equation and corresponds to the main influence of dissipation on nonlinear wave propagation. The other is a perturbation of the Burgers-Korteweg-de Vries equation and corresponds to the main influence of dispersion on nonlinear wave propagation.
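The two limiting equations named in the abstract can be written schematically as follows; this is a generic sketch in which the coefficients α, β, ν and the perturbation functionals F, G stand in for the paper's actual parameters and higher-order corrections, which the abstract does not give:

```latex
% Dissipation-dominated limit: a perturbed Burgers equation
u_t + \alpha\, u u_x - \nu\, u_{xx} = \varepsilon\, F[u]

% Dispersion-dominated limit: a perturbed Burgers--Korteweg--de Vries equation
u_t + \alpha\, u u_x + \beta\, u_{xxx} - \nu\, u_{xx} = \varepsilon\, G[u]
```

Here ε is the small parameter of the reductive perturbation expansion; at ε = 0 the equations reduce to the classical Burgers and Burgers-Korteweg-de Vries forms.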
Abelian cosmic string in the extended Starobinsky model of gravity
Graça, J P Morais
2016-01-01
We analyze numerically the behaviour of the solutions corresponding to an Abelian cosmic string taking into account an extension of the Starobinsky model, where the action of general relativity is replaced by $f(R) = R - 2\\Lambda + \\eta R^2 + \\rho R^m$, with $m > 2$. As an interesting result, we find that the angular deficit which characterizes the cosmic string decreases as the parameters $\\eta$ and $\\rho$ increase. We also find that the cosmic horizon due to the presence of a cosmological constant is affected in such a way that it can grow or shrink, depending on the vacuum expectation value of the scalar field and on the value of the cosmological constant.
EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES
Slavko Arsovski
2009-03-01
Full Text Available The basic subject of this work is a model of the impact of the new approach directives on product quality and safety and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, given that the infrastructure for applying the new approach directives has not been examined until now: it is not known which products or industries of Serbia are covered by the new approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increased profit from meeting the requirements of the new approach directives.
A New Empirical Model for Radar Scattering from Bare Soil Surfaces
Nicolas Baghdadi
2016-11-01
Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as extensive a dataset as the one proposed in this study. It contains a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically over regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model achieves very good performance for the different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). The model is easy to invert and could provide a way to improve the retrieval of soil parameters.
Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable
Elhorst, J. Paul
2001-01-01
This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
CML based estimation of extended Rasch models with the eRm package in R
PATRICK MAIR
2007-03-01
Full Text Available This paper presents an open source tool for computing extended Rasch models. It is realized in R (R Development Core Team, 2006) and available as the package eRm. In addition to ordinary Rasch models, extended models such as linear logistic test models, (linear) rating scale models, and (linear) partial credit models can be estimated. A striking feature of this package is the implementation of conditional maximum likelihood estimation techniques, which relate directly to Rasch's original concept of specific objectivity. The mathematical and epistemological benefits of this estimation method are discussed. Moreover, the capabilities of the eRm routine with respect to structural item response designs are demonstrated.
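The key property behind conditional maximum likelihood (CML) estimation is that, given a person's raw score, the probability of a Rasch response pattern no longer depends on the person parameter; the normalizing constant is an elementary symmetric function of the item parameters. A minimal sketch of that computation (in Python rather than R, with illustrative item difficulties; eRm's actual estimation routine is far more elaborate):

```python
from itertools import combinations
from math import exp, prod

def esf(eps, r):
    """Elementary symmetric function gamma_r(eps): sum over all item
    subsets of size r of the product of the easiness parameters eps_i.
    It normalizes the conditional likelihood given raw score r."""
    n = len(eps)
    return sum(prod(eps[i] for i in s) for s in combinations(range(n), r))

def cml_pattern_prob(pattern, difficulties):
    """P(response pattern | raw score r) under the Rasch model with
    eps_i = exp(-b_i); the person parameter cancels out, which is the
    basis of CML estimation and of specific objectivity."""
    eps = [exp(-b) for b in difficulties]
    r = sum(pattern)
    num = prod(e for e, x in zip(eps, pattern) if x)
    return num / esf(eps, r)

b = [-1.0, 0.0, 0.5, 1.2]            # illustrative item difficulties
p = cml_pattern_prob([1, 0, 1, 0], b)
print(p)
```

Summing these conditional probabilities over all patterns with the same raw score gives exactly 1, which is a convenient sanity check on the elementary symmetric functions.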
Vacuum stability in extended standard model with leptoquark
Bandyopadhyay, Priyotosh
2016-01-01
We investigate the standard model (SM) with the extension of a charged scalar having fractional electromagnetic charge of $-1/3$ unit and with lepton and baryon number violating couplings at tree level. Without directly taking part in electro-weak (EW) symmetry breaking, this scalar can affect the stability of the EW vacuum via loop effects. The impact of such a scalar, i.e., a leptoquark, on the perturbativity of the SM dimensionless couplings as well as on the new physics couplings has been studied at two-loop order. The vacuum stability of the Higgs potential is checked using a one-loop renormalization group (RG) improved effective potential approach with two-loop beta functions for all the couplings. From the stability analysis, various bounds are drawn on the parameter space by identifying the regions corresponding to metastability and stability of the EW vacuum. Later we also address the Higgs mass fine-tuning issue via the Veltman condition, and the presence of such a scalar increases the scale up to which the theory can be considered as ...
Diffusion and topological neighbours in flocks of starlings: relating a model to empirical data.
Hemelrijk, Charlotte K; Hildenbrandt, Hanno
2015-01-01
Moving in a group while avoiding collisions with group members causes internal dynamics in the group. Although these dynamics have recently been measured quantitatively in starling flocks (Sturnus vulgaris), it is unknown what causes them. Computational models have shown that collective motion in groups is likely due to attraction, avoidance and, possibly, alignment among group members. Empirical studies show that starlings adjust their movement to a fixed number of closest neighbours or topological range, namely 6 or 7, and assume that each of the three activities is done with the same number of neighbours (topological range). Here, we start from the hypothesis that escape behaviour is more effective at preventing collisions in a flock when avoiding the single closest neighbour than when compromising by avoiding 6 or 7 of them. For alignment and attraction, we keep to the empirical topological range. We investigate how avoiding one or several neighbours affects the internal dynamics of flocks of starlings in our computational model StarDisplay. By comparing to empirical data, we confirm that internal dynamics resemble empirical data more closely if flock members avoid merely their single closest neighbour. Our model shows that considering a different number of interaction partners per activity represents a useful perspective and that changing a single parameter, namely the number of interaction partners that are avoided, has several effects through self-organisation.
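The interaction rule tested above, avoid only the single closest neighbour while aligning with and being attracted to the 7 nearest, can be sketched in a boids-style update. The weights and time step below are illustrative assumptions, not StarDisplay's calibrated parameters:

```python
import math
import random

N_TOPO = 7      # topological range for alignment and attraction
DT = 0.1

def knn(i, pos, k):
    """Indices of the k nearest neighbours of agent i (metric distance)."""
    order = sorted((j for j in range(len(pos)) if j != i),
                   key=lambda j: math.dist(pos[i], pos[j]))
    return order[:k]

def step(pos, vel):
    """One Euler step: avoidance uses ONLY the single closest neighbour,
    while alignment and attraction average over the 7 nearest
    (topological) neighbours."""
    out_p, out_v = [], []
    for i, (p, v) in enumerate(zip(pos, vel)):
        nb = knn(i, pos, N_TOPO)
        c = nb[0]                                  # single closest neighbour
        d = math.dist(p, pos[c]) + 1e-9
        sep = ((p[0] - pos[c][0]) / d**2, (p[1] - pos[c][1]) / d**2)
        ali = (sum(vel[j][0] for j in nb) / len(nb) - v[0],
               sum(vel[j][1] for j in nb) / len(nb) - v[1])
        coh = (sum(pos[j][0] for j in nb) / len(nb) - p[0],
               sum(pos[j][1] for j in nb) / len(nb) - p[1])
        ax = 1.0 * sep[0] + 0.5 * ali[0] + 0.2 * coh[0]
        ay = 1.0 * sep[1] + 0.5 * ali[1] + 0.2 * coh[1]
        nv = (v[0] + DT * ax, v[1] + DT * ay)
        out_v.append(nv)
        out_p.append((p[0] + DT * nv[0], p[1] + DT * nv[1]))
    return out_p, out_v

random.seed(1)
pos = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(100):
    pos, vel = step(pos, vel)
print(pos[0])
```

Switching the separation term to average over all of `nb` instead of `nb[0]` reproduces the "compromise" variant the paper argues against, which makes the two hypotheses easy to compare in simulation.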
Ines Baccouche
2017-05-01
Full Text Available Accurate modeling of the nonlinear relationship between the open circuit voltage (OCV and the state of charge (SOC is required for adaptive SOC estimation during the lithium-ion (Li-ion battery operation. Online SOC estimation should meet several constraints, such as the computational cost, the number of parameters, as well as the accuracy of the model. In this paper, these challenges are considered by proposing an improved simplified and accurate OCV model of a nickel manganese cobalt (NMC Li-ion battery, based on an empirical analytical characterization approach. In fact, composed of double exponential and simple quadratic functions containing only five parameters, the proposed model accurately follows the experimental curve with a minor fitting error of 1 mV. The model is also valid at a wide temperature range and takes into account the voltage hysteresis of the OCV. Using this model in SOC estimation by the extended Kalman filter (EKF contributes to minimizing the execution time and to reducing the SOC estimation error to only 3% compared to other existing models where the estimation error is about 5%. Experiments are also performed to prove that the proposed OCV model incorporated in the EKF estimator exhibits good reliability and precision under various loading profiles and temperatures.
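The abstract describes a five-parameter OCV(SOC) curve of double-exponential-plus-quadratic type feeding an extended Kalman filter. The sketch below uses a hypothetical five-parameter curve of a similar family (one exponential plus linear and quadratic terms; the coefficients are illustrative, not the paper's fitted NMC values) inside a one-state EKF for SOC:

```python
import math
import random

# Hypothetical 5-parameter OCV(SOC) curve; coefficients are illustrative.
P = (2.8, -0.45, -8.0, 0.6, 0.3)

def ocv(s):
    p0, p1, p2, p3, p4 = P
    return p0 + p1 * math.exp(p2 * s) + p3 * s + p4 * s * s

def docv(s):
    """Analytic slope dOCV/dSOC, used as the EKF measurement Jacobian."""
    p0, p1, p2, p3, p4 = P
    return p1 * p2 * math.exp(p2 * s) + p3 + 2.0 * p4 * s

CAP_AH, R_INT, DT = 2.0, 0.05, 1.0   # capacity (Ah), ohmic resistance, step (s)

def ekf_soc(voltages, current, s0=0.5, p0=0.1, q=1e-7, r=2.5e-5):
    """One-state extended Kalman filter: coulomb-counting prediction,
    OCV-based terminal-voltage correction linearized through docv."""
    s, p = s0, p0
    for v in voltages:
        s -= current * DT / (3600.0 * CAP_AH)        # predict (coulomb count)
        p += q
        h = docv(s)
        y = v - (ocv(s) - current * R_INT)           # innovation
        k = p * h / (h * h * p + r)
        s += k * y                                   # update
        p *= (1.0 - k * h)
    return s

# Synthetic 1 A discharge from SOC = 0.95; start the filter badly at 0.5.
random.seed(0)
true_s, i_load, meas = 0.95, 1.0, []
for _ in range(1800):
    true_s -= i_load * DT / (3600.0 * CAP_AH)
    meas.append(ocv(true_s) - i_load * R_INT + random.gauss(0.0, 0.005))
est = ekf_soc(meas, i_load)
print(true_s, est)
```

Despite the deliberately wrong initial SOC, the voltage correction pulls the estimate onto the true trajectory within a few samples, which is the behaviour the paper exploits to keep the estimation error small.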
An anthology of theories and models of design philosophy, approaches and empirical explorations
Blessing, Lucienne
2014-01-01
While investigations into both theories and models have remained a major strand of engineering design research, current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models, and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: · significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; · current models of design, from a function behavior structure model to an integrated model; · important empirical research findings from studies into design; and · philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...
An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)
Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens
2009-01-01
buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF was performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114, …) for two operating modes of DSF: 1. Thermal buffer mode (closed DSF cavity) and 2. External air curtain mode (naturally ventilated DSF cavity with the top and bottom openings open to outdoors). By carrying out the empirical tests, it was concluded that all models experience difficulties in predictions during the peak solar loads. None of the models was consistent enough when comparing simulation results with experimental data for the ventilated cavity. However, some models showed reasonable agreement with the experimental results for the thermal buffer mode.
Empirical study and modeling of human behaviour dynamics of comments on Blog posts
Guo, Jin-Li
2010-01-01
On-line communities offer a great opportunity to investigate human dynamics, because much information about individuals is registered in databases. In this paper, based on statistics of online comments on Blog posts, we first present an empirical study of the comment arrival-time interval distribution. We find that people interested in some subjects gradually disappear and the interval distribution is a power law. According to this feature, we propose a model with gradually decaying interest. We give a rigorous analysis of the model using non-homogeneous Poisson processes and obtain an analytic expression for the interval distribution. Our analysis indicates that the time interval between two consecutive events follows a power-law distribution with a tunable exponent, which can be controlled by the model parameters and lies in the interval (1, ∞). The analytical result agrees well with the empirical results, obeying an approximately power-law form. Our model provides a theoretical basis for human behaviour dyn...
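A non-homogeneous Poisson process with a decaying rate is easy to simulate by Lewis-Shedler thinning, which gives a concrete feel for how decaying interest stretches the inter-event gaps. The specific rate function below is a hedged stand-in; the paper's exact decaying-interest form is not given in the abstract:

```python
import random

def simulate_decaying_interest(lam0=5.0, theta=1.5, horizon=200.0, seed=7):
    """Event times of a non-homogeneous Poisson process with a power-law
    decaying rate lambda(t) = lam0 * (1 + t) ** (-theta), simulated by
    Lewis-Shedler thinning (lambda is decreasing, so lam0 bounds it)."""
    random.seed(seed)
    t, events = 0.0, []
    lam_max = lam0
    while t < horizon:
        t += random.expovariate(lam_max)     # candidate from the bounding rate
        if t >= horizon:
            break
        lam_t = lam0 * (1.0 + t) ** (-theta)
        if random.random() < lam_t / lam_max:
            events.append(t)                 # accept with probability lam_t/lam_max
    return events

ev = simulate_decaying_interest()
gaps = [b - a for a, b in zip(ev, ev[1:])]
print(len(ev), gaps[:3])
```

Early gaps are short while interest (the rate) is high; later gaps stretch out, which is the qualitative mechanism behind the heavy-tailed interval distribution the paper derives analytically.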
Carl Angell
2008-11-01
Full Text Available This paper reports on the implementation of an upper secondary physics curriculum with an empirical-mathematical modelling approach. In project PHYS 21, we used the notion of multiple representations of physical phenomena as a framework for developing modelling activities for students. Interviews with project teachers indicate that implementation of empirical-mathematical modelling varied widely among classes. The new curriculum ideas were adapted to teachers' ways of doing and reflecting on teaching and learning rather than radically changing these. Modelling was taken up as a method for reaching the traditional content goals of physics teaching, whereas goals related to process skills and the nature of science were given a lower priority by the teachers. Our results indicate that more attention needs to be focused on teachers' and students' meta-understanding of physics and physics learning.
Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples
Tanongkankit Yardfon
2016-01-01
Full Text Available This research aimed to study the drying kinetics and determine an empirical model for fresh pineapple and pineapple pre-treated with sucrose solution at different concentrations during drying. Samples 3 mm thick were immersed in 30, 40 and 50 °Brix sucrose solution before hot air drying at temperatures of 60, 70 and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time. An increase in sucrose concentration led to longer drying time. According to the statistical criteria of the highest coefficient of determination (R2), the lowest chi-square (χ2), and the lowest root mean square error (RMSE), the Logarithmic model was the best model for describing the drying behavior of samples soaked in 30, 40 and 50 °Brix sucrose solution.
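The Logarithmic thin-layer model selected above is conventionally written MR(t) = a·exp(-k·t) + c. A minimal fitting sketch, grid-searching the offset c and solving the remaining log-linear least squares (a simple stand-in for a full nonlinear fit; the data below are synthetic, not the paper's measurements):

```python
import math

def fit_logarithmic(times, mr):
    """Fit MR(t) = a*exp(-k*t) + c by grid-searching c and linearising
    the rest: ln(MR - c) = ln(a) - k*t, solved by ordinary least squares."""
    best = None
    n = len(times)
    for step in range(0, 41):                    # c in [0, 0.20]
        c = step * 0.005
        if any(m <= c for m in mr):
            continue
        ys = [math.log(m - c) for m in mr]
        tbar = sum(times) / n
        ybar = sum(ys) / n
        sxx = sum((t - tbar) ** 2 for t in times)
        sxy = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
        slope = sxy / sxx                        # equals -k
        a = math.exp(ybar - slope * tbar)
        k = -slope
        sse = sum((a * math.exp(-k * t) + c - m) ** 2
                  for t, m in zip(times, mr))
        if best is None or sse < best[0]:
            best = (sse, a, k, c)
    return best[1:]                              # (a, k, c)

# Synthetic moisture-ratio data generated from known parameters.
A, K, C = 0.95, 0.02, 0.05
ts = [20.0 * i for i in range(13)]               # 0..240 min
data = [A * math.exp(-K * t) + C for t in ts]
a_hat, k_hat, c_hat = fit_logarithmic(ts, data)
print(a_hat, k_hat, c_hat)
```

The fit recovers the generating parameters; in practice the model comparison would rank this fit against Newton, Page, and other thin-layer models using the R², χ², and RMSE criteria the abstract names.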
Huong, Audrey; Ngu, Xavier
2014-01-01
This work presents the use of extended Modified Lambert Beer (MLB) model for accurate and continuous monitoring of percent blood carboxyhemoglobin (COHb) (SCO) and oxyhemoglobin (OxyHb) saturation (SO2...
Extended sigma-model in nontrivially deformed field-antifield formalism
Batalin, Igor A
2015-01-01
We propose an action for the extended sigma-models in the most general setting of the kinetic term allowed in the nontrivially deformed field-antifield formalism. We show that the classical equations of motion naturally take their desired canonical form.
National Aeronautics and Space Administration — Estimation of aerodynamic models for the control of damaged aircraft using an innovative differential vortex lattice method tightly coupled with an extended Kalman...
PANG Hou-Rong; PING Jia-Lun; WANG Fan; ZHAO En-Guang
2004-01-01
Promising high strangeness dibaryons are studied with the extended quark delocalization and color screening model. It is shown that besides the H particle and di-Ω, there might be other dibaryon candidates worth searching for experimentally, such as NΩ.
The empirical likelihood goodness-of-fit test for regression model
Li-xing ZHU; Yong-song QIN; Wang-li XU
2007-01-01
Goodness-of-fit tests for regression models have received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.
An Empirical Study on End-users Productivity Using Model-based Spreadsheets
Beckwith, Laura; Fernandes, João Paulo; Saraiva, João
2011-01-01
Spreadsheets are widely used, and studies have shown that most end-user spreadsheets contain nontrivial errors. To improve end-users productivity, recent research proposes the use of a model-driven engineering approach to spreadsheets. In this paper we conduct the first systematic empirical study to assess the effectiveness and efficiency of this approach. A set of spreadsheet end users worked with two different model-based spreadsheets, and we present and analyze here the results achieved.
Hospetitiveness – the Empirical Model of Competitiveness in Romanian Hospitality Industry
Radu Emilian; Claudia Elena Tuclea; Madalina Lavinia Tala; Catalina Nicoleta Brîndusoiu
2009-01-01
Our interest is focused on an important sector of the national economy: the hospitality industry. The paper is the result of a careful analysis of the literature and of a field research. According to the answers of hotels' managers, competitiveness is based mainly on service quality and cost control. The analysis of the questionnaires and the dedicated literature led us to the design of a competitiveness model for the hospitality industry, called "Hospetitiveness – The empirical model of competitiveness...
Cross–Project Defect Prediction With Respect To Code Ownership Model: An Empirical Study
Marian Jureczko
2015-06-01
Full Text Available The paper presents an analysis of 83 versions of industrial, open-source and academic projects. We have empirically evaluated whether those project types constitute separate classes of projects with regard to defect prediction. Statistical tests proved that there exist significant differences between the models trained on the aforementioned project classes. This work makes the next step towards cross-project reusability of defect prediction models and facilitates their adoption, which has been very limited so far.
An empirical approach to update multivariate regression models intended for routine industrial use
Garcia-Mencia, M.V.; Andrade, J.M.; Lopez-Mahia, P.; Prada, D. [University of La Coruna, La Coruna (Spain). Dept. of Analytical Chemistry
2000-11-01
Many problems currently tackled by analysts are highly complex and, accordingly, multivariate regression models need to be developed. Two intertwined topics are important when such models are to be applied within industrial routines: (1) Did the model account for the 'natural' variance of the production samples? (2) Is the model stable over time? This paper focuses on the second topic and presents an empirical approach in which predictive models developed using Mid-FTIR with PLS and PCR retained their utility for about nine months when used to predict the octane number of platforming naphthas in a petrochemical refinery. 41 refs., 10 figs., 1 tab.
Off-site interaction effect in the Extended Hubbard Model with SCRPA method
Harir, S.; Bennai, M.; Boughaleb, Y.
2009-01-01
The Self-Consistent Random Phase Approximation (SCRPA) and a Direct Analytical (DA) method are proposed to solve the Extended Hubbard Model in 1D. We have considered an Extended Hubbard Model (EHM) including on-site and off-site interactions for closed chains in one dimension with periodic boundary conditions. The comparison of the SCRPA results with those obtained by the Direct Analytical approach shows that the SCRPA treats the problem of these closed chains in a rigorous manner. The analysi...
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict it. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states is reflected in the changes of their territories, populations and budgets. The model was simulated for the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
Extended ARMA models for estimating price developments on day-ahead electricity markets
Swider, Derk J. [Institute of Energy Economics and the Rational Use of Energy, University of Stuttgart, Hessbruehlstr. 49a, 70565 Stuttgart (Germany); Weber, Christoph [University of Duisburg-Essen, Universitaetsstr. 12, 45117 Essen (Germany)
2007-04-15
In this paper extended models for estimating price developments on electricity markets are presented. The models consider deviations from the normality hypothesis of the prices. Based on an ARMA model, combinations with GARCH, Gaussian-mixture and switching-regime approaches are comparatively discussed. The comparison is based on historic electricity prices of the spot and two reserve markets in Germany. It is shown that the proposed extended models lead to significantly improved representations of the considered stochastic price processes. It is inferred that these models may be preferred for estimating price developments on electricity markets. (author)
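As a sketch of the model class discussed here, an AR(1) mean equation with GARCH(1,1) errors can be simulated in a few lines; the coefficients below are illustrative assumptions, not estimates from the German spot or reserve market data:

```python
import numpy as np

# Simulate an AR(1) price series whose innovations follow GARCH(1,1),
# i.e. conditional variance h[i] = omega + alpha*eps[i-1]^2 + beta*h[i-1].
rng = np.random.default_rng(1)
n, phi = 5000, 0.6                     # AR(1) coefficient (assumed)
omega, alpha, beta = 0.1, 0.1, 0.85    # GARCH(1,1) parameters (assumed)

p = np.zeros(n)                        # de-meaned (log) price, illustrative
h = np.full(n, omega / (1 - alpha - beta))   # start at unconditional variance
eps = np.zeros(n)
for i in range(1, n):
    h[i] = omega + alpha * eps[i - 1] ** 2 + beta * h[i - 1]
    eps[i] = np.sqrt(h[i]) * rng.standard_normal()
    p[i] = phi * p[i - 1] + eps[i]

# GARCH innovations deviate from normality via heavy tails:
z = eps / eps.std()
kurt = np.mean(z ** 4) - 3
print(f"sample excess kurtosis of innovations: {kurt:.2f}")
```

In practice such a model would be estimated from price data by maximum likelihood; the simulation only illustrates why the normality hypothesis fails for the innovations.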
An Empirically Driven Time-Dependent Model of the Solar Wind
Linker, Jon A.; Caplan, Ronald M.; Downs, Cooper; Lionello, Roberto; Riley, Pete; Mikic, Zoran; Henney, Carl J.; Arge, Charles N.; Kim, Tae; Pogorelov, Nikolai
2016-05-01
We describe the development and application of a time-dependent model of the solar wind. The model is empirically driven, starting from magnetic maps created with the Air Force Data Assimilative Photospheric flux Transport (ADAPT) model at a daily cadence. Potential field solutions are used to model the coronal magnetic field, and an empirical specification is used to develop boundary conditions for an MHD model of the solar wind. The time-dependent MHD simulation shows classic features of stream structure in the interplanetary medium that are seen in steady-state models; it also shows time evolutionary features that do not appear in a steady-state approach. The model results compare reasonably well with 1 AU OMNI observations. Data gaps when SOLIS magnetograms were unavailable hinder the model performance. The reasonable comparisons with observations suggest that this modeling approach is suitable for driving long term models of the outer heliosphere. Improvements to the ingestion of magnetograms in flux transport models will be necessary to apply this approach in a time-dependent space weather model.
Case, Michael J; Lawler, Joshua J
2017-05-01
Empirical and mechanistic models have both been used to assess the potential impacts of climate change on species distributions, and each modeling approach has its strengths and weaknesses. Here, we demonstrate an approach to projecting climate-driven changes in species distributions that draws on both empirical and mechanistic models. We combined projections from a dynamic global vegetation model (DGVM) that simulates the distributions of biomes based on basic plant functional types with projections from empirical climatic niche models for six tree species in northwestern North America. These integrated model outputs incorporate important biological processes, such as competition, physiological responses of plants to changes in atmospheric CO2 concentrations, and fire, as well as what are likely to be species-specific climatic constraints. We compared the integrated projections to projections from the empirical climatic niche models alone. Overall, our integrated model outputs projected a greater climate-driven loss of potentially suitable environmental space than did the empirical climatic niche model outputs alone for the majority of modeled species. Our results also show that refining species distributions with DGVM outputs had large effects on the geographic locations of suitable habitat. We demonstrate one approach to integrating the outputs of mechanistic and empirical niche models to produce bioclimatic projections. But perhaps more importantly, our study reveals the potential for empirical climatic niche models to over-predict suitable environmental space under future climatic conditions. © 2016 John Wiley & Sons Ltd.
Enzymatic saccharification of acid pretreated corn stover: Empirical and fractal kinetic modelling.
Wojtusik, Mateusz; Zurita, Mauricio; Villar, Juan C; Ladero, Miguel; Garcia-Ochoa, Felix
2016-11-01
Enzymatic hydrolysis of corn stover was studied at agitation speeds from 50 to 500 rpm in a stirred tank bioreactor, at high solids concentration (20% w/w dry solid/suspension), 50°C and 15.5 mg protein·g glucan⁻¹. Two empirical kinetic models were fitted to the data: a potential model and a fractal one. For the former, the global order dramatically decreases from 13 to 2 as agitation speed increases, suggesting an increment in the access of enzymes to cellulose in terms of chemisorption followed by hydrolysis. The fractal kinetic model fits the data better; its kinetic constant increases with agitation speed up to a constant value at 250 rpm and above, when mass transfer limitations are overcome. In contrast, the fractal exponent decreases with rising agitation speed to circa 0.19, suggesting higher accessibility of enzymes to the substrate.
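Fractal kinetics as invoked above replace a constant rate with a time-dependent one, k(t) = k0·t⁻ʰ. Under an assumed first-order fractal form (illustrative, not the paper's exact rate law), the exponent h can be recovered by a log-log linearization:

```python
import numpy as np

# Fractal first-order kinetics: dC/dt = -k0 * t**(-h) * C integrates to
# released product G(t) = G_inf * (1 - exp(-k0 * t**(1-h) / (1-h))).
k0, h, g_inf = 0.05, 0.19, 100.0     # h near the reported ~0.19 (illustrative)
t = np.linspace(1.0, 72.0, 50)       # hours (assumed)
g = g_inf * (1 - np.exp(-k0 * t ** (1 - h) / (1 - h)))

# Linearization: ln(-ln(1 - G/G_inf)) = ln(k0/(1-h)) + (1-h) * ln(t),
# so the slope of a straight-line fit gives 1 - h.
y = np.log(-np.log(1 - g / g_inf))
slope, intercept = np.polyfit(np.log(t), y, 1)
h_est = 1 - slope
print(f"recovered fractal exponent h = {h_est:.3f}")
```

With real, noisy hydrolysis data the same linearization gives a quick diagnostic of whether a fractal exponent is needed at a given agitation speed.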
Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne
2015-01-01
Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.
Measurements and empirical model of the acoustic properties of reticulated vitreous carbon
Muehleisen, Ralph T.; Beamer, C. Walter; Tinianov, Brandon D.
2005-02-01
Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pores-per-inch RVC foams with flow resistivities from 1759 to 10 782 Pa·s·m⁻² in the frequency range of 330 Hz to 2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
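For context, the kind of empirical power-law model being compared against can be sketched with the commonly quoted Delany-Bazley relations, which express the characteristic impedance Zc and wavenumber k through the single parameter X = ρ0·f/σ (roughly valid for 0.01 < X < 1). The coefficients below are as usually tabulated; treat them as illustrative rather than authoritative:

```python
import numpy as np

rho0, c0 = 1.21, 343.0            # air density (kg/m^3) and sound speed (m/s)

def delany_bazley(f, sigma):
    """Characteristic impedance Zc and wavenumber k of a porous material
    from frequency f (Hz) and flow resistivity sigma (Pa*s/m^2)."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X ** -0.754 - 1j * 0.087 * X ** -0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X ** -0.700
                                - 1j * 0.189 * X ** -0.595)
    return Zc, k

# Evaluate at 1 kHz for the highest flow resistivity quoted in the abstract
Zc, k = delany_bazley(1000.0, 10782.0)
print(f"Zc = {Zc:.0f} Pa s/m, k = {k:.2f} 1/m")
```

A new power-law model of the same functional form would simply refit the eight coefficients to the RVC measurements, which is what the abstract reports.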
Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure
Sorin Zaharia; C.Z. Cheng
2002-06-18
In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J×B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J×B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
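A Poisson-type solve of this kind can be illustrated with a small finite-difference sketch; the right-hand side below is a synthetic stand-in for the divergence of J×B, not the T96 field:

```python
import numpy as np

# Solve del^2 P = S on the unit square with P = 0 on the boundary,
# using 5-point Jacobi iteration.  S is chosen so the analytic solution
# is known: P = -S / (2*pi^2).
n = 65
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
S = np.sin(np.pi * X) * np.sin(np.pi * Y)   # stand-in for div(J x B)
hgrid = x[1] - x[0]

P = np.zeros((n, n))
for _ in range(5000):
    P[1:-1, 1:-1] = 0.25 * (P[:-2, 1:-1] + P[2:, 1:-1]
                            + P[1:-1, :-2] + P[1:-1, 2:]
                            - hgrid ** 2 * S[1:-1, 1:-1])

err = np.max(np.abs(P + S / (2 * np.pi ** 2)))
print(f"max error vs analytic solution: {err:.2e}")
```

The paper's actual computation works in flux coordinates with the model field supplying the source term; this sketch only shows the numerical structure of the equatorial-plane solve.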
Classification and estimation in the Stochastic Block Model based on the empirical degrees
Channarond, Antoine; Robin, Stéphane
2011-01-01
The Stochastic Block Model (Holland et al., 1983) is a mixture model for heterogeneous network data. Unlike the usual statistical framework, new nodes give additional information about the previous ones in this model. As a result, the degree distribution concentrates around class-dependent values. We show under a mild assumption that classification, estimation and model selection can actually be achieved with no more than the empirical degree data. We provide an algorithm able to process very large networks and consistent estimators based on it. In particular, we prove a bound on the probability of misclassifying at least one node, including when the number of classes grows.
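The degree-concentration idea can be demonstrated in a toy two-class SBM: classes with different connection probabilities produce separated degree distributions, so a simple threshold on empirical degrees recovers the classes. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
z = np.repeat([0, 1], n // 2)               # true (hidden) classes
Pmat = np.array([[0.10, 0.05],
                 [0.05, 0.30]])             # class-pair connection probabilities
probs = Pmat[z][:, z]
A = (rng.random((n, n)) < probs).astype(int)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric adjacency, no self-loops

deg = A.sum(axis=1)                         # empirical degrees
thresh = deg.mean()                         # crude 1-D split of the degrees
z_hat = (deg > thresh).astype(int)
acc = max((z_hat == z).mean(), (1 - z_hat == z).mean())   # up to label switch
print(f"classification accuracy from degrees alone: {acc:.3f}")
```

The paper's algorithm is more careful (and comes with consistency guarantees as the number of classes grows), but this is the mechanism it exploits: degrees alone carry the class information.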
Empirical valence bond model of an SN2 reaction in polar and nonpolar solvents
Benjamin, Ilan
2008-08-01
A new model for the substitution nucleophilic reaction (SN2) in solution is described using the empirical valence bond (EVB) method. The model includes a generalization to three dimensions of a collinear gas phase EVB model developed by Mathis et al. [J. Mol. Liq. 61, 81 (1994)] and a parametrization of solute-solvent interactions of four different solvents (water, ethanol, chloroform, and carbon tetrachloride). The model is used to compute (in these four solvents) reaction free energy profiles, reaction and solvent dynamics, a two-dimensional reaction/solvent free energy map, as well as a number of other properties that in the past have mostly been estimated.
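The EVB construction referenced here takes the adiabatic ground state as the lower eigenvalue of a 2×2 matrix of diabatic energies coupled by an off-diagonal element H12. The harmonic diabats and coupling below are schematic assumptions, not the parametrization of Mathis et al.:

```python
import numpy as np

def evb_ground(s, h12=5.0):
    """Lower eigenvalue of the 2x2 EVB Hamiltonian along a reaction
    coordinate s; v1/v2 are schematic reactant/product diabats."""
    v1 = 50.0 * (s + 0.5) ** 2            # reactant diabat (illustrative)
    v2 = 50.0 * (s - 0.5) ** 2 + 5.0      # product diabat, endothermic by 5
    return 0.5 * (v1 + v2) - 0.5 * np.sqrt((v1 - v2) ** 2 + 4 * h12 ** 2)

s = np.linspace(-1.5, 1.5, 601)
e = evb_ground(s)
barrier = e[(s > -0.4) & (s < 0.4)].max() - e.min()
print(f"adiabatic barrier ~ {barrier:.1f} (arbitrary units)")
```

In the solution-phase model, v1 and v2 additionally include solute-solvent interaction terms, which is how the solvent reshapes the free energy profile along the reaction coordinate.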
Wang, Wenxiu; Kuang, Yaoqiu; Huang, Ningsheng; Zhao, Daiqing
2014-01-01
The decoupling elasticity decomposition quantitative model of energy-related carbon emission in Guangdong is established based on the extended Kaya identity and Tapio decoupling model for the first time, to explore the decoupling relationship and its internal mechanism between energy-related carbon emission and economic growth in Guangdong. Main results are as follows. (1) Total production energy-related carbon emissions in Guangdong increase from 4128 × 10⁴ tC in 1995 to 14396 × 10⁴ tC in 2011. Decoupling elasticity values of energy-related carbon emission and economic growth increase from 0.53 in 1996 to 0.85 in 2011, and its decoupling state turns from weak decoupling in 1996–2004 to expansive coupling in 2005–2011. (2) Land economic output and energy intensity are the first inhibiting factor and the first promoting factor to energy-related carbon emission decoupling from economic growth, respectively. The development speeds of land urbanization and population urbanization, especially land urbanization, play decisive roles in the change of total decoupling elasticity values. (3) Guangdong can realize decoupling of energy-related carbon emission from economic growth effectively by adjusting the energy mix and industrial structure, coordinating the development speed of land urbanization and population urbanization effectively, and strengthening the construction of carbon sink. PMID:24782666
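The Tapio decoupling elasticity used above is the ratio of the relative change in emissions to the relative change in economic output, classified into decoupling states by fixed bands. A minimal sketch for the growth case (emissions and GDP both increasing, as in the abstract); the numeric example is illustrative:

```python
# Tapio elasticity e = (dC/C) / (dG/G); for the growth case the usual bands
# are: e < 0.8 weak decoupling, 0.8 <= e <= 1.2 expansive coupling,
# e > 1.2 expansive negative decoupling.
def tapio_elasticity(c0, c1, g0, g1):
    return ((c1 - c0) / c0) / ((g1 - g0) / g0)

def classify(e):
    # growth case only (both carbon and GDP increasing)
    if e < 0.8:
        return "weak decoupling"
    if e <= 1.2:
        return "expansive coupling"
    return "expansive negative decoupling"

# Illustrative: carbon up 6% while GDP is up 10% -> e = 0.6
e = tapio_elasticity(100.0, 106.0, 1000.0, 1100.0)
print(f"e = {e:.2f}: {classify(e)}")
```

With the abstract's values, e = 0.53 (1996) falls in weak decoupling and e = 0.85 (2011) in expansive coupling, matching the stated transition.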
Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials
Stéphane Guichard
2015-12-01
Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM. A mathematical model dedicated to PCMs based on the heat apparent capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt® coupled to the building simulation code made it possible to determine an adequate parameter set. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
Dorbath, Felix; Nagel, Björn; Gollnick, Volker
2011-01-01
This paper introduces the concept of the ELWIS model generator for Finite Element models of aircraft wing structures. The physical modelling of the structure is extended beyond the wing primary structures, to increase the level of accuracy for aircraft which diverge from existing configurations. Also the impact of novel high lift technologies on structural masses can be captured already in the early stages of design by using the ELWIS models. The ELWIS model generator is able to c...
Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat
2017-01-01
To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrub sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha⁻¹ yr⁻¹ for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing the critical load range for mountain hay meadows to 10-15 kg N ha⁻¹ yr⁻¹.
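The change-point idea can be sketched with a frequentist stand-in for the paper's Bayesian machinery: species richness is flat below the change point and declines linearly above it, and the change point is found by grid search over least-squares fits. The data and parameters below are synthetic:

```python
import numpy as np

# Piecewise model: richness = b0 + b1 * max(ndep - cp, 0) + noise,
# with cp (the change point) playing the role of the critical load.
rng = np.random.default_rng(4)
ndep = rng.uniform(0.0, 40.0, 200)          # N deposition, kg N / ha / yr
true_cp = 12.0
rich = 25.0 - 0.8 * np.clip(ndep - true_cp, 0, None) \
       + rng.normal(0.0, 1.5, 200)

def sse(cp):
    X = np.column_stack([np.ones_like(ndep), np.clip(ndep - cp, 0, None)])
    beta, *_ = np.linalg.lstsq(X, rich, rcond=None)
    return np.sum((X @ beta - rich) ** 2)

grid = np.linspace(2.0, 38.0, 361)
cp_hat = grid[np.argmin([sse(c) for c in grid])]
print(f"estimated change point: {cp_hat:.1f} kg N per ha per yr")
```

The Bayesian version in the paper additionally yields a posterior distribution for the change point and handles the confounding covariates, which this sketch omits.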
Asymptotic Theory for Extended Asymmetric Multivariate GARCH Processes
M. Asai (Manabu); M.J. McAleer (Michael)
2016-01-01
The paper considers various extended asymmetric multivariate conditional volatility models, and derives appropriate regularity conditions and associated asymptotic theory. This enables checking of internal consistency and allows valid statistical inferences to be drawn based on empirical
An empirical model for probabilistic decadal prediction: global attribution and regional hindcasts
Suckling, Emma B.; van Oldenborgh, Geert Jan; Eden, Jonathan M.; Hawkins, Ed
2016-07-01
Empirical models, designed to predict surface variables over seasons to decades ahead, provide useful benchmarks for comparison against the performance of dynamical forecast systems; they may also be employable as predictive tools for use by climate services in their own right. A new global empirical decadal prediction system is presented, based on a multiple linear regression approach designed to produce probabilistic output for comparison against dynamical models. A global attribution is performed initially to identify the important forcing and predictor components of the model. Ensemble hindcasts of surface air temperature anomaly fields are then generated, based on the forcings and predictors identified as important, under a series of different prediction `modes' and their performance is evaluated. The modes include a real-time setting, a scenario in which future volcanic forcings are prescribed during the hindcasts, and an approach which exploits knowledge of the forced trend. A two-tier prediction system, which uses knowledge of future sea surface temperatures in the Pacific and Atlantic Oceans, is also tested, but within a perfect knowledge framework. Each mode is designed to identify sources of predictability and uncertainty, as well as investigate different approaches to the design of decadal prediction systems for operational use. It is found that the empirical model shows skill above that of persistence hindcasts for annual means at lead times of up to 10 years ahead in all of the prediction modes investigated. It is suggested that hindcasts which exploit full knowledge of the forced trend due to increasing greenhouse gases throughout the hindcast period can provide more robust estimates of model bias for the calibration of the empirical model in an operational setting. The two-tier system shows potential for improved real-time prediction, given the assumption that skilful predictions of large-scale modes of variability are available. The empirical
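The multiple-linear-regression core of such an empirical prediction system can be sketched as follows: regress a temperature-anomaly series on forcing and variability predictors, then form a probabilistic hindcast ensemble by resampling residuals. The predictors and data below are synthetic stand-ins, not the system's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1960, 2011)
ghg = 0.02 * (years - 1960)                    # forced-trend proxy (assumed)
enso = rng.normal(0.0, 1.0, years.size)        # internal-variability predictor
temp = 1.0 * ghg + 0.1 * enso + rng.normal(0.0, 0.05, years.size)

# Fit the multiple linear regression temp ~ 1 + ghg + enso
X = np.column_stack([np.ones_like(ghg), ghg, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# Probabilistic "hindcast" for a held-out predictor state: add resampled
# residuals to the deterministic regression prediction
x_new = np.array([1.0, 0.02 * 45, 0.5])
ensemble = x_new @ beta + rng.choice(resid, size=500)
print(f"hindcast: {ensemble.mean():.2f} +/- {ensemble.std():.2f} K")
```

The real system does this field-by-field over the globe and evaluates the resulting ensembles against persistence, but the regression-plus-residual-resampling structure is the same.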
A Bayesian Reformulation of the Extended Drift-Diffusion Model in Perceptual Decision Making
Fard, Pouyan R.; Park, Hame; Warkentin, Andrej; Kiebel, Stefan J.; Bitzer, Sebastian
2017-01-01
Perceptual decision making can be described as a process of accumulating evidence to a bound which has been formalized within drift-diffusion models (DDMs). Recently, an equivalent Bayesian model has been proposed. In contrast to standard DDMs, this Bayesian model directly links information in the stimulus to the decision process. Here, we extend this Bayesian model further and allow inter-trial variability of two parameters following the extended version of the DDM. We derive parameter distributions for the Bayesian model and show that they lead to predictions that are qualitatively equivalent to those made by the extended drift-diffusion model (eDDM). Further, we demonstrate the usefulness of the extended Bayesian model (eBM) for the analysis of concrete behavioral data. Specifically, using Bayesian model selection, we find evidence that including additional inter-trial parameter variability provides for a better model, when the model is constrained by trial-wise stimulus features. This result is remarkable because it was derived using just 200 trials per condition, which is typically thought to be insufficient for identifying variability parameters in DDMs. In sum, we present a Bayesian analysis, which provides for a novel and promising analysis of perceptual decision making experiments. PMID:28553219
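The eDDM feature at issue here, inter-trial variability of the drift rate, is easy to simulate: each trial draws its own drift and accumulates noisy evidence to one of two bounds. The parameters below are illustrative assumptions, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(5)
v0, sv, a = 1.0, 0.8, 1.5            # mean drift, drift sd, bound (assumed)
dt, sigma = 0.005, 1.0               # time step (s) and diffusion noise
n_trials = 2000

rts, upper = [], []
for _ in range(n_trials):
    v = rng.normal(v0, sv)           # trial-specific drift (the eDDM part)
    x, t = 0.0, 0.0
    while abs(x) < a:                # accumulate evidence to either bound
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    rts.append(t)
    upper.append(x >= a)

rts, upper = np.array(rts), np.array(upper)
print(f"upper-bound proportion {upper.mean():.2f}, mean RT {rts.mean():.2f} s")
# Drift variability predicts slower lower-bound (error) responses on average
print(f"mean RT upper {rts[upper].mean():.2f} s, lower {rts[~upper].mean():.2f} s")
```

This slow-error signature is one of the behavioral patterns that distinguishes the extended model from the standard DDM, and it is what the trial-wise Bayesian reformulation must also reproduce.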
Empirical analysis of cascade deformable models for multi-view face detection
Orozco, Javier; Martinez, Brais; Pantic, Maja
2015-01-01
We present a multi-view face detector based on Cascade Deformable Part Models (CDPM). Over the last decade, there have been several attempts to extend the well-established Viola & Jones face detector algorithm to solve the problem of multi-view face detection. Recently a tree structure model for multi
J. Vrba
2013-12-01
Full Text Available In this article, a known concept and measurement probe geometry for the estimation of the dielectric properties of oils have been adapted. The new probe enables measurement in the frequency range of 1 to 3000 MHz. Additionally, the measurement probe has been equipped with a heat exchanger, which has enabled us to measure the dielectric properties of sunflower and olive oil as well as of two commercial emulsion concentrates. Subsequently, corresponding linear empirical temperature- and frequency-dependent models of the dielectric properties of the above-mentioned oils and concentrates have been created. The dielectric properties measured here, as well as the values obtained from the empirical models created here, match the data published in the professional literature very well.
Empirical tight-binding force model for molecular-dynamics simulation of Si
Wang, C. Z.; Chan, C. T.; Ho, K. M.
1989-04-01
A scheme of molecular-dynamics simulation using the empirical tight-binding force model is proposed. The scheme allows the interatomic interactions involved in the molecular dynamics to be determined by first-principles total-energy and electronic-structure calculations without resorting to fitting experimental data. For a first application of the scheme we show that a very simple nearest-neighbor two-center empirical tight-binding force model is able to stabilize the diamond structure of Si within a reasonable temperature range. We also show that the scheme makes possible the quantitative calculation of the temperature dependence of various anharmonic effects such as lattice thermal expansion, temperature-dependent phonon linewidths, and phonon frequency shifts.
Koch, M W; Bjerregaard, P; Curtis, C
2004-01-01
OBJECTIVES: Many studies concerning mental health among ethnic minorities have used the concept of acculturation as a model of explanation, in particular J.W. Berry's model of acculturative stress. But Berry's theory has been empirically verified only a few times. The aims of the study were to examine whether Berry's hypothesis about the connection between acculturation and mental health can be empirically verified for Greenlanders living in Denmark, and to analyse whether acculturation plays a significant role in mental health among Greenlanders living in Denmark. STUDY DESIGN AND METHODS: The study used data from the 1999 Health Profile for Greenlanders in Denmark. As a measure of mental health we applied the General Health Questionnaire (GHQ-12). Acculturation was assessed from answers to questions about how the respondents value the fact that children maintain their traditional cultural...
Microscopic driving theory with oscillatory congested states: model and empirical verification
Tian, Junfang; Ma, Shoufeng; Jia, Bin; Zhang, Wenyi
2014-01-01
The essential distinction between the Fundamental Diagram Approach (FDA) and Kerner's Three-Phase Theory (KTPT) is the existence of a unique gap-speed (or flow-density) relationship in the former class. In order to verify this relationship, empirical data are analyzed with the following findings: (1) a linear relationship between the actual space gap and speed can be identified when the speed difference between vehicles approximates zero; (2) vehicles accelerate or decelerate around the desired space gap most of the time. To explain these phenomena, we propose that, in congested traffic flow, the space gap between two vehicles oscillates around the desired space gap in the deterministic limit. This assumption is formulated in terms of a cellular automaton. In contrast to FDA and KTPT, the new model does not have any congested steady-state solution. Simulations under periodic and open boundary conditions reproduce the empirical findings of KTPT. Calibrating and validating the model to detector data produces...
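The oscillation assumption described in this abstract can be illustrated in a few lines. This is a minimal deterministic sketch, not the authors' calibrated cellular automaton; all parameter values are illustrative:

```python
# A follower adjusts its speed by one unit per step toward the desired space gap,
# so in the deterministic limit the gap oscillates around the desired value rather
# than settling on a unique gap-speed relationship.
def simulate_gaps(steps=200, desired_gap=10, v_leader=5, v_max=8):
    x_lead, x_follow, v_follow = 50.0, 30.0, 5.0
    gaps = []
    for _ in range(steps):
        gap = x_lead - x_follow
        if gap > desired_gap:                  # too far behind: accelerate
            v_follow = min(v_follow + 1, v_max)
        elif gap < desired_gap:                # too close: decelerate
            v_follow = max(v_follow - 1, 0)
        x_lead += v_leader
        x_follow += v_follow
        gaps.append(gap)
    return gaps
```

The gap keeps crossing the desired value instead of converging to it, mirroring empirical finding (2) above: there is no congested steady-state solution.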
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
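The empirical Bayes ranking idea can be sketched in its simplest parametric form: a normal-normal model that shrinks each gene's observed effect toward the grand mean before ranking. This is only a toy stand-in for the paper's semiparametric smoothing-by-roughening prior; the data values are made up.

```python
from statistics import mean, pvariance

# Shrink each observed effect toward the overall mean by a data-estimated factor,
# then rank genes by the shrunken (posterior-mean) effect size.
def eb_rank(effects, se2=1.0):
    m = mean(effects)
    tau2 = max(pvariance(effects, m) - se2, 0.0)  # between-gene variance estimate
    b = tau2 / (tau2 + se2)                       # shrinkage factor in [0, 1)
    post = [m + b * (e - m) for e in effects]
    return sorted(range(len(effects)), key=lambda i: -abs(post[i]))
```

Genes with large observed effects but high noise are pulled toward the mean, which stabilizes the ranking relative to ranking on raw effects.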
Markus eRaab
2015-01-01
Full Text Available SMART (Situation Model of Anticipated Response consequences in Tactical decisions) describes the interaction of top-down and bottom-up processes in skill acquisition and thus the dynamic interaction of sensory and motor capacities in embodied cognition. The empirically validated, extended, and revised SMART-ER can now predict when specific dynamic interactions of top-down and bottom-up processes have a beneficial or detrimental effect on performance and learning depending on situational constraints. The model is empirically supported and proposes learning strategies for when situation complexity varies or time pressure is present. Experiments from expertise research in sports illustrate that neither bottom-up nor top-down processes are bad or good per se; rather, their effects depend on personal and situational characteristics.
A New Empirical Model for Estimation of sp3 Fraction in Diamond-Like Carbon Films
DAI Hai-Yang; WANG Li-Wu; JIANG Hui; HUANG Ning-Kang
2007-01-01
A new empirical model to estimate the content of sp3 in diamond-like carbon (DLC) films is presented, based on conventional Raman spectra excited by 488 nm or 514 nm visible light for different carbons. It is found that the bandwidth of the G peak is related to the sp3 fraction: a wider G-peak bandwidth indicates a higher sp3 fraction in DLC films.
Phillip S. LEVIN, Peter HORNE, Kelly S. ANDREWS, Greg WILLIAMS
2012-01-01
Understanding the movement of animals is fundamental to population and community ecology. Historically, it has been difficult to quantify movement patterns of most fishes, but technological advances in acoustic telemetry have increased our abilities to monitor their movement. In this study, we combined small-scale active acoustic tracking with large-scale passive acoustic monitoring to develop an empirical movement model for sixgill sharks in Puget Sound, WA, USA. We began by testing whether ...
Evaluation Model for Scientific Quality Based on Rough Sets and Its Empirical Study
LIU Dun; HU Pei; JIANG Chao-zhe; LIU Li
2007-01-01
By analyzing questionnaires collected from 74 different government departments in Chengdu, China, an evaluation model for the scientific quality of civil servants was developed with rough set theory. In the empirical study, a series of important rules was derived with the reduction algorithm to help check and forecast the degree of scientific quality of civil servants, and the total accuracy of prediction was 93.2%.
Zhang, Shuang
2012-01-01
Based on farmers' supply behavior theory and price expectations theory, this paper establishes a supply response model for grain farmers covering two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy has a price-oriented effect on grain production and supply in the major producing areas. Empirical analysis shows that the minimum purchase price published annually by the government has a significant positive imp...
Knightian uncertainty and stock-price movements: Why the REH present-value model failed empirically
Frydman, Roman; Michael D. Goldberg; Mangee, Nicholas
2015-01-01
Macroeconomic models that are based on either the rational expectations hypothesis (REH) or behavioral considerations share a core premise: All future market outcomes can be characterized ex ante with a single overarching probability distribution. This paper assesses the empirical relevance of this premise using a novel data set. The authors find that Knightian uncertainty, which cannot be reduced to a probability distribution, underpins outcomes in the stock market. This finding reveals the ...
Perspective Model of Specialized Military Education in Empirical Characteristics
Alexander P. Abramov
2015-06-01
Full Text Available On the basis of sociological polls and interviews conducted in 2002-2013 with pupils and graduates of military schools of the Ministry of Defence of the Russian Federation, the Suvorov military schools and the Nakhimov naval schools, as well as with experts, a theoretical construct of a perspective model of secondary specialized military education is developed; its place and role in the structure of cadet education in modern Russia are conceptually elaborated and empirically substantiated.
Empirical probability model of the cold plasma environment in Jovian inner magnetosphere
Futaana, Yoshifumi; Roussos, Elias; Trouscott, Pete; Heynderickx, Daniel; Cipriani, Fabrice; Rodgers, David
2016-01-01
A new empirical, analytical model of cold plasma (< 10 keV) in the Jovian inner magnetosphere is constructed. Plasmas in this energy range impact surface charging. A new feature of this model is that it predicts each plasma parameter for a specified probability (percentile). The new model was produced as follows. We start from a reference model for each plasma parameter, which was scaled to fit the data of the Galileo plasma spectrometer. The scaled model was then represented as a function of radial distance, magnetic local time, and magnetic latitude, presumably describing the mean states. The deviations of the observed values from the model were then attributed to variability in the environment, which was accounted for by the percentile at a given location. The input parameters for this model are the spacecraft position and the percentile. The model is intended to be used for JUICE mission analysis.
Hong-Juan Li
2013-04-01
Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto-regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
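The hybrid architecture (decompose the series, model each component separately, recombine the forecasts) can be sketched as follows. To keep the sketch dependency-free, a moving average stands in for EMD and AR(1) least-squares fits stand in for SVR; these substitutions are mine, not the paper's.

```python
# Fit AR(1) by least squares and return the one-step-ahead forecast.
def ar1_forecast(series):
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    var = sum((a - mx) ** 2 for a in x)
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / var if var else 0.0
    return my + phi * (series[-1] - mx)

# Decompose the load into a smooth trend plus residual, forecast each, and sum.
def hybrid_forecast(load, window=4):
    trend = [sum(load[max(0, i - window + 1):i + 1]) / (i - max(0, i - window + 1) + 1)
             for i in range(len(load))]
    resid = [l - t for l, t in zip(load, trend)]
    return ar1_forecast(trend) + ar1_forecast(resid)
```

The point of the decomposition is interpretability: each component is simple enough for its own model, and the final forecast is just the sum of the component forecasts.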
Empirical likelihood confidence regions of the parameters in a partially linear single-index model
XUE Liugen; ZHU Lixing
2005-01-01
In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.
A Price Index Model for Road Freight Transportation and Its Empirical analysis in China
Liu Zhishuo
2017-01-01
Full Text Available The aim of a price index for road freight transportation (RFT) is to reflect changes of price in the road transport market. First, a price index model for RFT based on sample data from the Alibaba logistics platform is built. This model is a three-level index system comprising a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
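The Laspeyres calculation mentioned above weights current-period prices by base-period quantities. A minimal sketch (the prices and freight volumes below are made up for illustration):

```python
# Laspeyres price index: 100 * sum(p1 * q0) / sum(p0 * q0), i.e. the cost of the
# base-period freight basket at current prices relative to base prices.
def laspeyres(p0, p1, q0):
    return 100.0 * sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

# base prices, current prices and base-period volumes for three freight routes
index = laspeyres([10, 20, 5], [11, 19, 6], [100, 50, 200])
```

Because the quantity weights are frozen at the base period, the index isolates pure price movement from changes in shipping volumes.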
Empirical model for electron impact ionization cross sections of neutral atoms
Talukder, M.R.; Bose, S. [Rajshahi Univ., Dept. of Applied Physics and Electronic Engineering (Bangladesh); Patoary, M.A.R.; Haque, A.K.F.; Uddin, M.A.; Basak, A.K. [Rajshahi Univ., Dept. of Physics (Bangladesh); Kando, M. [Shizuoka Univ., Graduate School of Electronic Science and Technology (Japan)
2008-02-15
A simple empirical formula is proposed for the rapid calculation of electron impact total ionization cross sections for both open- and closed-shell neutral atoms in the range 1 ≤ Z ≤ 92 and for incident electron energies from threshold to about 10^4 eV. The results of the present analysis are compared with the available experimental and theoretical data. The proposed model provides a fast method for calculating fairly accurate electron impact total ionization cross sections of atoms. This model may be a prudent choice for practitioners in the field of applied sciences, e.g. in plasma modeling, due to its simple inherent structure. (authors)
Empirical models of the eddy heat flux and vertical shear on short time scales
Ghan, S. J.
1984-01-01
An intimate relation exists between the vertical shear and the horizontal eddy heat flux within the atmosphere. In the present investigation, empirical means are employed to provide clues concerning the relationship between the shear and the eddy heat flux. In particular, linear regression models are applied to individual and joint time series of the shear and eddy heat flux. These discrete models are used as a basis to infer continuous models. A description is provided of the observed relationship between the flux and the shear, taking into account means, standard deviations, and lag correlation functions.
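The regression models described reduce, in their simplest form, to a lagged least-squares fit between two time series. A generic sketch (the series below are illustrative numbers, not the meteorological data used in the study):

```python
# Regress y at time t+lag on x at time t, returning (slope, intercept).
def lag_regression(x, y, lag=0):
    xs, ys = x[:len(x) - lag], y[lag:]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return slope, my - slope * mx
```

Fitting the same pair of series at several lags traces out the lag correlation structure the abstract refers to.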
Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng
2013-08-01
The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) for explaining individuals' continuance of mobile data service usage. After reviewing the ECM and technology readiness, an integrated model was demonstrated via empirical data. Compared with the original ECM, the findings of this study show that the integrated model may offer an ameliorated way to clarify what factors and how they influence the continuous intention toward mobile services. Finally, the major findings are summarized, and future research directions are suggested.
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to CG model with more general topology and the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Swagata Banerjee; Masanobu Shinozuka
2008-01-01
Bridges are one of the most vulnerable components of a highway transportation network system subjected to earthquake ground motions. Prediction of the resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage description given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose. This experiment was conducted at the University of Nevada, Reno, where a research team from the University of California, Irvine, participated. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that the model can be calibrated against empirical fragility curves constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, calibrated damage state definitions are compared with those obtained using experimental findings. The comparison shows excellent consistency among results from analytical, empirical and experimental observations.
Herwig Reiter
2010-01-01
Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120
A simple empirical model for the clarification-thickening process in wastewater treatment plants.
Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T
2015-01-01
In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycle ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated with measured data from a sixth WWTP, which fit the model well (R² = 0.98); a model for hindered settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.
Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies
Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.
2009-04-01
Land Evaluation (LE) comprises the procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. In the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome is great difficulty in spatially extrapolating the LE results, and rigidity of the system. Modern techniques instead rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other hand, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario, the LE expert must choose the best LE methodology considering costs, complexity of the procedure and benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 employed methods ranged from standard LE approaches to
Nucleon Properties at Finite Temperature in the Extended Quark-Sigma Model
Abu-Shady, M
2014-01-01
Hadron properties are studied in a hot medium using the quark sigma model. The quark sigma model is extended to include eighth-order mesonic interactions based on some aspects of quantum chromodynamics (QCD). The extended effective potential tends to the original effective potential when the coupling of the higher-order mesonic interactions equals zero. The field equations have been solved in the mean-field approximation by using the extended iteration method. We find that the nucleon mass increases with increasing temperature, and that the magnetic moments of the proton and neutron increase with increasing temperature. A comparison is presented with recent previous works and other models. We conclude that higher-order mesonic interactions play an important role in changing the behavior of nucleon properties at finite temperature. In addition, the deconfinement phase transition is satisfied in the present model.
Advancing the extended parallel process model through the inclusion of response cost measures.
Rintamaki, Lance S; Yang, Z Janet
2014-01-01
This study advances the Extended Parallel Process Model through the inclusion of response cost measures, which are drawbacks associated with a proposed response to a health threat. A sample of 502 college students completed a questionnaire on perceptions regarding sexually transmitted infections and condom use after reading information from the Centers for Disease Control and Prevention on the health risks of sexually transmitted infections and the utility of latex condoms in preventing sexually transmitted infection transmission. The questionnaire included standard Extended Parallel Process Model assessments of perceived threat and efficacy, as well as questions pertaining to response costs associated with condom use. Results from hierarchical ordinary least squares regression demonstrated how the addition of response cost measures improved the predictive power of the Extended Parallel Process Model, supporting the inclusion of this variable in the model.
△△ Dibaryon Structure in Extended Chiral SU(3) Quark Model
DAI Lian-Rong
2005-01-01
The structure of the △△ dibaryon is studied in the extended chiral SU(3) quark model, in which vector meson exchanges are included. The effect of the vector meson fields is very similar to that of the one-gluon exchange (OGE) interaction. Both in the chiral SU(3) quark model and in the extended chiral SU(3) quark model, the resultant mass of the △△ dibaryon is lower than the threshold of the △△ channel but higher than that of the △Nπ channel.
Extending the Real-Time Maude Semantics of Ptolemy to Hierarchical DE Models
Bae, Kyungmin; 10.4204/EPTCS.36.3
2010-01-01
This paper extends our Real-Time Maude formalization of the semantics of flat Ptolemy II discrete-event (DE) models to hierarchical models, including modal models. This is a challenging task that requires combining synchronous fixed-point computations with hierarchical structure. The synthesis of a Real-Time Maude verification model from a Ptolemy II DE model, and the formal verification of the synthesized model in Real-Time Maude, have been integrated into Ptolemy II, enabling a model-engineering process that combines the convenience of Ptolemy II DE modeling and simulation with formal verification in Real-Time Maude.
Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology
2002-05-01
This report presents the results of one part of a wide project for determining a methodology for establishing the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed on the basis of a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process of: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings; was considered. By comparing the methodologies involved
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
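The book's simplest case, a confidence region for a univariate mean under IID sampling, can be sketched directly: the profiled weights are w_i ∝ 1/(1 + λ(x_i − μ)), with the Lagrange multiplier λ chosen so the weighted mean equals μ, and −2 log R(μ) is asymptotically χ²(1). This is a bare-bones sketch with no safeguards for μ outside the convex hull of the data:

```python
import math

# Profile empirical log-likelihood ratio statistic -2 log R(mu) for the mean.
def neg2_log_el(x, mu, iters=50):
    lam = 0.0
    for _ in range(iters):  # Newton's method on the Lagrange multiplier
        d = [1.0 + lam * (xi - mu) for xi in x]
        g = sum((xi - mu) / di for xi, di in zip(x, d))
        h = -sum(((xi - mu) / di) ** 2 for xi, di in zip(x, d))
        lam -= g / h
    return 2.0 * sum(math.log(1.0 + lam * (xi - mu)) for xi in x)
```

Collecting the values of μ with −2 log R(μ) below the χ²(1) quantile 3.84 yields a data-shaped 95% confidence region, with no parametric model assumed.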
A Quasi-Nonmetric Method for Multidimensional Scaling via an Extended Euclidean Model.
Winsberg, Suzanne; Carroll, J. Douglas
1989-01-01
An Extended Two-Way Euclidean Multidimensional Scaling (MDS) model that assumes both common and specific dimensions is described and contrasted with the "standard" (Two-Way) MDS model. Illustrations with both artificial and real data on the judged similarity of nations are provided. (TJH)
3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups
Scalfani, Vincent F.; Vaid, Thomas P.
2014-01-01
Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…
A kinetic type extended model for dense gases and macromolecular fluids
M. Cristina Carrisi
2005-05-01
Full Text Available Extended thermodynamics is an important theory which is appreciated from mathematicians and physicists. Following its ideas and considering the macroscopic approach with suggestions from the kinetic one, we find in this paper, the solution of an interesting model: the model for dense gases and macromolecular fluids.
Klein, Daniel; Zezula, Ivan
The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design
Evaluation of theoretical and empirical water vapor sorption isotherm models for soils
Arthur, Emmanuel; Tuller, Markus; Møldrup, Per;
2016-01-01
...sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present validation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 for 207 soils, widely varying in texture and organic carbon content. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for the adsorption and desorption data. Regression analysis relating model parameters...
Theoretical and Empirical Comparisons between Two Models for Continuous Item Response.
Ferrando, Pere J
2002-10-01
This article analyzes the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Using a factor analytical (FA) approach based on the assumption of underlying response variables, I describe how a particular case of the CRM can be considered as a nonlinear counterpart of Spearman's FA model. The mathematical relations between the item-trait regressions, item parameter values, and conditional and marginal distributions of both models are obtained. The results allow (a) the item parameter values of the linear model to be obtained from CRM item parameter values, and (b) the conditions in which the congeneric model will be a good approximation to the CRM to be predicted. The relations described are illustrated using an empirical example and assessed by means of a simulation study.
Comparison of empirical models to estimate soil erosion and sediment yield in micro catchments
Lida Eisazadeh
2015-05-01
Full Text Available Assessment of sediment yield is important in soil conservation and watershed projects and in implementation plans for water and soil resources management. For areas without sufficient information and statistical data, such as upper river branches, empirical models must be used to estimate erosion and sediment yield; however, the efficiency of these models before calibration is unclear. In this research, erosion and sediment yield in 10 basins upstream of reservoirs were estimated with the RUSLE and MPSIAC empirical models. A t-test was applied to compare the means of measured and estimated data. The results indicated no significant difference between the means of measured and estimated sediment yield for the MPSIAC model at the 5% level, whereas the t-test showed the contrary for the RUSLE model. The applicability and priority of the two models were then examined with statistical measures such as MAE and MBE. In terms of accuracy and precision, the MPSIAC model ranked first for estimating soil erosion and sediment yield, with MAE = 0.79 and MBE = -0.59.
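The two comparison statistics used above are straightforward: MAE measures the average error magnitude (accuracy), while MBE measures systematic over- or under-estimation (bias). A minimal sketch with made-up values:

```python
# Mean absolute error: average magnitude of estimation errors.
def mae(observed, estimated):
    return sum(abs(e - o) for o, e in zip(observed, estimated)) / len(observed)

# Mean bias error: positive values mean the model over-estimates on average.
def mbe(observed, estimated):
    return sum(e - o for o, e in zip(observed, estimated)) / len(observed)
```

A model can have near-zero MBE (errors cancel) while still having large MAE, which is why both are needed to rank the models.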
Determining the inventory impact of extended-shelf-life platelets with a network simulation model.
Blake, John T
2017-09-06
The regulatory shelf life for platelets (PLTs) in many jurisdictions is 5 days. PLT shelf life can be extended to 7 days with an enhanced bacterial detection algorithm. Enhanced testing, however, comes at a cost, which may be offset by reductions in wastage due to the longer shelf life. This article describes a method for estimating systemwide reductions in PLT outdates after PLT shelf life is extended. A simulation was used to evaluate the impact of an extended PLT shelf life within a national blood network. A network model of the Canadian Blood Services PLT supply chain was built and validated. PLT shelf life was extended from 5 days to 6, 7, and 8 days, and runs were completed to determine the impact on outdates. Results suggest that, in general, a 16.3% reduction in PLT wastage can be expected with each additional day of PLT shelf life. Both suppliers and hospitals will experience fewer outdated units, but wastage will decrease at a faster rate at hospitals. No effect was seen by blood group, but there was some evidence that supplier site characteristics influence both the number of units wasted and a site's ability to benefit from extended-shelf-life PLTs. Extended-shelf-life PLTs will reduce wastage within a blood supply chain. At 7 days, a 38% reduction in wastage can be expected, with outdates distributed equally between suppliers and hospital customers. © 2017 AABB.
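The mechanism being simulated, first-in-first-out issuing with age-based discards, can be sketched with a toy single-site model; this is not the validated Canadian Blood Services network model, and all arrival and demand rates are hypothetical:

```python
import math
import random
from collections import deque

def poisson(rng, lam):
    # Knuth's method; adequate for the small daily rates used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate(shelf_life, days=5000, mean_supply=10.0, mean_demand=10.0, seed=42):
    """Single-site FIFO platelet inventory: fresh units arrive daily, the oldest
    unit is issued first, and units reaching shelf_life days are discarded."""
    rng = random.Random(seed)
    ages = deque()                 # unit ages, oldest on the left
    outdated = issued = 0
    for _ in range(days):
        ages = deque(a + 1 for a in ages)       # everything ages one day
        while ages and ages[0] >= shelf_life:   # discard expired units
            ages.popleft()
            outdated += 1
        for _ in range(poisson(rng, mean_supply)):
            ages.append(0)                      # today's collections
        for _ in range(poisson(rng, mean_demand)):
            if ages:                            # unmet demand is simply lost here
                ages.popleft()
                issued += 1
    return outdated, issued

results = {sl: simulate(sl) for sl in (5, 6, 7, 8)}
for sl, (out, iss) in results.items():
    print(f"shelf life {sl} d: outdated {out}, issued {iss}")
```

Because the same seed drives supply and demand for every shelf life, the runs are directly comparable, and a longer shelf life yields fewer outdates, the qualitative effect the abstract quantifies.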
Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza
2014-01-01
In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...
Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone.
Lee, Kang Il; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo
2007-01-01
The Biot and the modified Biot-Attenborough (MBA) models have been found useful for understanding ultrasonic wave propagation in cancellous bone. However, neither model, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and with in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of the phase velocity of the fast wave. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s⁻¹ (angle-dependent Biot model) and 36.1 m s⁻¹ (angle-dependent MBA model). The models also predicted that the slow wave is nearly independent of propagation angle for angles up to about 50 degrees, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s⁻¹ (angle-dependent Biot model) and 240.8 m s⁻¹ (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone.
Extended hierarchy equation of motion for the spin-boson model.
Tang, Zhoufei; Ouyang, Xiaolong; Gong, Zhihao; Wang, Haobin; Wu, Jianlan
2015-12-14
An extended hierarchy equation of motion (HEOM) is proposed and applied to study the dynamics of the spin-boson model. In this approach, a complete set of orthonormal functions is used to expand an arbitrary bath correlation function. As a result, a complete dynamic basis set is constructed by including the system reduced density matrix and auxiliary fields composed of these expansion functions, and the extended HEOM is derived for the time derivative of each element. The reliability of the extended HEOM is demonstrated by comparison with the stochastic Hamiltonian approach under room-temperature classical Ohmic and sub-Ohmic noises and with the multilayer multiconfiguration time-dependent Hartree theory under zero-temperature quantum Ohmic noise. Upon increasing the order of the hierarchical expansion, the result obtained from the extended HEOM systematically converges to the numerically exact answer.
A non-quasistatic semi-empirical model for small geometry MOSFETs
Murray, Daniel; Sanchez, Julian J.; Demassa, Thomas A.
1997-09-01
A new charge-oriented, semi-empirical non-quasistatic (NQS) model is developed for small-geometry MOSFETs that is computationally efficient enough to be useful for circuit simulation. The NQS model includes the effects of velocity saturation, gate-field-dependent mobility, charge sharing, drain-induced barrier lowering, and the geometric dependencies of threshold voltage. To model the carrier inertia that causes non-steady-state conditions, an approximate inversion charge profile is used to reduce the nonlinear current-continuity equation to an ordinary differential equation. The model is valid in all regions of operation (weak, moderate and strong inversion) and is derived without resorting to approximate, arbitrary channel-charge partitioning. The results of the proposed model are compared with 2D simulation results, and good agreement is obtained for the transient source, drain and gate currents under large signals applied to the gate.
Alternative Specifications for the Lévy Libor Market Model: An Empirical Investigation
Skovmand, David; Nicolato, Elisa
This paper introduces and analyzes specifications of the Lévy Libor Market Model originally proposed by Eberlein and Özkan (2005). An investigation of the term structure of option-implied moments rules out Brownian motion and homogeneous Lévy processes as suitable modeling devices, and consequently...... a variety of more appropriate models is proposed. Besides a diffusive component, the models have jump structures with low or high frequency, combined with constant or stochastic volatility. The models are subjected to an empirical analysis using a time series of data for Euribor caps. The results...... of the estimation show that pricing performance is improved when a high-frequency jump component is incorporated. Specifically, excellent results are achieved with the 4-parameter Sato-Variance Gamma model, which is able to fit an entire surface of caps with an average absolute percentage pricing error of less......
Empirical validation of the thermal model of a passive solar cell test
Mara, T A; Boyer, H; Mamode, M
2012-01-01
The paper deals with an empirical validation of a building thermal model. We put the emphasis on sensitivity analysis and on the search for input/residual correlations to improve our model. In this article, we apply a sensitivity analysis technique in the frequency domain to point out the most important parameters of the model. Then, we compare measured and predicted indoor dry-air temperatures. When the model is not accurate enough, recourse to time-frequency analysis is of great help in identifying the inputs responsible for the major part of the error. In our approach, two samples of experimental data are required: the first is used to calibrate our model, the second to validate the optimized model.
Salloum, Maher N.; Gharagozloo, Patricia E.
2013-10-01
Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede hydrogen extraction.
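The empirical first step, fitting a rate law to scattered literature data, typically amounts to an Arrhenius regression; a minimal sketch with invented rate constants (not the compiled UH3 data):

```python
import numpy as np

# Hypothetical decomposition rate constants (1/s) at several temperatures (K),
# standing in for the scattered literature data the authors compiled.
T = np.array([500.0, 600.0, 700.0, 800.0, 900.0])
k = np.array([2.1e-6, 4.0e-4, 1.6e-2, 2.5e-1, 2.2e0])

R = 8.314  # gas constant, J/(mol K)

# Arrhenius law: k = A exp(-Ea / (R T))  =>  ln k = ln A - (Ea/R) * (1/T),
# i.e. a straight line in 1/T that can be fitted by least squares.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R        # activation energy, J/mol
A = np.exp(intercept)  # pre-exponential factor, 1/s

print(f"Ea = {Ea / 1e3:.1f} kJ/mol, A = {A:.2e} 1/s")
```

Repeating the fit on bootstrap resamples of the pooled data would give the kind of uncertainty quantification the abstract mentions.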
Models of expected returns on the brazilian market: Empirical tests using predictive methodology
Adriano Mussa
2009-01-01
Predictive methodologies for testing expected-return models are widely used in the international academic literature. However, these methods have not been applied in Brazil in a systematic way; empirical studies using Brazilian stock market data generally cover only the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor, and 4-factor models using a predictive methodology with two steps, time-series and cross-sectional regressions, with standard errors obtained by the technique of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly because the methodology is new to the local market and the subject is still incipient and polemical in the Brazilian academic literature.
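The two-step predictive methodology with Fama-MacBeth standard errors can be sketched on simulated one-factor data (illustrative only; not Brazilian stock returns, and with a single factor rather than the 3- or 4-factor specifications):

```python
import numpy as np

rng = np.random.default_rng(0)
T_periods, N_assets = 120, 25

# Simulated excess returns driven by one market factor (illustration only).
market = rng.normal(0.01, 0.05, T_periods)
true_beta = rng.uniform(0.5, 1.5, N_assets)
returns = np.outer(market, true_beta) + rng.normal(0, 0.02, (T_periods, N_assets))

# Step 1: time-series regression per asset -> estimated betas.
X = np.column_stack([np.ones(T_periods), market])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]   # slope row, one beta per asset

# Step 2: cross-sectional regression of returns on betas, period by period.
Z = np.column_stack([np.ones(N_assets), betas])
lambdas = np.empty(T_periods)
for t in range(T_periods):
    lambdas[t] = np.linalg.lstsq(Z, returns[t], rcond=None)[0][1]

# Fama-MacBeth estimate of the risk premium and its standard error, taken
# from the time series of cross-sectional slope estimates.
lam_hat = lambdas.mean()
lam_se = lambdas.std(ddof=1) / np.sqrt(T_periods)

print(f"lambda = {lam_hat:.4f} (se {lam_se:.4f})")
```

The standard error comes from the dispersion of the period-by-period estimates, which is what makes the procedure robust to cross-sectional correlation in the residuals.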
An empirical model simulating long-term diurnal CO2 flux for diverse vegetation types
A. D. Richardson
2008-10-01
We present an empirical model for the estimation of diurnal variability in net ecosystem CO2 exchange (NEE). The model is based on a nonrectangular hyperbola for the photosynthetic response of the canopy and was constructed using a dataset obtained from the AmeriFlux network, containing continuous eddy covariance CO2 fluxes from 26 ecosystems over seven biomes. The model uses simplified empirical expressions for the seasonal variability of biome-specific physiological parameters as functions of air temperature, vapor pressure deficit, and precipitation. The physiological parameters of maximum CO2 uptake rate by the canopy and ecosystem respiration had biome-specific responses to environmental variables. The estimated physiological parameters had reasonable magnitudes and seasonal variation and gave reasonable timing of the beginning and end of the growing season over various biomes, though they were less satisfactory for disturbed grassland and savanna than for forests. Comparison with observational data revealed that the diurnal cycle of NEE was generally well predicted all year round by the model. The model gave satisfactory results even for tundra, which has very small amplitudes of NEE variability. These results suggest that this model with biome-specific parameters will be applicable to numerous terrestrial biomes, particularly forests.
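The nonrectangular hyperbola at the core of the model can be written down compactly; the parameter values below are illustrative placeholders, not the paper's fitted biome-specific values:

```python
import numpy as np

def nee(par, alpha=0.03, pmax=25.0, theta=0.8, resp=2.5):
    """Net ecosystem exchange (umol CO2 m-2 s-1, positive = release to atmosphere),
    modelled as ecosystem respiration minus canopy uptake, where uptake P is the
    smaller root of the nonrectangular hyperbola
        theta*P^2 - (alpha*I + Pmax)*P + alpha*I*Pmax = 0
    in incident light I = `par` (umol photons m-2 s-1). All parameter values
    here are illustrative, not fitted values from the study."""
    par = np.asarray(par, dtype=float)
    s = alpha * par + pmax
    uptake = (s - np.sqrt(s * s - 4.0 * theta * alpha * par * pmax)) / (2.0 * theta)
    return resp - uptake
```

At zero light NEE equals respiration, and as light saturates uptake approaches `pmax`, so NEE tends toward `resp - pmax`; the curvature parameter `theta` controls how sharply the transition bends.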
TS07D Empirical Geomagnetic Field Model as a Space Weather Tool
Sharp, N. M.; Stephens, G. K.; Sitnov, M. I.
2011-12-01
Empirical modeling and forecasting of the geomagnetic field is a key element of space weather research. A dramatic increase in the number of data available for the terrestrial magnetosphere has required a new generation of empirical models with large numbers of degrees of freedom and sophisticated data-mining techniques. A set of the corresponding data binning, fitting and visualization procedures, known as the TS07D model, is now available at http://geomag_field.jhuapl.edu/model/ and is used for detailed investigation of storm-scale phenomena in the magnetosphere. However, the transformation of this research model into a practical space weather application, which implies extensive runs for validation and interaction with other space weather codes, requires its presentation in the form of a single state-of-the-art code, well documented and optimized for the highest performance. To this end, the model is implemented in the Java programming language with an extensive self-sufficient library and a set of optimization tools, including multi-thread operations that allow the use of the code on multi-core computers and clusters. The results of validating the new code and optimizing its binning, fitting and visualization parts are presented, and some examples of processed storms are discussed.
Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2013-01-01
Here we present the results of a semi-empirical inversion model for soil moisture retrieval using three backscattering coefficients: σHH, σVV and σHV. In this paper we focus on the soil moisture estimate; biomass is an ancillary parameter, estimated automatically by the algorithm and used for validation. We first review the model's analytical formulation, then show some results obtained with real SAR data and compare them to ground estimates.
Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data
Gao Rong LI; Ping TIAN; Liu Gen XUE
2008-01-01
In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.
Roy, Kunal; Ghosh, Gopinath
2008-11-01
In this communication, we have developed quantitative predictive models for the human lethal concentration values of 26 organic compounds, including some pharmaceuticals, using extended topochemical atom (ETA) indices and different chemometric tools, and compared the ETA models with models developed from non-ETA descriptors. ETA descriptors were also tried in combination with non-ETA descriptors to develop better predictive models; their use alongside non-ETA descriptors improved equation statistics and cross-validation quality. The best model with sound statistical quality was developed by partial least squares regression using ETA descriptors in combination with non-ETA ones. Finally, to check the true predictability of the ETA parameters, the data set was divided into training (n = 19) and test (n = 7) sets. Partial least squares and genetic partial least squares models were developed from the training set using ETA indices, and the models were validated using the test set. The ETA models developed with different statistical tools suggest that toxicity increases with bulk, chloro functionality, the presence of electronegative atoms within a chain or ring, and unsaturation, and decreases with hydroxy functionality and branching. The results suggest that ETA descriptors are sufficiently rich in chemical information to encode the structural features needed for QSAR/QSPR/QSTR modeling.
An Extended Ontology Model and Ontology Checking Based on Description Logics
王洪伟; 蒋馥; 吴家春
2004-01-01
Ontology is defined as an explicit specification of a conceptualization. In this paper, an extended ontology model is constructed using description logics: a 5-tuple comprising a term set, an individual set, a term definition set, an instantiation assertion set, and a term restriction set. Based on the extended model, the issue of ontology checking is studied, with the conclusion that the four kinds of term checking, namely term satisfiability checking, term subsumption checking, term equivalence checking and term disjointness checking, can all be reduced to satisfiability checking, and satisfiability checking can in turn be transformed into instantiation consistency checking.
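The reductions described, subsumption, equivalence and disjointness checking all reducing to satisfiability, can be illustrated with a propositional analogue (real description logic reasoners use tableau procedures; this brute-force sketch is only a toy):

```python
from itertools import product

def satisfiable(formula, atoms):
    """Brute-force propositional satisfiability: `formula` maps an assignment
    dict (atom name -> bool) to a truth value."""
    return any(formula(dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

atoms = ["Person", "Student"]
# Toy terminology: Student is defined as a kind of Person.
student = lambda v: v["Student"] and v["Person"]
person  = lambda v: v["Person"]

# Subsumption reduces to satisfiability:
#   Student is subsumed by Person  iff  (Student AND NOT Person) is unsatisfiable.
subsumed = not satisfiable(lambda v: student(v) and not person(v), atoms)

# Disjointness reduces to satisfiability of the conjunction:
#   Student and Person are disjoint  iff  (Student AND Person) is unsatisfiable.
disjoint = not satisfiable(lambda v: student(v) and person(v), atoms)

print(subsumed, disjoint)
```

Equivalence checking follows the same pattern as two subsumption checks, one in each direction.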
Predicting the sky from 30 MHz to 800 GHz: the extended Global Sky Model
Liu, Adrian
We propose to construct the extended Global Sky Model (eGSM), a software package and associated data products that are capable of generating maps of the sky at any frequency within a broad range (30 MHz to 800 GHz). The eGSM is constructed from archival data, and its outputs will include not only "best estimate" sky maps, but also accurate error bars and the ability to generate random realizations of missing modes in the input data. Such views of the sky are crucial in the practice of precision cosmology, where our ability to constrain cosmological parameters and detect new phenomena (such as B-mode signatures from primordial gravitational waves, or spectral distortions of the Cosmic Microwave Background; CMB) rests crucially on our ability to remove systematic foreground contamination. Doing so requires empirical measurements of the foreground sky brightness (such as that arising from Galactic synchrotron radiation, among other sources), which are typically performed only at select narrow wavelength ranges. We aim to transcend traditional wavelength limits by optimally combining existing data to provide a comprehensive view of the foreground sky at any frequency within the broad range of 30 MHz to 800 GHz. Previous efforts to interpolate between multi-frequency maps resulted in the Global Sky Model (GSM) of de Oliveira-Costa et al. (2008), a software package that outputs foreground maps at any frequency of the user's choosing between 10 MHz and 100 GHz. However, the GSM has a number of shortcomings. First and foremost, the GSM does not include the latest archival data from the Planck satellite. Multi-frequency models depend crucially on data from Planck, WMAP, and COBE to provide high-frequency "anchor" maps. Another crucial shortcoming is the lack of error bars in the output maps. Finally, the GSM is only able to predict temperature (i.e., total intensity) maps, and not polarization information. With the recent release of Planck's polarized data products, the
Van Bon-Martens, M J H; Van De Goor, L A M; Achterberg, P W; Van Oers, J A M
2011-08-01
To develop and describe an empirical model for regional public health reporting, based on the model and experience of the Dutch national Public Health Status and Forecasts (PHSF) as well as on relevant theories and literature. Three basic requirements were chosen in a preparatory feasibility study: the products to be developed, the project organization of the pilot study, and a regional elaboration of the conceptual model of the national PHSF. Subsequently, from November 2005 to June 2007, a regional PHSF was developed in two Dutch pilot regions, to serve as a base for the empirical model for regional public health reporting. The developed empirical regional PHSF model consists of different products for different purposes and target groups. Regional and Municipal Reports aim to underpin strategic regional and local public health policy. Websites contain up-to-date information, aiming to underpin tactical regional and local public health policy by providing building blocks for translating strategic policy priorities into concrete plans of action. Numerous stakeholders are involved in the development of a regional PHSF. The developed empirical process model for a regional PHSF connects to the theoretical framework in which interaction between researchers and policymakers is an important condition for the use of research data in public health policy. The empirical model for a regional PHSF can be characterized by its 1) products, 2) content and design, and 3) underlying process and organization. This empirical model can be seen as a first step in the direction of a generic model for regional public health reporting.
Empirical validation of the InVEST water yield ecosystem service model at a national scale.
Redhead, J W; Stratford, C; Sharps, K; Jones, L; Ziv, G; Clarke, D; Oliver, T H; Bullock, J M
2016-11-01
A variety of tools have emerged with the goal of mapping the current delivery of ecosystem services and quantifying the impact of environmental changes. An important and often overlooked question is how accurate the outputs of these models are in relation to empirical observations. In this paper we validate a hydrological ecosystem service model (the InVEST Water Yield Model) using widely available data. We modelled annual water yield in 22 UK catchments with widely varying land cover, population and geology, and compared model outputs with gauged river flow data from the UK National River Flow Archive. Values for input parameters were selected from the existing literature to reflect conditions in the UK and were subjected to sensitivity analyses. We also compared model performance between precipitation and potential evapotranspiration data sourced from global-scale and UK-scale datasets. We then tested the transferability of the results within the UK by additional validation in a further 20 catchments. Whilst the model performed only moderately with global-scale data (linear regression of modelled total water yield against empirical data: slope = 0.763, intercept = 54.45, R² = 0.963), with wide variation in performance between catchments, it performed much better with UK-scale input data, giving a closer fit to the observed data (slope = 1.07, intercept = 3.07, R² = 0.990). With UK data, modelled water yield closely matched observations in the majority of catchments, with a minor but consistent overestimate (86 m³/ha/year). Additional validation on a further 20 UK catchments was similarly robust, indicating that these results are transferable within the UK. These results suggest that relatively simple models can give accurate measures of ecosystem services. However, the choice of input data is critical, and there is a need for further validation in other parts of the world. Copyright © 2016 Elsevier B.V. All rights reserved.
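The validation statistics reported (slope, intercept and R² of modelled against observed yield, plus a consistent bias) can be reproduced on toy numbers; the catchment values below are invented for illustration:

```python
import numpy as np

# Hypothetical annual water yield (mm) for six catchments: gauged vs modelled.
observed = np.array([310.0, 450.0, 620.0, 270.0, 510.0, 700.0])
modelled = np.array([335.0, 470.0, 655.0, 300.0, 540.0, 740.0])

# Linear regression of modelled against observed yield.
slope, intercept = np.polyfit(observed, modelled, 1)
pred = slope * observed + intercept
ss_res = np.sum((modelled - pred) ** 2)
ss_tot = np.sum((modelled - modelled.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# A "minor but consistent overestimate" shows up as a positive mean bias.
bias = np.mean(modelled - observed)

print(f"slope = {slope:.3f}, intercept = {intercept:.2f}, R2 = {r2:.3f}, bias = {bias:.1f}")
```

A slope near 1 with high R² but nonzero bias is exactly the pattern the abstract describes for the UK-scale inputs.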
An empirical Bayesian approach for model-based inference of cellular signaling networks
Klinke David J
2009-11-01
Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of the system. Mathematical models are frequently used to represent beliefs about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter spaces and appropriate convergence criteria provide barriers to implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model describing signal transduction mechanisms and for identifying inconsistencies in experimental measurements.
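The convergence machinery described, multiple MCMC chains monitored with the Gelman-Rubin potential scale reduction factor, can be sketched as follows; this uses a plain (non-adaptive) Metropolis sampler on a toy one-parameter posterior, not the study's adaptive scheme or EGF model:

```python
import numpy as np

def log_post(theta):
    # Toy log-posterior: a standard normal stands in for a calibrated parameter.
    return -0.5 * theta ** 2

def metropolis(n, start, step, seed):
    """Fixed-step random-walk Metropolis chain of length n."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n)
    x, lp = start, log_post(start)
    for i in range(n):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Several over-dispersed chains, as the Gelman-Rubin diagnostic requires.
chains = np.array([metropolis(5000, s, 1.0, i)
                   for i, s in enumerate([-5.0, 0.0, 5.0])])
chains = chains[:, 2500:]                      # discard burn-in

# Potential scale reduction factor R-hat: values near 1 indicate convergence.
m, n = chains.shape
W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
B = n * chains.mean(axis=1).var(ddof=1)        # between-chain variance
var_hat = (n - 1) / n * W + B / n
r_hat = np.sqrt(var_hat / W)

print(f"R-hat = {r_hat:.3f}")
```

An adaptive variant would tune `step` during burn-in from the accumulated sample covariance; the diagnostic itself is unchanged.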
Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.
1984-03-01
Martinengo (1980) extends a model by Dornbusch (1976) in which market equilibrium is formalized in terms of interest rates, the level of prices, public... Dornbusch, R., "The Theory of Flexible Exchange Rate Regimes and Macroeconomic Policy", Scandinavian Journal of Economics, 78, 1976, pp. 255
Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others
2014-04-01
Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (Kd) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
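An empirical partitioning function of the kind described, with Kd depending on salinity and SPM and the dissolved fraction following from it, might look like this; the functional form and all coefficients are assumptions for illustration, not the fitted Scheldt relations:

```python
import numpy as np

def kd(salinity, spm, a=5.2, b=-0.035, c=-0.4):
    """Hypothetical empirical partition coefficient Kd (L/kg) as a simple
    log-linear function of salinity (PSU) and suspended particulate matter
    concentration SPM (mg/L). Coefficients a, b, c are made up for this sketch."""
    return 10.0 ** (a + b * salinity + c * np.log10(spm))

def dissolved_fraction(salinity, spm_mg_per_l):
    """Fraction of total metal in the dissolved phase. With Cp = Kd * SPM * Cd,
    the dissolved fraction is 1 / (1 + Kd * SPM) once SPM is in kg/L so that
    Kd (L/kg) * SPM (kg/L) is dimensionless."""
    spm_kg_per_l = spm_mg_per_l * 1e-6
    return 1.0 / (1.0 + kd(salinity, spm_mg_per_l) * spm_kg_per_l)

print(dissolved_fraction(0.0, 50.0), dissolved_fraction(0.0, 200.0))
```

Applying such functions to simulated salinity and SPM fields, as the abstract describes with SLIM, turns a hydrodynamic model output directly into maps of dissolved metal fraction.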
Oleksandr Artemenko
2016-06-01
In this paper, we select a suitable indoor-to-outdoor propagation model from the existing models by considering path loss and distance as parameters. Path loss is calculated empirically by placing emitter nodes inside a building; a receiver placed outdoors, represented by a quadrocopter (QC), receives beacon messages from the indoor nodes. Based on our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model were chosen and evaluated against empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
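Of the candidate models listed, the log-distance path loss model is the simplest to state; below is a sketch of evaluating it against measurements via the standard-deviation criterion, with invented values for the reference loss, path-loss exponent and measured data:

```python
import numpy as np

def log_distance_pl(d, d0=1.0, pl0=40.0, n=2.7):
    """Log-distance path loss model: PL(d) = PL(d0) + 10 n log10(d/d0), in dB.
    The reference loss pl0 and path-loss exponent n here are illustrative,
    not values fitted to the paper's indoor-to-outdoor measurements."""
    return pl0 + 10.0 * n * np.log10(np.asarray(d, dtype=float) / d0)

# Hypothetical measured path losses (dB) at known emitter-receiver distances (m).
dist = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
measured = np.array([59.5, 67.0, 75.8, 83.1, 91.9])

predicted = log_distance_pl(dist)
std_dev = np.sqrt(np.mean((measured - predicted) ** 2))  # deviation criterion, dB

print(f"standard deviation from empirical data: {std_dev:.2f} dB")
```

Running the same deviation computation for each candidate model and picking the smallest value is the selection procedure the abstract describes.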
A maturity model for SCPMS project-an empirical investigation in large sized Moroccan companies
Chafik Okar
2011-03-01
Full Text Available In recent years, many studies on maturity models have been carried out. Some refer specifically to maturity models for supply chains and performance measurement systems. Starting from an analysis of the existing literature, the aim of this paper is to develop a maturity model for the supply chain performance measurement system (SCPMS) project based on the concept of critical success factors (CSFs). This model is validated by two approaches. The first is a pilot test of the model in a Moroccan supply chain to demonstrate its capacity to assess the maturity of an SCPMS project and to develop an improvement roadmap. The second is an empirical investigation in large-sized Moroccan companies, using a survey to determine whether the model can evaluate the maturity of SCPMS projects across different industries.
Ip, Edward Haksing
2010-05-01
Multidimensionality is a core concept in the measurement and analysis of psychological data. In personality assessment, for example, constructs are mostly theoretically defined as unidimensional, yet responses collected from the real world are almost always determined by multiple factors. Significant research efforts have concentrated on the use of simulated studies to evaluate the robustness of unidimensional item response models when applied to multidimensional data with a dominant dimension. In contrast, in the present paper, I report the result of a theoretical investigation showing that a multidimensional item response model is empirically indistinguishable from a locally dependent unidimensional model, whose single dimension represents the actual construct of interest. A practical implication of this result is that multidimensional response data do not automatically require the use of multidimensional models. Circumstances under which the alternative approach of locally dependent unidimensional models may be useful are discussed.
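A minimal simulation illustrates the kind of data at issue: responses generated by a compensatory multidimensional 2PL model, of which only the observable response patterns (not the latent dimensions) are available to any fitted model. The item parameters below are hypothetical values chosen for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical 6-item test whose items load on TWO latent dimensions
# (discriminations a1, a2 and difficulties b are illustrative values).
items = [(1.2, 0.4, -0.5), (1.0, 0.6, 0.0), (0.8, 0.9, 0.5),
         (1.1, 0.3, -1.0), (0.9, 0.7, 1.0), (1.0, 0.5, 0.2)]

def simulate(n=5000):
    """Simulate binary responses from a compensatory multidimensional 2PL."""
    data = []
    for _ in range(n):
        t1, t2 = random.gauss(0, 1), random.gauss(0, 1)  # two latent traits
        row = []
        for a1, a2, b in items:
            p = 1.0 / (1.0 + math.exp(-(a1 * t1 + a2 * t2 - b)))
            row.append(1 if random.random() < p else 0)
        data.append(row)
    return data

data = simulate()
# Marginal proportions correct per item -- part of the observable surface
# that a (locally dependent) unidimensional model would also reproduce.
props = [sum(r[i] for r in data) / len(data) for i in range(len(items))]
print([round(p, 2) for p in props])
```

The paper's point is that a locally dependent unidimensional model can match the full observable distribution of such data, so dimensionality cannot be settled from the response patterns alone.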
An empirical test of a self-care model of women's responses to battering.
Campbell, J C; Weber, N
2000-01-01
A model of women's responses to battering was constructed based on Orem's theory of self-care deficit and on empirical and clinical observations. The model proposed that age, educational level, and cultural influences, as basic conditioning factors, would all be directly related to relational conflict, which would be negatively related to self-care agency (as a mediator) and indirectly related to the outcomes of health and well-being. Using simultaneous structural equation modeling with specification searching, a modified model was derived that eliminated the mediation path but supported direct effects of both abuse and self-care agency on health. The derived model was found to be only a borderline fit with the data, probably due to measurement problems, lack of inclusion of important variables, and small sample size (N = 117). However, there was support for several of the relationships deduced from and/or congruent with Orem's theory.
Comparative Analysis of Empirical Path Loss Model for Cellular Transmission in Rivers State
B.O.H. Akinwole, J.J. Biebuma
2013-08-01
Full Text Available This paper presents a comparative analysis of three empirical path loss models against measured data for urban, suburban, and rural areas in Rivers State. The three models investigated were the COST-231 Hata, SUI, and ECC-33 models. Downlink data were collected at an operating frequency of 2100 MHz using a drive test procedure, consisting of test mobile phones used to determine the received signal code power (RSCP) at specified receiver distances from Globacom Node Bs located at several sites in the State. The test was carried out to investigate the effectiveness of the commonly used existing models for cellular transmission. The results were analysed in terms of Mean Square Error (MSE) and Standard Deviation (SD) and were simulated in MATLAB (7.5.0). The results show that the COST-231 Hata model gives better predictions and is therefore recommended for path loss prediction in Rivers State.
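The evaluation the paper describes can be sketched as computing model predictions at the drive-test distances and scoring them by MSE. The COST-231 Hata formula below is the standard small/medium-city form (nominally specified for 1500-2000 MHz, so its use at 2100 MHz is an extrapolation, as in the paper); the antenna heights and measured values are illustrative, not the study's data:

```python
import math

def cost231_hata(f_mhz, d_km, hb_m=30.0, hm_m=1.5, c_db=0.0):
    """COST-231 Hata path loss (dB), small/medium-city correction term.

    f_mhz: carrier frequency, d_km: link distance, hb_m/hm_m: base
    station and mobile antenna heights (illustrative defaults).
    """
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km)
            + c_db)

def mse(measured, predicted):
    # Mean square error between measured and predicted path loss
    return sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured)

# Hypothetical drive-test comparison at a few distances (values illustrative)
dists = [0.5, 1.0, 2.0]
measured = [120.0, 132.0, 145.0]
predicted = [cost231_hata(2100.0, d) for d in dists]
print(round(mse(measured, predicted), 1))
```

Repeating this scoring for each candidate model and ranking by MSE and SD is the comparison procedure the paper reports.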
Van Hung, Nguyen
2014-02-01
A pressure-dependent anharmonic correlated Einstein model is derived for extended X-ray absorption fine structure (EXAFS) Debye-Waller factors (DWFs), which are presented in terms of a cumulant expansion up to the third order. The model is based on quantum thermodynamic perturbation theory and includes anharmonic effects based on empirical potentials. Explicit analytical expressions of the pressure-dependent changes in the interatomic distance, anharmonic effective potential, thermodynamic parameters, first, second, and third EXAFS cumulants, and thermal expansion coefficient have been derived. This model avoids the use of extensive full lattice dynamical calculations, yet provides reasonable agreement of the numerical results for Cu with experimental results from X-ray diffraction (XRD) analysis and pressure-dependent EXAFS. Significant pressure effects are shown by the decrease in the pressure-induced changes in the interatomic distance, EXAFS cumulants and thermal expansion coefficient, as well as by the increase in the pressure-induced changes in the interatomic effective potential, effective spring constant, correlated Einstein frequency, and temperature.
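Models of this class are commonly built on an anharmonic effective pair potential of the standard correlated-Einstein form; the generic structure (a sketch, not the paper's specific pressure-dependent expressions) is:

```latex
% Anharmonic effective pair potential, with x = r - r_0 the deviation
% of the instantaneous bond length from its equilibrium value:
V_{\mathrm{eff}}(x) \approx \tfrac{1}{2}\, k_{\mathrm{eff}}\, x^{2} + k_{3}\, x^{3},
\qquad
\omega_{E} = \sqrt{k_{\mathrm{eff}}/\mu}, \qquad
\theta_{E} = \hbar\,\omega_{E}/k_{B},
% and the first three EXAFS cumulants:
\sigma^{(1)} = \langle x \rangle, \quad
\sigma^{2} = \bigl\langle (x - \langle x \rangle)^{2} \bigr\rangle, \quad
\sigma^{(3)} = \bigl\langle (x - \langle x \rangle)^{3} \bigr\rangle .
```

The pressure dependence then enters through the effective spring constant k_eff(P) and the cubic anharmonicity parameter k_3(P), which in turn shift the correlated Einstein frequency and the cumulants, as described in the abstract.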
Dedes, Christos; Ravanis, Konstantinos
2009-01-01
This research, carried out in Greece on pupils aged 12-16, focuses on the transformation of their representations concerning light emission and image formation by extended light sources. The instructional process was carried out in two stages, each with a distinct set of objectives. During the first stage, the appropriate conflict conditions were created by contrasting the subjects' predictions with the results of experimental situations inspired by the History of Science, with a view to destabilizing the pupils' alternative representations. During the second stage, the experimental teaching intervention was carried out; it was based on the geometrical optics model, and its parameters were derived from Kepler's relevant historic experiment. Throughout this process, and within the framework of didactical interactions, an effort was made to reorganize initial limited representations and restructure them at the level of the accepted scientific model. The effectiveness of the intervention was evaluated two weeks later, using experimental tasks which had the same cognitive yet different empirical content with respect to the tasks conducted during the intervention. The results of the study showed that the majority of the subjects accepted the model of geometrical optics; that is, the pupils were able to correctly predict and adequately justify the experimental results based on the principle of punctiform light emission. Educational and research implications are discussed.
Dabbakuti, J. R. K. Kumar; Venkata Ratnam, D.
2017-10-01
The Total Electron Content (TEC) is an essential quantity describing the temporal and spatial characteristics of the ionosphere. In this paper, an empirical orthogonal function (EOF) model is constructed using ground-based Global Navigation Satellite System (GNSS) TEC observation data at the Bangalore International GNSS Service (IGS) station (geographic 13.02° N, 77.57° E; geomagnetic latitude 4.4° N) during an extended period (2009-2016) in the 24th solar cycle. The EOF model decomposes the data into base functions and their corresponding coefficients. These modes well represent the influence of solar and geomagnetic activity on TEC. The first three EOF modes account for about 98% of the total variance of the observed data sets. Fourier Series Analysis (FSA) is carried out to characterize the solar-cycle, annual, and semi-annual dependences by modulating the first three EOF coefficients with solar (F10.7) and geomagnetic (Ap and Dst) indices. The TEC model is validated under daytime and nighttime conditions as well as under different solar-activity and geomagnetic conditions. A positive correlation (0.85) between averaged daily GPS-TEC and averaged daily F10.7 strongly supports the dependence of the time-varying characteristics of the ionosphere on solar activity. Further, the validity and reliability of the EOF model are verified by comparison with GPS-TEC data and with standard global ionospheric models (International Reference Ionosphere, IRI2016, and the Standard Plasmasphere-Ionosphere Model, SPIM). The standard ionospheric models are found to perform relatively better during High Solar Activity (HSA) periods than during Low Solar Activity (LSA) periods.
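An EOF decomposition of the kind described can be sketched as an SVD of the mean-removed data matrix. The synthetic TEC matrix below (days x hourly bins, with a diurnal shape and annual modulation) stands in for real GNSS-TEC observations; its shape and values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic TEC matrix: 365 days x 24 hourly bins (values illustrative).
hours = np.arange(24)
days = np.arange(365)
diurnal = 20 + 15 * np.sin(np.pi * hours / 24.0)          # daytime bulge
seasonal = 1.0 + 0.3 * np.sin(2 * np.pi * days / 365.0)   # annual modulation
tec = seasonal[:, None] * diurnal[None, :] + rng.normal(0, 1, (365, 24))

# EOF decomposition: remove the mean field, then take the SVD.
mean_field = tec.mean(axis=0)
anom = tec - mean_field
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eofs = vt                          # base functions (diurnal patterns)
coeffs = u * s                     # time-varying EOF coefficients
var_explained = s**2 / np.sum(s**2)

# The leading modes typically capture most of the variance, as in the
# study, where the first three modes explained ~98%.
print(np.round(var_explained[:3], 3))
```

The time-varying coefficients (`coeffs`) are then what would be regressed against F10.7, Ap, and Dst to capture the solar and geomagnetic dependences via Fourier series analysis.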