WorldWideScience

Sample records for models empirical models

  1. Empirical Vector Autoregressive Modeling

    NARCIS (Netherlands)

    M. Ooms (Marius)

    1993-01-01

    Chapter 2 introduces the baseline version of the VAR model, with its basic statistical assumptions that we examine in the sequel. We first check whether the variables in the VAR can be transformed to meet these assumptions. We analyze the univariate characteristics of the series. Import
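
    As a minimal illustration of such a baseline VAR, the sketch below simulates two stationary series and fits a VAR by information criterion; the series, lag settings, and use of statsmodels are illustrative assumptions, not details from the thesis.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Two hypothetical stationary series; in practice the variables are
        # first transformed (e.g. differenced) to meet the VAR assumptions.
        rng = np.random.default_rng(0)
        n = 200
        y = np.zeros((n, 2))
        e = rng.standard_normal((n, 2))
        for t in range(1, n):
            y[t, 0] = 0.5 * y[t - 1, 0] + 0.1 * y[t - 1, 1] + e[t, 0]
            y[t, 1] = 0.2 * y[t - 1, 0] + 0.4 * y[t - 1, 1] + e[t, 1]

        data = pd.DataFrame(y, columns=["series_a", "series_b"])
        result = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
        print(result.summary())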

  3. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  4. Empirically Based, Agent-based models

    Directory of Open Access Journals (Sweden)

    Elinor Ostrom

    2006-12-01

    There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.

  5. Developing Empirically Based Models of Practice.

    Science.gov (United States)

    Blythe, Betty J.; Briar, Scott

    1985-01-01

    Over the last decade emphasis has shifted from theoretically based models of practice to empirically based models whose elements are derived from clinical research. These models are defined and a developing model of practice through the use of single-case methodology is examined. Potential impediments to this new role are identified. (Author/BL)

  6. Model uncertainty in growth empirics

    NARCIS (Netherlands)

    Prüfer, P.

    2008-01-01

    This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high gro

  7. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model... competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1...
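
    A minimal sketch of the split described above: of N samples, Nγ go to validation and N(1-γ) to training, and repeating the split gives an empirical distribution of the generalization estimate. A linear least-squares model stands in for the neural network here, and γ and the resampling count are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(1)
        N, gamma, n_resamples = 200, 0.25, 50
        X = rng.standard_normal((N, 5))
        y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(N)

        val_errors = []
        for _ in range(n_resamples):
            idx = rng.permutation(N)
            n_val = int(gamma * N)                 # cross-validation set: N*gamma
            val, train = idx[:n_val], idx[n_val:]  # training set: N*(1 - gamma)
            w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            val_errors.append(np.mean((X[val] @ w - y[val]) ** 2))

        # Empirical distribution of the generalization-error estimate
        print(np.mean(val_errors), np.std(val_errors))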

  8. STUDY OF NEUROSES: III AN EMPIRICAL MODEL*

    OpenAIRE

    Bhatti, Ranbir S.; Channabasavanna, S.M.

    1986-01-01

    The empirical model presented in this paper is based on observations made on 60 neurotics and 60 normals matched at the individual level. Efforts are made to use the systems approach to present this paradigm, synthesising both individual and environmental resources. We are of the opinion that this model is not only useful in understanding the genesis of neuroses but also has utility at the intervention level.

  9. Empirical correction of a toy climate model

    CERN Document Server

    Allgaier, Nicholas A; Danforth, Christopher M

    2011-01-01

    Improving the accuracy of forecast models for physical systems such as the atmosphere is a crucial ongoing effort. Errors in state estimation for these often highly nonlinear systems have been the primary focus of recent research, but as that error has been successfully diminished, the role of model error in forecast uncertainty has duly increased. The present study is an investigation of a particular empirical correction procedure that is of special interest because it considers the model a "black box", and therefore can be applied widely with little modification. The procedure involves the comparison of short model forecasts with a reference "truth" system during a training period in order to calculate systematic (1) state-independent model bias and (2) state-dependent error patterns. An estimate of the likelihood of the latter error component is computed from the current state at every timestep of model integration. The effectiveness of this technique is explored in two experiments: (1) a perfect model scen...
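
    The two-part correction can be sketched as follows; the function name and the use of an SVD to extract a leading error pattern are illustrative assumptions, not the authors' exact procedure.

        import numpy as np

        def correction_terms(forecasts, truth):
            """Estimate correction terms from short forecasts paired with a
            reference 'truth' system over a training period.

            forecasts, truth: arrays of shape (n_samples, n_dims).
            """
            errors = truth - forecasts
            bias = errors.mean(axis=0)           # (1) state-independent bias
            anomalies = errors - bias
            # (2) leading state-dependent error pattern from an SVD
            _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
            return bias, vt[0]

        # At each timestep of model integration, the bias is added back and
        # the pattern is weighted by its estimated likelihood given the state.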

  10. A review of wildland fire spread modelling, 1990-present 2: Empirical and quasi-empirical models

    CERN Document Server

    Sullivan, A L

    2007-01-01

    In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of an empirical or quasi-empirical nature. These models are based solely on the statistical analysis of experimentally obtained data with or without some physical framework for the basis of the relations. Other papers in the series review models of a physical or quasi-physical nature, and mathematical analogues and simulation models. The main relations of empirical models are that of wind speed and fuel moisture content with rate of forward spread. Comparisons are made of the different functional relationships selected by various authors for these variables.

  11. An empirical behavioral model of price formation

    CERN Document Server

    Mike, S

    2005-01-01

    Although behavioral economics has demonstrated that there are many situations where rational choice is a poor empirical model, it has so far failed to provide quantitative models of economic problems such as price formation. We make a step in this direction by developing empirical models that capture behavioral regularities in trading order placement and cancellation using data from the London Stock Exchange. For order placement we show that the probability of placing an order at a given price is well approximated by a Student distribution with less than two degrees of freedom, centered on the best quoted price. This result is surprising because it implies that trading order placement is symmetric, independent of the bid-ask spread, and the same for buying and selling. We also develop a crude but simple cancellation model that depends on the position of an order relative to the best price and the imbalance between buying and selling orders in the limit order book. These results are combined to construct a sto...
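
    The distributional claim can be sketched as below; only the Student shape with fewer than two degrees of freedom centered on the best quote comes from the abstract, while the scale, sample size, and placeholder best quote are invented.

        import numpy as np
        from scipy import stats

        best_quote = 100.0   # placeholder best quoted price
        nu = 1.5             # df < 2: infinite variance, heavy tails

        # Simulate order-placement prices around the best quote ...
        offsets = stats.t.rvs(df=nu, loc=0.0, scale=0.5, size=10_000,
                              random_state=0)
        prices = best_quote + offsets

        # ... and recover the shape by maximum likelihood
        df_hat, loc_hat, scale_hat = stats.t.fit(offsets)
        print(df_hat, loc_hat, scale_hat)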

  12. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software. The credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction and, as the term indicates, a prediction never becomes the actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  13. Empirical data validation for model building

    Science.gov (United States)

    Kazarian, Aram

    2008-03-01

    Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g. illumination source, wavelength, lens characteristics, etc.) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining
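
    The replicate-averaging and flyer-screening step can be sketched generically; the 3-sigma rule and the function name are assumptions for illustration, not the paper's procedure.

        import numpy as np

        def clean_cd_data(replicates, n_sigma=3.0):
            """Average replicated CD measurements and flag flyers/outliers.

            replicates: array (n_sites, n_repeats) of CD readings taken on
            the same test feature; averaging statistically reduces noise.
            """
            mean = replicates.mean(axis=1, keepdims=True)
            std = replicates.std(axis=1, keepdims=True)
            flyers = np.abs(replicates - mean) > n_sigma * std
            cleaned = np.where(flyers, np.nan, replicates)
            return np.nanmean(cleaned, axis=1), flyers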

  14. Semi-empirical model of solar plages

    Institute of Scientific and Technical Information of China (English)

    FANG, Cheng

    2001-01-01

    [1] Zirin, H., Astrophysics of the Sun, Chapter 7, Cambridge: Cambridge University Press, 1988. [2] Shine, R. A., Linsky, J. L., Physical properties of solar chromospheric plages II. Chromospheric plage models, Solar Phys., 1974, 39: 49. [3] Kelch, W. L., Linsky, J. L., Physical properties of solar chromospheric plages III. Models based on CaII and MgII observations, Solar Phys., 1978, 58: 37. [4] Lemaire, P., Gouttebroze, J. C., Vial, J. C. et al., Physical properties of the solar chromosphere deduced from optically thick lines, A & A, 1981, 103: 160. [5] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region II. Effects of pressure and energy input on hydrostatic models, ApJ, 1991, 377: 712. [6] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region III. Helium emission in hydrostatic, constant-abundance models with diffusion, ApJ, 1993, 406: 319. [7] Pierce, A. K., Slaughter, C., Solar limb darkening I: λλ3033-7297 Å, Solar Phys., 1977, 51: 25. [8] Pierce, A. K., Slaughter, C., Weinberger, D., Solar limb darkening in the interval 7404-24018 Å, II, Solar Phys., 1977, 52: 179. [9] Neckel, H., Labs, D., The solar radiation between 3300 and 12500 Å, Solar Phys., 1984, 90: 205. [10] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere I. Basic computations and summary of the results, ApJ, 1973, 184: 605. [11] Mihalas, D., Stellar Atmospheres, San Francisco: W. H. Freeman and Company, 1978. [12] Fang, C., Hénoux, J.-C., Self-consistent model of flare heated solar chromosphere, A & A, 1983, 118: 139. [13] Ding, M. D., Fang, C., A semi-empirical model of sunspot penumbra, A & A, 1989, 225: 204. [14] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere III. Models of the EUV brightness components of the quiet Sun, ApJ Suppl., 1981, 45: 635. [15] Canfield, R. C., Athey, R

  15. An empirical model of tropical ocean dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Matthew; Scott, James D. [University of Colorado, CIRES Climate Diagnostics Center, Boulder, CO (United States); NOAA Earth System Research Laboratory, Physical Sciences Division, Boulder, CO (United States); Alexander, Michael A. [NOAA Earth System Research Laboratory, Physical Sciences Division, Boulder, CO (United States)

    2011-11-15

    To extend the linear stochastically forced paradigm of tropical sea surface temperature (SST) variability to the subsurface ocean, a linear inverse model (LIM) is constructed from the simultaneous and 3-month lag covariances of observed 3-month running mean anomalies of SST, thermocline depth, and zonal wind stress. This LIM is then used to identify the empirically-determined linear dynamics with physical processes to gauge their relative importance to ENSO evolution. Optimal growth of SST anomalies over several months is triggered by both an initial SST anomaly and a central equatorial Pacific thermocline anomaly that propagates slowly eastward while leading the amplifying SST anomaly. The initial SST and thermocline anomalies each produce roughly half the SST amplification. If interactions between the sea surface and the thermocline are removed in the linear dynamical operator, the SST anomaly undergoes less optimal growth but is also more persistent, and its location shifts from the eastern to central Pacific. Optimal growth is also found to be essentially the result of two stable eigenmodes with similar structure but differing 2- and 4-year periods evolving from initial destructive to constructive interference. Variations among ENSO events could then be a consequence not of changing stability characteristics but of random excitation of these two eigenmodes, which represent different balances between surface and subsurface coupled dynamics. As found in previous studies, the impact of the additional variables on LIM SST forecasts is relatively small for short time scales. Over time intervals greater than about 9 months, however, the additional variables both significantly enhance forecast skill and predict lag covariances and associated power spectra whose closer agreement with observations enhances the validation of the linear model. Moreover, a secondary type of optimal growth exists that is not present in a LIM constructed from SST alone, in which initial SST
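
    The standard LIM construction from the simultaneous and lag covariances, G(τ) = C(τ)C(0)⁻¹ with L = ln G(τ)/τ, can be sketched as below; the state vector here is generic, not the paper's SST/thermocline/wind-stress fields.

        import numpy as np
        from scipy.linalg import logm

        def lim_operator(anomalies, lag):
            """Estimate the linear dynamical operator L of a LIM.

            anomalies: array (n_times, n_vars) of running-mean anomalies,
            using C(tau) C(0)^-1 = G(tau) = exp(L * tau).
            """
            x0 = anomalies[:-lag]
            xtau = anomalies[lag:]
            c0 = x0.T @ x0 / len(x0)       # simultaneous covariance C(0)
            ctau = xtau.T @ x0 / len(x0)   # lag covariance C(tau)
            g = ctau @ np.linalg.inv(c0)   # Green's function G(tau)
            return logm(g) / lag           # L = ln(G) / tau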

  16. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    Nicholas T Ouellette

    2015-03-01

    The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, particularly for groups that do not display net motion. Inspired by recent measurements of swarming insects, which are not well described by the classical modelling paradigm, I pose a set of questions that must be answered by any collective-behaviour model. By explicitly stating the choices made in response to each of these questions, models can be more easily categorized and compared, and their expected range of validity can be clarified.

  17. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper,the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  18. An Empirical Investigation into a Subsidiary Absorptive Capacity Process Model

    DEFF Research Database (Denmark)

    Schleimer, Stephanie; Pedersen, Torben

    2011-01-01

    We develop and empirically test a process model of absorptive capacity. The setting of our empirical study is 213 subsidiaries of multinational enterprises and the focus is on the capacity of these subsidiaries to successfully absorb best practices in marketing strategy from their headquarters. This setting allows us...

  19. Bibliometric Modeling Processes and the Empirical Validity of Lotka's Law.

    Science.gov (United States)

    Nicholls, Paul Travis

    1989-01-01

    Examines the elements involved in fitting a bibliometric model to empirical data, proposes a consistent methodology for applying Lotka's law, and presents the results of an empirical test of the methodology. The results are discussed in terms of the validity of Lotka's law and the suitability of the proposed methodology. (49 references) (CLB)

  20. Quality Management in Hospital Departments : Empirical Studies of Organisational Models

    OpenAIRE

    Kunkel, Stefan

    2008-01-01

    The general aim of this thesis was to empirically explore the organisational characteristics of quality systems of hospital departments, to develop and empirically test models for the organisation and implementation of quality systems, and to discuss the clinical implications of the findings. Data were collected from hospital departments through interviews (n=19) and a nation-wide survey (n=386). The interviews were analysed thematically and organisational models were developed. Relationships...

  1. Low Order Empirical Galerkin Models for Feedback Flow Control

    Science.gov (United States)

    Tadmor, Gilead; Noack, Bernd

    2005-11-01

    Restrictions on model order and complexity in model-based feedback control stem from several generic considerations: real-time computation, the ability to either measure or reliably estimate the state in real time, and avoiding sensitivity to noise, uncertainty and numerical ill-conditioning are high on that list. Empirical POD Galerkin models are attractive in the sense that they are simple and (optimally) efficient, but are notoriously fragile, and commonly fail to capture transients and control effects. In this talk we review recent efforts to enhance empirical Galerkin models and make them suitable for feedback design. Enablers include 'subgrid' estimation of turbulence and pressure representations, tunable models using modes from multiple operating points, and actuation models. An invariant manifold defines the model's dynamic envelope. It must be respected and can be exploited in observer and control design. These ideas are benchmarked in the cylinder wake system and validated by a systematic DNS investigation of a 3-dimensional Galerkin model of the controlled wake.

  2. Empirical Bayes Model Comparisons for Differential Methylation Analysis

    Directory of Open Access Journals (Sweden)

    Mingxiang Teng

    2012-01-01

    A number of empirical Bayes models (each with different statistical distribution assumptions) have now been developed to analyze differential DNA methylation using high-density oligonucleotide tiling arrays. However, it remains unclear which model performs best. For example, for analysis of differentially methylated regions for conservative and functional sequence characteristics (e.g., enrichment of transcription factor-binding sites (TFBSs)), the sensitivity of such analyses, using various empirical Bayes models, remains unclear. In this paper, five empirical Bayes models were constructed, based on either a gamma distribution or a log-normal distribution, for the identification of differentially methylated loci and their cell-division- (1, 3, and 5) and drug-treatment- (cisplatin) dependent methylation patterns. While differential methylation patterns generated by log-normal models were enriched with numerous TFBSs, we observed almost no TFBS-enriched sequences using gamma assumption models. Statistical and biological results suggest the log-normal, rather than gamma, empirical Bayes model distribution to be a highly accurate and precise method for differential methylation microarray analysis. In addition, we presented one of the log-normal models for differential methylation analysis and tested its reproducibility by simulation study. We believe this research to be the first extensive comparison of statistical modeling for the analysis of differential DNA methylation, an important biological phenomenon that precisely regulates gene transcription.

  3. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  4. An Empirically Grounded Model of the Adoption of Intellectual Technologies.

    Science.gov (United States)

    Wildemuth, Barbara M.

    1992-01-01

    Data on adoption of 43 user-developed computing applications in 3 large corporations were analyzed to develop an empirically grounded model of the adoption process for intellectual technologies. A five-stage model consisting of Resource Acquisition, Application Development, Adoption/Renewal, Routinization/Enhancement, and External Adoption was…

  5. Learning-Testing Process in Classroom: An Empirical Simulation Model

    Science.gov (United States)

    Buda, Rodolphe

    2009-01-01

    This paper presents an empirical micro-simulation model of the teaching and the testing process in the classroom (Programs and sample data are available--the actual names of pupils have been hidden). It is a non-econometric micro-simulation model describing informational behaviors of the pupils, based on the observation of the pupils'…

  6. Empirical model for mineralisation of manure nitrogen in soil

    DEFF Research Database (Denmark)

    Sørensen, Peter; Thomsen, Ingrid Kaag; Schröder, Jaap

    2017-01-01

    A simple empirical model was developed for estimation of net mineralisation of pig and cattle slurry nitrogen (N) in arable soils under cool and moist climate conditions during the initial 5 years after spring application. The model is based on a Danish 3-year field experiment with measurements...

  7. An Empirical-Mathematical Modelling Approach to Upper Secondary Physics

    Science.gov (United States)

    Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein

    2008-01-01

    In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…

  8. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation

    NARCIS (Netherlands)

    M. Caporin (Massimiliano); M.J. McAleer (Michael)

    2011-01-01

    In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models, name

  9. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models, and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.

  10. Empirical Modeling of Metal Oxides Dissolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seon-Byeong; Won, Hui-Jun; Park, Sang-Yoon; Moon, Jei-Kwon; Choi, Wang-Kyu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Numerous studies have examined the dissolution of metal oxides in terms of dissolution kinetics, type of reactants, geometry, etc. However, most previous studies observe macroscopic dissolution characteristics and might not provide the atomic-scale characteristics of the dissolution reactions. Even analysis of the microscopic structure of the metal oxide with SEM, XRD, etc. during dissolution does not reveal the microscopic characteristics of the dissolution mechanism. Computational analysis with a well-established dissolution model is one of the best approaches to indirectly understand the microscopic dissolution behaviour. Various experimental designs are applied to the in-vitro methods for interpreting the dissolution characteristics controlled by each influencing parameter.

  11. An empirical investigation of two competing models of patient satisfaction.

    Science.gov (United States)

    Mishra, D P; Singh, J; Wood, V

    1991-01-01

    This paper empirically examines two competing models of patient satisfaction. Specifically, a five factor SERVQUAL model proposed by Parasuraman et al. (1988) and a tripartite model posited by Smith, Bloom, and Davis (1986) are examined. The two models are tested via factor analysis based on data collected from a field survey of hospital patients. The results of this study indicate that the five dimensional SERVQUAL model is not supported by data. On the other hand, there is general support for the tripartite model. Implications of our results for health care practitioners and researchers are discussed. Future directions for research are also outlined.

  12. Bankruptcy risk model and empirical tests.

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M; Urosevic, Branko; Stanley, H Eugene

    2010-10-26

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor--the debt-to-asset ratio R--in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes's theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees--although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers.
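
    The rank-size (Zipf) exponent at the core of the analysis can be estimated with a simple log-log regression; this estimator and the function name are illustrative, and the authors' exact fitting procedure may differ.

        import numpy as np

        def zipf_exponent(values):
            """Estimate a Zipf (rank-size) exponent by log-log regression.

            values: positive quantities, e.g. debt-to-asset ratios R of
            bankrupt firms; a larger exponent means bankruptcies occur
            at larger R.
            """
            ranked = np.sort(np.asarray(values, dtype=float))[::-1]
            ranks = np.arange(1, ranked.size + 1)
            slope, _ = np.polyfit(np.log(ranks), np.log(ranked), 1)
            return -slope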

  13. A Development of Empirical Models for Equipment Condition Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Song Kyu; Baik, Se Jin [KEPCO Engineering and Construction Company, Daejeon (Korea, Republic of); An, Sang Ha [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2010-10-15

    A great deal of effort has recently been put into on-line monitoring (OLM), especially using empirical models to detect component faults earlier or to reduce/extend instrument calibration intervals. An empirical model is constructed from historical data obtained during operation and mainly relies on regression techniques. Various models are used in OLM, and the role of the models is to describe the relations among the signals that have been collected. The ultimate goal of empirical models is to estimate a parameter as close to its actual value as possible, as soon as possible. Typically some of the historical data are used for model training, and some data are used for verification and assessment of model performance. Several different models for OLM of nuclear power systems are currently being used. Examples include the ANL Multivariate State Estimation Technique (MSET) used in the EPI center of SmartSignal, the expert state estimation engine (ESEE) used in the SureSense software of Expert Microsystems, Process Evaluation and Analysis by Neural Operators (PEANO) of the OECD Halden Reactor Project, and the linear regression model used in the RCP seal integrity monitoring system (SIMON) of KEPCO E&C.

  14. Continuity of the robustness of contextuality of empirical models

    Science.gov (United States)

    Meng, HuiXian; Cao, HuaiXin; Wang, WenHua; Chen, Liang; Fan, Yajing

    2016-10-01

    Recently, the robustness of contextuality (RoC) of an empirical model was discussed in [Sci. China-Phys. Mech. Astron. 59, 640303 (2016)], where many important properties of the RoC were proved, except for its boundedness and continuity. The aim of this paper is to find an upper bound for the RoC over all empirical models and to prove that the RoC is a continuous function on the set of all empirical models. Lastly, a relationship between the RoC and the extent of violation of the noncontextual inequalities is established for an n-cycle contextual box. This relationship implies that the RoC can be used to quantify the contextuality of n-cycle boxes.

  15. Comparison of modelled and empirical atmospheric propagation data

    Science.gov (United States)

    Schott, J. R.; Biegel, J. D.

    1983-01-01

    The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.

  16. An empirical model for friction in cold forging

    DEFF Research Database (Denmark)

    Bay, Niels; Eriksen, Morten; Tan, Xincai

    2002-01-01

    With a system of simulative tribology tests for cold forging the friction stress for aluminum, steel and stainless steel provided with typical lubricants for cold forging has been determined for varying normal pressure, surface expansion, sliding length and tool/work piece interface temperature...... of normal pressure and tool/work piece interface temperature. The model is verified by process testing measuring friction at varying reductions in cold forward rod extrusion. KEY WORDS: empirical friction model, cold forging, simulative friction tests....

  17. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    2014-08-01

    ...into the ram direction, and m is the satellite mass. The velocity vector equals the satellite velocity in the corotating Earth frame... drag force. In a trade study we have investigated a methodology to assess the performance of neutral density models in predicting orbits against a... assess overall errors in orbit prediction expected from empirical density models. They have also been adapted in an analysis tool Satellite Orbital

  18. Empirical modelling for the conceptual design and use of products

    OpenAIRE

    Roe, Chris P.; Beynon, Meurig; Fischer, Carlos N

    2001-01-01

    The process of designing an engineering product usually involves only superficial interaction on the part of the user during the design. This often leads to the product being unsuitable for its target community. In this paper, we describe an approach called Empirical Modelling that emphasises interaction and experiment throughout the construction of a model that we believe has benefits in respect of usability. We use a case study in digital watch design to illustrate our approach and our ideas.

  19. Models of social entrepreneurship: empirical evidence from Mexico

    OpenAIRE

    Wulleman, Marine; Hudon, Marek

    2015-01-01

    This paper seeks to improve the understanding of social entrepreneurship models based on empirical evidence from Mexico, where social entrepreneurship is currently booming. It aims to supplement existing typologies of social entrepreneurship models. To that end, building on Zahra et al.'s (2009) typology, it begins by providing a new framework classifying the three types of social entrepreneurship. A comparative case study of ten Mexican social enterprises is then elaborated using that framework...

  20. Bayesian model reduction and empirical Bayes for group (DCM) studies.

    Science.gov (United States)

    Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter

    2016-03-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.

  1. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling

    2015-12-01

    Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility by the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
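
    A sketch of the gravity p-median objective under a Huff-type choice rule is shown below; the exponential distance decay and the parameter λ are assumptions for illustration, since the abstract does not fix them.

        import numpy as np

        def gravity_p_median_cost(dist, demand, attract, open_sites, lam=0.1):
            """Expected travel distance when customers gravitate to facilities.

            dist: (n_customers, n_sites) distances; demand: customer weights;
            attract: facility attractiveness; open_sites: indices of the p
            open facilities. Unlike the p-median model, customers split
            probabilistically over facilities by distance and attractiveness.
            """
            d = dist[:, open_sites]
            w = attract[open_sites] * np.exp(-lam * d)   # gravity weights
            p = w / w.sum(axis=1, keepdims=True)         # choice probabilities
            return float(demand @ (p * d).sum(axis=1))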

  2. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved...... the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which...... variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  3. Empirically derived neighbourhood rules for urban land-use modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2012-01-01

    Interaction between neighbouring land uses is an important component in urban cellular automata. Nevertheless, this component is often calibrated through trial-and-error estimation. The aim of this project has been to develop an empirically derived landscape metric supporting cellular-automata-based land-use modelling. Through access to very detailed urban land-use data it has been possible to derive neighbourhood rules empirically, and test their sensitivity to the land-use classification applied, the regional variability of the rules, and their time variance. The developed methodology can be implemented...

  4. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed the IT infrastructure capability in a holistic way and then presented a concept model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  5. Transdiagnostic models of anxiety disorder: Theoretical and empirical underpinnings.

    Science.gov (United States)

    Norton, Peter J; Paulus, Daniel J

    2017-08-01

    Despite the increasing development, evaluation, and adoption of transdiagnostic cognitive behavioral therapies, relatively little has been written to detail the conceptual and empirical psychopathology framework underlying transdiagnostic models of anxiety and related disorders. In this review, the diagnostic, genetic, neurobiological, developmental, behavioral, cognitive, and interventional data underlying the model are described, with an emphasis on highlighting elements that both support and contradict transdiagnostic conceptualizations. Finally, a transdiagnostic model of anxiety disorder is presented and key areas of future evaluation and refinement are discussed.

  6. Application Study of Empirical Model and Xiaohuajian Flood Forecasting Model in the Middle Yellow River

    Science.gov (United States)

    Hu, Caihong

    2013-04-01

    The Xiaolangdi-Huayuankou region, with a drainage area of 35,883 km2, is an important rainstorm centre in the middle Yellow River. A set of forecasting methods applied in this region was formed through years of practice. The Xiaohuajian flood forecasting model and an empirical model are introduced in this paper. The processes simulated by the Xiaohuajian flood forecasting model include evapotranspiration, infiltration, runoff, and river flow. Infiltration and surface runoff are calculated using the Horton model for infiltration into multilayered soil profiles. Overland flow is routed by the Nash instantaneous unit hydrograph and the Section Muskingum method (sketched below). The empirical model uses the P~Pa~R empirical relation approach for runoff generation and concentration. The structures of these two models were analyzed and compared in detail. The Yihe river basin, located in the Xiaolangdi-Huayuankou region, was selected for the study. The results show that the accuracy of the two methods is similar; the Xiaohuajian flood forecasting model is relatively more accurate, especially for the flood process, while the empirical method is less accurate but still acceptable. The two models are both practicable and can be combined in application: the result of the Xiaohuajian flood forecasting model can be used to guide reservoirs for flood control, and the result of the empirical method can serve as a reference.
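
    The Muskingum routing step can be sketched generically; K, x, and the time step are placeholders, not calibrated values from the study.

        import numpy as np

        def muskingum_route(inflow, K=12.0, x=0.2, dt=1.0):
            """Route an inflow hydrograph through a reach (Muskingum method).

            K: storage constant (same time units as dt); x: weighting factor.
            """
            denom = 2 * K * (1 - x) + dt
            c0 = (dt - 2 * K * x) / denom
            c1 = (dt + 2 * K * x) / denom
            c2 = (2 * K * (1 - x) - dt) / denom
            outflow = np.empty_like(inflow, dtype=float)
            outflow[0] = inflow[0]
            for t in range(1, len(inflow)):
                outflow[t] = (c0 * inflow[t] + c1 * inflow[t - 1]
                              + c2 * outflow[t - 1])
            return outflow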

  7. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    Science.gov (United States)

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects, and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.

  8. An empirical model to estimate ultraviolet erythemal transmissivity

    Science.gov (United States)

    Antón, M.; Serrano, A.; Cancillo, M. L.; García, J. A.

    2009-04-01

    An empirical model to estimate the solar ultraviolet erythemal irradiance (UVER) for all-weather conditions is presented. This model proposes a power expression with the UV transmissivity as a dependent variable, and the slant ozone column and the clearness index as independent variables. The UVER were measured at three stations in South-Western Spain during a five year period (2001-2005). A dataset corresponding to the period 2001-2004 was used to develop the model and an independent dataset (year 2005) for validation purposes. For all three locations, the empirical model explains more than 95% of UV transmissivity variability due to changes in the two independent variables. In addition, the coefficients of the models show that when the slant ozone amount decreases 1%, UV transmissivity and, therefore, UVER values increase by approximately 1.33%-1.35%. The coefficients also show that when the clearness index decreases 1%, UV transmissivity increases by 0.75%-0.78%. The validation of the model provided satisfactory results, with low mean absolute bias error (MABE), about 7%-8% for all stations. Finally, a one-day ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence in the total ozone column. The percentage of days with differences between forecast and experimental UVI lower than ±0.5 unit and ±1 unit is within the range of 28% to 37%, and 60% to 75%, respectively. Therefore, the empirical model proposed in this work provides reliable forecasts of cloud-free UVI in order to inform the public about the possible harmful effects of UV radiation over-exposure.
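
    The power expression can be fit on its log-linearized form by ordinary least squares; a minimal sketch with illustrative variable names:

        import numpy as np

        def fit_power_model(slant_ozone, clearness, transmissivity):
            """Fit T = a * slant_ozone**b * clearness**c via
            ln T = ln a + b*ln(slant_ozone) + c*ln(clearness)."""
            X = np.column_stack([np.ones_like(transmissivity),
                                 np.log(slant_ozone), np.log(clearness)])
            coef, *_ = np.linalg.lstsq(X, np.log(transmissivity), rcond=None)
            return np.exp(coef[0]), coef[1], coef[2]   # a, b, c

    The fitted exponents then read directly as the elasticities quoted above: a 1% change in slant ozone or clearness index maps to roughly a b% or c% change in transmissivity.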

  10. Developing an Empirical Model for Jet-Surface Interaction Noise

    Science.gov (United States)

    Brown, Clifford A.

    2014-01-01

    The process of developing an empirical model for jet-surface interaction noise is described and the resulting model evaluated. Jet-surface interaction noise is generated when the high-speed engine exhaust from modern tightly integrated or conventional high-bypass ratio engine aircraft strikes or flows over the airframe surfaces. An empirical model based on an existing experimental database is developed for use in preliminary design system level studies where computation speed and range of configurations is valued over absolute accuracy to select the most promising (or eliminate the worst) possible designs. The model developed assumes that the jet-surface interaction noise spectra can be separated from the jet mixing noise and described as a parabolic function with three coefficients: peak amplitude, spectral width, and peak frequency. These coefficients are fit to functions of surface length and distance from the jet lipline to form a characteristic spectra which is then adjusted for changes in jet velocity and/or observer angle using scaling laws from published theoretical and experimental work. The resulting model is then evaluated for its ability to reproduce the characteristic spectra and then for reproducing spectra measured at other jet velocities and observer angles; successes and limitations are discussed considering the complexity of the jet-surface interaction noise versus the desire for a model that is simple to implement and quick to execute.
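
    The three-coefficient spectrum can be written generically as below; the log-frequency parabola is an assumed concrete form consistent with the description, and the paper's exact parameterization may differ.

        import numpy as np

        def jsi_spectrum_db(freq_hz, peak_db, width, peak_freq_hz):
            """Parabolic spectrum model with three coefficients: peak
            amplitude, spectral width, and peak frequency; the level falls
            off quadratically in log-frequency away from the peak."""
            return peak_db - width * np.log10(freq_hz / peak_freq_hz) ** 2

    The characteristic spectrum produced this way is then rescaled for jet velocity and observer angle, as the abstract describes.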

  11. Testing a new Free Core Nutation empirical model

    Science.gov (United States)

    Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald

    2016-03-01

    The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
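
    The sliding-window estimation can be sketched as follows; a conventional retrograde FCN period near -430 days is assumed, and the per-window least-squares fit is reduced to a single complex amplitude for brevity.

        import numpy as np

        def fcn_sliding_fit(t_days, dx, dy, period=-430.2,
                            window=400.0, step=1.0):
            """Time-variable FCN amplitude and phase at a fixed period.

            t_days: observation epochs; dx, dy: observed celestial pole
            offsets. Each 400-day window, shifted day by day, yields one
            complex amplitude by least squares.
            """
            z = dx + 1j * dy
            omega = 2 * np.pi / period
            start, out = t_days.min(), []
            while start + window <= t_days.max():
                m = (t_days >= start) & (t_days < start + window)
                basis = np.exp(1j * omega * t_days[m])
                amp = np.vdot(basis, z[m]) / np.vdot(basis, basis)
                out.append((start + window / 2, np.abs(amp), np.angle(amp)))
                start += step
            return np.array(out)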

  12. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
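
    The POD stage can be sketched via an SVD of mean-subtracted snapshots; this is a generic sketch, and the empirical balanced truncation and ERA stages described above are not shown.

        import numpy as np

        def pod_basis(snapshots, r):
            """Proper orthogonal decomposition of flow snapshots.

            snapshots: array (n_dof, n_snapshots), one flow state per
            column. Returns the r leading POD modes and their relative
            energy fractions.
            """
            mean = snapshots.mean(axis=1, keepdims=True)
            u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            return u[:, :r], (s[:r] ** 2) / (s ** 2).sum()

        # Galerkin projection: reduced coordinates a(t) = modes.T @ (x - mean)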

  13. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Especially variables capturing the small scale neighbourhood conditions are hard to find. If there are important explanatory variables missing from the model, the omitted variables are spatially autocorrelated and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki in Finland, we find the motivation for a spatial Durbin model, we estimate the model and interpret the estimates for the summary measures of impacts. By the analysis we show that the model structure makes it possible to model and find small scale neighbourhood effects, when we know that they exist, but we are lacking proper variables to measure them.

  14. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable, power laws and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power laws model while remaining practical to implement for VaR measurement.
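
    A simplified two-stage sketch of the hybrid idea: a rolling-volatility threshold stands in for the fitted HMM, and scipy's generalized Pareto distribution supplies the EVT tail via peaks over threshold; all thresholds are illustrative.

        import numpy as np
        from scipy import stats

        def regime_var(returns, window=50, q=0.99):
            """Classify crude regimes, then fit loss tails per regime."""
            losses = -np.asarray(returns, dtype=float)
            vol = np.array([losses[max(0, i - window):i + 1].std()
                            for i in range(losses.size)])
            crisis = vol > np.median(vol)   # crude stand-in for HMM states
            var = {}
            for name, mask in (("steady", ~crisis), ("crisis", crisis)):
                x = losses[mask]
                u = np.quantile(x, 0.90)    # high threshold for POT
                exc = x[x > u] - u
                c, _, scale = stats.genpareto.fit(exc, floc=0.0)
                # Solve P(loss > VaR) = 0.10 * (1 - F_GPD(VaR - u)) = 1 - q
                var[name] = u + stats.genpareto.ppf(1 - (1 - q) / 0.10,
                                                    c, loc=0.0, scale=scale)
            return var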

  15. Empirical Modeling of Plant Gas Fluxes in Controlled Environments

    Science.gov (United States)

    Cornett, Jessie David

    1994-01-01

    As humans extend their reach beyond the Earth, bioregenerative life support systems must replace the resupply and physical/chemical systems now used. The Controlled Ecological Life Support System (CELSS) will utilize plants to recycle the carbon dioxide (CO2) and excrement produced by humans and return oxygen (O2), purified water and food. CELSS design requires knowledge of gas flux levels for net photosynthesis (PSn), dark respiration (Rd) and evapotranspiration (ET). Full-season gas flux data for these processes for wheat (Triticum aestivum), soybean (Glycine max) and rice (Oryza sativa) from published sources were used to develop empirical models. Univariate models relating crop age (days after planting) to gas flux were fit by simple regression. The models are either high-order (5th to 8th) or more complex polynomials whose curves describe crop development characteristics. The models provide good estimates of gas flux maxima but are of limited utility. To broaden their applicability, the data were transformed to dimensionless or correlation formats and again fit by regression. Polynomials similar to those in the initial effort were selected as the most appropriate models. These models indicate that, within a cultivar, gas flux patterns appear remarkably similar prior to maximum flux but exhibit considerable variation beyond this point. This suggests that more broadly applicable models of plant gas flux are feasible, but univariate models defining gas flux as a function of crop age are too simplistic. Multivariate models using CO2 and crop age were fit for PSn and Rd by multiple regression. In each case, the selected model is a subset of a full third-order model with all possible interactions. These models are improvements over the univariate models because they incorporate more than the single factor, crop age, as the primary variable governing gas flux. They are still limited, however, by their reliance on the other environmental…

  16. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    Science.gov (United States)

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  17. Empirical Study and Model of User Acceptance for Personalized Recommendation

    Directory of Open Access Journals (Sweden)

    Zheng Hua

    2013-02-01

    Full Text Available Personalized recommendation technology plays an important role in current e-commerce systems, but users' willingness to accept personalized recommendations, and its influencing factors, need to be studied. In this study, the Theory of Reasoned Action (TRA) and the Technology Acceptance Model (TAM) are used to construct a user acceptance model of personalized recommendation, which is then tested empirically. The results show that perceived usefulness, perceived ease of use, subjective norms and trust had an impact on acceptance of personalized recommendation.

  18. Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling.

    Science.gov (United States)

    Ye, Hao; Beamish, Richard J; Glaser, Sarah M; Grant, Sue C H; Hsieh, Chih-Hao; Richards, Laura J; Schnute, Jon T; Sugihara, George

    2015-03-31

    It is well known that current equilibrium-based models fall short as predictive descriptions of natural ecosystems, and particularly of fisheries systems that exhibit nonlinear dynamics. For example, model parameters assumed to be fixed constants may actually vary in time, models may fit well to existing data but lack out-of-sample predictive skill, and key driving variables may be misidentified due to transient (mirage) correlations that are common in nonlinear systems. With these frailties, it is somewhat surprising that static equilibrium models continue to be widely used. Here, we examine empirical dynamic modeling (EDM) as an alternative to imposed model equations, one that accommodates both nonequilibrium dynamics and nonlinearity. Using time series from nine stocks of sockeye salmon (Oncorhynchus nerka) from the Fraser River system in British Columbia, Canada, we perform, for the first time to our knowledge, a real-data comparison of contemporary fisheries models with equivalent EDM formulations that explicitly use spawning stock and environmental variables to forecast recruitment. We find that EDM models produce more accurate and precise forecasts, and unlike extensions of the classic Ricker spawner-recruit equation, they show significant improvements when environmental factors are included. Our analysis demonstrates the strategic utility of EDM for incorporating environmental influences into fisheries forecasts and, more generally, for providing insight into how environmental factors can operate in forecast models, thus paving the way for equation-free mechanistic forecasting to be applied in management contexts.
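    A minimal sketch of one core EDM building block, simplex projection, is given below; the embedding dimension, weighting scheme, and data are illustrative assumptions rather than the authors' implementation.

```python
# Simplex projection sketch: forecast one step ahead from the E+1
# nearest neighbors in a time-delay embedding of the series.
import numpy as np

def simplex_forecast(series: np.ndarray, E: int = 3, tau: int = 1):
    # Delay embedding: rows are state vectors [x_t, x_{t-tau}, ...]
    idx = np.arange((E - 1) * tau, len(series) - 1)
    emb = np.column_stack([series[idx - j * tau] for j in range(E)])
    target = series[idx + 1]                      # one-step-ahead values
    query = emb[-1]                               # predict from the last state
    lib, lib_y = emb[:-1], target[:-1]
    d = np.linalg.norm(lib - query, axis=1)
    nn = np.argsort(d)[: E + 1]                   # E+1 nearest neighbors
    w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))  # exponential distance weights
    return np.sum(w * lib_y[nn]) / np.sum(w)

x = np.sin(np.linspace(0, 30, 300)) + 0.05 * np.random.randn(300)
print(simplex_forecast(x))
```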

  19. Empirical modeling of the location of the Earth's magnetopause

    Science.gov (United States)

    Machková, Anna; Nemec, Frantisek; Nemecek, Zdenek; Safrankova, Jana

    2016-04-01

    We systematically examine the location of the magnetopause using a database of 16800 magnetopause crossings registered by 8 different satellites. The analysis is limited to the best-sampled region near the subsolar point. We analyze the influence of the Dst and corrected Dst* indices, solar wind flow speed, and the eccentricity of the terrestrial magnetic dipole, i.e., parameters typically not considered in previous empirical models. The effects on the magnetopause location are investigated by comparing the observed and model magnetopause distances. We show that the magnetopause distance increases with decreasing Dst index, which can likely be linked to the increasing magnetic field magnitude at the magnetopause due to the enhanced ring current. The magnetopause distance is also larger at times of higher solar wind flow speed, in particular during high solar wind dynamic pressure. The eccentricity of the magnetic dipole also results in a statistically observable magnetopause displacement, as the magnetic field magnitude increases at the locations toward which the eccentric dipole is shifted (by about 2.5 percent). Finally, we employ the IGRF internal magnetic field model (thus accounting for the eccentricity of the terrestrial magnetic dipole) and the T96 external magnetic field model (thus accounting for the ring current and the Chapman-Ferraro current). We suggest a simple improvement of existing empirical magnetopause models based on the observed dependencies.
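    For context, many empirical magnetopause models of the kind this study proposes to improve use a Shue et al. (1998)-style functional form; the sketch below uses the commonly cited coefficients of that parameterization, shown only for illustration and not as this paper's result.

```python
# Shue et al. (1998)-style magnetopause shape model (commonly cited
# coefficients; verify against the original paper before reuse).
import numpy as np

def magnetopause_r(theta_rad: float, bz_nT: float, pdyn_nPa: float) -> float:
    """Magnetopause distance (Earth radii) at solar zenith angle theta."""
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (bz_nT + 8.14))) * pdyn_nPa ** (-1 / 6.6)
    alpha = (0.58 - 0.007 * bz_nT) * (1 + 0.024 * np.log(pdyn_nPa))
    return r0 * (2.0 / (1.0 + np.cos(theta_rad))) ** alpha

# Subsolar standoff distance for southward IMF and moderate dynamic pressure
print(magnetopause_r(0.0, bz_nT=-5.0, pdyn_nPa=2.0))
```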

  20. An empirical firn-densification model comprising ice lenses

    DEFF Research Database (Denmark)

    Reeh, Niels; Fisher, D.A.; Koerner, R.M.

    2005-01-01

    In the past, several empirical firn-densification models have been developed and fitted to measured density-depth profiles from Greenland and Antarctica. These models do not specifically deal with refreezing of meltwater in the firn; ice lenses are usually taken into account only indirectly, by choosing a suitable value of the surface snow density. In the present study, a simple densification model is developed that specifically accounts for the content of ice lenses in the snowpack. An annual layer is considered to be composed of an ice fraction and a firn fraction. It is assumed that all meltwater formed… Comparison with density profiles from Canadian Arctic ice-core sites with large melting-refreezing percentages shows good agreement. The model is also used to estimate the long-term surface elevation change in interior Greenland that will result from temperature-driven changes of density-depth profiles. These surface elevation changes…

  1. Empirical Analysis of Xinjiang's Bilateral Trade: Gravity Model Approach

    Institute of Scientific and Technical Information of China (English)

    CHEN Xuegang; YANG Zhaoping; LIU Xuling

    2008-01-01

    Based on the basic trade gravity model and Xinjiang's practical situation, new explanatory variables (GDP,GDPpc and SCO) are introduced to build an extended trade gravity model fitting for Xinjiang's bilateral trade. Fromthe empirical analysis of this model, it is proposed that those three variables affect the Xinjiang's bilateral trade posi-tively. Whereas, geographic distance is found to be a significant factor influencing Xinjiang's bilateral trade negatively.Then, by the extended trade gravity model, this article analyzes the present trade situation between Xinjiang and itsmain trade partners quantitatively in 2004. The results indicate that Xinjiang cooperates with its most trade partnerssuccessfully in terms of present economic scale and developing revel. Xinjiang has established successfully trade part-nership with Central Asia, Central Europe and Eastern Europe, Western Europe, East Asia and South Asia. However,the foreign trade development with West Asia is much slower. Finally, some suggestions on developing Xinjiang's for-eign trade are put forward.

  2. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper shows a modification of the Plitt's model constant, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
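    For orientation, the sketch below evaluates a commonly cited form of Plitt's corrected cut size correlation; the exponents and units follow the form usually quoted in the literature (check against the original source), while the leading constant is the quantity this paper re-estimates by regression.

```python
# Commonly cited Plitt (1976)-style d50c correlation; all values here
# are illustrative, and the constant K is the re-fitted parameter.
import math

def plitt_d50c(Dc, Di, Do, Du, h, Q, phi, rho_s, rho_l, K=14.8):
    """Corrected cut size (micrometres); dimensions in cm, Q in L/min,
    phi in vol% solids, densities in g/cm^3 (usual convention)."""
    return (K * Dc**0.46 * Di**0.6 * Do**1.21 * math.exp(0.063 * phi)) / (
        Du**0.71 * h**0.38 * Q**0.45 * (rho_s - rho_l) ** 0.5
    )

# Illustrative numbers only
print(plitt_d50c(Dc=25, Di=8, Do=8, Du=5, h=80, Q=300, phi=15, rho_s=2.7, rho_l=1.0))
```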

  3. Two Empirical Models for Land-falling Hurricane Gust Factors

    Science.gov (United States)

    Merceret, Francis J.

    2008-01-01

    Gaussian and lognormal models for gust factors as a function of height and mean wind speed in land-falling hurricanes are presented. The models were empirically derived using data from 2004 hurricanes Frances and Jeanne and independently verified using data from 2005 hurricane Wilma. The data were collected from three wind towers at Kennedy Space Center and Cape Canaveral Air Force Station with instrumentation at multiple levels from 12 to 500 feet above ground level. An additional 200-foot tower was available for the verification. Mean wind speeds from 15 to 60 knots were included in the data. The models provide formulas for the mean and standard deviation of the gust factor given the mean wind speed and height above ground. These statistics may then be used to assess the probability of exceeding a specified peak wind threshold of operational significance given a specified mean wind speed.
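    One way such gust-factor statistics are used operationally is sketched below: given the modeled mean and standard deviation of the gust factor (placeholder values, not the paper's fitted coefficients), estimate the probability that the peak wind exceeds a threshold under the Gaussian variant.

```python
# Exceedance probability sketch for the Gaussian gust-factor model.
from scipy.stats import norm

def p_peak_exceeds(mean_wind_kt, gf_mean, gf_std, threshold_kt):
    """P(peak > threshold), with peak = gust factor * mean wind."""
    gf_threshold = threshold_kt / mean_wind_kt   # convert to a GF threshold
    return norm.sf(gf_threshold, loc=gf_mean, scale=gf_std)

# Placeholder gust-factor statistics for a 40 kt mean wind
print(p_peak_exceeds(mean_wind_kt=40, gf_mean=1.5, gf_std=0.15, threshold_kt=70))
```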

  4. Empirical classification of resources in a business model concept

    Directory of Open Access Journals (Sweden)

    Marko Seppänen

    2009-04-01

    Full Text Available The concept of the business model has been designed to aid exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm for these new activities. A business model encompasses the resources that a firm has access to and aids a firm's effort to create a superior 'innovation capability'. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts and poses significant challenges for the management of innovations. Although current business model conceptualisations elucidate resources, explicit consideration of the composition and structure of resource combinations has remained ambiguous. As a result, current business model conceptualisations fail in their core purpose of assisting the decision-making that must consider resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorised list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept as well as improving daily communication between managers and their subordinates. Future research could be targeted at linking these components of a business model with each other in order to obtain a model for assessing the performance of different business model configurations. Furthermore, different applications of the developed resource configuration may be envisioned.

  5. Empirical Bayes Credibility Models for Economic Catastrophic Losses by Regions

    Directory of Open Access Journals (Sweden)

    Jindrová Pavla

    2017-01-01

    Full Text Available Catastrophic events affect various regions of the world with increasing frequency and intensity. The number of catastrophic events and the amount of economic losses vary across world regions, and part of these losses is covered by insurance. Catastrophic events in recent years have been associated with increases in premiums for some lines of business. The article focuses on estimating the amount of net premiums that would be needed to cover the total or insured catastrophic losses in different world regions, using Bühlmann and Bühlmann-Straub empirical credibility models based on data from Sigma Swiss Re 2010-2016. The empirical credibility models have been developed to estimate insurance premiums for short-term insurance contracts using two ingredients: past data from the risk itself and collateral data from other sources considered to be relevant. In this article we apply these models to real data on the number of catastrophic events and the total economic and insured catastrophe losses in seven regions of the world over the period 2009-2015. The estimated credible premiums by world region indicate how much money will be needed in the monitored regions to cover total and insured catastrophic losses in the next year.
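    A minimal sketch of the classical Bühlmann credibility calculation is given below; the synthetic loss matrix stands in for the per-region, per-year catastrophe losses, and the estimator shown is the textbook form rather than the authors' exact implementation.

```python
# Classical Bühlmann credibility premiums: Z * own mean + (1-Z) * collective mean.
import numpy as np

def buhlmann_premiums(X: np.ndarray):
    """X: (regions, years) losses. Returns a credibility premium per region."""
    n = X.shape[1]
    region_means = X.mean(axis=1)
    mu = region_means.mean()                      # collective mean
    s2 = X.var(axis=1, ddof=1).mean()             # expected process variance
    a = region_means.var(ddof=1) - s2 / n         # variance of hypothetical means
    a = max(a, 0.0)
    Z = n / (n + s2 / a) if a > 0 else 0.0        # credibility factor
    return Z * region_means + (1 - Z) * mu

losses = np.abs(np.random.default_rng(1).normal(100, 30, size=(7, 7)))
print(buhlmann_premiums(losses))
```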

  6. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.

  7. Adaptation of an empirical model for erythemal ultraviolet irradiance

    Directory of Open Access Journals (Sweden)

    I. Foyo-Moreno

    2007-07-01

    Full Text Available In this work we adapt an empirical model to estimate ultraviolet erythemal irradiance (UVER) using experimental measurements carried out at seven stations in Spain during four years (2000–2003). The measurements were taken in the framework of the Spanish UVB radiometric network operated and maintained by the Spanish Meteorological Institute. The UVER observations are recorded as half-hour average values. The model is valid for all-sky conditions, estimating UVER from the ozone columnar content and parameters usually registered in radiometric networks, such as global broadband hemispherical transmittance and optical air mass. One data set was used to develop the model and another independent set was used to validate it. The model provides satisfactory results, with low mean bias error (MBE) for all stations. In fact, MBEs are less than 4% and root mean square errors (RMSE) are below 18% (except for one location). The model has also been evaluated for estimating the UV index. The percentage of cases with differences of 0 UVI units is in the range of 61.1% to 72.0%, while the percentage of cases with differences of ±1 UVI unit covers the range of 95.6% to 99.2%. This result confirms the applicability of the model for estimating UVER irradiance and the UV index at those locations in the Iberian Peninsula where there are no UV radiation measurements.

  8. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
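    As an illustration of the "very simple linear interpolation" idea the paper introduces, the sketch below blends bigram and unigram maximum-likelihood estimates; the fixed lambda is a placeholder, whereas in practice interpolation weights are tuned on held-out data.

```python
# Jelinek-Mercer-style linear interpolation of bigram and unigram MLEs.
from collections import Counter

def interpolated_bigram(tokens, lam=0.7):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)

    def prob(w_prev, w):
        p_uni = unigrams[w] / n
        p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    return prob

p = interpolated_bigram("the cat sat on the mat the cat ran".split())
print(p("the", "cat"), p("sat", "cat"))  # unseen bigram falls back to the unigram term
```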

  9. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    Science.gov (United States)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation-dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one, by a pre-equilibrium exciton model with cluster emission (PCROSS), or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full-featured Hauser-Feshbach model with γ-cascade and width fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast-rotating nucleus, the classical Gilbert-Cameron approach, and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the…

  10. Testing the Empirical Shock Arrival Model using Quadrature Observations

    CERN Document Server

    Gopalswamy, N; Xie, H; Yashiro, S

    2013-01-01

    The empirical shock arrival (ESA) model was developed based on quadrature data from Helios (in situ) and P-78 (remote sensing) to predict the Sun-Earth travel time of coronal mass ejections (CMEs) [Gopalswamy et al. 2005a]. The ESA model requires the earthward CME speed as input, which is not directly measurable from coronagraphs along the Sun-Earth line. The Solar Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory (SOHO) were in quadrature during 2010-2012, so the speeds of Earth-directed CMEs were observed with minimal projection effects. We identified a set of 20 full-halo CMEs in the field of view of SOHO that were also observed in quadrature by STEREO. We used the earthward speed from STEREO measurements as input to the ESA model and compared the resulting travel times with the observed ones from L1 monitors. We find that the model predicts the CME travel time to within about 7.3 hours, which is similar to the predictions of the ENLIL model. We also find that CME-CME and CME…

  11. An Empirical Analysis on Credit Risk Models and its Application

    Directory of Open Access Journals (Sweden)

    Joocheol Kim

    2014-08-01

    Full Text Available This study focuses on introducing credit default risk via widely used credit risk models, in an effort to empirically test whether the models hold their validity, to apply them to financial institutions, which are usually highly leveraged with various types of debt, and finally to reinterpret the results for computing an adequate collateral level in the over-the-counter derivatives market. By calculating distance-to-default values using historical market data for South Korean banks and brokerage firms, as suggested by the Merton model and KMV's EDF model, we find that the performance of the introduced models reflects the credit quality of the sampled financial institutions well. Moreover, we suggest that, in addition to the given credit ratings of different financial institutions, their distance-to-default values can be utilized in determining a sufficient level of credit support. Our suggested "smoothened" collateral level allows both contracting parties to minimize the costs of providing collateral without undertaking additional credit risk, and to achieve efficient collateral management.
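    The distance-to-default computation at the heart of the Merton/KMV approach can be sketched as follows; the asset value, drift, and volatility inputs are illustrative (in practice they are backed out from equity prices), and this is not the authors' code.

```python
# Merton-model distance to default under standard assumptions.
import math

def distance_to_default(V, D, mu, sigma, T=1.0):
    """V: asset value, D: debt face value, mu: asset drift, sigma: asset vol."""
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

dd = distance_to_default(V=120e9, D=100e9, mu=0.05, sigma=0.2)
print(dd)  # higher DD means the firm is farther from the default boundary
```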

  12. Empirical model of atomic nitrogen in the upper thermosphere

    Science.gov (United States)

    Engebretson, M. J.; Mauersberger, K.; Kayser, D. C.; Potter, W. E.; Nier, A. O.

    1977-01-01

    Atomic nitrogen number densities in the upper thermosphere measured by the open source neutral mass spectrometer (OSS) on Atmosphere Explorer-C during 1974 and part of 1975 have been used to construct a global empirical model at an altitude of 375 km based on a spherical harmonic expansion. The most evident features of the model are large diurnal and seasonal variations of atomic nitrogen and only a moderate and latitude-dependent density increase during periods of geomagnetic activity. Maximum and minimum N number densities at 375 km for periods of low solar activity are 3.6 × 10^6 cm^-3 at 1500 LST (local solar time) and low latitude in the summer hemisphere, and 1.5 × 10^5 cm^-3 at 0200 LST at mid-latitudes in the winter hemisphere.

  13. Empirical testing of earthquake recurrence models at source and site

    Science.gov (United States)

    Albarello, D.; Mucciarelli, M.

    2012-04-01

    Several probabilistic procedures are presently available for seismic hazard assessment (PSHA), based on time-dependent or time-independent models. The result is a number of different outcomes (hazard maps); to take into account the inherent epistemic uncertainty, the outcomes of alternative procedures are combined in the frame of logic-tree approaches by scoring each procedure as a function of its reliability. This reliability is deduced by evaluating ex ante (by expert judgement) each element concurring in the relevant PSHA computational procedure. This approach appears unsatisfactory, also because the value of each procedure depends both on the reliability of each concurring element and on that of their combination: checking the correctness of single elements does not allow one to evaluate the correctness of the procedure as a whole. Alternative approaches should be based (1) on ex-post empirical testing of the considered PSHA computational models and (2) on validation of the assumptions underlying the concurrent models. The first goal can be achieved by comparing the probabilistic forecasts provided by each model with empirical evidence on seismic occurrences (e.g., strong-motion data or macroseismic intensity evaluations) during selected control periods of dimension comparable with the relevant exposure time. Regarding the validation of assumptions, critical issues are the size of the minimum dataset necessary to distinguish processes with or without memory, the reliability of mixed data on seismic sources (i.e., historical and palaeoseismological), and the completeness of fault catalogues. Some results obtained by applying these testing procedures in Italy will be briefly outlined.

  14. Evaluation of empirical models and competition indices in ranking canola

    Directory of Open Access Journals (Sweden)

    A. S Safahani

    2012-06-01

    Full Text Available In order to evaluate the competitive ability (CA) of canola cultivars against wild mustard, two experiments were conducted at the Gorgan Institute in Iran during the 2005-2007 cropping seasons. The experimental factors were canola cultivar (1st year: Zarfam, Option500, Hayola330, Hayola401, Talayh, RGS003 and Sarigol; 2nd year: Zarfam, Hayola330, RGS003 and Option500) and weed density (1st year: control and 30 plants m-2; 2nd year: control, 4, 8 and 16 plants m-2). The results of the first-year experiment indicated that grain yield and competitive indices differed significantly between the cultivars. Cultivar Zarfam showed a high ability to withstand competition (AWC = 47%), high competitive indices (CI = 1.79 and CI2 = 1.83) and low grain yield in the weed-free plots (1729 kg ha-1). The cultivar Option500, a less competitive cultivar, had the lowest ability to withstand competition (AWC = 4%) and the lowest competitive indices (CI = 0.09 and CI2 = 0.11) among the cultivars. However, cultivar Option500 produced a higher grain yield in the weed-free plots (2333 kg ha-1) than cultivar Zarfam. In the second year, the yield loss models showed that the lowest and highest yield losses belonged to cultivars Zarfam and Option500 (50 and 95%, respectively). A comparison of different empirical models revealed that the empirical yield loss model based on weed relative leaf area was the most reliable for predicting canola yield loss, according to its high coefficient of determination (R2 = 0.99). The relative damage coefficient (q) of the weed relative leaf area model showed that wild mustard was more competitive than canola (q > 1).
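    For context, the weed relative leaf area yield loss model referred to here is usually written in the Kropff-Spitters form (stated from the general literature, not quoted from this paper):

```latex
% Relative-leaf-area yield loss model (Kropff & Spitters form).
% L_w: relative leaf area of the weed; q: relative damage coefficient.
% q > 1 indicates the weed is more competitive than the crop.
Y_L = \frac{q\,L_w}{1 + (q - 1)\,L_w}
```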

  15. TIME-IGGCAS model validation:Comparisons with empirical models and observations

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The TIME-IGGCAS (Theoretical Ionospheric Model of the Earth in the Institute of Geology and Geophysics, Chinese Academy of Sciences) has been developed recently on the basis of previous works. To test its validity, we have made comparisons of model results with other typical empirical ionospheric models (IRI, NeQuick-ITUR, and Titheridge temperature models) and multiple observations (GPS, ionosondes, Topex, DMSP, FORMOSAT, and CHAMP) in this paper. Several conclusions are obtained from our comparisons. The modeled electron density and electron and ion temperatures are quantitatively in good agreement with those of empirical models and observations. TIME-IGGCAS can model the electron density variations versus several factors such as local time, latitude, and season very well and can reproduce most anomalous features of the ionosphere, including the equatorial anomaly, winter anomaly, and semiannual anomaly. These results imply a good basis for the development of an ionospheric data assimilation model in the future. TIME-IGGCAS underestimates electron temperature and overestimates ion temperature in comparison with either empirical models or observations. The model results have relatively large deviations near sunrise and sunset and at low altitudes. These results give us a reference to improve the model and enhance its performance in the future.

  16. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy the Wilks phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, owing to the advantages of both the EL and score test methods, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^(-1/2), the fastest possible rate for lack-of-fit testing; and they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  17. An empirical conceptual gully evolution model for channelled sea cliffs

    Science.gov (United States)

    Leyland, Julian; Darby, Stephen E.

    2008-12-01

    Incised coastal channels are a specific form of incised channel found in locations where stream channels flowing to cliffed coasts have the excess energy required to cut down through the cliff to reach the outlet water body. The southern coast of the Isle of Wight, southern England, comprises soft cliffs that vary in height between 15 and 100 m and which are retreating at rates ≤ 1.5 m a^-1 due to a combination of wave erosion and landslides. In several locations, river channels have cut through the cliffs to create deeply (≤ 45 m) incised gullies, known locally as 'Chines'. The Chines are unusual in that their formation is associated with dynamic shoreline encroachment during a period of rising sea level, whereas existing models of incised channel evolution emphasise the significance of base level lowering. This paper develops a conceptual model of Chine evolution by applying space-for-time substitution methods using empirical data gathered from Chine channel surveys and remotely sensed data. The model identifies a sequence of evolutionary stages, which are classified based on a suite of morphometric indices and associated processes. The extent to which individual Chines are in a state of growth or decay is estimated by determining the relative rates of shoreline retreat and knickpoint recession, the former via analysis of historical aerial images and the latter through the use of a stream power erosion model.

  18. A Tool for Sharing Empirical Models of Climate Impacts

    Science.gov (United States)

    Rising, J.; Kopp, R. E.; Hsiang, S. M.

    2013-12-01

    Scientists, policy advisors, and the public struggle to synthesize the quickly evolving empirical work on climate change impacts. The Integrated Assessment Models (IAMs) used to estimate the impacts of climate change and the effects of adaptation and mitigation policies can also benefit greatly from recent empirical results (Kopp, Hsiang & Oppenheimer, Impacts World 2013 discussion paper). This paper details a new online tool for exploring, analyzing, combining, and communicating a wide range of impact results, and for supporting their integration into IAMs. The tool uses a new database of statistical results, which researchers can expand both in depth (by providing additional results describing existing relationships) and in breadth (by adding new relationships). Scientists can use the tool to quickly perform meta-analyses of related results, using Bayesian techniques to produce pooled and partially pooled posterior distributions. Policy advisors can apply the statistical results to particular contexts and combine different kinds of results in a cost-benefit framework. For example, models of the impact of temperature changes on agricultural yields can first be aggregated to build a best estimate of the effect under given assumptions, then compared across countries using different temperature scenarios, and finally combined to estimate a social cost of carbon. The general public can better understand the many estimates of climate impacts and their range of uncertainty by exploring these results dynamically, with maps, bar charts, and dose-response-style plots. [Figure: front page of the climate impacts tool website, with sample "collections" of models, within which all results are estimates of the same fundamental relationship.] [Figure: simple pooled result for Gelman's "8 schools" example; pooled results are calculated analytically, while partial pooling (Bayesian hierarchical estimation) uses posterior simulations.]
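    The analytic pooling mentioned in the second figure caption can be sketched with precision weighting; the numbers below are the canonical "8 schools" effects and standard errors, used purely as an illustration of complete pooling.

```python
# Precision-weighted (complete-pooling) estimate for the 8 schools data.
import numpy as np

y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])    # effects
se = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])  # std errors

w = 1.0 / se**2                       # precision weights
pooled = np.sum(w * y) / np.sum(w)    # complete-pooling estimate
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.2f} ± {pooled_se:.2f}")
```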

  19. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    Science.gov (United States)

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

    The drying kinetics of thyme was analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10^-5 and 2.12 × 10^-4 s^-1). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship, with an activation energy of 49.42 kJ/mol. In addition, the dependence of the parameters of each empirical model on the drying temperature was determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
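    The Arrhenius relationship quoted in the abstract can be sketched directly; the activation energy is the reported 49.42 kJ/mol, while the pre-exponential factor below is a placeholder that would have to be calibrated to the data.

```python
# Arrhenius temperature dependence of the effective diffusivity.
import math

R = 8.314          # J/(mol K), gas constant
EA = 49.42e3       # J/mol, activation energy from the abstract

def effective_diffusivity(T_celsius, D0=1.0):
    """D = D0 * exp(-Ea / (R T)); D0 is a placeholder to be calibrated."""
    T = T_celsius + 273.15
    return D0 * math.exp(-EA / (R * T))

# Ratio between 70 °C and 40 °C drying, which is independent of D0
print(effective_diffusivity(70) / effective_diffusivity(40))
```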

  20. GDP model for Chinese energy modeling based on empirical production function

    Institute of Scientific and Technical Information of China (English)

    HiroshiYAGITA; BaorenWEI; AtsushiINABA; MasayukiSAGISAKA; KeikoHIROTA; KiyoyukiMINATO

    2003-01-01

    In many energy models, GDP is an exogenous variable, so variables within the energy model are not able to change the value of GDP. Based on an empirical production function, a GDP model is established in this paper using capital stock, urbanization rate and population size as independent variables. It is found that the urbanization rate serves as an integrated indicator of labor quantity and the education level of labor in China, and that it also reflects the transfer of surplus labor out of rural areas. The forecasting results show that the model is robust: the results have the same tendency as those from a well-known CGE model and from the responsible Chinese authorities, and the GDP growth rates are also similar over 50 years. It is concluded that the model is a good candidate for energy modeling with GDP as an endogenous variable.

  1. Hybrid empirical-theoretical approach to modeling uranium adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W

    2004-05-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples, and the Freundlich K_f parameter is correlated with sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA, based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
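    Fitting a Freundlich isotherm of this kind is typically done by log-log linear regression; the sketch below is a generic illustration with synthetic data, not the study's measurements.

```python
# Freundlich isotherm q = Kf * C**n, fitted by log-log linear regression.
import numpy as np

def fit_freundlich(C, q):
    """Return (Kf, n) from sorbed concentration q vs. solution concentration C."""
    slope, intercept = np.polyfit(np.log(C), np.log(q), 1)
    return np.exp(intercept), slope

C = np.array([0.1, 0.5, 1.0, 5.0, 10.0])                 # solution concentration
q = 2.0 * C**0.8 * np.exp(np.random.normal(0, 0.02, C.size))  # synthetic sorbed data
Kf, n = fit_freundlich(C, q)
print(f"Kf ≈ {Kf:.2f}, n ≈ {n:.2f}")
```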

  2. EMPIRICAL MODEL FOR FORMULATION OF CRYSTAL-TOLERANT HLW GLASSES

    Energy Technology Data Exchange (ETDEWEB)

    KRUGER AA; MATYAS J; HUCKLEBERRY AR; VIENNA JD; RODRIGUEZ CA

    2012-03-07

    Historically, high-level waste (HLW) glasses have been formulated with a low liquidus temperature (T_L), or temperature at which the equilibrium fraction of spinel crystals in the melt is below 1 vol% (T_0.01), nominally below 1050 °C. These constraints cannot prevent the accumulation of large spinel crystals in considerably cooler regions (approximately 850 °C) of the glass discharge riser during melter idling, and they significantly limit the waste loading, which is reflected in a high volume of waste glass and would result in high capital, production, and disposal costs. The developed empirical model predicts crystal accumulation in the riser of the melter as a function of the concentration of spinel-forming components in the glass, and thereby provides guidance in formulating crystal-tolerant glasses that would allow high waste loadings by keeping the spinel crystals small and therefore suspended in the glass.

  3. Semi-Empirical Models for Buoyancy-Driven Ventilation

    DEFF Research Database (Denmark)

    Terpager Andersen, Karl

    2015-01-01

    A literature study is presented on the theories and models dealing with buoyancy-driven ventilation in rooms. The models are categorised into four types according to how the physical process is conceived: column models, fan models, neutral plane models and pressure models. These models are analysed and compared with a reference model. Discrepancies and differences are shown, and the deviations are discussed. It is concluded that a reliable buoyancy model based solely on the fundamental flow equations is desirable.
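    For context, the column-type models reviewed here build on the standard stack-ventilation estimate sketched below; this generic textbook form is an illustration and not one of the four reviewed model categories.

```python
# Buoyancy-driven (stack) ventilation flow through an opening.
import math

def stack_flow(Cd, A, H, Ti, To, g=9.81):
    """Volume flow (m^3/s) through an opening of area A (m^2).

    H: height between inlet and outlet (m); Ti, To: indoor/outdoor temp (K).
    """
    return Cd * A * math.sqrt(2.0 * g * H * abs(Ti - To) / To)

# Illustrative numbers: 10 K indoor-outdoor difference over a 3 m stack
print(stack_flow(Cd=0.6, A=0.5, H=3.0, Ti=295.0, To=285.0))
```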

  4. Empirically tuned model for a precooled MGJT cryoprobe

    Science.gov (United States)

    Skye, H. M.; Passow, K. L.; Nellis, G. F.; Klein, S. A.

    Cryosurgery is a medical technique that uses a freezing process to destroy undesirable tissues such as cancerous tumors. The handheld portion of the cryoprobe must be compact and powerful in order to serve as an effective surgical instrument; the next generation of cryoprobes utilizes precooled Mixed Gas Joule-Thomson (pMGJT) cycles to meet these design criteria. The increased refrigeration power available with this more complex cycle improves probe effectiveness by reducing the number of probes and the time required to treat large tissue masses. Selecting mixtures and precooling cycle parameters to meet a cryogenic cooling load in a size-limited application is a challenging design problem. Modeling the precooler and recuperator performance is critical for cycle design, yet existing techniques in the literature typically use highly idealized models of the heat exchangers that neglect pressure drop and assume infinite conductance. These assumptions are questionable for cycles that are required to use compact components. The focus of this research project is to understand how the cycle performance is impacted by transport processes in the heat exchangers and to integrate these findings into an empirically tuned model that can be used for mixture optimization. This effort is carried out through a series of modeling, experimental, and optimization studies. While these results have been applied to the design of a cryosurgical probe, they are also more generally useful in understanding the operation of other compact MGJT systems. A commercially available pMGJT cryoprobe system has been modified in order to integrate a suite of measurement instrumentation that can completely characterize the performance of the individual components as well as the overall system. Measurements include sufficient temperature and pressure sensors to resolve thermodynamic states, as well as flow meters in order to compute the heat and work transfer rates. Temperature sensors are also

  5. Empirical fitness models for hepatitis C virus immunogen design

    Science.gov (United States)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.
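    A minimal sketch of evaluating a Potts/Ising-style ("spin glass") fitness landscape of the kind described is given below; the fields and couplings are random placeholders, whereas in the paper they are inferred from NS5B sequence data.

```python
# Potts-style sequence energy: E(s) = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j).
import numpy as np

def sequence_energy(seq, h, J):
    """Lower energy corresponds to higher inferred replicative fitness."""
    L = len(seq)
    E = sum(h[i][seq[i]] for i in range(L))
    E += sum(J[i][j][seq[i]][seq[j]] for i in range(L) for j in range(i + 1, L))
    return E

rng = np.random.default_rng(0)
L, q = 8, 21                               # sites; amino-acid alphabet + gap
h = rng.normal(0, 1, (L, q))               # placeholder fields
J = rng.normal(0, 0.1, (L, L, q, q))       # placeholder couplings
seq = rng.integers(0, q, L)
# Replicative capacity is commonly modeled as proportional to exp(-E)
print(np.exp(-sequence_energy(seq, h, J)))
```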

  6. Global Empirical Model of the TEC Response to Geomagnetic Activity and Forcing from Below

    Science.gov (United States)

    2014-04-01

    Report AFRL-AFOSR-UK-TR-2014-0025, "Global empirical model of the TEC response to geomagnetic activity and forcing from below," April 2014. Reported tasks include: development of the global background TEC model; development of a global empirical model of the TEC response to geomagnetic activity; and on-line implementation of both.

  7. An empirical model of the quiet daily geomagnetic field variation

    Science.gov (United States)

    Yamazaki, Y.; Yumoto, K.; Cardinal, M.G.; Fraser, B.J.; Hattori, P.; Kakinami, Y.; Liu, J.Y.; Lynn, K.J.W.; Marshall, R.; McNamara, D.; Nagatsuma, T.; Nikiforov, V.M.; Otadoy, R.E.; Ruhimat, M.; Shevtsov, B.M.; Shiokawa, K.; Abe, S.; Uozumi, T.; Yoshikawa, A.

    2011-01-01

    An empirical model of the quiet daily geomagnetic field variation has been constructed based on geomagnetic data obtained from 21 stations along the 210 Magnetic Meridian of the Circum-pan Pacific Magnetometer Network (CPMN) from 1996 to 2007. Using the least squares fitting method for geomagnetically quiet days (Kp ≤ 2+), the quiet daily geomagnetic field variation at each station was described as a function of solar activity SA, day of year DOY, lunar age LA, and local time LT. After interpolation in latitude, the model can describe the solar-activity dependence and seasonal dependence of solar quiet daily variations (S) and lunar quiet daily variations (L). We performed a spherical harmonic analysis (SHA) on these S and L variations to examine the average characteristics of the equivalent external current systems. We found three particularly noteworthy results. First, the total current intensity of the S current system is largely controlled by solar activity, while its focus position is not significantly affected by solar activity. Second, seasonal variations of the S current intensity exhibit north-south asymmetry; the current intensity of the northern vortex shows a prominent annual variation, while the southern vortex shows a clear semiannual variation as well as an annual variation. Third, the total intensity of the L current system changes depending on solar activity and season; seasonal variations of the L current intensity show an enhancement during the December solstice, independent of the level of solar activity.

  8. The Chromospheric Solar Millimeter-wave Cavity; a Common Property in the Semi-empirical Models

    CERN Document Server

    Victor, De la Luz; Emanuele, Bertone

    2014-01-01

    The semi-empirical models of the solar chromosphere are useful in the study of solar radio emission at millimeter-infrared wavelengths. However, current models do not reproduce observations of the quiet Sun. In this work we present a theoretical study of the radiative transfer equation for four semi-empirical models at these wavelengths. We found that the Chromospheric Solar Millimeter-wave Cavity (CSMC), a region where the atmosphere becomes locally optically thin at millimeter wavelengths, is present in the semi-empirical models under study. We conclude that the CSMC is a general property of the solar chromosphere where the semi-empirical models show a temperature minimum.

  9. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    T. Raita

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes to identify sets of SW-based input parameters that optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to existing upstream wave theory: (1) pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; (2) weakening of the bow shock, which implies less efficient reflection; (3) the SW becoming sub-Alfvénic and hence unable to sweep back the waves propagating upstream at the Alfvén speed; and (4) the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through…

  10. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Institute of Scientific and Technical Information of China (English)

    John Jack P. RIEGEL III; David DAVISON

    2016-01-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full

  11. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a…

  12. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model)

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    Full Text Available This paper considers tests of parameter instability and structural change with known, unknown, or multiple breakpoints. The results apply to a wide class of parametric models that are suitable for estimation by strong rules for detecting the number of breaks in a time series. For that, we use Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis on two different models (ARMA model and simple linear regression model).
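
    As a minimal illustration of one of the tests listed above, the following sketch implements the classical Chow test for a single known breakpoint; the synthetic data, split index, and regressor layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def chow_test(y, X, split):
    """Chow test for a structural break at the known index `split`.

    y: (n,) response; X: (n, k) regressors including an intercept column.
    Returns the F statistic and its p-value.
    """
    def rss(y_part, X_part):
        # Residual sum of squares of an OLS fit on the given segment.
        beta, *_ = np.linalg.lstsq(X_part, y_part, rcond=None)
        resid = y_part - X_part @ beta
        return resid @ resid

    n, k = X.shape
    rss_pooled = rss(y, X)
    rss_split = rss(y[:split], X[:split]) + rss(y[split:], X[split:])
    f_stat = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
    return f_stat, stats.f.sf(f_stat, k, n - 2 * k)

# Synthetic series with a slope change at t = 60 (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(120.0)
y = np.where(t < 60, 0.5 * t, 0.9 * t - 24.0) + rng.normal(0, 2, t.size)
X = np.column_stack([np.ones_like(t), t])
print(chow_test(y, X, split=60))
```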

  13. Improving the desolvation penalty in empirical protein pKa modeling

    DEFF Research Database (Denmark)

    Olsson, Mats Henrik Mikael

    2012-01-01

    Unlike atomistic and continuum models, empirical pKa-predicting methods need to include desolvation contributions explicitly. This study describes a new empirical desolvation method based on the Born solvation model. The new desolvation model was evaluated by high-level Poisson-Boltzmann...
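
    The abstract does not give the exact functional form used, but the classic Born expression it builds on is easy to state; the sketch below computes the Born solvation free energy for an ion of a given radius and charge (the radius and dielectric values in the example are illustrative).

```python
import math

# Physical constants (SI units).
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0     = 8.8541878128e-12  # vacuum permittivity, F/m
N_A      = 6.02214076e23     # Avogadro's number, 1/mol

def born_solvation_energy(q, radius_angstrom, eps_r):
    """Born estimate of the solvation free energy (kJ/mol) of a charge q
    (in elementary charges) on a sphere of the given radius, transferred
    from vacuum into a dielectric with relative permittivity eps_r."""
    r = radius_angstrom * 1e-10
    dg = -N_A * (q * E_CHARGE) ** 2 / (8 * math.pi * EPS0 * r) * (1 - 1 / eps_r)
    return dg / 1000.0

# A buried ionizable group loses part of this energy when its effective
# dielectric drops from ~80 (water) toward the protein-interior value.
print(born_solvation_energy(q=1, radius_angstrom=2.0, eps_r=78.5))  # ~ -343 kJ/mol
```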

  14. Empirical agent-based land market: Integrating adaptive economic behavior in urban land-use models

    NARCIS (Netherlands)

    Filatova, Tatiana

    2015-01-01

    This paper introduces an economic agent-based model of an urban housing market. The RHEA (Risks and Hedonics in Empirical Agent-based land market) model captures natural hazard risks and environmental amenities through hedonic analysis, facilitating empirical agent-based land market modeling. RHEA i

  15. Empirical model of the composition of the Venus ionosphere Repeatable characteristics and key features not modeled

    Science.gov (United States)

    Taylor, H. A., Jr.; Mayr, H. G.; Niemann, H. B.; Larson, J.

    1985-01-01

    In-situ measurements of positive ion composition of the ionosphere of Venus are combined in an empirical model which is a key element for the Venus International Reference Atmosphere (VIRA) model. The ion data are obtained from the Pioneer Venus Orbiter Ion Mass Spectrometer (OIMS) which obtained daily measurements beginning in December 1978 and extending to July 1980 when the uncontrolled rise of satellite periapsis height precluded further measurements in the main body of the ionosphere. For this period, measurements of 12 ion species are sorted into altitude and local time bins with altitude extending from 150 to 1000 km. The model results exhibit the appreciable nightside ionosphere found at Venus, the dominance of atomic oxygen ions in the dayside upper ionosphere and the increase in prominence of atomic oxygen and deuterium ions on the nightside. Short term variations, such as the abrupt changes observed in the ionopause, cannot be represented in the model.
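
    The construction described, sorting measurements into altitude and local-time bins, can be sketched as follows; the bin widths, the synthetic density field, and the use of the median as the bin statistic are assumptions for illustration only.

```python
import numpy as np

def bin_ion_densities(alt_km, local_time_hr, density, alt_edges, lt_edges):
    """Median ion density in altitude x local-time bins, in the spirit of
    an empirical composition model (bin edges are illustrative)."""
    model = np.full((len(alt_edges) - 1, len(lt_edges) - 1), np.nan)
    ai = np.digitize(alt_km, alt_edges) - 1
    ti = np.digitize(local_time_hr, lt_edges) - 1
    for i in range(model.shape[0]):
        for j in range(model.shape[1]):
            sel = (ai == i) & (ti == j)
            if sel.any():
                model[i, j] = np.median(density[sel])
    return model

# Synthetic O+ measurements: 150-1000 km altitude, 0-24 h local time.
rng = np.random.default_rng(1)
alt = rng.uniform(150, 1000, 5000)
lt = rng.uniform(0, 24, 5000)
dens = 1e5 * np.exp(-(alt - 150) / 300) * (1 + 0.5 * np.cos(np.pi * (lt - 12) / 12))
grid = bin_ion_densities(alt, lt, dens, np.arange(150, 1001, 50), np.arange(0, 25, 1))
print(grid.shape)  # (17, 24) altitude x local-time bins
```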

  16. Design models as emergent features: An empirical study in communication and shared mental models in instructional

    Directory of Open Access Journals (Sweden)

    Lucca Botturi

    2006-06-01

    Full Text Available This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs). Results indicate that the design and development model actually informs the activities of the group, but that it is interpreted and adapted by the team for the specific project. Thus, the actual practice model of each team can be regarded as an emergent feature. This analysis delivers insights concerning issues about team communication, shared understanding, individual perspectives and the implementation of prescriptive instructional design models.

  17. Model Equilibrium and Empirical Study of Rural Labor Transfer

    Institute of Scientific and Technical Information of China (English)

    Qinghua HUANG; Xiuchuan XU; Ming ZHANG; Yue ZHAO

    2013-01-01

    We establish a two-sector economy model comprising an urban sector and a rural sector, derive the labor demand curves of the two sectors under balanced, profit-maximizing production decisions, and analyze labor flows in short-term and long-term two-sector economic equilibrium. The results show that rising wages caused by short-term internal and external shocks increase the pressure on employment in both sectors, making it difficult for the urban sector to absorb the surplus labor of the rural sector. However, under conditions of free factor mobility and a fully competitive market, the wage variation arising from long-term endogenous evolution leads to an inversely proportional relationship between labor demand in the urban and rural sectors, which is conducive to the transfer of the rural labor force. Based on microeconomic survey data on labor flows in the urban-rural coordination experimental zones in Chongqing City, this paper makes an empirical study of the main factors having a short-term impact on labor transfer; the results show that education level and the opportunity to participate in training are important factors.

  18. A semi-empirical model for the M star GJ832 using modeling tools developed for computing semi-empirical solar models

    Science.gov (United States)

    Linsky, Jeffrey; Fontenla, Juan; France, Kevin

    2016-05-01

    We present a semi-empirical model of the photosphere, chromosphere, transition region, and corona for the M2 dwarf star GJ832, which hosts two exoplanets. The atmospheric model uses a modification of the Solar Radiation Physical Modeling tools developed by Fontenla and collaborators. These computer codes model non-LTE spectral line formation for 52 atoms and ions and include a large number of lines from 20 abundant diatomic molecules that are present in the much cooler photosphere and chromosphere of this star. We constructed the temperature distribution to fit Hubble Space Telescope observations of chromospheric lines (e.g., MgII), transition region lines (CII, CIV, SiIV, and NV), and the UV continuum. Temperatures in the coronal portion of the model are consistent with ROSAT and XMM-Newton X-ray observations and the FeXII 124.2 nm line. The excellent fit of the model to the data demonstrates that the highly developed model atmosphere code developed to explain regions of the solar atmosphere with different activity levels has wide applicability to stars, including this M star with an effective temperature 2200 K cooler than the Sun. We describe similarities and differences between the M star model and models of the quiet and active Sun.

  19. Connectivity of Caribbean coral populations: complementary insights from empirical and modelled gene flow

    NARCIS (Netherlands)

    Foster, N.L.; Paris, C.B.; Kool, J.T.; Baums, I.B.; Stevens, J.R.; Sanchez, J.A.; Bastidas, C.; Agudelo, C.; Bush, P.; Day, O.; Ferrari, R.; Gonzalez, P.; Gore, S.; Guppy, R.; McCartney, M.A.; McCoy, C.; Mendes, J.; Srinivasan, A.; Steiner, S.; Vermeij, M.J.A.; Weil, E.; Mumby, P.J.

    2012-01-01

    Understanding patterns of connectivity among populations of marine organisms is essential for the development of realistic, spatially explicit models of population dynamics. Two approaches, empirical genetic patterns and oceanographic dispersal modelling, have been used to estimate levels of

  20. The Hannover Consultation Liaison model: some empirical findings.

    Science.gov (United States)

    Freyberger, H; Künsebeck, H W; Lempa, W; Avenarius, H J; Liedtke, R; Plassman, R; Nordmeyer, J

    1985-01-01

    Starting from definitions of the concepts 'liaison medicine' and 'consultative psychiatry', we first comment on the consultation liaison situation in West Germany under the headings 'brief history', 'independent university units for psychotherapy and psychosomatics and their organization', and 'teaching procedures'. The Hannover Consultation Liaison model is then presented, particularly with regard to the psychosomatic inpatient ward, including its functional organization and psychotherapeutic processes, and the so-called 'Innere Ambulanz', which comprises the consultation liaison services in the clinical departments outside psychiatry and psychosomatics. Within the 'Innere Ambulanz', which is closely connected to our psychosomatic inpatient ward, the consultation liaison activities and the resulting supportive psychotherapeutic strategies are carried out by student auxiliary therapists who complete a 4-5 month internship in our department. We describe the three supportive psychotherapeutic steps, which may last months to years and may be followed by dynamically oriented psychotherapeutic strategies, as well as the effects of the auxiliary therapist role on the students; there is arguably no better educational procedure for graduate students than confronting the student with partial self-responsibility for a patient whom he or she treats supportive-psychotherapeutically. Empirical support for our patient-oriented consultation liaison activities comes from previous psychotherapeutic findings in Crohn patients, where we were able to demonstrate the effectiveness of psychotherapy for the patients treated with supplementary psychotherapy in comparison to patients who received medical therapy only. Finally we are able to present quantitative clinico

  1. Empirical evaluation of scoring functions for Bayesian network model selection.

    Science.gov (United States)

    Liu, Zhifa; Malone, Brandon; Yuan, Changhe

    2012-01-01

    In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also
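
    For readers unfamiliar with the scores being compared, the following sketch computes the local BIC/MDL score of one node given a candidate parent set for discrete data; the score of a whole network is the sum of such local scores. The toy data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def bic_node_score(data, node, parents):
    """Local BIC/MDL score of `node` given `parents` for discrete data.

    An optimal structure learner searches for the parent sets that
    maximize the sum of these local scores over all nodes.
    """
    n = len(data)
    r = data[node].nunique()                      # number of node states
    if parents:
        groups = list(data.groupby(parents)[node])
    else:
        groups = [(None, data[node])]             # single parent configuration
    loglik = 0.0
    for _, column in groups:
        counts = column.value_counts().to_numpy(dtype=float)
        loglik += (counts * np.log(counts / counts.sum())).sum()
    penalty = 0.5 * np.log(n) * len(groups) * (r - 1)
    return loglik - penalty

# Toy comparison: C depends on A, so the parented score should win.
rng = np.random.default_rng(1)
df = pd.DataFrame({"A": rng.integers(0, 2, 500)})
df["C"] = (df["A"] ^ (rng.random(500) < 0.1)).astype(int)
print(bic_node_score(df, "C", ["A"]), bic_node_score(df, "C", []))
```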

  2. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used, the Virkler data (aluminium alloy) and data...... that the FMF-model gives adequate description of the empirical data using model parameters characteristic of the material....
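
    The FMF model itself is not reproduced in the abstract, but the flavor of a Markov chain fatigue description can be conveyed with a pure-birth chain in which each load cycle advances the damage state with a fixed probability; the state count and transition probability below are illustrative, not the model's calibrated parameters.

```python
import numpy as np

def cycles_to_failure(n_states=50, p_grow=0.12, n_sim=2000, seed=5):
    """Simulate a pure-birth Markov chain for fatigue damage: each load
    cycle advances the crack one damage state with probability p_grow
    (an illustrative stand-in for a Markov chain fatigue model)."""
    rng = np.random.default_rng(seed)
    # Cycles spent in each state are geometric; life = sum over all states.
    lifetimes = rng.geometric(p_grow, size=(n_sim, n_states)).sum(axis=1)
    return lifetimes

life = cycles_to_failure()
print("mean life %.0f cycles, CV %.2f" % (life.mean(), life.std() / life.mean()))
```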

  3. Denitrification in the root zone using a simple empirical model SimDen

    DEFF Research Database (Denmark)

    Vinther, Finn Pilgaard

    2006-01-01

    Only by knowing soil type and amount of nitrogen applied, an estimate of the annual denitrification can be obtained with the simple empirical model SimDen.
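
    A SimDen-style estimate reduces to a table lookup, as in the sketch below; the soil-type fractions shown are illustrative placeholders, not SimDen's published coefficients.

```python
# Annual denitrification as an empirical fraction of applied N that
# depends only on soil type. Fractions are ILLUSTRATIVE placeholders.
DENIT_FRACTION = {"sand": 0.02, "loamy_sand": 0.04, "sandy_loam": 0.08, "clay": 0.15}

def simden_estimate(soil_type, n_applied_kg_ha):
    """Annual denitrification (kg N/ha) from soil type and N applied."""
    return DENIT_FRACTION[soil_type] * n_applied_kg_ha

print(simden_estimate("sandy_loam", 140))  # ~11 kg N/ha per year
```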

  4. Empirical Estimation of Hybrid Model: A Controlled Case Study

    OpenAIRE

    Sadaf Un Nisa; M. Rizwan Jameel Qureshi

    2012-01-01

    Scrum and Extreme Programming (XP) are frequently used models among all agile models whereas Rational Unified Process (RUP) is one of the widely used conventional plan driven software development models. The agile and plan driven approaches both have their own strengths and weaknesses. The RUP model has certain drawbacks, such as a tendency to run over budget, slow adaptation to rapidly changing requirements, and a reputation for being impractical for small and fast paced projects. XP mode...

  5. An Empirical Comparison of Default Swap Pricing Models

    NARCIS (Netherlands)

    P. Houweling (Patrick); A.C.F. Vorst (Ton)

    2002-01-01

    In this paper we compare market prices of credit default swaps with model prices. We show that a simple reduced form model with a constant recovery rate outperforms the market practice of directly comparing bonds' credit spreads to default swap premiums. We find that the model

  6. Empirical Evaluation of a Mathematical Model of Ethnolinguistic Vitality: The Case of Voro

    Science.gov (United States)

    Ehala, Martin; Niglas, Katrin

    2007-01-01

    The paper presents the results of an empirical evaluation of a mathematical model of ethnolinguistic vitality. The model adds several new factors to the set used in previous models of ethnolinguistic vitality and operationalises it in a manner that would make it easier to compare the vitality of different groups. According to the model, the…

  7. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated. The empirical log-likelihood ratio is proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. From the empirical likelihood ratio functions we also obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.
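
    The empirical likelihood machinery is easier to see in the plain mean case than in the partially linear regression setting of the paper; the sketch below computes the -2 log empirical likelihood ratio for a hypothesized mean via the standard Lagrange-multiplier equation and refers it to its chi-squared limit.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_mean(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (asymptotically
    chi-squared with 1 df), via the usual Lagrange-multiplier equation."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull of the data
    n = len(x)
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # first-order condition in lambda
    eps = 1e-10
    lo = (1.0 / n - 1.0) / z.max() + eps          # keep all weights positive
    hi = (1.0 / n - 1.0) / z.min() - eps
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(4).normal(1.0, 2.0, 80)
stat = el_ratio_mean(x, mu=0.5)
print(stat, "p =", chi2.sf(stat, df=1))
```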

  8. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated. The empirical log-likelihood ratio is proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. From the empirical likelihood ratio functions we also obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  9. Latent Utility Shocks in a Structural Empirical Asset Pricing Model

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Raahauge, Peter

    We consider a random utility extension of the fundamental Lucas (1978) equilibrium asset pricing model. The resulting structural model leads naturally to a likelihood function. We estimate the model using U.S. asset market data from 1871 to 2000, using both dividends and earnings as state variables. We find that current dividends do not forecast future utility shocks, whereas current utility shocks do forecast future dividends. The estimated structural model produces a sequence of predicted utility shocks which provide better forecasts of future long-horizon stock market returns than the classical dividend-price ratio. KEYWORDS: Random utility, asset pricing, maximum likelihood, structural model, return predictability

  10. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    Science.gov (United States)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
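
    The abstract does not state the fitted functional form, but a probability-of-short versus voltage relationship is commonly captured with a logistic curve, as sketched below on hypothetical bench data (the voltages and observed fractions are invented for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def p_short(v, v50, slope):
    """Logistic probability that a whisker bridge forms a short at voltage v."""
    return 1.0 / (1.0 + np.exp(-(v - v50) / slope))

# HYPOTHETICAL bench data: test voltage vs. observed short-circuit fraction.
volts = np.array([5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
p_obs = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.72, 0.90, 0.97])

params, _ = curve_fit(p_short, volts, p_obs, p0=[25.0, 5.0])
print("V50 = %.1f V, slope = %.1f V" % tuple(params))
print("P(short at 28 V) =", p_short(28.0, *params))
```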

  11. Comparison of empirical, semi-empirical and physically based models of soil hydraulic functions derived for bi-modal soils.

    Science.gov (United States)

    Kutílek, M; Jendele, L; Krejca, M

    2009-02-16

    The accelerated flow in soil pores is responsible for a rapid transport of pollutants from the soil surface to deeper layers and down to groundwater; the term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain of bi-modal soils. We compared equations describing the soil water retention function h(theta) and the unsaturated hydraulic conductivity K(h), or alternatively K(theta), modified for bi-modal soils, where theta is the soil water content and h is the pressure head. An analytical description that merely passes through the experimental data points of a soil hydraulic function is typical of an empirical equation characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak of a physically based model. Several transitional subtypes exist between empirical and physically based models; they are denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets for sand, silt, silt loam and loam. All of the soils used are typified by the bi-modality of their porous systems. Model efficiency was estimated by RMSE (root mean square error) and RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function; with this equation, the fit of the modelled data to the experiments was the closest. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data due to the rigidity and simplicity of the physical model when compared to the
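
    A common semi-empirical choice for bi-modal retention curves of the kind compared here is a weighted sum of two van Genuchten sub-curves, one for the structural (macropore) domain and one for the matrix; the sketch below fits such a curve to hypothetical retention points (the data and starting values are invented, and this is not necessarily the exact equation the authors rank as optimal).

```python
import numpy as np
from scipy.optimize import curve_fit

def retention_bimodal(h, theta_r, theta_s, w, a1, n1, a2, n2):
    """Double van Genuchten retention curve theta(h) for a bi-modal soil:
    a weighted sum of a structural (macropore) and a matrix sub-curve.
    h is the suction head (positive, cm)."""
    se1 = (1.0 + (a1 * h) ** n1) ** (-(1.0 - 1.0 / n1))
    se2 = (1.0 + (a2 * h) ** n2) ** (-(1.0 - 1.0 / n2))
    return theta_r + (theta_s - theta_r) * (w * se1 + (1.0 - w) * se2)

# HYPOTHETICAL retention points (suction in cm, water content in m3/m3).
h_obs = np.array([1, 3, 10, 30, 100, 300, 1000, 5000, 15000], dtype=float)
th_obs = np.array([0.46, 0.44, 0.40, 0.33, 0.28, 0.24, 0.20, 0.15, 0.12])

lb = [0.00, 0.30, 0.0, 1e-4, 1.05, 1e-5, 1.05]   # parameter bounds keep the
ub = [0.20, 0.60, 1.0, 1.00, 5.00, 0.10, 3.00]   # fit physically plausible
popt, _ = curve_fit(retention_bimodal, h_obs, th_obs,
                    p0=[0.05, 0.46, 0.4, 0.1, 2.0, 0.005, 1.4], bounds=(lb, ub))
print(dict(zip(["theta_r", "theta_s", "w", "a1", "n1", "a2", "n2"], popt.round(4))))
```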

  12. Empirical Estimation of Hybrid Model: A Controlled Case Study

    Directory of Open Access Journals (Sweden)

    Sadaf Un Nisa

    2012-07-01

    Full Text Available Scrum and Extreme Programming (XP) are frequently used models among all agile models whereas Rational Unified Process (RUP) is one of the widely used conventional plan driven software development models. The agile and plan driven approaches both have their own strengths and weaknesses. The RUP model has certain drawbacks, such as a tendency to run over budget, slow adaptation to rapidly changing requirements, and a reputation for being impractical for small and fast paced projects. The XP model has certain drawbacks such as weak documentation and poor performance for medium and large development projects. XP has a concrete set of engineering practices that emphasizes team work where managers, customers and developers are all equal partners in collaborative teams. Scrum is more concerned with project management. It has seven practices, namely Scrum Master, Scrum teams, Product Backlog, Sprint, Sprint Planning Meeting, Daily Scrum Meeting and Sprint Review. Keeping the above-mentioned context in view, this paper proposes a hybrid model named SPRUP that combines the strengths of Scrum, XP and RUP while eliminating their weaknesses to produce high quality software. The proposed SPRUP model is validated through a controlled case study.

  13. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  14. Empirical assessment of a threshold model for sylvatic plague

    DEFF Research Database (Denmark)

    Davis, Stephen; Leirs, Herwig; Viljugrein, H.

    2007-01-01

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...

  15. Drugs and Crime: An Empirically Based, Interdisciplinary Model

    Science.gov (United States)

    Quinn, James F.; Sneed, Zach

    2008-01-01

    This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…

  16. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  17. Drugs and Crime: An Empirically Based, Interdisciplinary Model

    Science.gov (United States)

    Quinn, James F.; Sneed, Zach

    2008-01-01

    This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…

  18. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model is validated by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it is verified that the model is a typical scale-free, small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the construction and simulation of the model by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.

  19. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Albert–Barabási preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
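
    The proposed comparison can be prototyped in a few lines with networkx: compute the Shannon entropy of a centrality distribution for the empirical graph and for one realization of each candidate model, then pick the model with the closest entropy. The graph sizes, parameters, and the use of a Barabási–Albert graph as a stand-in for empirical data are assumptions of this sketch.

```python
import networkx as nx
import numpy as np

def centrality_entropy(g, bins=20):
    """Shannon entropy (bits) of a graph's degree-centrality distribution."""
    vals = np.array(list(nx.degree_centrality(g).values()))
    hist, _ = np.histogram(vals, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Compare an "empirical" graph against one realization of each model.
empirical = nx.barabasi_albert_graph(1000, 3, seed=7)   # stand-in for real data
models = {
    "Erdos-Renyi": nx.gnp_random_graph(1000, 0.006, seed=1),
    "Watts-Strogatz": nx.watts_strogatz_graph(1000, 6, 0.1, seed=1),
    "Barabasi-Albert": nx.barabasi_albert_graph(1000, 3, seed=1),
}
h_emp = centrality_entropy(empirical)
best = min(models, key=lambda name: abs(centrality_entropy(models[name]) - h_emp))
print("closest model:", best)
```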

  20. Empirical Study on Deep Learning Models for Question Answering

    OpenAIRE

    Yu, Yang; Zhang, Wei; Hang, Chung-Wei; Xiang, Bing; Zhou, Bowen

    2015-01-01

    In this paper we explore deep learning models with a memory component or attention mechanism for the question answering task. We combine and compare three models, Neural Machine Translation, Neural Turing Machine, and Memory Networks, on a simulated QA data set. This paper is the first to use Neural Machine Translation and Neural Turing Machines for solving QA tasks. Our results suggest that the combination of attention and memory has the potential to solve certain QA problems.

  1. Empirical modelling of NOx emissions

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, L.S.; Lans, R. van der; Glarborg, P.; Dam-Johansen, K. [Technical University of Denmark Lyngby (Denmark). Dept. of Chemical Engineering

    1998-12-31

    The applicability of predicting nitrogen oxide emissions and burnout from swirling pulverised coal flames using ideal chemical reactors was investigated. The flow pattern inside the furnace was modelled, as was the mixing between the combustion air and the fuel inside the reactors, using a first order reaction for dissolution of air into the combustion zone. Devolatilisation is assumed to occur much faster than char combustion, with HCN as the primary volatile fuel nitrogen product. Char oxidation is modelled by a single film model with changing particle size and density. Oxidation of HCN is modelled with two reaction channels. The temperature is input from measurements. The model was verified against experimental data obtained from the cylindrical, 5 m long and 0.5 m diameter Mitsui Babcock Energy Ltd. test rig (160 kWth) for a Colombian, a Polish and a South African coal. The model was able to predict the NO concentration and carbon in ash reasonably well, and could predict relative differences in NO concentrations between the three coals. However, the simple reaction mechanism for the formation of NO from HCN fails at a primary stoichiometry below 0.9 for staged combustion. A short sensitivity analysis was performed for the most important parameters, which showed that the model is sensitive to the particle size distribution. Although the model has only been tested against the small scale test rig, the data have been compared with full scale tests conducted by ELSAM in Denmark with the same coals. In these tests NO emissions varied but the relative differences between the coals were identical. This means that the model can indirectly predict the NO emissions, depending on coal type, from the full scale power stations. 23 refs., 20 figs., 6 tabs.

  2. An Empirical Comparison of Probability Models for Dependency Grammar

    CERN Document Server

    Eisner, J

    1997-01-01

    This technical report is an appendix to Eisner (1996): it gives superior experimental results that were reported only in the talk version of that paper. Eisner (1996) trained three probability models on a small set of about 4,000 conjunction-free, dependency-grammar parses derived from the Wall Street Journal section of the Penn Treebank, and then evaluated the models on a held-out test set, using a novel O(n^3) parsing algorithm. The present paper describes some details of the experiments and repeats them with a larger training set of 25,000 sentences. As reported at the talk, the more extensive training yields greatly improved performance. Nearly half the sentences are parsed with no misattachments; two-thirds are parsed with at most one misattachment. Of the models described in the original written paper, the best score is still obtained with the generative (top-down) "model C." However, slightly better models are also explored, in particular, two variants on the comprehension (bottom-up) "model B." The be...

  3. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  4. Time-varying disaster risk models: An empirical assessment of the Rietz-Barro hypothesis

    DEFF Research Database (Denmark)

    Irarrazabal, Alfonso; Parra-Alvarez, Juan Carlos

    This paper revisits the fit of disaster risk models where a representative agent has recursive preferences and the probability of a macroeconomic disaster changes over time. We calibrate the model as in Wachter (2013) and perform two sets of tests to assess the empirical performance of the model ...

  5. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  6. Political economy models and agricultural policy formation: empirical applicability and relevance for the CAP.

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in ind

  7. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formati

  8. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users’ battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates...
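
    The shape of such a model can be conveyed with a toy version: a baseline receive power, terms that grow with the receive data rate and the transmit power, and a cheap DRX sleep state weighted by the sleep fraction. All coefficients below are illustrative placeholders, not the measured values from the paper.

```python
def lte_power_mw(rx_mbps, tx_dbm, drx_sleep_frac):
    """Toy LTE modem power model (mW): baseline plus rate- and TX-power-
    dependent terms while active, a cheap sleep state under DRX.
    ALL coefficients are illustrative placeholders, not measured values."""
    p_active = 1200.0 + 8.0 * rx_mbps + 4.0 * 10 ** (tx_dbm / 10.0)
    p_sleep = 25.0
    return (1.0 - drx_sleep_frac) * p_active + drx_sleep_frac * p_sleep

# Streaming at 10 Mbit/s, 0 dBm uplink, DRX asleep 60% of the time.
print(lte_power_mw(rx_mbps=10.0, tx_dbm=0.0, drx_sleep_frac=0.6))
```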

  9. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    Science.gov (United States)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationship among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility mediates the relationship between organizational learning capability and business model innovation; and interactions among strategic flexibility, explorative learning and exploitative learning play significant roles in both radical and incremental business model innovation.

  10. Empirical slip and viscosity model performance for microscale gas flows.

    Energy Technology Data Exchange (ETDEWEB)

    Gallis, Michail A.; Boyd, Iain D. (University of Michigan, Ann Arbor, MI); McNenly, Matthew J. (University of Michigan, Ann Arbor, MI)

    2004-07-01

    For the simple geometries of Couette and Poiseuille flows, the velocity profile maintains a similar shape from continuum to free molecular flow. Therefore, modifications to the fluid viscosity and slip boundary conditions can improve the continuum based Navier-Stokes solution in the non-continuum non-equilibrium regime. In this investigation, the optimal modifications are found by a linear least-squares fit of the Navier-Stokes solution to the non-equilibrium solution obtained using the direct simulation Monte Carlo (DSMC) method. Models are then constructed for the Knudsen number dependence of the viscosity correction and the slip model from a database of DSMC solutions for Couette and Poiseuille flows of argon and nitrogen gas, with Knudsen numbers ranging from 0.01 to 10. Finally, the accuracy of the models is measured for non-equilibrium cases both in and outside the DSMC database. Flows outside the database include: combined Couette and Poiseuille flow, partial wall accommodation, helium gas, and non-zero convective acceleration. The models reproduce the velocity profiles in the DSMC database within an L2 error norm of 3% for Couette flows and 7% for Poiseuille flows. However, the errors in the model predictions outside the database are up to five times larger.
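
    To make the role of the two modifications concrete, the sketch below evaluates a planar Poiseuille profile with a first-order Maxwell-type slip wall condition and a viscosity correction factor passed in as a parameter; the channel dimensions, gas properties, and the specific slip form are assumptions, not the fitted models from the study.

```python
import numpy as np

def poiseuille_slip(y, H, dpdx, mu, kn, sigma=1.0, c_mu=1.0):
    """Planar Poiseuille velocity profile with a first-order Maxwell slip
    wall condition and an optional viscosity correction factor c_mu(Kn)
    (passed in directly; the study fits such corrections to DSMC data).
    y: positions in [-H/2, H/2]; dpdx: pressure gradient (negative drives +x).
    """
    mu_eff = c_mu * mu
    g = -dpdx
    # No-slip parabola plus a uniform slip offset proportional to Kn.
    u_bulk = g / (2.0 * mu_eff) * ((H / 2.0) ** 2 - y ** 2)
    u_slip = (2.0 - sigma) / sigma * kn * g * H ** 2 / (2.0 * mu_eff)
    return u_bulk + u_slip

y = np.linspace(-0.5, 0.5, 5) * 1e-6                 # 1 micron channel
u = poiseuille_slip(y, H=1e-6, dpdx=-1e8, mu=2.2e-5, kn=0.07)
print(u)  # slip lifts the whole profile relative to the no-slip parabola
```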

  11. Empirical and modeled synoptic cloud climatology of the Arctic Ocean

    Science.gov (United States)

    Barry, R. G.; Newell, J. P.; Schweiger, A.; Crane, R. G.

    1986-01-01

    A set of cloud cover data was developed for the Arctic during the climatically important spring/early summer transition months. Parallel with the determination of mean monthly cloud conditions, data for different synoptic pressure patterns were also composited as a means of evaluating the role of synoptic variability in Arctic cloud regimes. In order to carry out this analysis, a synoptic classification scheme was developed for the Arctic using an objective typing procedure. A second major objective was to analyze model output of pressure fields and cloud parameters from a control run of the Goddard Institute for Space Studies climate model for the same area and to intercompare the synoptic climatology of the model with that based on the observational data.

  12. An Empirical Model of Wage Dispersion with Sorting

    DEFF Research Database (Denmark)

    Bagger, Jesper; Lentz, Rasmus

    This paper studies wage dispersion in an equilibrium on-the-job-search model with endogenous search intensity. Workers differ in their permanent skill level and firms differ with respect to productivity. Positive (negative) sorting results if the match production function is supermodular...

  13. Empirical validation data sets for double skin facade models

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate emp...

  14. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a sta

  15. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  16. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  17. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    Full Text Available This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Secondly, analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.
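
    A two-compartment infusion model with clearance scaled by creatinine clearance, the core idea of the semiphysiological approach, can be sketched as follows; the volumes, intercompartmental clearance, CrCL values, and the linear CL-CrCL scaling are assumptions for illustration (only the baseline CL estimate is taken from the abstract).

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_cmt_infusion(t, y, cl, v1, v2, q, rate, t_inf):
    """Two-compartment PK model with a zero-order infusion (amounts in mg,
    volumes in L, clearances in L/h, time in h)."""
    a1, a2 = y
    r_in = rate if t <= t_inf else 0.0
    da1 = r_in - (cl / v1) * a1 - (q / v1) * a1 + (q / v2) * a2
    da2 = (q / v1) * a1 - (q / v2) * a2
    return [da1, da2]

def cl_gestational(cl_base, crcl, crcl_ref=6.0):
    """Scale clearance linearly with creatinine clearance (L/h); the
    reference value and the linearity are assumptions of this sketch."""
    return cl_base * crcl / crcl_ref

cl = cl_gestational(cl_base=0.142 * 60.0, crcl=9.0)      # abstract's CL, scaled
sol = solve_ivp(two_cmt_infusion, [0.0, 8.0], [0.0, 0.0],
                args=(cl, 10.0, 8.0, 5.0, 2000.0 / 0.5, 0.5), max_step=0.05)
print("predicted Cmax ~", round((sol.y[0] / 10.0).max(), 1), "mg/L")
```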

  18. Empirical genome evolution models root the tree of life.

    Science.gov (United States)

    Harish, Ajith; Kurland, Charles G

    2017-07-01

    A reliable phylogenetic reconstruction of the evolutionary history of contemporary species depends on a robust identification of the universal common ancestor (UCA) at the root of the Tree of Life (ToL). That root polarizes the tree so that the evolutionary succession of ancestors to descendants is discernable. In effect, the root determines the branching order and the direction of character evolution. Typically, conventional phylogenetic analyses implement time-reversible models of evolution for which character evolution is un-polarized. Such practices leave the root and the direction of character evolution undefined by the data used to construct such trees. In such cases, rooting relies on theoretic assumptions and/or the use of external data to interpret unrooted trees. The most common rooting method, the outgroup method, is clearly inapplicable to the ToL, which has no outgroup. Both here and in the accompanying paper (Harish and Kurland, 2017) we have explored the theoretical and technical issues related to several rooting methods. We demonstrate (1) that genome-level characters and evolution models are necessary for species phylogeny reconstructions; by the same token, standard practices exploiting sequence-based methods that implement gene-scale substitution models do not root species trees; (2) modeling evolution of complex genomic characters and processes that are non-reversible and non-stationary is required to reconstruct the polarized evolution of the ToL; (3) rooting experiments and Bayesian model selection tests overwhelmingly support the earlier finding that akaryotes and eukaryotes are sister clades that descend independently from UCA (Harish and Kurland, 2013); (4) consistent ancestral state reconstructions from independent genome samplings confirm the previous finding that UCA features three fourths of the unique protein domain-superfamilies encoded by extant genomes.

  19. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the feature of a nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish the consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.

  20. An empirical mixing model for pressurized thermal shock applications

    Energy Technology Data Exchange (ETDEWEB)

    Chexal, V.K.; Chao, J.; Griesbach, T.J.; Nickell, R.E.

    1985-04-01

    Empirical correlations are developed for the local temperature and velocity distributions in the pressurized water reactor downcomer for pressurized thermal shock scenarios. The correlation is based on Creare test data and has been validated with Science Applications, Inc., experiments and COMMIX code calculations. It provides good agreement under pump flow and natural circulation conditions and gives a conservative estimate under stagnation conditions.

  1. Empirical Bayes Estimation in the Rasch Model: A Simulation.

    Science.gov (United States)

    de Gruijter, Dato N. M.

    In a situation where the population distribution of latent trait scores can be estimated, the ordinary maximum likelihood estimator of latent trait scores may be improved upon by taking the estimated population distribution into account. In this paper empirical Bayes estimators are compared with the likelihood estimator for three samples of 300…

  2. Empirical study on entropy models of cellular manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    Zhifeng Zhang; Renbin Xiao

    2009-01-01

    From a theoretical point of view, the states of manufacturing resources can be monitored and assessed through the amount of information needed to describe their technological structure and operational state. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: the structural entropy and the operational entropy. Based on the Shannon entropy, models of the structural entropy and the operational entropy of cellular manufacturing systems are developed, and the cognizance of the states of manufacturing resources is also illustrated. Scheduling is introduced to measure the entropy models of cellular manufacturing systems, and the feasible concepts of maximum schedule horizon and schedule adherence are advanced to quantitatively evaluate the effectiveness of schedules. Finally, an example is used to demonstrate the validity of the proposed methodology.

  3. AN EMPIRICAL MODEL OF ONLINE BUYING CONTINUANCE INTENTION

    OpenAIRE

    ORZAN Gheorghe; Claudia ICONARU; MACOVEI Octav-Ionut

    2012-01-01

    The aim of this paper is to propose, test and validate a model of consumers' continuance intention to buy online as a main function of affective attitude towards using the Internet for purchasing goods and services and the overall satisfaction towards the decision of buying online. The confirmation of initial expectations regarding online buying is the main predictor of online consumers' satisfaction and online consumers' perceived usefulness of online buying. Affective attitude is mediating ...

  4. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was developed and companies were contacted in order to determine which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that supports decision making in the supplier/partner selection process.
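
    The linear weighting model lends itself to a very short sketch: weight each criterion, score each supplier per criterion, and rank by the weighted sum (SMART-style). The weights and scores below are hypothetical.

```python
import numpy as np

# SMART-style linear weighting over the five criteria from the survey.
criteria = ["quality", "financial", "synergies", "cost", "production_system"]
weights = np.array([0.30, 0.15, 0.10, 0.25, 0.20])   # illustrative, sum to 1

# Supplier scores on a 0-10 scale for each criterion (hypothetical data).
scores = {
    "supplier_A": np.array([8, 6, 5, 7, 6]),
    "supplier_B": np.array([6, 8, 7, 9, 5]),
    "supplier_C": np.array([9, 5, 6, 4, 8]),
}

# Weighted sum per supplier, ranked best-first.
ranking = sorted(((float(weights @ s), name) for name, s in scores.items()), reverse=True)
for total, name in ranking:
    print(f"{name}: {total:.2f}")
```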

  5. A model of deep ecotourism development and its empirical study

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Ecotourism requires harmony among all factors involved in the tourism system for multilateral benefit. Based on this understanding, a concept of deep ecotourism development is put forward with two connotations: on the one hand, it should give prominence to the display of the eco-culture of the tourist destination and to tourists' eco-experience, thereby regulating both the development behavior in the tourist destination and the tourists' behavior; on the other hand, it implies deep harmony among tourist entrepreneurs and tourists, the local governments and the local residents, as well as between tourist activities and the ecological environment, for the multilateral benefit of every element involved and for sustainable tourism development. The degree of ecotourism in a given destination will differ, and consequently four levels of ecotourism are distinguished: very shallow ecotourism, shallow ecotourism, deep ecotourism and very deep ecotourism. To move shallow ecotourism toward deep ecotourism, two models, "four subjects and two wings" and "connecting the two wings", of the deep ecotourism development system are introduced to make the ecotourism industry favorable to the display of eco-culture and the sustainable development of the destination community. With the two models, a case study of ecotourism development in Louguantal National Forest Park is presented as a demonstration. The ultimate purpose is to build an ideal new Shangri-La.

  6. Modeling Active Aging and Explicit Memory: An Empirical Study.

    Science.gov (United States)

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  7. Empirically Grounded Agent-Based Models of Innovation Diffusion: A Critical Review

    CERN Document Server

    Zhang, Haifeng

    2016-01-01

    Innovation diffusion has been studied extensively in a variety of disciplines, including sociology, economics, marketing, ecology, and computer science. Traditional literature on innovation diffusion has been dominated by models of aggregate behavior and trends. However, the agent-based modeling (ABM) paradigm is gaining popularity as it captures agent heterogeneity and enables fine-grained modeling of interactions mediated by social and geographic networks. While most ABM work on innovation diffusion is theoretical, empirically grounded models are increasingly important, particularly in guiding policy decisions. We present a critical review of empirically grounded agent-based models of innovation diffusion, developing a categorization of this research based on types of agent models as well as applications. By connecting the modeling methodologies in the fields of information and innovation diffusion, we suggest that the maximum likelihood estimation framework widely used in the former is a promising paradigm...

  8. Libor and Swap Market Models for the Pricing of Interest Rate Derivatives : An Empirical Analysis

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; Driessen, J.J.A.G.; Pelsser, A.

    2000-01-01

    In this paper we empirically analyze and compare the Libor and Swap Market Models, developed by Brace, Gatarek, and Musiela (1997) and Jamshidian (1997), using panel data on prices of US caplets and swaptions. A Libor Market Model can directly be calibrated to observed prices of caplets, whereas a

  9. Empirical modeling of soot formation in shock-tube pyrolysis of aromatic hydrocarbons

    Science.gov (United States)

    Frenklach, M.; Clary, D. W.; Matula, R. A.

    1986-01-01

    A method for empirical modeling of soot formation during shock-tube pyrolysis of aromatic hydrocarbons is developed. The method is demonstrated using data obtained in pyrolysis of argon-diluted mixtures of toluene behind reflected shock waves. The developed model is in good agreement with experiment.

  10. Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results

    Science.gov (United States)

    Mayer, Robert V.

    2016-01-01

    The use of simulation ('imitational') modeling to study the formation of empirical knowledge in the pupil's mind is discussed. The proposed model is based on dividing the physical facts into three categories: 1) the facts established in everyday life; 2) the facts which the pupil can establish experimentally at a physics lesson; 3) the facts which…

  11. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    Science.gov (United States)

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…

  12. Development of Solar Wind Model Driven by Empirical Heat Flux and Pressure Terms

    Science.gov (United States)

    Sittler, Edward C., Jr.; Ofman, L.; Selwa, M.; Kramar, M.

    2008-01-01

    We are developing a time-stationary, self-consistent 2D MHD model of the solar corona and solar wind as suggested by Sittler et al. (2003). Sittler & Guhathakurta (1999) developed a semi-empirical steady-state model (SG model) of the solar wind in a multipole 3-streamer structure, with the model constrained by Skylab observations. Guhathakurta et al. (2006) presented a more recent version of this initial work. Sittler et al. (2003) modified the SG model by investigating time-dependent MHD with an ad hoc heating term, heat conduction, and empirical heating solutions. The next step in the development of 2D MHD models was taken by Sittler & Ofman (2006), who derived the effective temperature and effective heat flux from the data-driven SG model and fit smooth analytical functions to be used in MHD calculations. Improvements to the Sittler & Ofman (2006) results now show a convergence of the 3-streamer topology into a single equatorial streamer at altitudes > 2 R(sub S). This is a new result and shows we are now able to reproduce observations of an equatorially confined streamer belt. To allow our solutions to be applied more generally, we extend that model by using magnetogram data and the PFSS model as a boundary condition. Initial results were presented by Selwa et al. (2008). We choose solar-minimum magnetogram data, since during solar maximum the boundary conditions are more complex and the coronal magnetic field may not be described correctly by the PFSS model. As a first step we studied the simplest 2D MHD case with variable heat conduction and with empirical heat input combined with empirical momentum addition for the fast solar wind. We use realistic magnetic field data based on NSO/GONG data, and plan to extend the study to 3D. This study represents the first attempt at a fully self-consistent, realistic model based on real data and including semi-empirical heat flux and semi-empirical effective pressure terms.

  13. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is χ²_{p+q}, where p and q are the dimensions of the fixed and random effects, respectively. We then discuss empirical likelihood inference in a semi-linear errors-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to χ²_{p+q}. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal-approximation-based confidence region.

  14. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    The purpose of this paper is to give a comprehensive theoretical review devoted to asset pricing models, emphasizing their static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of pay offs or cash flows discounted for risk and time lags. The difficulty in the discounting process is that the relevant factors that affect the pay offs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  15. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    CERN Document Server

    Guichard, Stéphane; Bigot, Dimitri; Malet-Damour, Bruno; Libelle, Teddy; Boyer, Harry

    2015-01-01

    This paper deals with the empirical validation of a building thermal model using a phase change material (PCM) in a complex roof. A mathematical model dedicated to phase change materials based on the heat apparent capacity method was implemented in a multi-zone building simulation code, the aim being to increase understanding of the thermal behavior of the whole building with PCM technologies. To empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of a generic optimization program called GenOpt, coupled to the building simulation code, enabled us to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons o...

  16. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    Science.gov (United States)

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
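
    A minimal sketch of how such PFA-based Behavioral Recognition can work, assuming each strategy's PFA has already been learned from the training traces; the dictionary encoding, state names, actions, and probabilities below are illustrative stand-ins, not the paper's implementation:

```python
import math

def sequence_log_likelihood(pfa, observations):
    """Score an observation sequence under a PFA with the forward algorithm."""
    # pfa: {"init": {state: prob}, "trans": {state: {(action, next_state): prob}}}
    alpha = dict(pfa["init"])          # forward probabilities per state
    loglik = 0.0
    for action in observations:
        new_alpha = {}
        for state, p in alpha.items():
            for (a, nxt), t in pfa["trans"].get(state, {}).items():
                if a == action:
                    new_alpha[nxt] = new_alpha.get(nxt, 0.0) + p * t
        total = sum(new_alpha.values())
        if total == 0.0:
            return float("-inf")       # trace impossible under this PFA
        loglik += math.log(total)
        # renormalize to keep the recursion numerically stable
        alpha = {s: p / total for s, p in new_alpha.items()}
    return loglik

def recognize(pfas_by_strategy, observations):
    """Behavioral Recognition: pick the strategy whose PFA best explains the trace."""
    return max(pfas_by_strategy,
               key=lambda name: sequence_log_likelihood(pfas_by_strategy[name], observations))

# Illustrative two-strategy example for a vacuum-cleaner-like agent:
spiral = {"init": {"s": 1.0},
          "trans": {"s": {("turn", "s"): 0.8, ("forward", "s"): 0.2}}}
sweep = {"init": {"s": 1.0},
         "trans": {"s": {("turn", "s"): 0.2, ("forward", "s"): 0.8}}}
trace = ["forward", "forward", "turn", "forward"]
print(recognize({"spiral": spiral, "sweep": sweep}, trace))   # -> "sweep"
```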

  17. Cycle length maximization in PWRs using empirical core models

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
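
    To illustrate the reduction described above, the following sketch poses a toy cycle-length maximization as a linear program, assuming (hypothetically) that cycle length and a power-peaking constraint are linear in three zone enrichments; all coefficients are invented for illustration and are not taken from the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear correlations: cycle-length gain per unit enrichment in
# three control zones, and a power-peaking response that must stay bounded.
gain = np.array([120.0, 90.0, 60.0])      # full-power days per wt% U-235
A_ub = np.array([[1.8, 1.2, 0.7]])        # peaking-factor response coefficients
b_ub = np.array([5.0])                    # peaking-factor budget
bounds = [(1.5, 4.5)] * 3                 # enrichment range where correlations hold

# linprog minimizes, so negate the objective to maximize cycle length.
res = linprog(-gain, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal zone enrichments:", res.x)
print("predicted cycle-length objective:", -res.fun)
```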

  18. Temporal structure of neuronal population oscillations with empirical mode decomposition

    Science.gov (United States)

    Li, Xiaoli

    2006-08-01

    Frequency analysis of neuronal oscillations is very important for understanding neural information processing and mechanisms of disorder in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) are obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo; the results show that the neuronal oscillations have different descriptions during the pre-ictal, seizure-onset, and ictal periods of the epileptic EEG at different frequency bands. This new method is very helpful for providing a view of the temporal structure of neural oscillations.
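
    The Hilbert step of the method can be sketched as follows; the EMD sifting itself is assumed to be done by an external implementation, and a synthetic chirp stands in for one IMF:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (8.0 + 2.0 * t) * t)    # synthetic chirp standing in for one IMF

analytic = hilbert(imf)                          # analytic signal of the IMF
amplitude = np.abs(analytic)                     # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))            # instantaneous phase
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)
```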

  19. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  20. Institutions and foreign direct investment (FDI) in Malaysia: empirical evidence using ARDL model

    OpenAIRE

    Abdul Karim, Zulkefly; Zaidi, Mohd Azlan Shah; Ismail, Mohd Adib; Abdul Karim, Bakri

    2011-01-01

    Since the 1990s, institutional factors have been regarded as playing important roles in stimulating foreign direct investment (FDI). However, empirical studies on their importance in affecting FDI are still lacking, especially for small open economies. This paper attempts to investigate the role of institutions upon the inflow of foreign direct investment (FDI) in the small open economy of Malaysia. Using a bounds testing approach (ARDL model), the empirical findings reveal that there exists a long ru...

  1. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2014-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  2. Dynamic Modeling of a Reformed Methanol Fuel Cell System using Empirical Data and Adaptive Neuro-Fuzzy Inference System Models

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Shaker, Hamid Reza

    2013-01-01

    In this work, a dynamic MATLAB Simulink model of a H3-350 Reformed Methanol Fuel Cell (RMFC) stand-alone battery charger produced by Serenergy is developed on the basis of theoretical and empirical methods. The advantage of RMFC systems is that they use liquid methanol as a fuel instead of gaseous...... of the reforming process are implemented. Models of the cooling flow of the blowers for the fuel cell and the burner which supplies process heat for the reformer are made. The two blowers have a common exhaust, which means that the two blowers influence each other’s output. The models take this into account using...... an empirical approach. Fin efficiency models for the cooling effect of the air are also developed using empirical methods. A fuel cell model is also implemented based on a standard model which is adapted to fit the measured performance of the H3-350 module. All the individual parts of the model are verified...

  3. EMPIRICAL VERIFICATION OF ANISOTROPIC HYDRODYNAMIC TRAFFIC MODEL IN TRAFFIC ANALYSIS AT INTERSECTIONS IN URBAN AREA

    Institute of Scientific and Technical Information of China (English)

    WEI Yan-fang; GUO Si-ling; XUE Yu

    2007-01-01

    In this article, a traffic hydrodynamic model considering the driver's reaction time was applied to traffic analysis at intersections on real roads. In the numerical simulation with the model, a pinch effect of the right-turning vehicle flow was found, which mainly leads to traffic jamming on the straight lane. All of the results, in accordance with the empirical data, confirm the applicability of this model.

  4. Empirical wind retrieval model based on SAR spectrum measurements

    Science.gov (United States)

    Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad

    The present paper considers polarimetric SAR wind vector applications. Remote-sensing measurements of the near-surface wind over the ocean are of great importance for the understanding of atmosphere-ocean interaction. In recent years investigations for wind vector retrieval using Synthetic Aperture Radar (SAR) data have been performed. In contrast with scatterometers, a SAR has a finer spatial resolution that makes it a more suitable microwave instrument to explore wind conditions in the marginal ice zones, coastal regions and lakes. The wind speed retrieval procedure from scatterometer data matches the measured radar backscattering signal with the geophysical model function (GMF). The GMF determines the radar cross section dependence on the wind speed and direction with respect to the azimuthal angle of the radar beam. Scatterometers provide information on wind speed and direction simultaneously due to the fact that each wind vector cell (WVC) is observed at several azimuth angles. However, SAR is not designed to be used as a high resolution scatterometer. In this case, each WVC is observed at only one single azimuth angle. That is why for wind vector determination additional information such as wind streak orientation over the sea surface is required. It is shown that the wind vector can be obtained using polarimetric SAR without additional information. The main idea is to analyze the spectrum of a homogeneous SAR image area instead of the backscattering normalized radar cross section. Preliminary numerical simulations revealed that SAR image spectral maxima positions depend on the wind vector. Thus the following method for wind speed retrieval is proposed. In the first stage of the algorithm, the SAR spectrum maxima are determined. This procedure is carried out to estimate the wind speed and direction with ambiguities separated by 180 degrees due to the SAR spectrum symmetry. The second stage of the algorithm allows us to select the correct wind direction

  5. Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results

    Directory of Open Access Journals (Sweden)

    Robert V. Mayer

    2016-06-01

    The use of the method of imitational modeling to study the formation of empirical knowledge in the pupil's consciousness is discussed. The offered model is based on the division of the physical facts into three categories: 1) the facts established in everyday life; 2) the facts which the pupil can experimentally establish at a physics lesson; 3) the facts which are studied only on the theoretical level (speculatively or ideally). The forgetting coefficients for facts of the first, second, and third categories are determined, and the imitating model is coordinated with the distribution of empirical information in the school physics course and with testing results. Graphs of the dependence of empirical knowledge on time are given for various physics sections and fact categories.

  6. An empirical approach to update multivariate regression models intended for routine industrial use

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Mencia, M.V.; Andrade, J.M.; Lopez-Mahia, P.; Prada, D. [University of La Coruna, La Coruna (Spain). Dept. of Analytical Chemistry

    2000-11-01

    Many problems currently tackled by analysts are highly complex and, accordingly, multivariate regression models need to be developed. Two intertwined topics are important when such models are to be applied within industrial routines: (1) Does the model account for the 'natural' variance of the production samples? (2) Is the model stable over time? This paper focuses on the second topic and presents an empirical approach in which predictive models developed using Mid-FTIR together with PLS and PCR retained their utility for about nine months when used to predict the octane number of platforming naphthas in a petrochemical refinery. 41 refs., 10 figs., 1 tab.

  7. Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion

    Science.gov (United States)

    Ulbrich, Norbert; Volden, Thomas R.

    2012-01-01

    An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
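
    A sketch of the empirical criterion as described; the precise definition of a term's percent contribution used here (its magnitude at load capacity relative to the summed magnitudes of all terms) is an assumption for illustration, as are the coefficients:

```python
import numpy as np

# One gage output: illustrative regression coefficients and the value each
# term takes at the balance load capacities (two linear terms, a square,
# and a cross term).
coeffs = np.array([1.0e-2, 5.0e-3, 2.0e-7, 4.0e-6])
term_at_capacity = np.array([2500.0, 1200.0, 2500.0**2, 2500.0 * 1200.0])

term_values = np.abs(coeffs * term_at_capacity)
percent = 100.0 * term_values / term_values.sum()   # percent contribution per term
significant = percent > 0.05                        # empirical 0.05% threshold
for i, (pc, keep) in enumerate(zip(percent, significant)):
    print(f"term {i}: {pc:.4f} %  ->  {'keep' if keep else 'drop'}")
```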

  8. An Empirically Driven Time-Dependent Model of the Solar Wind

    Science.gov (United States)

    Linker, Jon A.; Caplan, Ronald M.; Downs, Cooper; Lionello, Roberto; Riley, Pete; Mikic, Zoran; Henney, Carl J.; Arge, Charles N.; Kim, Tae; Pogorelov, Nikolai

    2016-05-01

    We describe the development and application of a time-dependent model of the solar wind. The model is empirically driven, starting from magnetic maps created with the Air Force Data Assimilative Photospheric flux Transport (ADAPT) model at a daily cadence. Potential field solutions are used to model the coronal magnetic field, and an empirical specification is used to develop boundary conditions for an MHD model of the solar wind. The time-dependent MHD simulation shows classic features of stream structure in the interplanetary medium that are seen in steady-state models; it also shows time evolutionary features that do not appear in a steady-state approach. The model results compare reasonably well with 1 AU OMNI observations. Data gaps when SOLIS magnetograms were unavailable hinder the model performance. The reasonable comparisons with observations suggest that this modeling approach is suitable for driving long term models of the outer heliosphere. Improvements to the ingestion of magnetograms in flux transport models will be necessary to apply this approach in a time-dependent space weather model.

  9. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow for a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as extensive a dataset as the one used in this study. It covers a wide range of incidence angles (18°–57°) and radar wavelengths (L, C, X), well distributed geographically across regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well for different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.

  10. Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models

    Science.gov (United States)

    Brown, Clifford A.

    2016-01-01

    The effect of finite span on the jet-surface interaction noise source and on the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented, with particular attention to the role of surface span in the far-field noise. These effects are then included in existing empirical models that have previously assumed all surfaces to be semi-infinite. This extended abstract briefly describes the experimental setup and data, leaving the empirical modeling aspects for the final paper.

  11. Empathy versus parsimony in understanding post-conflict affiliation in monkeys: model and empirical data.

    Directory of Open Access Journals (Sweden)

    Ivan Puga-Gonzalez

    Post-conflict affiliation between former opponents and bystanders occurs in several species of non-human primates. It is classified in four categories, of which affiliation received by the former victim, 'consolation', has received most attention. The hypotheses of cognitive constraint and social constraint are inadequate to explain its occurrence. The cognitive constraint hypothesis is contradicted by recent evidence of 'consolation' in monkeys, and the social constraint hypothesis lacks an explanation of why 'consolation' actually happens. Here, we combine a computational model and an empirical study to investigate the minimum cognitive requirements for post-conflict affiliation. In the individual-based model, individuals are steered by cognitively simple behavioural rules. Individuals group, and when nearby each other they fight if they are likely to win; otherwise, they may groom, especially when anxious. We parameterize the model after empirical data of a tolerant species, the Tonkean macaque (Macaca tonkeana). We find evidence for the four categories of post-conflict affiliation in the model and in the empirical data. We explain how in the model these patterns emerge from the combination of a weak hierarchy, social facilitation, risk-sensitive aggression, interactions with partners close by, and grooming as a tension-reduction mechanism. We indicate how this may function as a new explanation for the empirical data.

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
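
    The envelope comparison can be sketched as follows, with synthetic AR(1) series standing in for the instrumental/reconstruction-based unforced-noise realizations; the percentile band and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
years, n_real = 100, 1000

# Synthetic AR(1) realizations of unforced GMT variability (deg C):
noise = np.zeros((n_real, years))
for t in range(1, years):
    noise[:, t] = 0.6 * noise[:, t - 1] + 0.08 * rng.standard_normal(n_real)

# Empirical EUN: pointwise percentile band of the unforced realizations.
eun_lo, eun_hi = np.percentile(noise, [2.5, 97.5], axis=0)

forced = np.linspace(0.0, 0.9, years)     # model-ensemble forced warming signal
observed = forced + noise[0]              # stand-in for the observed GMT record
residual = observed - forced              # what unforced noise must explain
consistent = (residual >= eun_lo) & (residual <= eun_hi)
print("fraction of years consistent with the forced signal:", consistent.mean())
```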

  13. Empathy versus parsimony in understanding post-conflict affiliation in monkeys: model and empirical data.

    Science.gov (United States)

    Puga-Gonzalez, Ivan; Butovskaya, Marina; Thierry, Bernard; Hemelrijk, Charlotte Korinna

    2014-01-01

    Post-conflict affiliation between former opponents and bystanders occurs in several species of non-human primates. It is classified in four categories, of which affiliation received by the former victim, 'consolation', has received most attention. The hypotheses of cognitive constraint and social constraint are inadequate to explain its occurrence. The cognitive constraint hypothesis is contradicted by recent evidence of 'consolation' in monkeys, and the social constraint hypothesis lacks an explanation of why 'consolation' actually happens. Here, we combine a computational model and an empirical study to investigate the minimum cognitive requirements for post-conflict affiliation. In the individual-based model, individuals are steered by cognitively simple behavioural rules. Individuals group, and when nearby each other they fight if they are likely to win; otherwise, they may groom, especially when anxious. We parameterize the model after empirical data of a tolerant species, the Tonkean macaque (Macaca tonkeana). We find evidence for the four categories of post-conflict affiliation in the model and in the empirical data. We explain how in the model these patterns emerge from the combination of a weak hierarchy, social facilitation, risk-sensitive aggression, interactions with partners close by, and grooming as a tension-reduction mechanism. We indicate how this may function as a new explanation for the empirical data.

  14. Empirical probability model of the cold plasma environment in Jovian inner magnetosphere

    CERN Document Server

    Futaana, Yoshifumi; Roussos, Elias; Trouscott, Pete; Heynderickx, Daniel; Cipriani, Fabrice; Rodgers, David

    2016-01-01

    A new empirical, analytical model of cold plasma (< 10 keV) in the Jovian inner magnetosphere is constructed. Plasmas in this energy range impact surface charging. A new feature of this model is that it predicts each plasma parameter for a specified probability (percentile). The new model was produced as follows. We start from a reference model for each plasma parameter, which was scaled to fit the data of the Galileo plasma spectrometer. The scaled model was then represented as a function of radial distance, magnetic local time, and magnetic latitude, presumably describing the mean states. The deviations of the observed values from the model were then attributed to variability in the environment, which was accounted for by the percentile at a given location. The input parameters for this model are the spacecraft position and the percentile. The model is intended to be used for JUICE mission analysis.

  15. An anthology of theories and models of design philosophy, approaches and empirical explorations

    CERN Document Server

    Blessing, Lucienne

    2014-01-01

    While investigations into both theories and models have remained a major strand of engineering design research, current literature sorely lacks a reference book that provides a comprehensive and up-to-date anthology of theories and models and their philosophical and empirical underpinnings; An Anthology of Theories and Models of Design fills this gap. The text collects the expert views of an international authorship, covering: significant theories in engineering design, including CK theory, domain theory, and the theory of technical systems; current models of design, from a function behavior structure model to an integrated model; important empirical research findings from studies into design; and philosophical underpinnings of design itself. For educators and researchers in engineering design, An Anthology of Theories and Models of Design gives access to in-depth coverage of theoretical and empirical developments in this area; for pr...

  16. Classification and estimation in the Stochastic Block Model based on the empirical degrees

    CERN Document Server

    Channarond, Antoine; Robin, Stéphane

    2011-01-01

    The Stochastic Block Model (Holland et al., 1983) is a mixture model for heterogeneous network data. Unlike the usual statistical framework, new nodes give additional information about the previous ones in this model. Thereby the distribution of the degrees concentrates around class-dependent points, conditionally on the node class. We show, under a mild assumption, that classification, estimation, and model selection can actually be achieved with no more than the empirical degree data. We provide an algorithm able to process very large networks, and consistent estimators based on it. In particular, we prove a bound on the probability of misclassifying at least one node, including when the number of classes grows.
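
    A minimal sketch of degree-based classification in a two-class SBM, assuming the classes differ in expected degree; the parameters and the use of 1-D k-means on the degrees are illustrative choices, not the authors' algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n = 1000
labels = np.repeat([0, 1], n // 2)                  # two hidden classes
p = np.array([[0.10, 0.02],                         # edge probabilities by class pair
              [0.02, 0.05]])

prob = p[labels][:, labels]                         # per-pair edge probabilities
upper = np.triu(rng.random((n, n)) < prob, 1)       # sample undirected edges once
adj = upper | upper.T

degrees = adj.sum(axis=1).reshape(-1, 1)            # the only statistic used
pred = KMeans(n_clusters=2, n_init=10).fit_predict(degrees)
agreement = max((pred == labels).mean(), (pred != labels).mean())
print("fraction of nodes correctly classified:", agreement)
```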

  17. Empirical valence bond model of an SN2 reaction in polar and nonpolar solvents

    Science.gov (United States)

    Benjamin, Ilan

    2008-08-01

    A new model for the bimolecular nucleophilic substitution (SN2) reaction in solution is described using the empirical valence bond (EVB) method. The model includes a generalization to three dimensions of a collinear gas-phase EVB model developed by Mathis et al. [J. Mol. Liq. 61, 81 (1994)] and a parametrization of solute-solvent interactions for four different solvents (water, ethanol, chloroform, and carbon tetrachloride). The model is used to compute (in these four solvents) reaction free energy profiles, reaction and solvent dynamics, a two-dimensional reaction/solvent free energy map, as well as a number of other properties that in the past have mostly been estimated.
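
    The core EVB construction, in which the adiabatic ground-state potential is the lowest eigenvalue of a 2x2 Hamiltonian coupling reactant and product diabatic states, can be sketched as follows; the harmonic diabats and the constant coupling are illustrative stand-ins for the paper's parametrization:

```python
import numpy as np

def evb_ground_state(q, coupling=8.0):
    """Lowest eigenvalue of the 2x2 EVB Hamiltonian at reaction coordinate q."""
    v11 = 0.5 * 50.0 * (q + 0.7) ** 2          # reactant diabatic state
    v22 = 0.5 * 50.0 * (q - 0.7) ** 2 + 2.0    # product diabatic state, 2 units uphill
    h = np.array([[v11, coupling],
                  [coupling, v22]])
    return np.linalg.eigvalsh(h)[0]            # adiabatic ground-state energy

q_grid = np.linspace(-1.5, 1.5, 301)           # collective reaction coordinate
profile = np.array([evb_ground_state(q) for q in q_grid])
barrier = profile[len(profile) // 2] - profile.min()
print(f"illustrative activation barrier: {barrier:.2f} energy units")
```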

  18. An Empirical Validation of Building Simulation Software for Modelling of Double-Skin Facade (DSF)

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Felsmann, Clemens

    2009-01-01

    buildings, but their accuracy might be limited in cases with DSFs because of the complexity of the heat and mass transfer processes within the DSF. To address this problem, an empirical validation of building models with DSF, performed with various building simulation tools (ESP-r, IDA ICE 3.0, VA114...... of DSF: 1. Thermal buffer mode (closed DSF cavity) and 2. External air curtain mode (naturally ventilated DSF cavity with the top and bottom openings open to outdoors). By carrying out the empirical tests, it was concluded that all models experience difficulties in predictions during the peak solar loads....... None of the models was consistent enough when comparing simulation results with experimental data for the ventilated cavity. However, some models showed reasonable agreement with the experimental results for the thermal buffer mode....

  19. Empirical study and modeling of human behaviour dynamics of comments on Blog posts

    CERN Document Server

    Guo, Jin-Li

    2010-01-01

    On-line communities offer a great opportunity to investigate human dynamics, because much information about individuals is registered in databases. In this paper, based on statistics of online comments on Blog posts, we first present an empirical study of the comment arrival-time interval distribution. We find that people interested in some subjects gradually disappear and that the interval distribution is a power law. Based on this feature, we propose a model with gradually decaying interest. We give a rigorous analysis of the model using non-homogeneous Poisson processes and obtain an analytic expression for the interval distribution. Our analysis indicates that the time interval between two consecutive events follows a power-law distribution with a tunable exponent, which can be controlled by the model parameters and lies in the interval (1, ∞). The analytical result agrees well with the empirical results, obeying an approximately power-law form. Our model provides a theoretical basis for human behaviour dyn...
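
    The decaying-interest mechanism can be sketched with a non-homogeneous Poisson process sampled by thinning; the hyperbolically decaying rate chosen here is an illustrative form consistent with "gradually decaying interest", not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)
lam0, tau, horizon = 5.0, 10.0, 5000.0

def rate(t):
    """Interest-decaying comment rate; bounded above by lam0."""
    return lam0 / (1.0 + t / tau)

events, t = [], 0.0
while t < horizon:
    t += rng.exponential(1.0 / lam0)       # candidate event from the bounding rate
    if t < horizon and rng.random() < rate(t) / lam0:
        events.append(t)                   # accept with probability rate(t)/lam0

intervals = np.diff(events)                # inter-comment times; heavy-tailed
print("number of comments:", len(events))
print("longest / shortest interval:", intervals.max(), intervals.min())
```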

  20. Implementation of empirical-mathematical modelling in upper secondary physics: Teachers’ interpretations and considerations

    Directory of Open Access Journals (Sweden)

    Carl Angell

    2008-11-01

    This paper reports on the implementation of an upper secondary physics curriculum with an empirical-mathematical modelling approach. In project PHYS 21, we used the notion of multiple representations of physical phenomena as a framework for developing modelling activities for students. Interviews with project teachers indicate that implementation of empirical-mathematical modelling varied widely among classes. The new curriculum ideas were adapted to teachers’ ways of doing and reflecting on teaching and learning rather than radically changing these. Modelling was taken up as a method for reaching the traditional content goals of physics teaching, whereas goals related to process skills and the nature of science were given a lower priority by the teachers. Our results indicate that more attention needs to be focused on teachers’ and students’ meta-understanding of physics and physics learning.

  1. Empirical Modeling on Hot Air Drying of Fresh and Pre-treated Pineapples

    Directory of Open Access Journals (Sweden)

    Tanongkankit Yardfon

    2016-01-01

    This research aimed to study the drying kinetics and determine empirical models for fresh pineapple and pineapple pre-treated with sucrose solutions of different concentrations. Samples 3 mm thick were immersed in 30, 40, and 50 °Brix sucrose solution before hot air drying at temperatures of 60, 70, and 80°C. Empirical models to predict the drying kinetics were investigated. The results showed that the moisture content decreased with increasing drying temperature and time. An increase in sucrose concentration led to a longer drying time. According to the statistical criteria of the highest coefficient of determination (R²), the lowest chi-square (χ²), and the lowest root mean square error (RMSE), the Logarithmic model was the best model for describing the drying behavior of samples soaked in 30, 40, and 50 °Brix sucrose solution.

  2. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  3. An Empirical Study on End-users Productivity Using Model-based Spreadsheets

    CERN Document Server

    Beckwith, Laura; Fernandes, João Paulo; Saraiva, João

    2011-01-01

    Spreadsheets are widely used, and studies have shown that most end-user spreadsheets contain nontrivial errors. To improve end-users' productivity, recent research proposes the use of a model-driven engineering approach to spreadsheets. In this paper we conduct the first systematic empirical study to assess the effectiveness and efficiency of this approach. A set of spreadsheet end users worked with two different model-based spreadsheets, and we present and analyze the results achieved.

  4. Hospetitiveness – the Empirical Model of Competitiveness in Romanian Hospitality Industry

    OpenAIRE

    Radu Emilian; Claudia Elena Tuclea; Madalina Lavinia Tala; Catalina Nicoleta Brîndusoiu

    2009-01-01

    Our interest is focused on an important sector of the national economy: the hospitality industry. The paper is the result of a careful analysis of the literature and of a field research. According to the answers of hotel managers, competitiveness is based mainly on service quality and cost control. The analysis of the questionnaires and of the dedicated literature led us to the design of a competitiveness model for the hospitality industry, called "Hospetitiveness – The empirical model of competitiveness...

  5. Cross–Project Defect Prediction With Respect To Code Ownership Model: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Marian Jureczko

    2015-06-01

    The paper presents an analysis of 83 versions of industrial, open-source and academic projects. We have empirically evaluated whether those project types constitute separate classes of projects with regard to defect prediction. Statistical tests proved that there exist significant differences between the models trained on the aforementioned project classes. This work makes the next step towards cross-project reusability of defect prediction models and facilitates their adoption, which has been very limited so far.

  6. Modeling Lolium perenne L. roots in the presence of empirical black holes

    Science.gov (United States)

    Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...

  7. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  8. Mechanistic-empirical subgrade design model based on heavy vehicle simulator test results

    CSIR Research Space (South Africa)

    Theyse, HL

    2006-06-01

    …mechanistic-empirical design models. This paper presents a study on subgrade permanent deformation based on the data generated from a series of Heavy Vehicle Simulator (HVS) tests done at the Richmond Field Station in California. The total subgrade deflection was found to be a...

  9. THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhang Weiping; Wei Laisheng

    2008-01-01

    In this article, empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in the partitioned normal linear model. The superiority of the EB estimators over the ordinary least-squares (LS) estimator is investigated under the mean square error matrix (MSEM) criterion.

  10. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  11. Distribution of longshore sediment transport along the Indian coast based on empirical model

    Digital Repository Service at National Institute of Oceanography (India)

    Chandramohan, P.; Nayak, B.U.

    An empirical sediment transport model has been developed based on the longshore energy flux equation. The study indicates that the annual gross sediment transport rate is high (1.5 × 10⁶ cubic meters to 2.0 × 10⁶ cubic meters) along the coasts...
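
    A sketch of a longshore-energy-flux transport estimate in the spirit of such models, using the CERC-type relation between the immersed-weight transport rate and the longshore component of wave energy flux at breaking; the wave inputs and coefficient values are illustrative assumptions, not the study's calibration:

```python
import numpy as np

rho, rho_s, g = 1025.0, 2650.0, 9.81       # seawater and sediment densities, gravity
K, porosity = 0.39, 0.4                    # CERC coefficient (significant-height form)
Hb, db = 1.2, 1.5                          # breaker height (m) and breaking depth (m)
alpha_b = np.deg2rad(12.0)                 # breaker angle

E = rho * g * Hb**2 / 8.0                  # wave energy density at breaking
Cg = np.sqrt(g * db)                       # shallow-water group velocity
Pl = E * Cg * np.sin(alpha_b) * np.cos(alpha_b)   # longshore energy flux

Il = K * Pl                                        # immersed-weight transport rate
Q = Il / ((rho_s - rho) * g * (1.0 - porosity))    # volumetric rate (m^3/s)
print(f"annual transport: {Q * 3600 * 24 * 365:.3g} m^3/yr")
```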

  12. Multigroup Analysis in Partial Least Squares (PLS) Path Modeling: Alternative Methods and Empirical Results

    NARCIS (Netherlands)

    Sarstedt, Marko; Henseler, Jörg; Ringle, Christian M.

    2011-01-01

    Purpose – Partial least squares (PLS) path modeling has become a pivotal empirical research method in international marketing. Owing to group comparisons' important role in research on international marketing, we provide researchers with recommendations on how to conduct multigroup analyses in PLS p

  13. A stochastic empirical model for heavy-metal balances in Agro-ecosystems

    NARCIS (Netherlands)

    Keller, A.N.; Steiger, von B.; Zee, van der S.E.A.T.M.; Schulin, R.

    2001-01-01

    Mass flux balancing provides essential information for preventive strategies against heavy-metal accumulation in agricultural soils that may result from atmospheric deposition and application of fertilizers and pesticides. In this paper we present the empirical stochastic balance model, PROTERRA-S,

  14. Quantifying relationships between governance, agriculture, and nature: empirical-statistical- and pattern-oriented modeling

    NARCIS (Netherlands)

    Mandemaker, M.

    2014-01-01

    An improved understanding of complex processes of both socio-political and economic governance may help to abate neg

  15. Interest groups: a survey of empirical models that try to assess their influence

    NARCIS (Netherlands)

    Potters, J.J.M.; Sloof, R.

    1996-01-01

    Substantial political power is often attributed to interest groups. The origin of this power is not quite clear, though, and the mechanisms by which influence is effectuated are not yet fully understood. The last two decades have yielded a vast number of studies which use empirical models to assess

  16. Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C. W.; Hood, Raleigh R.; Long, Wen; Jacobs, John M.; Ramers, D. L.; Wazniak, C.; Wiggert, J. D.; Wood, R.; Xu, J.

    2013-09-01

    The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic empirical approach, whereby real-time output from the coupled physical biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic–empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
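
    The mechanistic-empirical coupling can be sketched as a statistical habitat model driven by fields from the physical-biogeochemical model; the two-predictor logistic form and the synthetic data below are illustrative, not the CBEPS configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Historical matchups: environmental conditions and species presence/absence.
temp = rng.uniform(5.0, 30.0, 500)
sal = rng.uniform(0.0, 35.0, 500)
presence = ((temp > 20.0) & (sal > 10.0)).astype(int)   # synthetic occurrences

habitat = LogisticRegression().fit(np.column_stack([temp, sal]), presence)

# Drive the empirical model with today's forecast fields from the
# mechanistic model (a flattened grid of temperature and salinity):
forecast = np.column_stack([rng.uniform(5.0, 30.0, 100),
                            rng.uniform(0.0, 35.0, 100)])
likelihood_map = habitat.predict_proba(forecast)[:, 1]  # encounter likelihood
```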

  17. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and satisfies the sparsity. The simulation studies show that the proposed variable selection method is workable.

  18. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    Science.gov (United States)

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test a performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  19. Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union

    Science.gov (United States)

    Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.

    2015-09-01

    How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central model, which we test, is that the growth and collapse of states are reflected in the changes of their territories, populations, and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) using wavelets and analysis of the sign changes of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and the wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted the time series of the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
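
    The wavelet step can be sketched as follows, with a synthetic territory series whose "crisis" burst raises the wavelet coefficient magnitudes |W(ω, t)|; the Morlet wavelet and the PyWavelets library are illustrative choices, not necessarily the authors' tooling:

```python
import numpy as np
import pywt

t = np.linspace(0.0, 1.0, 512)
territory = np.sin(2 * np.pi * 5 * t)                        # calm background
territory[300:380] += np.sin(2 * np.pi * 40 * t[300:380])    # "crisis" burst

scales = np.arange(1, 64)
coef, freqs = pywt.cwt(territory, scales, "morl")            # W(omega, t)
power = np.abs(coef)                                         # rises during the burst
print("peak |W| inside vs outside the burst:",
      power[:, 300:380].max(), power[:, :300].max())
```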

  20. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Møldrup, Per;

    2016-01-01

    sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present a validation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 for 207 soils varying widely in texture and organic carbon content. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for the adsorption and desorption data. Regression analysis relating model parameters
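
    As an example of fitting one candidate isotherm function, the sketch below fits the GAB model, a common physically based sorption equation, by non-linear least squares; the data points are synthetic placeholders, and the GAB form is one plausible member of the nine-model set rather than a model named in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: water content as a function of water activity aw."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.linspace(0.03, 0.93, 15)            # water activity range of the study
rng = np.random.default_rng(2)
w_obs = gab(aw, 0.02, 10.0, 0.75) + 5e-4 * rng.standard_normal(aw.size)

popt, _ = curve_fit(gab, aw, w_obs, p0=[0.01, 5.0, 0.5])
print("fitted wm, C, K:", popt)
```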

  1. Theoretical and Empirical Comparisons between Two Models for Continuous Item Response.

    Science.gov (United States)

    Ferrando, Pere J

    2002-10-01

    This article analyzes the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Using a factor analytical (FA) approach based on the assumption of underlying response variables, I describe how a particular case of the CRM can be considered a nonlinear counterpart of Spearman's FA model. The mathematical relations among the item-trait regressions, item parameter values, and conditional and marginal distributions of both models are obtained. The results allow (a) the item parameter values of the linear model to be obtained from CRM item parameter values, and (b) the conditions in which the congeneric model will be a good approximation to the CRM to be predicted. The relations described are illustrated using an empirical example and assessed by means of a simulation study.

  2. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2016-06-01

    Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces due to oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. An effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that the appropriate model can be used according to user requirements in hard turning.

  3. Modelling metal speciation in the Scheldt Estuary: Combining a flexible-resolution transport model with empirical functions

    Energy Technology Data Exchange (ETDEWEB)

    Elskens, Marc [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Gourgue, Olivier [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Baeyens, Willy [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); Chou, Lei [Université Libre de Bruxelles, Biogéochimie et Modélisation du Système Terre (BGéoSys) —Océanographie Chimique et Géochimie des Eaux, Campus de la Plaine —CP 208, Boulevard du Triomphe, BE-1050 Brussels (Belgium); Deleersnijder, Eric [Université catholique de Louvain, Institute of Mechanics, Materials and Civil Engineering (IMMC), 4 Avenue G. Lemaître, bte L4.05.02, BE-1348 Louvain-la-Neuve (Belgium); Université catholique de Louvain, Earth and Life Institute (ELI), Georges Lemaître Centre for Earth and Climate Research (TECLIM), Place Louis Pasteur 2, bte L4.03.08, BE-1348 Louvain-la-Neuve (Belgium); Leermakers, Martine [Vrije Universiteit Brussel, Analytical, Pleinlaan 2, BE-1050 Brussels (Belgium); and others

    2014-04-01

    Predicting metal concentrations in surface waters is an important step in the understanding and ultimately the assessment of the ecological risk associated with metal contamination. In terms of risk an essential piece of information is the accurate knowledge of the partitioning of the metals between the dissolved and particulate phases, as the former species are generally regarded as the most bioavailable and thus harmful form. As a first step towards the understanding and prediction of metal speciation in the Scheldt Estuary (Belgium, the Netherlands), we carried out a detailed analysis of a historical dataset covering the period 1982–2011. This study reports on the results for two selected metals: Cu and Cd. Data analysis revealed that both the total metal concentration and the metal partitioning coefficient (K{sub d}) could be predicted using relatively simple empirical functions of environmental variables such as salinity and suspended particulate matter concentration (SPM). The validity of these functions has been assessed by their application to salinity and SPM fields simulated by the hydro-environmental model SLIM. The high-resolution total and dissolved metal concentrations reconstructed using this approach, compared surprisingly well with an independent set of validation measurements. These first results from the combined mechanistic-empirical model approach suggest that it may be an interesting tool for risk assessment studies, e.g. to help identify conditions associated with elevated (dissolved) metal concentrations. - Highlights: • Empirical functions were designed for assessing metal speciation in estuarine water. • The empirical functions were implemented in the hydro-environmental model SLIM. • Validation was carried out in the Scheldt Estuary using historical data 1982–2011. • This combined mechanistic-empirical approach is useful for risk assessment.
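
    The partitioning arithmetic behind this approach can be sketched as follows: given empirical fits for the total concentration and for K_d as functions of salinity and SPM, the dissolved fraction follows from the definition of K_d, f_diss = 1/(1 + K_d·SPM). The fit coefficients below are hypothetical placeholders, not the functions derived from the 1982-2011 dataset:

```python
import numpy as np

def total_metal(salinity, spm_mg_l):        # hypothetical empirical fit (nmol/L)
    return 20.0 - 0.4 * salinity + 0.05 * spm_mg_l

def log10_kd(salinity, spm_mg_l):           # hypothetical empirical fit, Kd in L/kg
    return 5.0 - 0.02 * salinity - 0.003 * spm_mg_l

salinity, spm_mg_l = 15.0, 80.0
kd = 10.0 ** log10_kd(salinity, spm_mg_l)   # L/kg
spm_kg_l = spm_mg_l * 1e-6                  # SPM converted to kg/L
f_diss = 1.0 / (1.0 + kd * spm_kg_l)        # dissolved fraction
c_tot = total_metal(salinity, spm_mg_l)
c_diss, c_part = f_diss * c_tot, (1.0 - f_diss) * c_tot
print(f"dissolved fraction: {f_diss:.2f}, dissolved: {c_diss:.2f} nmol/L")
```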

  4. Integrating mechanistic and empirical model projections to assess climate impacts on tree species distributions in northwestern North America.

    Science.gov (United States)

    Case, Michael J; Lawler, Joshua J

    2017-05-01

    Empirical and mechanistic models have both been used to assess the potential impacts of climate change on species distributions, and each modeling approach has its strengths and weaknesses. Here, we demonstrate an approach to projecting climate-driven changes in species distributions that draws on both empirical and mechanistic models. We combined projections from a dynamic global vegetation model (DGVM) that simulates the distributions of biomes based on basic plant functional types with projections from empirical climatic niche models for six tree species in northwestern North America. These integrated model outputs incorporate important biological processes, such as competition, physiological responses of plants to changes in atmospheric CO2 concentrations, and fire, as well as what are likely to be species-specific climatic constraints. We compared the integrated projections to projections from the empirical climatic niche models alone. Overall, our integrated model outputs projected a greater climate-driven loss of potentially suitable environmental space than did the empirical climatic niche model outputs alone for the majority of modeled species. Our results also show that refining species distributions with DGVM outputs had large effects on the geographic locations of suitable habitat. We demonstrate one approach to integrating the outputs of mechanistic and empirical niche models to produce bioclimatic projections. But perhaps more importantly, our study reveals the potential for empirical climatic niche models to over-predict suitable environmental space under future climatic conditions. © 2016 John Wiley & Sons Ltd.

  5. Empirical angle-dependent Biot and MBA models for acoustic anisotropy in cancellous bone.

    Science.gov (United States)

    Lee, Kang Il; Hughes, E R; Humphrey, V F; Leighton, T G; Choi, Min Joo

    2007-01-01

    The Biot and the modified Biot-Attenborough (MBA) models have been found useful for understanding ultrasonic wave propagation in cancellous bone. However, neither of the models, as previously applied to cancellous bone, allows for the angular dependence of acoustic properties with direction. The present study aims to account for the acoustic anisotropy in cancellous bone by introducing empirical angle-dependent input parameters, as defined for a highly oriented structure, into the Biot and the MBA models. The anisotropy of the angle-dependent Biot model is attributed to the variation in the elastic moduli of the skeletal frame with respect to the trabecular alignment. The angle-dependent MBA model employs a simple empirical way of using the parametric fit for the fast and the slow wave speeds. The angle-dependent models were used to predict both the fast and slow wave velocities as a function of propagation angle with respect to the trabecular alignment of cancellous bone. The predictions were compared with those of the Schoenberg model for anisotropy in cancellous bone and with in vitro experimental measurements from the literature. The angle-dependent models successfully predicted the angular dependence of the phase velocity of the fast wave with direction. The root-mean-square errors of the measured versus predicted fast wave velocities were 79.2 m s^-1 (angle-dependent Biot model) and 36.1 m s^-1 (angle-dependent MBA model). They also predicted the fact that the slow wave is nearly independent of propagation angle for angles up to about 50 degrees, but consistently underestimated the slow wave velocity, with root-mean-square errors of 187.2 m s^-1 (angle-dependent Biot model) and 240.8 m s^-1 (angle-dependent MBA model). The study indicates that the angle-dependent models reasonably replicate the acoustic anisotropy in cancellous bone.

  6. Measurements and empirical model of the acoustic properties of reticulated vitreous carbon

    Science.gov (United States)

    Muehleisen, Ralph T.; Beamer, C. Walter; Tinianov, Brandon D.

    2005-02-01

    Reticulated vitreous carbon (RVC) is a highly porous, rigid, open-cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices utilizing RVC, such as acoustic absorbers and thermoacoustic stacks and regenerators, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number, most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pores-per-inch RVC foams with flow resistivities from 1759 to 10 782 Pa s m^-2 in the frequency range of 330 Hz to 2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open-cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power-law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
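
    The abstract does not give the fitted constants, but the general recipe for a Delany-Bazley-style power law, fitting y = 1 + a*X^b against the dimensionless parameter X = rho0*f/sigma via linear regression in log-log space, can be sketched as follows; the flow resistivity value and the synthetic "measurements" are assumptions, not the paper's data.

      import numpy as np

      rho0 = 1.21          # air density, kg m^-3
      sigma = 5000.0       # flow resistivity, Pa s m^-2 (assumed example value)
      f = np.linspace(330.0, 2000.0, 50)   # frequency range used in the study
      X = rho0 * f / sigma

      # Synthetic "measured" normalized resistance with noise, for illustration.
      y = 1.0 + 0.05 * X**-0.6 * (1 + 0.02 * np.random.randn(f.size))

      # Power-law fit via linear regression in log-log space.
      b, log_a = np.polyfit(np.log(X), np.log(y - 1.0), 1)
      a = np.exp(log_a)
      print(f"fitted power law: 1 + {a:.3f} * X^{b:.3f}")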

  7. Physical Limitations of Empirical Field Models: Force Balance and Plasma Pressure

    Energy Technology Data Exchange (ETDEWEB)

    Sorin Zaharia; C.Z. Cheng

    2002-06-18

    In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇·(J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
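
    For readers unfamiliar with the Poisson step, a minimal finite-difference sketch of solving ∇²P = ∇·(J × B) on a 2-D grid with Jacobi iteration is given below; the right-hand side is a placeholder array, whereas in the paper it would be evaluated from the T96 field and its currents.

      import numpy as np

      def solve_poisson(rhs, h, n_iter=5000):
          # Jacobi iteration for laplacian(P) = rhs with P = 0 on the boundary.
          P = np.zeros_like(rhs)
          for _ in range(n_iter):
              P[1:-1, 1:-1] = 0.25 * (P[2:, 1:-1] + P[:-2, 1:-1] +
                                      P[1:-1, 2:] + P[1:-1, :-2] -
                                      h**2 * rhs[1:-1, 1:-1])
          return P

      rhs = np.ones((64, 64))       # placeholder for div(J x B)
      P = solve_poisson(rhs, h=0.1)
      print(P.shape, P.min())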

  8. Enzymatic saccharification of acid pretreated corn stover: Empirical and fractal kinetic modelling.

    Science.gov (United States)

    Wojtusik, Mateusz; Zurita, Mauricio; Villar, Juan C; Ladero, Miguel; Garcia-Ochoa, Felix

    2016-11-01

    Enzymatic hydrolysis of corn stover was studied at agitation speeds from 50 to 500 rpm in a stirred tank bioreactor at high solid concentrations (20% w/w dry solid/suspension), 50°C and 15.5 mg protein·g glucan^-1. Two empirical kinetic models were fitted to the data: a potential (power-law) model and a fractal one. For the former, the global order dramatically decreases from 13 to 2 as agitation speed increases, suggesting improved access of enzymes to cellulose in terms of chemisorption followed by hydrolysis. The fractal kinetic model fits the data better: its kinetic constant increases steadily with agitation speed up to a constant value at 250 rpm and above, once mass transfer limitations are overcome. In contrast, the fractal exponent decreases with rising agitation speed to circa 0.19, suggesting higher accessibility of enzymes to the substrate.
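
    A hedged sketch of a fractal-kinetics fit of the general form used in such studies: the rate "constant" decays with time as k(t) = k*t^(-h), and integrating dC/dt = -k*t^(-h)*C gives the conversion X(t) = 1 - exp(-k*t^(1-h)/(1-h)). The synthetic data and starting values below are illustrative, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def fractal_conversion(t, k, h):
          # Integrated fractal kinetics: conversion as a function of time.
          return 1.0 - np.exp(-k * t**(1.0 - h) / (1.0 - h))

      t = np.linspace(1.0, 72.0, 30)                  # hydrolysis time, h
      X_obs = fractal_conversion(t, 0.08, 0.19)       # synthetic data, h ~ 0.19
      X_obs += 0.01 * np.random.randn(t.size)

      (k_fit, h_fit), _ = curve_fit(fractal_conversion, t, X_obs, p0=(0.05, 0.1))
      print(f"k = {k_fit:.3f}, fractal exponent h = {h_fit:.3f}")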

  9. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    Science.gov (United States)

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  10. Diffusion and topological neighbours in flocks of starlings: relating a model to empirical data.

    Science.gov (United States)

    Hemelrijk, Charlotte K; Hildenbrandt, Hanno

    2015-01-01

    Moving in a group while avoiding collisions with group members causes internal dynamics in the group. Although these dynamics have recently been measured quantitatively in starling flocks (Sturnus vulgaris), it is unknown what causes them. Computational models have shown that collective motion in groups is likely due to attraction, avoidance and, possibly, alignment among group members. Empirical studies show that starlings adjust their movement to a fixed number of closest neighbours, or topological range, namely 6 or 7, and models usually assume that each of the three activities involves the same number of neighbours. Here, we start from the hypothesis that escape behaviour is more effective at preventing collisions in a flock when avoiding the single closest neighbour than when compromising by avoiding 6 or 7 of them. For alignment and attraction, we keep to the empirical topological range. We investigate how avoiding one or several neighbours affects the internal dynamics of flocks of starlings in our computational model StarDisplay. Comparison with empirical data confirms that internal dynamics resemble the observations more closely if flock members avoid merely their single closest neighbour. Our model shows that considering a different number of interaction partners per activity is a useful perspective, and that changing a single parameter, namely the number of interaction partners that are avoided, has several effects through self-organisation.

  11. A non-quasistatic semi-empirical model for small geometry MOSFETs

    Science.gov (United States)

    Murray, Daniel; Sanchez, Julian J.; Demassa, Thomas A.

    1997-09-01

    A new charge-oriented semi-empirical non-quasistatic (NQS) model is developed for small-geometry MOSFETs that is computationally efficient enough to be useful for circuit simulation. The NQS model includes the effects of velocity saturation, gate-field-dependent mobility, charge sharing, drain-induced barrier lowering and the geometric dependencies of threshold voltage. To model the carrier inertia that causes non-steady-state conditions, an approximate inversion charge profile is used to reduce the nonlinear current-continuity equation to an ordinary differential equation. The model is valid in all regions of operation (weak, moderate and strong inversion) and is derived without resorting to the approximate arbitrary channel-charge partitioning. The results from the proposed model are examined and compared with 2D simulation results, and good agreement is obtained for the transient source, drain and gate currents for large signals applied to the gate.

  12. Alternative Specifications for the Lévy Libor Market Model: An Empirical Investigation

    DEFF Research Database (Denmark)

    Skovmand, David; Nicolato, Elisa

    This paper introduces and analyzes specifications of the Lévy Libor Market Model originally proposed by Eberlein and Özkan (2005). An investigation of the term structure of option-implied moments rules out Brownian motion and homogeneous Lévy processes as suitable modeling devices, and consequently a variety of more appropriate models is proposed. Besides a diffusive component, the models have jump structures with low or high frequency combined with constant or stochastic volatility. The models are subjected to an empirical analysis using a time series of data for Euribor caps. The results of the estimation show that pricing performance improves when a high-frequency jump component is incorporated. Specifically, excellent results are achieved with the 4-parameter Sato-Variance Gamma model, which is able to fit an entire surface of caps with an average absolute percentage pricing error of less…

  13. Empirical validation of the thermal model of a passive solar test cell

    CERN Document Server

    Mara, T A; Boyer, H; Mamode, M

    2012-01-01

    The paper deals with the empirical validation of a building thermal model. We put the emphasis on sensitivity analysis and on the search for input/residual correlations to improve our model. In this article, we apply a sensitivity analysis technique in the frequency domain to point out the most important parameters of the model. Then, we compare measured and predicted data of indoor dry-air temperature. When the model is not accurate enough, recourse to time-frequency analysis is of great help to identify the inputs responsible for the major part of the error. In our approach, two samples of experimental data are required: the first one is used to calibrate our model, the second one to really validate the optimized model.

  14. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
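
    A minimal sketch of the hybrid idea, assuming the PyEMD and scikit-learn packages: decompose the load series with EMD, fit an autoregressive SVR to each intrinsic mode function (IMF), and sum the one-step forecasts. The lag length and SVR settings are assumptions, not the paper's configuration.

      import numpy as np
      from PyEMD import EMD
      from sklearn.svm import SVR

      def lagged_matrix(x, n_lags):
          # Build (X, y) pairs where y[t] is predicted from the n_lags prior values.
          X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
          return X, x[n_lags:]

      load = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.randn(500)
      imfs = EMD()(load)                      # rows are IMFs plus the residue

      one_step_forecast = 0.0
      for imf in imfs:
          X, y = lagged_matrix(imf, n_lags=24)
          model = SVR(C=10.0, epsilon=0.01).fit(X, y)
          one_step_forecast += model.predict(imf[-24:].reshape(1, -1))[0]
      print("next-step load forecast:", one_step_forecast)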

  15. Empirical likelihood confidence regions of the parameters in a partially linear single-index model

    Institute of Scientific and Technical Information of China (English)

    XUE Liugen; ZHU Lixing

    2005-01-01

    In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

  16. A Price Index Model for Road Freight Transportation and Its Empirical analysis in China

    Directory of Open Access Journals (Sweden)

    Liu Zhishuo

    2017-01-01

    The aim of a price index for road freight transportation (RFT) is to reflect the changes of price in the road transport market. Firstly, a price index model for RFT based on sample data from the Alibaba logistics platform is built. The model is a three-level index system, including a total index, classification indices and individual indices, and the Laspeyres method is applied to calculate these indices. Finally, an empirical analysis of the price index for the RFT market in Zhejiang Province is performed. In order to demonstrate the correctness and validity of the index model, a comparative analysis with port throughput and the PMI index is carried out.
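
    The Laspeyres index underlying the model is simple to state: L_t = 100 * sum(p_t * q_0) / sum(p_0 * q_0), with base-period volumes q_0 as fixed weights. A toy computation follows; the freight lanes and prices are invented for illustration.

      import numpy as np

      p0 = np.array([10.0, 22.0, 35.0])   # base-period price per tonne-km by lane
      q0 = np.array([120.0, 80.0, 40.0])  # base-period freight volumes (weights)
      pt = np.array([11.5, 21.0, 37.0])   # current-period prices

      laspeyres = 100.0 * np.sum(pt * q0) / np.sum(p0 * q0)
      print(f"price index for RFT: {laspeyres:.1f}")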

  17. Empirical model for electron impact ionization cross sections of neutral atoms

    Energy Technology Data Exchange (ETDEWEB)

    Talukder, M.R.; Bose, S. [Rajshahi Univ., Dept. of Applied Physics and Electronic Engineering (Bangladesh); Patoary, M.A.R.; Haque, A.K.F.; Uddin, M.A.; Basak, A.K. [Rajshahi Univ., Dept. of Physics (Bangladesh); Kando, M. [Shizuoka Univ., Graduate School of Electronic Science and Technology (Japan)

    2008-02-15

    A simple empirical formula is proposed for the rapid calculation of electron impact total ionization cross sections for both open- and closed-shell neutral atoms in the range 1 ≤ Z ≤ 92 and for incident electron energies from threshold to about 10^4 eV. The results of the present analysis are compared with the available experimental and theoretical data. The proposed model provides a fast method for calculating fairly accurate electron impact total ionization cross sections of atoms. This model may be a prudent choice for practitioners in the applied sciences, e.g. in plasma modeling, due to its simple inherent structure. (authors)

  18. Empirical models of the eddy heat flux and vertical shear on short time scales

    Science.gov (United States)

    Ghan, S. J.

    1984-01-01

    An intimate relation exists between the vertical shear and the horizontal eddy heat flux within the atmosphere. In the present investigation, empirical means are employed to provide clues concerning the relationship between the shear and the eddy heat flux. In particular, linear regression models are applied to individual and joint time series of the shear and eddy heat flux. These discrete models are used as a basis to infer continuous models. A description is provided of the observed relationship between the flux and the shear, taking into account means, standard deviations, and lag correlation functions.
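
    A lag-correlation calculation of the kind the abstract refers to can be sketched in a few lines; the shear and flux series below are synthetic stand-ins for the observations, with the flux built to lag the shear by three steps.

      import numpy as np

      def lag_correlation(x, y, max_lag=10):
          # Correlation of x[t] with y[t+k] for k = 0..max_lag, after standardizing.
          x = (x - x.mean()) / x.std()
          y = (y - y.mean()) / y.std()
          n = len(x)
          return {k: np.mean(x[:n - k] * y[k:]) for k in range(max_lag + 1)}

      t = np.arange(500)
      shear = np.sin(0.1 * t) + 0.3 * np.random.randn(500)
      flux = np.roll(shear, 3) + 0.3 * np.random.randn(500)   # flux lags shear
      corrs = lag_correlation(shear, flux, max_lag=5)
      print(max(corrs, key=corrs.get), "step lag has the peak correlation")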

  19. Integrating technology readiness into the expectation-confirmation model: an empirical study of mobile services.

    Science.gov (United States)

    Chen, Shih-Chih; Liu, Ming-Ling; Lin, Chieh-Peng

    2013-08-01

    The aim of this study was to integrate technology readiness into the expectation-confirmation model (ECM) to explain individuals' continued use of mobile data services. After a review of the ECM and technology readiness, an integrated model is proposed and tested with empirical data. Compared with the original ECM, the findings show that the integrated model offers a better account of which factors influence the continuance intention toward mobile services, and how. Finally, the major findings are summarized, and future research directions are suggested.

  20. Empirical Results of Modeling EUR/RON Exchange Rate using ARCH, GARCH, EGARCH, TARCH and PARCH models

    Directory of Open Access Journals (Sweden)

    Andreea-Cristina PETRICĂ

    2017-03-01

    The aim of this study is to examine the changes in the volatility of daily returns of the EUR/RON exchange rate using, on the one hand, symmetric GARCH models (ARCH and GARCH) and, on the other hand, asymmetric GARCH models (EGARCH, TARCH and PARCH), since the conditional variance is time-varying. The analysis takes into account daily quotations of the EUR/RON exchange rate over the period 4 January 1999 to 13 June 2016. Thus, we model heteroscedasticity by applying different specifications of GARCH models, then looking for significant parameters and low information criteria (minimum Akaike Information Criterion). All models are estimated using the maximum likelihood method under several assumed distributions of the innovation terms: Normal (Gaussian), Student's t, Generalized Error distribution (GED), Student's t with fixed df, and GED with fixed parameter. The predominant models turned out to be the EGARCH and PARCH models, and the empirical results point out that the best model for estimating daily returns of the EUR/RON exchange rate is EGARCH(2,1) with asymmetry order 2 under the assumption of Student's t distributed innovation terms. This can be explained by the fact that in the case of the EGARCH model, the restriction regarding the positivity of the conditional variance is automatically satisfied.
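
    For orientation, a specification of this shape (EGARCH with p=2, asymmetry order o=2, q=1 and Student's t innovations) can be estimated with the Python `arch` package roughly as follows; the simulated returns are a placeholder for the EUR/RON series, and the package choice is an assumption of this sketch, not the authors' software.

      import numpy as np
      from arch import arch_model

      np.random.seed(0)
      # Toy daily returns in percent, heavy-tailed like FX data.
      returns = 100 * np.random.standard_t(df=5, size=2000) * 0.005

      am = arch_model(returns, mean='Constant', vol='EGARCH',
                      p=2, o=2, q=1, dist='t')
      res = am.fit(disp='off')
      print(res.summary())
      print("AIC:", res.aic)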

  1. Comparison of empirical models to estimate soil erosion and sediment yield in micro catchments

    Directory of Open Access Journals (Sweden)

    Lida Eisazadeh

    2015-05-01

    Assessment of sediment yield is important in soil conservation and watershed projects and in the planning of water and soil resources management. For areas without sufficient measurements and statistical data, such as upper river branches, empirical models have to be used to estimate erosion and sediment yield. However, the efficiency of these models before calibration is not clear. In this research, the erosion and sediment yield of 10 basins upstream of reservoirs was estimated with the RUSLE and MPSIAC empirical models. In order to compare the means of measured and estimated data, the t-test method was applied. The results indicated no significant differences between the means of measured and estimated sediment yield for the MPSIAC model at the 5% level. In contrast, the t-test showed contrary results for the RUSLE model. The applicability and priority of the two models were then examined by statistical measures such as the MAE and MBE methods. With regard to accuracy and precision, the MPSIAC model ranked first for estimating soil erosion and sediment yield, with a minimum MAE of 0.79 and MBE of -0.59.
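
    The two comparison statistics are straightforward: MAE = mean(|estimated - observed|) measures accuracy, and MBE = mean(estimated - observed) measures bias. A toy computation (the numbers below are invented, not the catchment data):

      import numpy as np

      observed = np.array([4.2, 6.1, 3.8, 7.5, 5.0])   # measured sediment yield
      estimated = np.array([3.9, 5.2, 3.5, 6.8, 4.4])  # e.g. MPSIAC estimates

      mae = np.mean(np.abs(estimated - observed))
      mbe = np.mean(estimated - observed)
      print(f"MAE = {mae:.2f}, MBE = {mbe:.2f}")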

  2. Empirical Modeling of Heating Element Power for the Czochralski Crystallization Process

    Directory of Open Access Journals (Sweden)

    Magnus Komperød

    2010-01-01

    The Czochralski (CZ) crystallization process is used to produce monocrystalline silicon. Monocrystalline silicon is used in solar cell wafers and in computers and electronics. The CZ process is a batch process, where multicrystalline silicon is melted in a crucible and later solidifies on a monocrystalline seed crystal. The crucible is heated using a heating element whose power is manipulated using a triode for alternating current (TRIAC). As the electric resistance of the heating element increases with temperature, there are significant dynamics from the TRIAC input signal (control system output) to the actual (measured) heating element power. The present paper focuses on empirical modeling of these dynamics. The modeling is based on a dataset logged from a real-life CZ process. Initially the dataset is preprocessed by detrending and handling outliers. Next, linear ARX, ARMAX, and output error (OE) models are identified. As the linear models do not fully explain the process' behavior, nonlinear system identification is applied. The Hammerstein-Wiener (HW) model structure is chosen. The final model identified is a Hammerstein model, i.e. a HW model with a nonlinearity at the input, but not at the output. This model has only one more identified parameter than the linear OE model, but still improves the optimization criterion (mean squared ballistic simulation errors) by a factor of six. As there is no nonlinearity at the output, the dynamics from the prediction error to the model output are linear, which allows a noise model to be added. Comparison of a Hammerstein model with a noise model and the linear ARMAX model, both optimized for mean squared one-step-ahead prediction errors, shows that this optimization criterion is 42% lower for the Hammerstein model. Minimizing the number of parameters to be identified has been an important consideration throughout the modeling work.
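
    As a minimal sketch of the Hammerstein structure (not the identified CZ-process model), the block below passes the input through a static polynomial nonlinearity and then through first-order linear dynamics; the polynomial coefficients and the pole value are illustrative assumptions.

      import numpy as np

      def hammerstein_sim(u, a=0.9, b=0.1, c=(0.0, 1.0, 0.4)):
          # y[t] = a*y[t-1] + b*f(u[t-1]), with static nonlinearity
          # f(u) = c0 + c1*u + c2*u**2 applied to the input first.
          f = c[0] + c[1] * u + c[2] * u**2
          y = np.zeros_like(u)
          for t in range(1, len(u)):
              y[t] = a * y[t - 1] + b * f[t - 1]
          return y

      u = np.r_[np.zeros(20), np.ones(80)]      # step in the TRIAC input signal
      y = hammerstein_sim(u)
      print("steady-state response:", y[-1])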

  3. An empirical model for probabilistic decadal prediction: global attribution and regional hindcasts

    Science.gov (United States)

    Suckling, Emma B.; van Oldenborgh, Geert Jan; Eden, Jonathan M.; Hawkins, Ed

    2016-07-01

    Empirical models, designed to predict surface variables over seasons to decades ahead, provide useful benchmarks for comparison against the performance of dynamical forecast systems; they may also be employable as predictive tools for use by climate services in their own right. A new global empirical decadal prediction system is presented, based on a multiple linear regression approach designed to produce probabilistic output for comparison against dynamical models. A global attribution is performed initially to identify the important forcing and predictor components of the model. Ensemble hindcasts of surface air temperature anomaly fields are then generated, based on the forcings and predictors identified as important, under a series of different prediction 'modes' and their performance is evaluated. The modes include a real-time setting, a scenario in which future volcanic forcings are prescribed during the hindcasts, and an approach which exploits knowledge of the forced trend. A two-tier prediction system, which uses knowledge of future sea surface temperatures in the Pacific and Atlantic Oceans, is also tested, but within a perfect knowledge framework. Each mode is designed to identify sources of predictability and uncertainty, as well as investigate different approaches to the design of decadal prediction systems for operational use. It is found that the empirical model shows skill above that of persistence hindcasts for annual means at lead times of up to 10 years ahead in all of the prediction modes investigated. It is suggested that hindcasts which exploit full knowledge of the forced trend due to increasing greenhouse gases throughout the hindcast period can provide more robust estimates of model bias for the calibration of the empirical model in an operational setting. The two-tier system shows potential for improved real-time prediction, given the assumption that skilful predictions of large-scale modes of variability are available. The empirical

  5. Empirical validation of the InVEST water yield ecosystem service model at a national scale.

    Science.gov (United States)

    Redhead, J W; Stratford, C; Sharps, K; Jones, L; Ziv, G; Clarke, D; Oliver, T H; Bullock, J M

    2016-11-01

    A variety of tools have emerged with the goal of mapping the current delivery of ecosystem services and quantifying the impact of environmental changes. An important and often overlooked question is how accurate the outputs of these models are in relation to empirical observations. In this paper we validate a hydrological ecosystem service model (InVEST Water Yield Model) using widely available data. We modelled annual water yield in 22 UK catchments with widely varying land cover, population and geology, and compared model outputs with gauged river flow data from the UK National River Flow Archive. Values for input parameters were selected from the existing literature to reflect conditions in the UK and were subjected to sensitivity analyses. We also compared model performance between precipitation and potential evapotranspiration data sourced from global- and UK-scale datasets. We then tested the transferability of the results within the UK by additional validation in a further 20 catchments. Whilst the model performed only moderately with global-scale data (linear regression of modelled total water yield against empirical data: slope = 0.763, intercept = 54.45, R^2 = 0.963), with wide variation in performance between catchments, the model performed much better when using UK-scale input data, with a closer fit to the observed data (slope = 1.07, intercept = 3.07, R^2 = 0.990). With UK data the majority of catchments showed modelled water yield close to the gauged values, with a minor but consistent overestimate (86 m^3/ha/year). Additional validation on a further 20 UK catchments was similarly robust, indicating that these results are transferable within the UK. These results suggest that relatively simple models can give accurate measures of ecosystem services. However, the choice of input data is critical and there is a need for further validation in other parts of the world. Copyright © 2016 Elsevier B.V. All rights reserved.
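
    The validation statistics quoted above (slope, intercept and R^2 of modelled against gauged water yield) are ordinary linear-regression quantities; a sketch with placeholder data in place of the catchment values:

      import numpy as np
      from scipy import stats

      gauged = np.array([310.0, 520.0, 145.0, 760.0, 420.0])   # m^3/ha/yr, toy
      modelled = np.array([390.0, 600.0, 230.0, 850.0, 500.0])

      fit = stats.linregress(gauged, modelled)
      print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.2f}, "
            f"R^2 = {fit.rvalue**2:.3f}")
      print("mean overestimate:", np.mean(modelled - gauged), "m^3/ha/yr")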

  6. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt®, coupled to the building simulation code, made it possible to determine the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are presented and found to be acceptable.

  7. Research Article Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    In this paper, we choose a suitable indoor-outdoor propagation model out of the existing models by considering path loss and distance as parameters. The path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a quadrocopter (QC) that receives beacon messages from the indoor nodes. As per our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
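
    Of the candidate models, the log-distance path loss model is the simplest: PL(d) = PL(d0) + 10*n*log10(d/d0). The reference loss and path loss exponent below are generic indoor-outdoor values, not the ones estimated from the quadrocopter measurements.

      import numpy as np

      def log_distance_pl(d, pl_d0=40.0, d0=1.0, n=3.5):
          # Mean path loss in dB at distance d (metres) from a 1 m reference.
          return pl_d0 + 10.0 * n * np.log10(d / d0)

      d = np.array([5.0, 10.0, 25.0, 50.0])
      print(log_distance_pl(d))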

  8. A maturity model for SCPMS projects - an empirical investigation in large-sized Moroccan companies

    Directory of Open Access Journals (Sweden)

    Chafik Okar

    2011-03-01

    In recent years, many studies on maturity models have been carried out. Some refer specifically to maturity models for supply chains and performance measurement systems. Starting from an analysis of the existing literature, the aim of this paper is to develop a maturity model for supply chain performance measurement system (SCPMS) projects based on the concept of critical success factors (CSFs). The model is validated by two approaches. The first is a pilot test of the model in a Moroccan supply chain to demonstrate its capacity to assess the maturity of an SCPMS project and to develop an improvement roadmap. The second is an empirical investigation in large-sized Moroccan companies using a survey to test whether the model can evaluate the maturity of SCPMS projects in different industries.

  9. Empirically indistinguishable multidimensional IRT and locally dependent unidimensional item response models.

    Science.gov (United States)

    Ip, Edward Haksing

    2010-05-01

    Multidimensionality is a core concept in the measurement and analysis of psychological data. In personality assessment, for example, constructs are mostly theoretically defined as unidimensional, yet responses collected from the real world are almost always determined by multiple factors. Significant research efforts have concentrated on the use of simulation studies to evaluate the robustness of unidimensional item response models when applied to multidimensional data with a dominant dimension. In contrast, in the present paper I report the result of a theoretical investigation showing that a multidimensional item response model is empirically indistinguishable from a locally dependent unidimensional model whose single dimension represents the actual construct of interest. A practical implication of this result is that multidimensional response data do not automatically require the use of multidimensional models. Circumstances under which the alternative approach of locally dependent unidimensional models may be useful are discussed.

  10. An empirical test of a self-care model of women's responses to battering.

    Science.gov (United States)

    Campbell, J C; Weber, N

    2000-01-01

    A model of women's responses to battering was constructed based on Orem's theory of self-care deficit and on empirical and clinical observations. The model proposed that age, educational level, and cultural influences, as basic conditioning factors, would all be directly related to relational conflict, which would be negatively related to self-care agency (as a mediator) and indirectly related to both outcomes of health and well-being. Using simultaneous structural equation modeling with specification searching, a modified model was derived that eliminated the mediation path but supported direct effects of both abuse and self-care agency on health. The derived model was found to be only a borderline fit to the data, probably due to measurement problems, the omission of important variables, and the small sample size (N = 117). However, there was support for several of the relationships deduced from and/or congruent with Orem's theory.

  11. Comparative Analysis of Empirical Path Loss Model for Cellular Transmission in Rivers State

    Directory of Open Access Journals (Sweden)

    B.O.H Akinwole, Biebuma J.J

    2013-08-01

    This paper presents a comparative analysis of three empirical path loss models against measured data for urban, suburban, and rural areas in Rivers State. The three models investigated were the COST 231 Hata, SUI, and ECC-33 models. Downlink data were collected at an operating frequency of 2100 MHz using a drive test procedure, with test mobile phones determining the received signal code power (RSCP) at specified receiver distances from Globacom Node Bs located at several sites in the State. The test was carried out to investigate the effectiveness of commonly used existing models for cellular transmission. The results were analysed in terms of Mean Square Error (MSE) and Standard Deviation (SD) and were simulated in MATLAB 7.5.0. The results show that the COST 231 Hata model gives better predictions and is therefore recommended for path loss prediction in Rivers State.
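
    For reference, the COST 231 Hata median path loss, the best performer in this study, is easy to evaluate; the antenna heights and area correction below are typical assumed values, and note that the model is formally specified for 1500-2000 MHz, so applying it at 2100 MHz (as here) is an extrapolation.

      import numpy as np

      def cost231_hata(d_km, f_mhz=2100.0, h_base=30.0, h_mobile=1.5, c_m=0.0):
          # Median path loss in dB; c_m = 0 for suburban, 3 for dense urban.
          a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_mobile \
                 - (1.56 * np.log10(f_mhz) - 0.8)
          return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_base)
                  - a_hm + (44.9 - 6.55 * np.log10(h_base)) * np.log10(d_km) + c_m)

      for d in (0.5, 1.0, 2.0):
          print(f"{d:.1f} km: {cost231_hata(d):.1f} dB")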

  12. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
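
    The empirical step, fitting scattered rate data to Arrhenius-type kinetics k(T) = A*exp(-Ea/(R*T)), can be sketched as a linear fit of ln k against 1/T; the synthetic data and the recovered values below are illustrative, not the UH3 results.

      import numpy as np

      R = 8.314  # gas constant, J mol^-1 K^-1

      T = np.linspace(300.0, 1000.0, 15)
      # Synthetic scattered rate data around A = 1e6, Ea = 80 kJ/mol.
      k_obs = 1e6 * np.exp(-8.0e4 / (R * T)) * np.exp(0.1 * np.random.randn(T.size))

      # ln k = ln A - (Ea/R) * (1/T): a straight line in 1/T.
      slope, intercept = np.polyfit(1.0 / T, np.log(k_obs), 1)
      Ea, A = -slope * R, np.exp(intercept)
      print(f"A ~ {A:.3g} s^-1, Ea ~ {Ea / 1000:.1f} kJ/mol")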

  13. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Predictive methodologies for testing expected return models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Empirical studies conducted with Brazilian stock market data have generally been concentrated only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, the 3-factor model, and the 4-factor model using a predictive methodology, considering two steps, time-series and cross-sectional regressions, with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly caused by the originality of the methodology on the local market and by the fact that this subject is still incipient and polemic in the Brazilian academic environment.
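
    A skeleton of the two-step Fama-MacBeth (1973) procedure the study applies: step 1 estimates betas from time-series regressions, step 2 runs a cross-sectional regression at each date and averages the estimated risk premia, with standard errors from the time variation of the slopes. Shapes and data below are invented for illustration.

      import numpy as np

      T, N, K = 120, 25, 3                     # months, portfolios, factors
      rng = np.random.default_rng(1)
      factors = rng.normal(size=(T, K))        # e.g. market, SMB, momentum
      returns = factors @ rng.normal(size=(K, N)) + rng.normal(scale=2, size=(T, N))

      # Step 1: time-series betas for each portfolio.
      X = np.column_stack([np.ones(T), factors])
      betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:]   # shape (K, N)

      # Step 2: cross-sectional regression at each date; average the slopes.
      Xc = np.column_stack([np.ones(N), betas.T])
      gammas = np.array([np.linalg.lstsq(Xc, returns[t], rcond=None)[0]
                         for t in range(T)])
      premia = gammas.mean(axis=0)
      se = gammas.std(axis=0, ddof=1) / np.sqrt(T)   # Fama-MacBeth std errors
      print("risk premia:", premia[1:], "\nstd errors:", se[1:])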

  14. An empirical model simulating long-term diurnal CO2 flux for diverse vegetation types

    Directory of Open Access Journals (Sweden)

    A. D. Richardson

    2008-10-01

    We present an empirical model for the estimation of diurnal variability in net ecosystem CO2 exchange (NEE). The model is based on the use of a nonrectangular hyperbola for the photosynthetic response of the canopy and was constructed using a dataset obtained from the AmeriFlux network containing continuous eddy covariance CO2 flux from 26 ecosystems over seven biomes. The model uses simplified empirical expressions for the seasonal variability of biome-specific physiological parameters in terms of air temperature, vapor pressure deficit, and precipitation. The physiological parameters of maximum CO2 uptake rate by the canopy and ecosystem respiration had biome-specific responses to environmental variables. The estimated physiological parameters had reasonable magnitudes and seasonal variation and gave reasonable timing of the beginning and end of the growing season over various biomes, but they were less satisfactory for disturbed grassland and savanna than for forests. Comparison with observational data revealed that the diurnal cycle of NEE was generally well predicted all year round by the model. The model gave satisfactory results even for tundra, which had very small amplitudes of NEE variability. These results suggest that this model with biome-specific parameters will be applicable to numerous terrestrial biomes, particularly forest ones.
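
    A standard form of the nonrectangular hyperbola, which the model builds on, takes gross uptake P as the lower root of theta*P^2 - (alpha*I + Pmax)*P + alpha*I*Pmax = 0 and sets NEE = RE - P; the parameter values below are illustrative, not the biome-specific fits from the paper.

      import numpy as np

      def nee(I, alpha=0.05, p_max=25.0, theta=0.9, re=3.0):
          # Net ecosystem exchange (positive = release) vs irradiance I:
          # gross uptake is the lower root of the nonrectangular hyperbola.
          s = alpha * I + p_max
          gpp = (s - np.sqrt(s**2 - 4.0 * theta * alpha * I * p_max)) / (2.0 * theta)
          return re - gpp

      par = np.linspace(0.0, 2000.0, 5)        # photosynthetic photon flux
      print(nee(par))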

  15. TS07D Empirical Geomagnetic Field Model as a Space Weather Tool

    Science.gov (United States)

    Sharp, N. M.; Stephens, G. K.; Sitnov, M. I.

    2011-12-01

    Empirical modeling and forecasting of the geomagnetic field is a key element of space weather research. A dramatic increase in the number of data available for the terrestrial magnetosphere required a new generation of empirical models with large numbers of degrees of freedom and sophisticated data-mining techniques. A set of the corresponding data binning, fitting and visualization procedures known as the TS07D model is now available at http://geomag_field.jhuapl.edu/model/ and is used for detailed investigation of storm-scale phenomena in the magnetosphere. However, the transformation of this research model into a practical space weather application, which implies extensive runs for validation and interaction with other space weather codes, requires its presentation in the form of a single state-of-the-art code, well documented and optimized for the highest performance. To this end, the model is implemented in the Java programming language with an extensive self-sufficient library and a set of optimization tools, including multi-thread operations that assume the use of the code on multi-core computers and clusters. The results of the new code validation and the optimization of its binning, fitting and visualization parts are presented, and some examples of processed storms are discussed.

  16. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who continuously had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.

  18. Temperature and Frequency Dependent Empirical Models of Dielectric Properties of Sunflower and Olive Oil

    Directory of Open Access Journals (Sweden)

    J. Vrba

    2013-12-01

    In this article, a known concept and measurement probe geometry for the estimation of the dielectric properties of oils has been adapted. The new probe enables measurement in the frequency range of 1 to 3000 MHz. Additionally, the measurement probe has been equipped with a heat exchanger, which has enabled us to measure the dielectric properties of sunflower and olive oil as well as of two commercial emulsion concentrates. Subsequently, corresponding linear empirical temperature- and frequency-dependent models of the dielectric properties of the above-mentioned oils and concentrates have been created. The dielectric properties measured here, as well as the values obtained from the empirical models created here, match the data published in the professional literature very well.

  19. Empirical tight-binding force model for molecular-dynamics simulation of Si

    Science.gov (United States)

    Wang, C. Z.; Chan, C. T.; Ho, K. M.

    1989-04-01

    A scheme of molecular-dynamics simulation using the empirical tight-binding force model is proposed. The scheme allows the interatomic interactions involved in the molecular dynamics to be determined by first-principles total-energy and electronic-structure calculations without resorting to fitting experimental data. For a first application of the scheme we show that a very simple nearest-neighbor two-center empirical tight-binding force model is able to stabilize the diamond structure of Si within a reasonable temperature range. We also show that the scheme makes possible the quantitative calculation of the temperature dependence of various anharmonic effects such as lattice thermal expansion, temperature-dependent phonon linewidths, and phonon frequency shifts.

  20. Acculturation and mental health--empirical verification of J.W. Berry's model of acculturative stress

    DEFF Research Database (Denmark)

    Koch, M W; Bjerregaard, P; Curtis, C

    2004-01-01

    OBJECTIVES: Many studies concerning mental health among ethnic minorities have used the concept of acculturation as a model of explanation, in particular J.W. Berry's model of acculturative stress. But Berry's theory has only been empirically verified a few times. The aims of the study were to examine whether Berry's hypothesis about the connection between acculturation and mental health can be empirically verified for Greenlanders living in Denmark and to analyse whether acculturation plays a significant role for mental health among Greenlanders living in Denmark. STUDY DESIGN AND METHODS: The study used data from the 1999 Health Profile for Greenlanders in Denmark. As a measure of mental health we applied the General Health Questionnaire (GHQ-12). Acculturation was assessed from answers to questions about how the respondents value the fact that children maintain their traditional cultural…

  1. Microscopic driving theory with oscillatory congested states: model and empirical verification

    CERN Document Server

    Tian, Junfang; Ma, Shoufeng; Jia, Bin; Zhang, Wenyi

    2014-01-01

    The essential distinction between the Fundamental Diagram Approach (FDA) and Kerner's Three-Phase Theory (KTPT) is the existence of a unique gap-speed (or flow-density) relationship in the former class. In order to verify this relationship, empirical data are analyzed with the following findings: (1) a linear relationship between the actual space gap and speed can be identified when the speed difference between vehicles approximates zero; (2) vehicles accelerate or decelerate around the desired space gap most of the time. To explain these phenomena, we propose that, in congested traffic flow, the space gap between two vehicles will oscillate around the desired space gap in the deterministic limit. This assumption is formulated in terms of a cellular automaton. In contrast to FDA and KTPT, the new model does not have any congested steady-state solution. Simulations under periodic and open boundary conditions reproduce the empirical findings of KTPT. Calibrating and validating the model to detector data produces…
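
    A very small cellular-automaton caricature of the central assumption, vehicles accelerating when the space gap exceeds a speed-dependent desired gap and decelerating when below it, so that gaps oscillate around the desired value, is sketched below; the rules and parameters are a simplification for illustration, not a reimplementation of the authors' model.

      import numpy as np

      n_cells, n_veh, v_max, t_steps = 200, 20, 5, 100
      pos = np.sort(np.random.choice(n_cells, n_veh, replace=False))
      vel = np.zeros(n_veh, dtype=int)

      for _ in range(t_steps):
          gaps = (np.roll(pos, -1) - pos - 1) % n_cells   # gaps on a ring road
          desired = 2 + vel                               # speed-dependent desired gap
          vel = np.where(gaps > desired, np.minimum(vel + 1, v_max), vel)
          vel = np.where(gaps < desired, np.maximum(vel - 1, 0), vel)
          vel = np.minimum(vel, gaps)                     # no collisions
          pos = (pos + vel) % n_cells

      # gaps holds the spacing pattern from the final update step.
      print("mean speed:", vel.mean(), "gap std:", gaps.std())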

  2. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  3. A New Empirical Model for Estimation of sp3 Fraction in Diamond-Like Carbon Films

    Institute of Scientific and Technical Information of China (English)

    DAI Hai-Yang; WANG Li-Wu; JIANG Hui; HUANG Ning-Kang

    2007-01-01

    A new empirical model to estimate the content of sp3 bonds in diamond-like carbon (DLC) films is presented, based on conventional Raman spectra excited by 488 nm or 514 nm visible light for different carbons. It is found that the bandwidth of the G peak is related to the sp3 fraction: a wider G-peak bandwidth indicates a higher sp3 fraction in DLC films.

  5. Evaluation Model for Scientific Quality Based on Rough Sets and Its Empirical Study

    Institute of Scientific and Technical Information of China (English)

    LIU Dun; HU Pei; JIANG Chao-zhe; LIU Li

    2007-01-01

    By analyzing questionnaires collected from 74 different government departments in Chengdu, China, an evaluation model for the scientific quality of civil servants was developed with rough set theory. In the empirical study, a series of important rules were derived, using the reduction algorithm, to help assess and forecast the degree of scientific quality of civil servants, and the total accuracy of prediction was 93.2%.

  6. Establishment of Grain Farmers' Supply Response Model and Empirical Analysis under Minimum Grain Purchase Price Policy

    OpenAIRE

    Zhang, Shuang

    2012-01-01

    Based on farmers' supply behavior theory and price expectations theory, this paper establishes a supply response model for grain farmers covering two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy can have a price-guiding effect on grain production and supply in the major producing areas. Empirical analysis shows that the minimum purchase price published annually by the government has a significant positive imp...

  7. Knightian uncertainty and stock-price movements: Why the REH present-value model failed empirically

    OpenAIRE

    Frydman, Roman; Michael D. Goldberg; Mangee, Nicholas

    2015-01-01

    Macroeconomic models that are based on either the rational expectations hypothesis (REH) or behavioral considerations share a core premise: All future market outcomes can be characterized ex ante with a single overarching probability distribution. This paper assesses the empirical relevance of this premise using a novel data set. The authors find that Knightian uncertainty, which cannot be reduced to a probability distribution, underpins outcomes in the stock market. This finding reveals the ...

  8. Perspective Model of Specialized Military Education in Empirical Characteristics

    Directory of Open Access Journals (Sweden)

    Alexander P. Abramov

    2015-06-01

    Full Text Available On the basis of these sociological polls and interview to pupils, graduates of military schools of the Ministry of Defence of the Russian Federation, the Suvorov military schools, the Nakhimov military sea schools and experts in 2002-2013 is developed theoretical construct of perspective model of secondary specialized military education, conceptually is developed and its place and a role in structure of cadet formation of modern Russia is empirically proved.

  9. Experimental verification of bridge seismic damage states quantified by calibrating analytical models with empirical field data

    Institute of Scientific and Technical Information of China (English)

    Swagata Banerjee; Masanobu Shinozuka

    2008-01-01

    Bridges are one of the most vulnerable components of a highway transportation network system subjected to earthquake ground motions. Prediction of the resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage description given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose. This experiment was conducted at the University of Nevada, Reno, where a research team from the University of California, Irvine, participated. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that the model can be calibrated against empirical fragility curves constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, calibrated damage state definitions are compared with those obtained using the experimental findings. The comparison shows excellent consistency among results from analytical, empirical and experimental observations.
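
    Fragility curves of the kind calibrated in studies like this one are commonly two-parameter lognormal functions, P(damage state reached | PGA = a) = Phi(ln(a/c)/zeta), with median c and log-standard deviation zeta; the parameter values below are placeholders, not the calibrated Northridge values.

      import numpy as np
      from scipy.stats import norm

      def fragility(pga, median=0.6, zeta=0.5):
          # Probability of reaching/exceeding a damage state at a given PGA (g).
          return norm.cdf(np.log(pga / median) / zeta)

      for a in (0.2, 0.4, 0.6, 0.8):
          print(f"PGA {a:.1f} g -> P(exceed) = {fragility(a):.2f}")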

  10. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  11. A simple empirical model for the clarification-thickening process in wastewater treatment plants.

    Science.gov (United States)

    Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T

    2015-01-01

    In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycle ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98). An empirical model for hindered settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.
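
    The record reports the regression but not its functional form. As a minimal sketch, the snippet below fits a hypothetical compression-zone profile in which sludge concentration approaches a limiting value exponentially with depth; the function, variable names, and data are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def compression_profile(depth, c_top, c_max, k):
    """Hypothetical compression-zone form: concentration rises from
    c_top at the top of the zone toward a limit c_max with depth."""
    return c_max - (c_max - c_top) * np.exp(-k * depth)

# Hypothetical depth (m) vs. sludge concentration (g/L) measurements.
depth = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
conc = np.array([4.1, 5.6, 6.7, 7.4, 7.9, 8.2, 8.4])

(c_top, c_max, k), _ = curve_fit(compression_profile, depth, conc,
                                 p0=[4.0, 9.0, 2.0])
print(f"fit: c_top={c_top:.2f} g/L, c_max={c_max:.2f} g/L, k={k:.2f} 1/m")
```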

  12. An empirical movement model for sixgill sharks in Puget Sound: Combining observed and unobserved behavior

    Institute of Scientific and Technical Information of China (English)

    Phillip S. LEVIN; Peter HORNE; Kelly S. ANDREWS; Greg WILLIAMS

    2012-01-01

    Understanding the movement of animals is fundamental to population and community ecology. Historically, it has been difficult to quantify movement patterns of most fishes, but technological advances in acoustic telemetry have increased our abilities to monitor their movement. In this study, we combined small-scale active acoustic tracking with large-scale passive acoustic monitoring to develop an empirical movement model for sixgill sharks in Puget Sound, WA, USA. We began by testing whether a correlated random walk model described the daily movement of sixgills; however, the model failed to capture home-ranging behavior. We added this behavior and used the resultant model (a biased random walk model) to determine whether daily movement patterns are able to explain large-scale seasonal movement. The daily model did not explain the larger-scale patterns of movement observed in the passive monitoring data. In order to create the large-scale patterns, sixgills must have performed behaviors (large, fast directed movements) that were unobserved during small-scale active tracking. In addition, seasonal shifts in location were not captured by the daily model. We added these 'unobserved' behaviors to the model and were able to capture large-scale seasonal movement of sixgill sharks over 150 days. The development of empirical models of movement allows researchers to develop hypotheses and test mechanisms responsible for a species' movement behavior and spatial distribution. This knowledge will increase our ability to successfully manage species of concern [Current Zoology 58 (1): 103-115, 2012].
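
    A biased random walk of the kind described adds a homing pull toward a range centre to each random displacement. The sketch below simulates one such track; the step scale, bias strength, and home location are illustrative assumptions, not the parameters fitted to the shark data.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_random_walk(n_steps, home, bias=0.2, step_scale=100.0):
    """Biased random walk: random step plus a pull of strength `bias`
    toward a home-range centre (positions in metres)."""
    pos = np.zeros((n_steps + 1, 2))
    for t in range(n_steps):
        random_step = rng.normal(scale=step_scale, size=2)
        homing_step = bias * (home - pos[t])
        pos[t + 1] = pos[t] + random_step + homing_step
    return pos

home = np.array([500.0, -200.0])
track = biased_random_walk(n_steps=1440, home=home)  # one day at 1-min steps
dist = np.linalg.norm(track - home, axis=1)
print("mean distance from home (m):", round(float(dist.mean()), 1))
```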

  13. Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band

    Science.gov (United States)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2013-01-01

    Here we present the result of a semi-empirical inversion model for soil moisture retrieval using the three backscattering coefficients: sigma(sub HH), sigma(sub VV) and sigma(sub HV). In this paper we focus on the soil moisture estimate and use the biomass as an ancillary parameter, estimated automatically by the algorithm and used as a validation parameter. We first recall the model's analytical formulation, and then show some results obtained with real SAR data and compare them to ground estimates.

  14. Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Gao Rong LI; Ping TIAN; Liu Gen XUE

    2008-01-01

    In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.

  15. An empirical model of the high-energy electron environment at Jupiter

    Science.gov (United States)

    Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.

    2016-10-01

    We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 and 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ~1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.

  16. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of biological processes given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
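
    A minimal sketch of the two ingredients named above, Markov chain Monte Carlo sampling and a Gelman-Rubin convergence check, applied to a toy one-parameter decay model rather than the EGF network; the model, data, and proposal step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy calibration target: posterior of a rate constant k given noisy decay data.
t = np.linspace(0, 10, 20)
y_obs = np.exp(-0.7 * t) + rng.normal(scale=0.05, size=t.size)

def log_post(k):
    if k <= 0:
        return -np.inf                      # flat prior on k > 0
    resid = y_obs - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / 0.05**2

def metropolis(k0, n_iter=5000, step=0.05):
    chain = np.empty(n_iter)
    k, lp = k0, log_post(k0)
    for i in range(n_iter):
        k_new = k + rng.normal(scale=step)
        lp_new = log_post(k_new)
        if np.log(rng.uniform()) < lp_new - lp:
            k, lp = k_new, lp_new           # accept proposal
        chain[i] = k
    return chain

# Three chains from dispersed starts; discard the first half as burn-in.
chains = np.array([metropolis(k0) for k0 in (0.3, 0.7, 1.2)])[:, 2500:]

# Gelman-Rubin potential scale reduction factor (PSRF).
n = chains.shape[1]
W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
B = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
psrf = np.sqrt(((n - 1) / n * W + B / n) / W)
print(f"PSRF = {psrf:.3f}  (values near 1 indicate convergence)")
```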

  17. Creep-fatigue modelling in structural steels using empirical and constitutive creep methods implemented in a strip-yield model

    Science.gov (United States)

    Andrews, Benjamin J.

    The phenomena of creep and fatigue have each been thoroughly studied. More recently, attempts have been made to predict the damage evolution in engineering materials due to combined creep and fatigue loading, but these formulations have been strictly empirical and have not been used successfully outside of a narrow set of conditions. This work proposes a new creep-fatigue crack growth model based on constitutive creep equations (adjusted to experimental data) and Paris law fatigue crack growth. Predictions from this model are compared to experimental data in two steels: modified 9Cr-1Mo steel and AISI 316L stainless steel. Modified 9Cr-1Mo steel is a high-strength steel used in the construction of pressure vessels and piping for nuclear and conventional power plants, especially for high temperature applications. Creep-fatigue and pure creep experimental data from the literature are compared to model predictions, and they show good agreement. Material constants for the constitutive creep model are obtained for AISI 316L stainless steel, an alloy steel widely used for temperature and corrosion resistance for such components as exhaust manifolds, furnace parts, heat exchangers and jet engine parts. Model predictions are compared to pure creep experimental data, with satisfactory results. Assumptions and constraints inherent in the implementation of the present model are examined. They include: spatial discretization, similitude, plane stress constraint and linear elasticity. It is shown that the implementation of the present model had a non-trivial impact on the model solutions in 316L stainless steel, especially the spatial discretization. Based on these studies, the following conclusions are drawn: 1. The constitutive creep model consistently performs better than the Nikbin, Smith and Webster (NSW) model for predicting creep and creep-fatigue crack extension. 2. Given a database of uniaxial creep test data, a constitutive material model such as the one developed for

  18. Empirical results for pedestrian dynamics and their implications for cellular automata models

    CERN Document Server

    Schadschneider, Andreas

    2010-01-01

    A large number of models for pedestrian dynamics have been developed over the years. However, so far not much attention has been paid to their quantitative validation. Usually the focus is on the reproduction of empirically observed collective phenomena, such as lane formation in counterflow. This can give an indication of the realism of the model, but practical applications, e.g. in safety analysis, require quantitative predictions. We discuss the current experimental situation, especially for the fundamental diagram, which is the most important quantity needed for calibration. In addition we consider the implications for modelling based on cellular automata. As a specific example, the floor field model is introduced. Apart from the properties of its fundamental diagram, we discuss the implications of an egress experiment for the relevance of conflicts and friction effects.

  19. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    Science.gov (United States)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  20. An empirical model for dissolution profile and its application to floating dosage forms.

    Science.gov (United States)

    Weiss, Michael; Kriangkrai, Worawut; Sungthongjeen, Srisagul

    2014-06-02

    A sum of two inverse Gaussian functions is proposed as a highly flexible empirical model for fitting of in vitro dissolution profiles. The model was applied to quantitatively describe theophylline release from effervescent multi-layer coated floating tablets containing different amounts of the anti-tacking agents talc or glyceryl monostearate. Model parameters were estimated by nonlinear regression (mixed-effects modeling). The estimated parameters were used to determine the mean dissolution time, as well as to reconstruct the time course of release rate for each formulation, whereby the fractional release rate can serve as a diagnostic tool for classification of dissolution processes. The approach allows quantification of dissolution behavior and could provide additional insights into the underlying processes.
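
    One plausible reading of "a sum of two inverse Gaussian functions" is a weighted sum of two inverse Gaussian cumulative distribution functions for the fraction released. The sketch below fits that form; the data, starting values, and exact parameterization are illustrative assumptions and may differ from the authors' model.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def ig_cdf(t, mu, lam):
    """Inverse Gaussian CDF with mean mu and shape lam (standard formula,
    written with norm.logcdf for numerical stability)."""
    t = np.asarray(t, dtype=float)
    a = np.sqrt(lam / t) * (t / mu - 1)
    b = -np.sqrt(lam / t) * (t / mu + 1)
    return norm.cdf(a) + np.exp(2 * lam / mu + norm.logcdf(b))

def release(t, f, mu1, lam1, mu2, lam2):
    """Fraction released: weighted sum of two inverse Gaussian components."""
    return f * ig_cdf(t, mu1, lam1) + (1 - f) * ig_cdf(t, mu2, lam2)

# Hypothetical dissolution profile: time (h) vs. fraction released.
t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12], dtype=float)
frac = np.array([0.08, 0.18, 0.35, 0.48, 0.58, 0.73, 0.82, 0.92])

params, _ = curve_fit(release, t, frac, p0=[0.5, 2.0, 2.0, 8.0, 8.0],
                      bounds=([0.01, 0.1, 0.1, 0.1, 0.1],
                              [0.99, 50.0, 50.0, 50.0, 50.0]))
print("fitted (f, mu1, lam1, mu2, lam2):", np.round(params, 2))
```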

  1. An Empirical Path-Loss Model for Wireless Channels in Indoor Short-Range Office Environment

    Directory of Open Access Journals (Sweden)

    Ye Wang

    2012-01-01

    Full Text Available A novel empirical path-loss model for wireless indoor short-range office environments at the 4.3–7.3 GHz band is presented. The model is developed based on experimental data sampled in 30 office rooms in both line of sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes path loss versus distance, with a Gaussian random variable X accounting for shadow fading, using linear regression. The path-loss exponent n is fitted as a power function of frequency, and the standard deviation σ of X is likewise modeled as a frequency-dependent quantity. The presented work should be useful for research on wireless channel characteristics in typical indoor short-range environments in the Internet of Things (IoT).
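
    The model described is the classic log-distance law with lognormal shadowing, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, with the exponent n a power function of frequency. A minimal sketch follows; the coefficients of n(f), the intercept, and sigma are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

def path_loss_db(d, f_ghz, pl0=45.0, d0=1.0, sigma=3.0):
    """Log-distance path loss with Gaussian shadowing X (in dB).
    n(f) is a power function of frequency; constants are illustrative."""
    n = 1.6 * f_ghz**0.25
    return pl0 + 10 * n * np.log10(d / d0) + rng.normal(scale=sigma)

# Recover n by linear regression on simulated measurements at 5.8 GHz.
d = np.linspace(1, 15, 60)
pl = np.array([path_loss_db(di, 5.8) for di in d])
slope, intercept = np.polyfit(10 * np.log10(d), pl, 1)
print(f"estimated n = {slope:.2f}, PL(d0) = {intercept:.1f} dB")
```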

  2. Comparison of empirical magnetic field models and global MHD simulations: The near-tail currents

    Science.gov (United States)

    Pulkkinen, T. I.; Baker, D. N.; Walker, R. J.; Raeder, J.; Ashour-Abdalla, M.

    1995-01-01

    The tail currents predicted by empirical magnetic field models and global MHD simulations are compared. It is shown that the near-Earth currents obtained from the MHD simulations are much weaker than the currents predicted by the Tsyganenko models, primarily because the ring current is not properly represented in the simulations. On the other hand, in the mid-tail and distant tail the lobe field strength predicted by the simulations is comparable to what is observed at about 50 R(sub E) distance, significantly larger than the very low lobe field values predicted by the Tsyganenko models at that distance. Ways to improve these complementary approaches to model the actual magnetospheric configuration are discussed.

  3. A multistate empirical valence bond model for solvation and transport simulations of OH- in aqueous solutions.

    Science.gov (United States)

    Ufimtsev, Ivan S; Kalinichev, Andrey G; Martinez, Todd J; Kirkpatrick, R James

    2009-11-07

    We describe a new multistate empirical valence bond (MS-EVB) model of OH(-) in aqueous solutions. This model is based on the recently proposed "charged ring" parameterization for the intermolecular interaction of hydroxyl ion with water [Ufimtsev, et al., Chem. Phys. Lett., 2007, 442, 128] and is suitable for classical molecular simulations of OH(-) solvation and transport. The model reproduces the hydration structure of OH(-)(aq) in good agreement with experimental data and the results of ab initio molecular dynamics simulations. It also accurately captures the major structural, energetic, and dynamic aspects of the proton transfer processes involving OH(-) (aq). The model predicts an approximately two-fold increase of the OH(-) mobility due to proton exchange reactions.

  4. Empirical evaluation of the conceptual model underpinning a regional aquatic long-term monitoring program using causal modelling

    Science.gov (United States)

    Irvine, Kathryn M.; Miller, Scott; Al-Chokhachy, Robert K.; Archer, Erik; Roper, Brett B.; Kershner, Jeffrey L.

    2015-01-01

    Conceptual models are an integral facet of long-term monitoring programs. Proposed linkages between drivers, stressors, and ecological indicators are identified within the conceptual model of most mandated programs. We empirically evaluate a conceptual model developed for a regional aquatic and riparian monitoring program using causal models (i.e., Bayesian path analysis). We assess whether data gathered for regional status and trend estimation can also provide insights on why a stream may deviate from reference conditions. We target the hypothesized causal pathways for how anthropogenic drivers of road density, percent grazing, and percent forest within a catchment affect instream biological condition. We found instream temperature and fine sediments in arid sites and only fine sediments in mesic sites accounted for a significant portion of the maximum possible variation explainable in biological condition among managed sites. However, the biological significance of the direct effects of anthropogenic drivers on instream temperature and fine sediments were minimal or not detected. Consequently, there was weak to no biological support for causal pathways related to anthropogenic drivers’ impact on biological condition. With weak biological and statistical effect sizes, ignoring environmental contextual variables and covariates that explain natural heterogeneity would have resulted in no evidence of human impacts on biological integrity in some instances. For programs targeting the effects of anthropogenic activities, it is imperative to identify both land use practices and mechanisms that have led to degraded conditions (i.e., moving beyond simple status and trend estimation). Our empirical evaluation of the conceptual model underpinning the long-term monitoring program provided an opportunity for learning and, consequently, we discuss survey design elements that require modification to achieve question driven monitoring, a necessary step in the practice of

  5. The probabilistic niche model reveals substantial variation in the niche structure of empirical food webs.

    Science.gov (United States)

    Williams, Richard J; Purves, Drew W

    2011-09-01

    The structure of food webs, complex networks of interspecies feeding interactions, plays a crucial role in ecosystem resilience and function, and understanding food web structure remains a central problem in ecology. Previous studies have shown that key features of empirical food webs can be reproduced by low-dimensional "niche" models. Here we examine the form and variability of food web niche structure by fitting a probabilistic niche model to 37 empirical food webs, a much larger number of food webs than used in previous studies. The model relaxes previous assumptions about parameter distributions and hierarchy and returns parameter estimates for each species in each web. The model significantly outperforms previous niche model variants and also performs well for several webs where a body-size-based niche model performs poorly, implying that traits other than body size are important in structuring these webs' niche space. Parameter estimates frequently violate previous models' assumptions: in 19 of 37 webs, parameter values are not significantly hierarchical, 32 of 37 webs have nonuniform niche value distributions, and 15 of 37 webs lack a correlation between niche width and niche position. Extending the model to a two-dimensional niche space yields networks with a mixture of one- and two-dimensional niches and provides a significantly better fit for webs with a large number of species and links. These results confirm that food webs are strongly niche-structured but reveal substantial variation in the form of the niche structuring, a result with fundamental implications for ecosystem resilience and function.

  6. A new regional total electron content empirical model in northeast China

    Science.gov (United States)

    Feng, Jiandi; Wang, Zhengtao; Jiang, Weiping; Zhao, Zhenzhen; Zhang, Bingbing

    2016-10-01

    Using total electron content (TEC) data over one and a half solar cycles (1999-2015) provided by the Center for Orbit Determination in Europe (CODE), this paper proposes a new empirical TEC model for northeast China (40-50°N, 120-130°E). The model, called TECM-NEC, involves the multiplication of four separable components: diurnal variation, seasonal variation, geomagnetic field dependency, and solar dependency. Diurnal variation is composed of three parts: the typical daily variation of TEC; corrections for the Mid-latitude Summer Nighttime Anomaly (MSNA) that depend on geographic location, season, and local time; and corrections of the day-to-night ratio under different seasons and solar activities. Four sub-harmonics of the year, with annual, semiannual, four-, and three-month periods, are used to describe seasonal variations. For geomagnetic variation, geomagnetic latitude is based on the latest International Geomagnetic Reference Field (IGRF12) model. Compared with similar empirical models, the solar proxy index F10.7P = (F10.7 + F10.7A)/2, where F10.7A is the 81-day running mean of daily F10.7, is chosen as having a linear relationship with TEC. The model has 43 coefficients, which are determined by the nonlinear least squares fitting (NLSF) technique. The TECM-NEC model fits the TEC/CODE input data with a bias of 0.03 TECU and an RMS deviation of 2.76 TECU. The proposed TECM-NEC model can reproduce the MSNA and nighttime TEC enhancement phenomena over northeast China.
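
    The solar proxy is stated explicitly: F10.7P = (F10.7 + F10.7A)/2, with F10.7A the 81-day running mean of daily F10.7. A minimal sketch of computing it follows; the synthetic F10.7 series is illustrative.

```python
import numpy as np

def f107p(f107_daily):
    """F10.7P = (F10.7 + F10.7A)/2, where F10.7A is the 81-day
    running mean of daily F10.7 (edges use the available window)."""
    f107 = np.asarray(f107_daily, dtype=float)
    f107a = np.array([f107[max(0, i - 40):i + 41].mean()
                      for i in range(f107.size)])
    return (f107 + f107a) / 2

# Hypothetical daily F10.7 series (solar flux units), 27-day rotation.
days = np.arange(365)
f107 = 100 + 30 * np.sin(2 * np.pi * days / 27)
print("F10.7P on day 200:", round(float(f107p(f107)[200]), 1))
```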

  7. Polarizable six-point water models from computational and empirical optimization.

    Science.gov (United States)

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges, rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to its predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to

  8. The development of an empirical model for regional public health reporting. A descriptive study in two Dutch pilot regions.

    Science.gov (United States)

    Van Bon-Martens, M J H; Van De Goor, L A M; Achterberg, P W; Van Oers, J A M

    2011-08-01

    To develop and describe an empirical model for regional public health reporting, based on the model and experience of the Dutch national Public Health Status and Forecasts (PHSF) as well as on relevant theories and literature. Three basic requirements were chosen in a preparatory feasibility study: the products to be developed, the project organization of the pilot study, and a regional elaboration of the conceptual model of the national PHSF. Subsequently, from November 2005 to June 2007, a regional PHSF was developed in two Dutch pilot regions, to serve as a base for the empirical model for regional public health reporting. The developed empirical regional PHSF model consists of different products for different purposes and target groups. Regional and Municipal Reports aim to underpin strategic regional and local public health policy. Websites contain up-to-date information, aiming to underpin tactical regional and local public health policy by providing building blocks for translating strategic policy priorities into concrete plans of action. Numerous stakeholders are involved in the development of a regional PHSF. The developed empirical process model for a regional PHSF connects to the theoretical framework in which interaction between researchers and policymakers is an important condition for the use of research data in public health policy. The empirical model for a regional PHSF can be characterized by its 1) products, 2) content and design, and 3) underlying process and organization. This empirical model can be seen as a first step in the direction of a generic model for regional public health reporting.

  9. GNSS-R nonlocal sea state dependencies: Model and empirical verification

    Science.gov (United States)

    Chen-Zhang, David D.; Ruf, Christopher S.; Ardhuin, Fabrice; Park, Jeonghwan

    2016-11-01

    Global Navigation Satellite System Reflectometry (GNSS-R) is an active, bistatic remote sensing technique operating at L-band frequencies. GNSS-R signals scattered from a rough ocean surface are known to interact with longer surface waves than traditional scatterometery and altimetry signals. A revised forward model for GNSS-R measurements is presented which assumes an ocean surface wave spectrum that is forced by other sources than just the local near-surface winds. The model is motivated by recent spaceborne GNSS-R observations that indicate a strong scattering dependence on significant wave height, even after controlling for local wind speed. This behavior is not well represented by the most commonly used GNSS-R scattering model, which features a one-to-one relationship between wind speed and the mean-square-slope of the ocean surface. The revised forward model incorporates a third generation wave model that is skillful at representing long waves, an anchored spectral tail model, and a GNSS-R electromagnetic scattering model. In comparisons with the spaceborne measurements, the new model is much better able to reproduce the empirical behavior.

  10. Solar wind driven empirical forecast models of the time derivative of the ground magnetic field

    Directory of Open Access Journals (Sweden)

    Wintoft Peter

    2015-01-01

    Full Text Available Empirical models are developed to provide 10–30-min forecasts of the magnitude of the time derivative of the local horizontal ground geomagnetic field (|dBh/dt|) over Europe. The models are driven by ACE solar wind data. A major part of the work has been devoted to the search for and selection of datasets to support the model development. To simplify the problem, but at the same time capture sudden changes, 30-min maximum values of |dBh/dt| are forecast with a cadence of 1 min. Models are tested both with and without the use of ACE SWEPAM plasma data. It is shown that the models generally capture sudden increases in |dBh/dt| that are associated with sudden impulses (SI). The SI is the dominant disturbance source for geomagnetic latitudes below 50° N, with only a minor contribution from substorms. However, on occasion, large disturbances can be seen in association with geomagnetic pulsations. At higher latitudes, longer-lasting disturbances associated with substorms are generally also captured. It is also shown that the models using only solar wind magnetic field data as input perform, in most cases, as well as the models with plasma data. The models have been verified using different approaches, including the extremal dependence index, which is suitable for rare events.
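
    The forecast target, the 30-min maximum of |dBh/dt| delivered at 1-min cadence, can be computed from 1-min horizontal field components as sketched below; the synthetic magnetometer series is illustrative, and the empirical mapping from ACE inputs is not reproduced here.

```python
import numpy as np

def dbh_dt_30min_max(bx, by, window=30):
    """30-min running maximum of |dBh/dt| (nT/min) at 1-min cadence,
    from 1-min samples of the horizontal components Bx, By (nT)."""
    dbh_dt = np.hypot(np.diff(bx), np.diff(by))
    return np.array([dbh_dt[i:i + window].max()
                     for i in range(dbh_dt.size - window + 1)])

# Hypothetical 1-min horizontal field components (nT).
rng = np.random.default_rng(3)
t = np.arange(600)
bx = 17000 + 40 * np.sin(t / 90) + rng.normal(scale=2, size=t.size)
by = 1200 + rng.normal(scale=2, size=t.size)
print("peak 30-min |dBh/dt| (nT/min):",
      round(float(dbh_dt_30min_max(bx, by).max()), 2))
```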

  11. An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects

    Science.gov (United States)

    Brown, Cliff

    2015-01-01

    An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and the resulting model evaluated. The model covers unheated and hot jet conditions (1 ≤ jet total temperature ratio ≤ 2.7) in the subsonic range (0.5 ≤ M(sub a) ≤ 0.9), surface lengths 0.6 ≤ L/D ≤ 10 (where L is the axial distance from the jet exit to the surface trailing edge, in inches, and D is the nozzle exit diameter), and surface standoff distances 0 ≤ h/L ≤ 1 (where h is the radial distance from the jet lipline to the surface, in inches), using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit-quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system-level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.

  12. Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.

    Science.gov (United States)

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat

    2017-01-01

    To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrubs sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha(-1) yr(-1) for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest to narrow down the critical load range for mountain hay meadows to 10-15 kg N ha(-1) yr(-1).
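
    A minimal sketch of a change-point fit of the kind described: species richness is flat below the change point (read as the critical load) and declines above it. Maximum likelihood is used here as a simple stand-in for the authors' Bayesian treatment, the confounding variables are omitted, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

def piecewise(n_dep, cp, level, slope):
    """Richness flat below the change point cp, declining above it."""
    return np.where(n_dep <= cp, level, level + slope * (n_dep - cp))

def neg_log_lik(params, n_dep, richness):
    cp, level, slope, sigma = params
    if sigma <= 0 or slope > 0:
        return np.inf
    resid = richness - piecewise(n_dep, cp, level, slope)
    return 0.5 * np.sum(resid**2) / sigma**2 + resid.size * np.log(sigma)

# Simulated plots: N deposition (kg N/ha/yr) vs. species richness.
rng = np.random.default_rng(4)
n_dep = rng.uniform(2, 30, 200)
richness = piecewise(n_dep, 12.0, 25.0, -0.8) + rng.normal(0, 2, 200)

fit = minimize(neg_log_lik, x0=[10.0, 24.0, -0.5, 2.0],
               args=(n_dep, richness), method="Nelder-Mead")
print(f"estimated critical load: {fit.x[0]:.1f} kg N/ha/yr")
```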

  13. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on Web portals where the user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine the positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's actions. The simulations are performed for the case of a constant flux of agents, and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, comparable with the ones in the empirical system of popular posts. In view of purely emotion-driven agent actions, this type of comparison provides a quantitative measure of the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative

  14. Empirical versus modelling approaches to the estimation of measurement uncertainty caused by primary sampling.

    Science.gov (United States)

    Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger

    2007-12-01

    Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified: the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of the sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A simpler method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to the often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation. The estimate
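
    A minimal sketch of the duplicate method mentioned above: sampling is duplicated on a small proportion of targets, and the combined sampling-plus-analysis standard deviation follows from the paired differences as s² = mean(d²)/2. The aflatoxin-like duplicate values are illustrative.

```python
import numpy as np

def duplicate_uncertainty(dup1, dup2, k=2):
    """Empirical (duplicate-method) uncertainty: s**2 = mean(d**2) / 2
    from paired duplicates; returns s and the expanded relative
    uncertainty at coverage factor k."""
    dup1, dup2 = np.asarray(dup1, float), np.asarray(dup2, float)
    s = np.sqrt(np.mean((dup1 - dup2) ** 2) / 2)
    u_rel = k * s / np.concatenate([dup1, dup2]).mean()
    return s, 100 * u_rel

# Hypothetical aflatoxin duplicates (ug/kg) from 8 sampling targets.
dup1 = [3.2, 5.1, 2.4, 8.0, 4.4, 6.3, 2.9, 7.1]
dup2 = [2.6, 6.0, 2.1, 6.5, 4.9, 5.2, 3.4, 8.3]
s, u_pct = duplicate_uncertainty(dup1, dup2)
print(f"s = {s:.2f} ug/kg, expanded relative uncertainty ~ {u_pct:.0f}%")
```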

  15. EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.

    Energy Technology Data Exchange (ETDEWEB)

    CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.

    2007-04-22

    Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by Monte Carlo method, fitting of optical model parameters, extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for {sup 232}Th and {sup 231,233}Pa nuclei in the fast neutron region at IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross section, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.

  16. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wide project for the determination of a methodology for determining the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report only considers the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among those methods, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process considered consisted of: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; and iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings. By comparing the methodologies involved
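
    Of the ratings listed, the Q index has a compact published form, Q = (RQD/Jn)(Jr/Ja)(Jw/SRF). The sketch below evaluates it for one homogeneous rock volume and applies one published modulus correlation (Em ≈ 25 log10 Q GPa, for Q > 1); the input ratings are illustrative, not the report's own parameter set.

```python
import math

def q_value(rqd, jn, jr, ja, jw, srf):
    """Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF): relative block size,
    inter-block shear strength, and active stress, respectively."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Illustrative ratings for one homogeneous rock volume.
q = q_value(rqd=85, jn=9, jr=3, ja=2, jw=1.0, srf=1.0)
print(f"Q = {q:.1f}")

# One published empirical correlation: Em ~ 25 * log10(Q) GPa (Q > 1).
if q > 1:
    print(f"deformation modulus Em ~ {25 * math.log10(q):.0f} GPa")
```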

  17. Towards global empirical upscaling of FLUXNET eddy covariance observations: validation of a model tree ensemble approach using a biosphere model

    Science.gov (United States)

    Jung, M.; Reichstein, M.; Bondeau, A.

    2009-10-01

    Global, spatially and temporally explicit estimates of carbon and water fluxes derived from empirical up-scaling eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) where the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time. We evaluate the efficiency of the model tree ensemble (MTE) approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998-2005 using only locations and months where high quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetic active radiation simulated by LPJmL. Given that we know the "true result" in the form of global LPJmL simulations we can effectively study the performance of the MTE upscaling and associated problems of extrapolation capacity. We show that MTE is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP that constitute the largest sources of variance are very well reproduced (96% and 94% of variance explained respectively) while the monthly interannual anomalies which occupy much less variance are less well matched (41% of variance explained

  18. Development of a multivariate empirical model for predicting weak rock mass modulus

    Institute of Scientific and Technical Information of China (English)

    Kallu Raj R.; Keffeler Evan R.; Watters Robert J.; Agharazi Alireza

    2015-01-01

    Estimating weak rock mass modulus has historically proven difficult although this mechanical property is an important input to many types of geotechnical analyses. An empirical database of weak rock mass modulus with associated detailed geotechnical parameters was assembled from plate loading tests performed at underground mines in Nevada, the Bakhtiary Dam project, and Portugues Dam project. The database was used to assess the accuracy of published single-variate models and to develop a multivariate model for predicting in-situ weak rock mass modulus when limited geotechnical data are available. Only two of the published models were adequate for predicting modulus of weak rock masses over limited ranges of alteration intensities, and none of the models provided good estimates of modulus over a range of geotechnical properties. In light of this shortcoming, a multivariate model was developed from the weak rock mass modulus dataset. The new model is exponential in form and has the following independent variables: (1) average block size or joint spacing, (2) field estimated rock strength, (3) discontinuity roughness, and (4) discontinuity infilling hardness. The multivariate model provided better estimates of modulus for both hard-blocky rock masses and intensely-altered rock masses.
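
    A minimal sketch of fitting an exponential multivariate model with the four named independent variables; the exact functional details, variable scalings, and data below are illustrative assumptions, not the published regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulus_model(X, a, b1, b2, b3, b4):
    """Hypothetical exponential form: Em = a * exp(b1*Js + b2*Rs +
    b3*Jr + b4*Jh), with joint spacing Js, field rock strength Rs,
    roughness Jr, and infilling hardness Jh (units arbitrary here)."""
    js, rs, jr, jh = X
    return a * np.exp(b1 * js + b2 * rs + b3 * jr + b4 * jh)

# Synthetic stand-in for a plate-load test database.
rng = np.random.default_rng(5)
X = rng.uniform(0.5, 3.0, size=(4, 40))
Em = modulus_model(X, 0.4, 0.6, 0.5, 0.3, 0.2) * rng.lognormal(0, 0.1, 40)

params, _ = curve_fit(modulus_model, X, Em, p0=[1, 0.1, 0.1, 0.1, 0.1])
print("fitted (a, b1..b4):", np.round(params, 2))
```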

  19. Apparent and inherent optical properties of turbid estuarine waters: measurements, empirical quantification relationships, and modeling

    Science.gov (United States)

    Doxaran, David; Cherukuru, Nagur; Lavender, Samantha J.

    2006-04-01

    Spectral measurements of remote-sensing reflectance (Rrs) and absorption coefficients carried out in three European estuaries (Gironde and Loire in France, Tamar in the UK) are presented and analyzed. Typical Rrs and absorption spectra are compared with typical values measured in coastal waters. The respective contributions of the water constituents, i.e., suspended sediments, colored dissolved organic matter, and phytoplankton (characterized by chlorophyll-a), are determined. The Rrs spectra are then reproduced with an optical model from the measured absorption coefficients and fitted backscattering coefficients. From Rrs ratios, empirical quantification relationships are established, reproduced, and explained from theoretical calculations. These quantification relationships were established from numerous field measurements and a reflectance model integrating the mean values of the water constituents' inherent optical properties. The model's sensitivity to the biogeochemical constituents and to their nature and composition is assessed.

  20. The Social Networking Application Success Model: An Empirical Study of Facebook and Twitter

    Directory of Open Access Journals (Sweden)

    Carol X. J. Ou

    2016-06-01

    Full Text Available Social networking applications (SNAs) are among the fastest growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded on DeLone and McLean's updated information systems (IS) success model. In addition to their original three dimensions of quality, i.e., system quality, information quality and service quality, we propose that a fourth dimension - networking quality - contributes to SNA success. We empirically examined the proposed research model with a survey of 168 Facebook and 149 Twitter users. The data validate the significant role of networking quality in determining the focal SNA's success. The theoretical and practical implications are discussed.

  1. Simple Empirical Model for Identifying Rheological Properties of Soft Biological Tissues

    CERN Document Server

    Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, Masakatsu G

    2015-01-01

    Understanding the rheological properties of soft biological tissue is a key issue for mechanical systems used in the healthcare field. We propose a simple empirical model using Fractional Dynamics and Exponential Nonlinearity (FDEN) to identify the rheological properties of soft biological tissue. The model is derived from detailed material measurements using samples isolated from porcine liver. We conducted dynamic viscoelastic and creep tests on liver samples using a rheometer. The experimental results indicated that biological tissue has specific properties: i) power law increases in storage elastic modulus and loss elastic modulus with the same slope; ii) power law gain decrease and constant phase delay in the frequency domain over two decades; iii) log-log scale linearity between time and strain relationships under constant force; and iv) linear and log scale linearity between strain and stress relationships. Our simple FDEN model uses only three dependent parameters and represents the specific propertie...

  2. An Empirical LTE Smartphone Power Model with a View to Energy Efficiency Evolution

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Sørensen, Troels Bundgaard

    2014-01-01

    Smartphone users struggle with short battery life, and this affects their device satisfaction level and usage of the network. To evaluate how chipset manufacturers and mobile network operators can improve the battery life, we propose a Long Term Evolution (LTE) smartphone power model. The idea ... manufacturers to identify main power consumers when taking actual operating characteristics into account. The smartphone power consumption model includes the main power consumers in the cellular subsystem as a function of receive and transmit power and data rate, and is fitted to empirical power consumption measurements made on state-of-the-art LTE smartphones. Discontinuous Reception (DRX) sleep mode is also modeled, because it is one of the most effective methods to improve smartphone battery life. Energy efficiency has generally improved with each Radio Access Technology (RAT) generation, and to see ...

  3. Empirical modeling the ultrasound-assisted base-catalyzed sunflower oil methanolysis kinetics

    Directory of Open Access Journals (Sweden)

    Avramović Jelena M.

    2012-01-01

    Full Text Available The ultrasound-assisted sunflower oil methanolysis catalyzed by KOH was studied to define a simple empirical kinetic model useful for reactor design without complex computation. It was assumed that the neutralization of free fatty acids and the saponification reaction were negligible. The methanolysis process rate was observed to be controlled by the mass transfer limitation in the initial heterogeneous regime and by the chemical reaction in the later pseudo-homogeneous regime. A model involving irreversible second-order kinetics was established and used for simulation of the triacylglycerol conversion and the fatty acid methyl ester formation in the latter regime. A good agreement between the proposed model and the experimental data in the chemically controlled regime was found.
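
    A minimal sketch of the irreversible second-order kinetics in the chemically controlled regime, assuming methanol in excess so that -d[TG]/dt = k[TG]² and the conversion is x = k·t·TG0/(1 + k·t·TG0); the conversion data and initial concentration are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def tg_conversion(t, k, tg0=0.98):
    """Irreversible second-order kinetics in TG (methanol in excess):
    -d[TG]/dt = k*[TG]**2  =>  x = k*t*TG0 / (1 + k*t*TG0).
    tg0 is the initial TG concentration (mol/L, illustrative)."""
    return k * t * tg0 / (1 + k * t * tg0)

# Hypothetical conversion data in the pseudo-homogeneous regime.
t = np.array([2, 4, 6, 8, 10, 15, 20], dtype=float)   # min
x = np.array([0.30, 0.48, 0.58, 0.66, 0.71, 0.80, 0.85])

(k,), _ = curve_fit(tg_conversion, t, x, p0=[0.1])
print(f"apparent rate constant k = {k:.3f} L/(mol*min)")
```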

  4. A minimal empirical model for the cosmic far-infrared background anisotropies

    CERN Document Server

    Wu, Hao-Yi

    2016-01-01

    Cosmic far-infrared background (CFIRB) probes unresolved dusty star-forming galaxies across cosmic time and is complementary to ultraviolet/optical probes of galaxy evolution. In this work, we interpret the observed CFIRB anisotropies using an empirical model based on recent galaxy survey results, including stellar mass functions, star-forming main sequence, and dust attenuation. Without introducing new parameters, our model agrees well with the CFIRB anisotropies observed by Planck and the submillimeter number counts observed by Herschel. We find that the commonly used linear relation between infrared luminosity and star-formation rate over-produces the observed CFIRB amplitudes, and lower infrared luminosities from low-mass galaxies are required. Our results indicate that CFIRB not only provides a consistency check for galaxy evolution models but also informs the star-formation rate and dust content for low-mass galaxies.

  5. Bayesian parameter inference for empirical stochastic models of paleoclimatic records with dating uncertainty

    Science.gov (United States)

    Boers, Niklas; Goswami, Bedartha; Chekroun, Mickael; Svensson, Anders; Rousseau, Denis-Didier; Ghil, Michael

    2016-04-01

    In the recent past, empirical stochastic models have been successfully applied to model a wide range of climatic phenomena [1,2]. In addition to enhancing our understanding of the geophysical systems under consideration, multilayer stochastic models (MSMs) have been shown to be solidly grounded in the Mori-Zwanzig formalism of statistical physics [3]. They are also well-suited for predictive purposes, e.g., for the El Niño Southern Oscillation [4] and the Madden-Julian Oscillation [5]. In general, these models are trained on a given time series under consideration, and then assumed to reproduce certain dynamical properties of the underlying natural system. Most existing approaches are based on least-squares fitting to determine optimal model parameters, which does not allow for an uncertainty estimation of these parameters. This approach significantly limits the degree to which dynamical characteristics of the time series can be safely inferred from the model. Here, we are specifically interested in fitting low-dimensional stochastic models to time series obtained from paleoclimatic proxy records, such as the oxygen isotope ratio and dust concentration of the NGRIP record [6]. The time series derived from these records exhibit substantial dating uncertainties, in addition to the proxy measurement errors. In particular, for time series of this kind, it is crucial to obtain uncertainty estimates for the final model parameters. Following [7], we first propose a statistical procedure to shift dating uncertainties from the time axis to the proxy axis of layer-counted paleoclimatic records. Thereafter, we show how Maximum Likelihood Estimation in combination with Markov Chain Monte Carlo parameter sampling can be employed to translate all uncertainties present in the original proxy time series to uncertainties of the parameter estimates of the stochastic model. We compare time series simulated by the empirical model to the original time series in terms of standard

  6. Empirical models of Total Electron Content based on functional fitting over Taiwan during geomagnetic quiet condition

    Directory of Open Access Journals (Sweden)

    Y. Kakinami

    2009-08-01

    Full Text Available Empirical models of Total Electron Content (TEC) based on functional fitting over Taiwan (120° E, 24° N) have been constructed using data of the Global Positioning System (GPS) from 1998 to 2007 under geomagnetically quiet conditions (Dst > −30 nT). The models provide TEC as functions of local time (LT), day of year (DOY) and solar activity (F), the last represented by 1- to 162-day means of F10.7 and EUV. Other models, based on median values, have also been constructed and compared with the models based on functional fitting. Under the same values of the F parameter, the models based on functional fitting show better accuracy than those based on median values in all cases. The functional-fitting model using daily EUV is the most accurate, with a root mean square error (RMS) of 9.2 TECu, compared with 10.4 TECu for the 15-day running median and 14.7 TECu for the model of International Reference Ionosphere 2007 (IRI2007). IRI2007 overestimates TEC when solar activity is low, and underestimates TEC when solar activity is high. Although the average of the 81-day centered running mean of F10.7 and the daily F10.7 is often used as an indicator of EUV, our results suggest that the mean of F10.7 over the current day and the 1 to 54 days prior reproduces TEC better than the 81-day centered running mean average. This paper compares, for the first time, the median-based model with the functional-fitting model; the results indicate that the functional-fitting model yields better performance than the median-based one. Meanwhile, we find that EUV radiation is essential for deriving an optimal TEC.

  7. The effect of empirical potential functions on modeling of amorphous carbon using molecular dynamics method

    Energy Technology Data Exchange (ETDEWEB)

    Li, Longqiu, E-mail: longqiuli@gmail.com [School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, 150001 (China); Xu, Ming; Song, Wenping [School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, 150001 (China); Ovcharenko, Andrey [Western Digital Corporation, San Jose, CA (United States); Zhang, Guangyu; Jia, Ding [School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, 150001 (China)

    2013-12-01

    Empirical potentials have a strong effect on the hybridization and structure of amorphous carbon and are of great importance in molecular dynamics (MD) simulations. In this work, amorphous carbon at densities ranging from 2.0 to 3.2 g/cm³ was modeled by a liquid quenching method using the Tersoff, 2nd REBO, and ReaxFF empirical potentials. The hybridization, structure and radial distribution function G(r) of the carbon atoms were analyzed as functions of the three potentials mentioned above. The ReaxFF potential is capable of modeling the structural changes of amorphous carbon, and the MD results are in good agreement with experimental results and density functional theory (DFT) at low densities of 2.6 g/cm³ and below. The 2nd REBO potential can be used when amorphous carbon has a very low density of 2.4 g/cm³ and below. Considering the computational efficiency, the Tersoff potential is recommended for modeling amorphous carbon at high densities of 2.6 g/cm³ and above. In addition, the influence of the quenching time on the hybridization content obtained with the three potentials is discussed.
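
    A common post-processing step behind such comparisons is classifying hybridization from coordination numbers (4-fold coordinated carbon counted as sp3). A minimal distance-cutoff version follows; the cutoff value and the toy configuration are assumptions, not the paper's analysis settings:

```python
import numpy as np

def sp3_fraction(positions, box, cutoff=1.85):
    """Estimate sp3 content from coordination numbers (4 neighbors = sp3),
    using a simple distance cutoff (angstroms) under periodic boundaries."""
    n = len(positions)
    coord = np.zeros(n, dtype=int)
    for i in range(n):
        d = positions - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        coord[i] = np.count_nonzero((r > 0) & (r < cutoff))
    return np.mean(coord == 4)

# Toy configuration standing in for a quenched amorphous-carbon cell.
rng = np.random.default_rng(2)
box = np.array([10.0, 10.0, 10.0])
pos = rng.uniform(0, 10, size=(100, 3))
print(f"sp3 fraction: {sp3_fraction(pos, box):.2f}")
```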

  8. State-of-the-Art Empirical Modeling of Ring Current Plasma Pressure

    Science.gov (United States)

    Yue, C.; Ma, Q.; Wang, C. P.; Bortnik, J.; Thorne, R. M.

    2015-12-01

    The plasma pressure in the inner magnetosphere plays a key role in plasma dynamics by changing magnetic field configurations and generating the ring current. In this study, we present our preliminary results of empirically constructing 2D equatorial ring current pressure and pressure anisotropy spatial distributions, controlled by Dst, based on measurements from two particle instruments (HOPE and RBSPICE) onboard the Van Allen Probes. We first obtain the equatorial plasma perpendicular and parallel pressures for different species, including H+, He+, O+ and e-, from 20 eV to ~1 MeV, and investigate their relative contributions to the total plasma pressure and pressure anisotropy. We then establish empirical equatorial pressure models within ~6 RE using a state-of-the-art machine learning technique, the Support Vector Regression Machine (SVRM). The pressure models predict equatorial perpendicular and parallel plasma thermal pressures (for each species and for the total pressure) and pressure anisotropy at any given r, MLT, Bz/Br (equivalent Z distance), and Dst within the applicable ranges. We are currently validating our model predictions and investigating how the ring current pressure distributions and the associated pressure gradients vary with the Dst index.
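
    A minimal sketch of this kind of SVR-based empirical pressure model; the feature set (r, MLT, Dst), the toy pressure law, and all numbers are made up, standing in for the Van Allen Probes measurements:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 1000
r = rng.uniform(2, 6, n)            # radial distance [RE]
mlt = rng.uniform(0, 24, n)         # magnetic local time [h]
dst = rng.uniform(-150, 20, n)      # Dst index [nT]
# Toy target: pressure falling off with r and intensifying with |Dst|.
p = 50 * np.exp(-r / 2) * (1 + np.abs(dst) / 100) + rng.normal(0, 0.5, n)

# Encode MLT cyclically so 0 h and 24 h are neighbors.
X = np.column_stack([r, np.cos(2 * np.pi * mlt / 24),
                     np.sin(2 * np.pi * mlt / 24), dst])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, p)
print("R^2 on training data:", round(model.score(X, p), 3))
```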

  9. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically.

  10. Towards global empirical upscaling of FLUXNET eddy covariance observations: validation of a model tree ensemble approach using a biosphere model

    Directory of Open Access Journals (Sweden)

    M. Jung

    2009-05-01

    Full Text Available Global, spatially and temporally explicit estimates of carbon and water fluxes derived from the empirical up-scaling of eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) in which the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time.

    We evaluate the efficiency of the model tree ensemble approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998–2005, using only locations and months where high quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetically active radiation simulated by LPJmL. Given that we know the "true result" in the form of the global LPJmL simulations, we can effectively study the performance of the model tree ensemble upscaling and the associated problems of extrapolation capacity.

    We show that the model tree ensemble is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP, which constitute the largest sources of variance, are very well reproduced (96% and 94% of variance explained, respectively), while the monthly interannual anomalies which occupy
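
    TRIAL and ERROR themselves are not publicly packaged; as a generic illustration of tree-ensemble upscaling of flux data, a bagged regression-tree ensemble can be trained on synthetic tower-like records:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Synthetic drivers standing in for site meteorology and fAPAR.
rng = np.random.default_rng(4)
n = 5000
temp = rng.uniform(-10, 30, n)       # air temperature [degC]
precip = rng.uniform(0, 300, n)      # monthly precipitation [mm]
fapar = rng.uniform(0, 1, n)         # fraction of absorbed PAR
gpp = 3.0 * fapar * np.clip(temp, 0, None) * (1 - np.exp(-precip / 80))
gpp += rng.normal(0, 5, n)

X = np.column_stack([temp, precip, fapar])
X_tr, X_te, y_tr, y_te = train_test_split(X, gpp, random_state=0)
ens = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                       n_estimators=50, random_state=0)
ens.fit(X_tr, y_tr)
print(f"explained variance (R^2) on held-out data: {ens.score(X_te, y_te):.2f}")
```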

  11. Empirical models for liquid metal heat transfer in the entrance region of tubes and rod bundles

    Science.gov (United States)

    Jaeger, Wadim

    2016-10-01

    Experiments focusing on liquid metal heat transfer in pipes and rod bundles with thermally and hydraulically developing flow are reviewed, and empirical heat transfer correlations are developed for engineering applications. In the developing region the heat transfer is not yet stationary: owing to the developing process, which includes the lateral exchange of energy and momentum, the heat transfer at the entrance is around 100% higher than for developed flow. Developing flow is not physically considered in the framework of system codes, which are used for the thermal-hydraulic analysis of power and process plants with a multitude of components such as pipes, tanks, valves and heat exchangers. Their application to liquid metal flows is therefore limited to developed flow, which is independent of the distance from the flow entrance. The heat transfer enhancement in developing flows is important for the optimization of components such as heat exchangers and helps to reduce unnecessary conservatism. In this work, empirical models are developed to account for developing flows in pipes and rod bundles. A literature review is performed to collect the available experimental data on developing liquid metal heat transfer. The evaluation shows that the length of purely thermally developing pipe flow is much larger (20-30 hydraulic diameters) than that of combined thermally and hydraulically developing flow (10-15 hydraulic diameters). In rod bundles, fully developed combined flow is established 30-40 hydraulic diameters downstream of the entrance. The derived empirical models for the heat transfer enhancement in the developing regions are implemented into a best estimate system code. The validation of these models by means of post-test analyses of 16 experiments shows that they represent the heat transfer in developing regions very well.
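
    The reported magnitudes (roughly a factor of 2 at the inlet, decaying over 10-40 hydraulic diameters) suggest an entrance multiplier of the following shape; both the functional form and the coefficients below are illustrative, not the correlations derived in the paper:

```python
import numpy as np

def entrance_enhancement(x_over_dh, x_dev=30.0, boost=1.0):
    """Hypothetical entrance-region multiplier on a developed-flow Nusselt
    number: ~2x at the inlet, decaying toward 1 after roughly x_dev hydraulic
    diameters."""
    return 1.0 + boost * np.exp(-3.0 * x_over_dh / x_dev)

def nusselt_developed(peclet):
    """Classic liquid-metal correlation shape Nu = a + b*Pe^c with
    placeholder coefficients."""
    return 4.8 + 0.025 * peclet**0.8

pe = 800.0
for x in (0.0, 5.0, 15.0, 30.0, 60.0):
    nu = nusselt_developed(pe) * entrance_enhancement(x)
    print(f"x/Dh = {x:5.1f}: Nu = {nu:.1f}")
```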

  12. Network structure implied by initial axon outgrowth in rodent cortex: empirical measurement and models.

    Science.gov (United States)

    Cahalane, Diarmuid J; Clancy, Barbara; Kingsbury, Marcy A; Graf, Ethan; Sporns, Olaf; Finlay, Barbara L

    2011-01-11

    The developmental mechanisms by which the network organization of the adult cortex is established are incompletely understood. Here we report on empirical data on the development of connections in hamster isocortex and use these data to parameterize a network model of early cortical connectivity. Using anterograde tracers at a series of postnatal ages, we investigate the growth of connections in the early cortical sheet and systematically map initial axon extension from sites in anterior (motor), middle (somatosensory) and posterior (visual) cortex. As a general rule, developing axons extend from all sites to cover relatively large portions of the cortical field that include multiple cortical areas. From all sites, outgrowth is anisotropic, covering a greater distance along the medial/lateral axis than along the anterior/posterior axis. These observations are summarized as 2-dimensional probability distributions of axon terminal sites over the cortical sheet. Our network model consists of nodes, representing parcels of cortex, embedded in 2-dimensional space. Network nodes are connected via directed edges, representing axons, drawn according to the empirically derived anisotropic probability distribution. The networks generated are described by a number of graph theoretic measurements including graph efficiency, node betweenness centrality and average shortest path length. To determine if connectional anisotropy helps reduce the total volume occupied by axons, we define and measure a simple metric for the extra volume required by axons crossing. We investigate the impact of different levels of anisotropy on network structure and volume. The empirically observed level of anisotropy suggests a good trade-off between volume reduction and maintenance of both network efficiency and robustness. Future work will test the model's predictions for connectivity in larger cortices to gain insight into how the regulation of axonal outgrowth may have evolved to achieve efficient
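
    A sketch of the generative step described above: nodes on a grid, each emitting directed edges whose displacements are drawn from an anisotropic Gaussian (wider along one axis, as in the medial/lateral bias). The anisotropy value and grid are illustrative, not the empirically measured distributions:

```python
import numpy as np

rng = np.random.default_rng(13)
side, k_out, sigma = 20, 8, (1.0, 2.5)   # grid size, edges/node, (x, y) spread
nodes = [(i, j) for i in range(side) for j in range(side)]
edges = set()
for (i, j) in nodes:
    for _ in range(k_out):
        # Anisotropic displacement: larger spread along the second axis.
        di = int(round(rng.normal(0, sigma[0])))
        dj = int(round(rng.normal(0, sigma[1])))
        ti, tj = i + di, j + dj
        if 0 <= ti < side and 0 <= tj < side and (ti, tj) != (i, j):
            edges.add(((i, j), (ti, tj)))

lengths = [np.hypot(a[0] - b[0], a[1] - b[1]) for a, b in edges]
print(f"{len(edges)} edges, mean length {np.mean(lengths):.2f} grid units")
```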

  13. Fog forecasting: "old fashioned" semi-empirical methods from radio sounding observations versus "modern" numerical models

    Science.gov (United States)

    Holtslag, M. C.; Steeneveld, G. J.; Holtslag, A. A. M.

    2010-07-01

    Fog forecasting is a very challenging task due to the local and small-scale nature of the relevant physical processes and land surface heterogeneities. Despite many research efforts, numerical models continue to have difficulties with fog forecasting, and the forecast skill of direct model output is relatively poor. In order to put the progress of fog forecasting in the last decades into a historical perspective, we compare the fog forecasting skill of a semi-empirical method based on radio sounding observations (developed in the 1960s and 1970s) with the forecasting skill of a state-of-the-art numerical weather prediction model (MM5) for The Netherlands. The semi-empirical method under investigation, the Fog Stability Index (FSI), depends solely on the temperature difference between the surface and 850 hPa, the surface dew point depression, the wind speed at 850 hPa, and a threshold value indicating the probability of fog in the coming hours. Using the critical success index (CSI) as a criterion for forecast quality, we find that the Fog Stability Index is a rather successful predictor for fog, with performance similar to that of MM5. The FSI could even be optimized for the different observational stations in the Netherlands. Also, it appears that adding the 10 m wind as a predictor did not increase the CSI score at all stations. The results of the current study clearly indicate that, at the current state of knowledge, improved physical insight into the relevant physical processes is required in order to beat simple semi-empirical methods.
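
    For concreteness, the sketch below implements a commonly cited form of the Fog Stability Index, FSI = 4·Ts − 2·(T850 + Td) + W850, with the thresholds often quoted alongside it (<31 high fog risk, 31–55 moderate, >55 low). The record does not give the paper's exact coefficients or threshold, so both should be treated as assumptions:

```python
def fog_stability_index(t_sfc, td_sfc, t_850, wind_850):
    """Commonly cited FSI form (temperatures in degC, 850 hPa wind in knots).
    Lower FSI means higher fog risk."""
    return 4.0 * t_sfc - 2.0 * (t_850 + td_sfc) + wind_850

def fog_risk(fsi):
    # Threshold values assumed, see lead-in above.
    if fsi < 31:
        return "high"
    return "moderate" if fsi <= 55 else "low"

# Example radiosonde-style input: cool, moist surface under light 850 hPa wind.
fsi = fog_stability_index(t_sfc=8.0, td_sfc=7.0, t_850=6.0, wind_850=5.0)
print(f"FSI = {fsi:.0f} -> fog risk: {fog_risk(fsi)}")
```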

  14. Study of a pseudo-empirical model approach to characterize plasma actuators

    Energy Technology Data Exchange (ETDEWEB)

    Marziali Bermudez, M [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, UBA, Ciudad Universitaria Pab. I, Buenos Aires 1428 (Argentina); Sosa, R; Artana, G [Laboratorio de Fluidodinamica, Facultad de Ingenieria, UBA, Av. Paseo Colon 850, Buenos Aires 1063 (Argentina); Grondona, D; Marquez, A; Kelly, H, E-mail: rsosa@fi.uba.ar [Instituto de Fisica del Plasma (CONICET) - Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, UBA, Ciudad Universitaria Pab. I, Buenos Aires 1428 (Argentina)

    2011-05-01

    Plasma actuation is a recent technology that imposes a localized electric force to control air flows. A suitable representation of actuation enables the optimization of plasma actuators, the design of flow-control strategies, and the analysis of the flow stabilization that can be attained by plasma forcing. The problem description can be clearly separated into two regions: an outer region, where the fluid is electrically neutral and the flow is described by the Navier-Stokes equations without any forcing term, and an inner region, forming a thin boundary layer, where the fluid is ionized and electric forces are predominant. The outer limit of the inner solution becomes the boundary condition for the outer problem, which can then be solved with a slip velocity obtained from the inner solution. Although the solution of the inner problem is quite complex, it can be circumvented by proposing pseudo-empirical models in which the slip velocity of the outer problem is determined indirectly from experiments. This pseudo-empirical model approach has recently been tested on different cylinder flows and proved well suited to describing actuated flow behaviour. In this work we determine experimentally the influence of the duty cycle on the slip velocity distribution. The velocity was measured by means of a Pitot tube, and flow visualizations of the starting vortex (i.e., the flow induced when actuation is activated in quiescent air) were performed by means of the Schlieren technique. We also performed numerical experiments to simulate the outer region problem when actuation is activated in quiescent air, using a slip velocity distribution as a boundary condition. The experimental and numerical results are in good agreement, showing the potential of this pseudo-empirical model approach to characterize plasma actuation.

  15. Empirical frequency domain model for fixed-pattern noise in infrared focal plane arrays

    Science.gov (United States)

    Pérez, Francisco; Pezoa, Jorge E.; Figueroa, Miguel; Torres, Sergio N.

    2014-11-01

    In this paper, a new empirical model for the spatial structure of the fixed-pattern noise (FPN) observed in infrared (IR) focal-plane arrays (FPA) is presented. The model was conceived after analyzing, in the spatial frequency domain, FPN calibration data from different IR cameras and technologies. The analysis showed that the spatial patterns of the FPN are retained in the phase spectrum, while the noise intensity is determined by the magnitude spectrum. Thus, unlike traditional representations, the proposed model abstracts the FPN structure using one matrix for its magnitude spectrum and another matrix for its phase spectrum. Three applications of the model are addressed here. First, an algorithm is provided for generating random samples of the FPN with the same spatial pattern of the actual FPN. Second, the model is used to assess the performance of non-uniformity correction (NUC) algorithms in the presence of spatially correlated and uncorrelated FPN. Third, the model is used to improve the NUC capability of a method that requires, as a reference, a proper FPN sample.
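
    A sketch of the first application, generating random FPN samples that share the measured spatial pattern: following the decomposition described above, the empirical phase spectrum is kept and the magnitude spectrum is perturbed. The 10% perturbation level and the toy "calibration" frame are arbitrary illustration choices:

```python
import numpy as np

def synthetic_fpn(fpn, rng):
    """Draw a random FPN realization sharing the measured pattern
    (phase kept, magnitude randomly rescaled)."""
    spec = np.fft.fft2(fpn)
    mag, phase = np.abs(spec), np.angle(spec)
    mag_sample = mag * rng.normal(1.0, 0.1, size=mag.shape)
    return np.real(np.fft.ifft2(mag_sample * np.exp(1j * phase)))

rng = np.random.default_rng(5)
# Toy calibration FPN: column-wise stripes plus pixel noise.
fpn = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] + 0.2 * rng.normal(size=(64, 64))
fake = synthetic_fpn(fpn, rng)
print("correlation with original pattern:",
      np.corrcoef(fpn.ravel(), fake.ravel())[0, 1].round(3))
```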

  16. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. The response may depend on a covariate through some unknown function of it. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of the parameters of isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, the augmented estimating method is often employed to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA then no longer works, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model that incorporates the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.
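
    For reference, scikit-learn's IsotonicRegression solves exactly this constrained least-squares problem with PAVA; a minimal example on synthetic monotone data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression  # PAVA under the hood

# Monotone dose-response-style data (synthetic).
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 200))
y = np.log1p(x) + rng.normal(0, 0.3, 200)     # increasing trend plus noise

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)
# Fitted values are non-decreasing by construction:
assert np.all(np.diff(y_fit) >= -1e-12)
print("max fitted value:", y_fit.max().round(3))
```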

  17. Global empirical wind model for the upper mesosphere/lower thermosphere. I. Prevailing wind

    Directory of Open Access Journals (Sweden)

    Y. I. Portnyagin

    Full Text Available An updated empirical climatic zonally averaged prevailing wind model for the upper mesosphere/lower thermosphere (70-110 km), extending from 80°N to 80°S, is presented. The model is constructed by fitting monthly mean winds from meteor radar and MF radar measurements at more than 40 stations, well distributed over the globe. The height-latitude contour plots of monthly mean zonal and meridional winds for all months of the year, and of the annual mean wind and the amplitudes and phases of the annual and semiannual harmonics of the wind variations, are analyzed to reveal the main features of the seasonal variation of the global wind structures in the Northern and Southern Hemispheres. Some results of a comparison between the ground-based wind models and the space-based models are presented. It is shown that, with the exception of a systematic bias between the annual mean zonal winds provided by the ground-based and space-based models, good agreement between the models is observed. The possible origin of this bias is discussed.

    Key words: Meteorology and atmospheric dynamics (general circulation; middle atmosphere dynamics; thermospheric dynamics)

  18. Improved empirical DC I-V model for 4H-SiC MESFETs

    Institute of Scientific and Technical Information of China (English)

    CAO QuanJun; ZHANG YiMen; ZHANG YuMing; LV HongLiang; WANG YueHu; TANG XiaoYan; GUO Hui

    2008-01-01

    A novel empirical large-signal direct current (DC) I-V model is presented, considering the high saturation voltage, high pinch-off voltage, and wide operational range of drain voltage of 4H-SiC MESFETs. The presented model is compared with the Statz, Materka, Curtice-Cubic, and recently reported 4H-SiC MESFET large-signal I-V models, using the Levenberg-Marquardt method for nonlinear regression fitting. The results show that the new model has the advantages of high accuracy, easy initialization, and robustness over the other models. More accurate results are obtained from the improved channel modulation and saturation voltage coefficient when the device is operated in the sub-threshold and near pinch-off regions. In addition, the new model can be implemented directly in CAD tools and used for the design of 4H-SiC MESFET based RF and microwave circuits, particularly monolithic microwave integrated circuits (MMICs).
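
    The paper's improved I-V expression is not reproduced in this record; as a sketch of the Levenberg-Marquardt fitting workflow it describes, one can fit a generic Curtice-style law with scipy (all device values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit  # LM is used for unbounded problems

# Generic Curtice-style quadratic I-V law, a stand-in for the paper's model:
#   Id = beta * (Vgs - Vt)^2 * (1 + lam * Vds) * tanh(alpha * Vds)
def curtice(X, beta, vt, lam, alpha):
    vgs, vds = X
    return beta * np.clip(vgs - vt, 0, None) ** 2 * (1 + lam * vds) * np.tanh(alpha * vds)

rng = np.random.default_rng(7)
vgs = rng.uniform(-8, 0, 400)          # 4H-SiC MESFETs have large pinch-off voltages
vds = rng.uniform(0, 40, 400)
i_d = curtice((vgs, vds), 2e-3, -9.0, 0.01, 0.3) + rng.normal(0, 1e-4, 400)

popt, _ = curve_fit(curtice, (vgs, vds), i_d,
                    p0=[1e-3, -8.0, 0.0, 0.1], method="lm")
print("fitted (beta, Vt, lambda, alpha):", np.round(popt, 4))
```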

  19. Construction and utilization of linear empirical core models for PWR in-core fuel management

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.

    1988-01-01

    An empirical core-model construction procedure for pressurized water reactor (PWR) in-core fuel management is developed that allows the optimal BOC k∞ profiles in PWRs to be determined as a single linear-programming problem, and thus facilitates the overall optimization process for in-core fuel management through algorithmic simplification and reduced computation time. The optimal profile is defined as one that maximizes cycle burnup. The model construction scheme treats the fuel-assembly power fractions, burnup, and leakage as state variables and the BOC zone enrichments as control variables. The core model consists of linear correlations between the state and control variables that describe fuel-assembly behavior in time and space. These correlations are obtained through time-dependent two-dimensional core simulations. The core model incorporates the effects of composition changes in all the enrichment control zones on a given fuel assembly and is valid at all times during the cycle for a given range of the control variables. No assumption is made on the geometry of the control zones; a scattered as well as an annular composition distribution can be considered for model construction. The application of the methodology to a typical PWR core indicates good agreement between the model and exact simulation results.
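
    A toy version of the resulting linear program, choosing zone enrichments to maximize a burnup objective under linear power-response constraints; every coefficient below is made up, standing in for the simulated state-control correlations:

```python
import numpy as np
from scipy.optimize import linprog

n_zones = 3
burnup_gain = np.array([12.0, 10.0, 8.0])     # GWd/t per w/o enrichment, per zone
power_coeff = np.array([[0.5, 0.3, 0.2],      # power-fraction response per zone
                        [0.3, 0.4, 0.3],
                        [0.2, 0.3, 0.5]])
power_limit = np.array([3.5, 3.5, 3.5])       # peaking-type linear constraints

# linprog minimizes, so negate the burnup objective.
res = linprog(c=-burnup_gain,
              A_ub=power_coeff, b_ub=power_limit,
              bounds=[(1.8, 4.5)] * n_zones)   # enrichment range [w/o]
print("optimal zone enrichments [w/o]:", np.round(res.x, 3))
print("maximized cycle-burnup proxy:", round(-res.fun, 1))
```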

  1. An empirical comparison of alternate regime-switching models for electricity spot prices

    Energy Technology Data Exchange (ETDEWEB)

    Janczura, Joanna [Hugo Steinhaus Center, Institute of Mathematics and Computer Science, Wroclaw University of Technology, 50-370 Wroclaw (Poland); Weron, Rafal [Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw (Poland)

    2010-09-15

    One of the most profound features of electricity spot prices is the presence of price spikes. Markov regime-switching (MRS) models seem to be a natural candidate for modeling this spiky behavior. However, in the studies published so far, the goodness-of-fit of the proposed models has not been a major focus. While most of the models were elegant, their fit to empirical data has either not been examined thoroughly or the signs of a bad fit have been ignored. With this paper we want to fill the gap. We calibrate and test a range of MRS models in an attempt to find parsimonious specifications that not only address the main characteristics of electricity prices but are statistically sound as well. We find that the best structure is that of an independent-spike 3-regime model with time-varying transition probabilities, heteroscedastic diffusion-type base regime dynamics and shifted spike regime distributions. Not only does it allow for a seasonal spike intensity throughout the year and consecutive spikes or price drops, which is consistent with market observations, but it also exhibits the 'inverse leverage effect' reported in the literature for spot electricity prices. (author)
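
    A minimal simulator of such an independent-spike 3-regime model; constant transition probabilities and simple regime laws are assumed here for brevity, whereas the paper's preferred specification uses time-varying probabilities and a heteroscedastic base regime:

```python
import numpy as np

rng = np.random.default_rng(8)
P = np.array([[0.96, 0.03, 0.01],    # base -> base/spike/drop
              [0.60, 0.35, 0.05],    # spikes may persist (consecutive spikes)
              [0.70, 0.05, 0.25]])
mean_price, n = 40.0, 1000
state, x, prices = 0, mean_price, []
for _ in range(n):
    state = rng.choice(3, p=P[state])
    if state == 0:                      # mean-reverting base regime
        x += 0.3 * (mean_price - x) + rng.normal(0, 2.0)
        prices.append(x)
    elif state == 1:                    # shifted spike distribution
        prices.append(mean_price + 20 + rng.lognormal(2.0, 0.5))
    else:                               # shifted drop distribution
        prices.append(mean_price - 15 - rng.exponential(5.0))
    # "Independent spike": the latent base process x is untouched by
    # spike/drop observations.
prices = np.array(prices)
print(f"mean {prices.mean():.1f}, max {prices.max():.1f}, min {prices.min():.1f}")
```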

  2. Diagnosing Model Errors in Canopy-Atmosphere Exchange Using Empirical Orthogonal Functions

    Science.gov (United States)

    Drewry, D.; Albertson, J.

    2004-12-01

    Multi-layer canopy process models (MLCPMs) have been established as tools for estimating local-scale canopy-atmosphere scalar (carbon dioxide, heat and water vapor) exchange as well as testing hypotheses regarding the mechanistic functioning of complex vegetated land surfaces and the interactions between vegetation and the local microenvironment. These model frameworks are composed of a coupled set of component submodels relating radiation attenuation and absorption, photosynthesis, turbulent mixing, stomatal conductance, surface energy balance and soil and subsurface processes. Submodel formulations have been validated for a variety of ecosystems under varying environmental conditions. However, each submodel component requires parameter values that are known to vary seasonally as canopy structure changes, and over shorter periods characterized by shifts in the environmental regime. The temporal dependence of submodel parameters limits application of MLCPMs to short-term integrations for which a specific parameterization can be trusted. We present a novel application of empirical orthogonal function (EOF) analysis to the identification of the primary source of MLCPM error. Carbon dioxide (CO2) concentration profiles, a commonly collected and underutilized data source, are the observed quantity in this analysis. The technique relies on an ensemble of model runs transformed to EOF space to determine the characteristic patterns of model error associated with specific submodel parameters. These patterns provide a basis onto which error residual (modeled - measured) CO2 concentration profiles can be projected to identify the primary source of model error. Synthetic tests and application to field data collected at Duke Forest (North Carolina, USA) are presented.
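
    A compact sketch of the EOF machinery described above, using an SVD on a synthetic ensemble of CO2-profile errors and projecting a "measured" residual onto the leading pattern; the profile shape and noise levels are invented:

```python
import numpy as np

# Rows: ensemble members with a perturbed submodel parameter.
# Columns: CO2 concentration at profile heights.
rng = np.random.default_rng(9)
heights, members = 12, 40
base_pattern = np.exp(-np.arange(heights) / 4.0)         # near-ground CO2 buildup
errors = (rng.normal(0, 1, (members, 1)) * base_pattern  # parameter-driven mode
          + 0.1 * rng.normal(size=(members, heights)))   # unstructured noise

anom = errors - errors.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                  # rows are EOF patterns
explained = s**2 / np.sum(s**2)
print(f"EOF1 explains {explained[0]:.0%} of error variance")

# Project an observed residual profile onto EOF1 to attribute its likely source.
residual = 0.8 * base_pattern + 0.05 * rng.normal(size=heights)
print(f"projection onto EOF1: {residual @ eofs[0]:.2f}")
```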

  3. Testing an astronomically-based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models

    CERN Document Server

    Scafetta, Nicola

    2012-01-01

    We compare the performance of a recently proposed empirical climate model based on astronomical harmonics against all available general circulation climate models (GCMs) used by the IPCC (2007) to interpret the 20th century global surface temperature. The proposed model assumes that the climate is resonating with, or synchronized to, a set of natural harmonics that have been associated with the solar system's planetary motion, mostly determined by Jupiter and Saturn. We show that the GCMs fail to reproduce the major decadal and multidecadal oscillations found in the global surface temperature record from 1850 to 2011. On the contrary, the proposed harmonic model is found to reconstruct well the observed climate oscillations from 1850 to 2011, and it is able to forecast the climate oscillations from 1950 to 2011 using the data covering the period 1850-1950, and vice versa. The 9.1-year cycle is shown to be likely related to a decadal Soli/Lunar tidal oscillation, while the 10-10.5, 20-21 and 60-62 year cycles are sy...

  4. Modelling of volumetric properties of binary and ternary mixtures by CEOS, CEOS/GE and empirical models

    Directory of Open Access Journals (Sweden)

    BOJAN D. DJORDJEVIC

    2007-12-01

    Full Text Available Although many cubic equations of state coupled with van der Waals one-fluid mixing rules, including temperature-dependent interaction parameters, are sufficient for representing phase equilibria and excess properties (excess molar enthalpy H^E, excess molar volume V^E, etc.), difficulties appear in the correlation and prediction of thermodynamic properties of complex mixtures over various temperature and pressure ranges. Great progress has been made by a new approach based on CEOS/GE models. This paper reviews the progress achieved over the last six years in the modelling of the volumetric properties of complex binary and ternary systems of non-electrolytes by the CEOS and CEOS/GE approaches. In addition, the vdW1 and TCBT models were used to estimate the excess molar volume V^E of the ternary systems methanol + chloroform + benzene and 1-propanol + chloroform + benzene, as well as of the corresponding binaries methanol + chloroform, chloroform + benzene, 1-propanol + chloroform and 1-propanol + benzene at 288.15–313.15 K and atmospheric pressure. Prediction of V^E for both ternaries by empirical models (Radojković, Kohler, Jacob–Fitzner, Colinet, Tsao–Smith, Toop, Scatchard, Rastogi) was also performed.

  5. An empirical supply chain measurement model for a national egg producer based on the supply chain operations reference model

    Directory of Open Access Journals (Sweden)

    Christian Pretorius

    2013-05-01

    Full Text Available The management of a supply chain is both an offensive and defensive weapon that organisations can use to increase their competitive edge and capture a larger share of the market. In management science and supply chain management, multi-criteria decision making techniques have been used to solve a range of real-world problems. The problem is that many, if not most, companies in South Africa either do not have the required skills to use these decision-making techniques to improve or re-configure their supply chain, or they do not have a complete data set with which to model it effectively. In order to manage supply chains effectively, organisations at the very least need feedback on the performance of their entire supply chain. In this article, generic supply chain performance measures were used and a theoretical or empirical model was developed for the performance measurement of a national egg producer’s supply chain. It focused on a managerial program for the identification and management of their supply chain with recommendations for applying a measurement model. The overall performance of the supply chain as well as the five different performance attributes was presented to management in a dashboard format. This article could be used as a basis for future studies of supply chain performance measurement and the model could be used as a foundation for developing an improved version, not only for the egg industry, but for other industries as well. 

  6. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Full Text Available Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationships between the performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in and benefits for corporate performance.

  7. Empirical model of optical sensing via spectral shift of circular Bragg phenomenon

    CERN Document Server

    Mackay, Tom G

    2009-01-01

    Setting up an empirical model of optical sensing to exploit the circular Bragg phenomenon displayed by chiral sculptured thin films (CSTFs), we considered a CSTF with and without a central twist defect of π/2 radians. The circular Bragg phenomenon of the defect-free CSTF, and the spectral hole in the co-polarized reflectance spectrum of the CSTF with the twist defect, were both found to be acutely sensitive to the refractive index of a fluid which infiltrates the void regions of the CSTF. These findings bode well for the deployment of CSTFs as optical sensors.

  8. Permeability-driven selection in a semi-empirical protocell model

    DEFF Research Database (Denmark)

    Piedrafita, Gabriel; Monnard, Pierre-Alain; Mavelli, Fabio

    2017-01-01

    to prebiotic systems evolution more intricate, but were surely essential for sustaining far-from-equilibrium chemical dynamics, given their functional relevance in all modern cells. Here we explore a protocellular scenario in which some of those additional constraints/mechanisms are addressed, demonstrating ... their 'system-level' implications. In particular, an experimental study on the permeability of prebiotic vesicle membranes composed of binary lipid mixtures allows us to construct a semi-empirical model where protocells are able to reproduce and undergo an evolutionary process based on their coupling

  9. An empirical approach to modeling methylmercury concentrations in an Adirondack stream watershed

    Science.gov (United States)

    Burns, Douglas A.; Nystrom, Elizabeth A.; Wolock, David M.; Bradley, Paul M.; Riva-Murray, Karen

    2014-01-01

    Inverse empirical models can inform and improve more complex process-based models by quantifying the principal factors that control water quality variation. Here we developed a multiple regression model that explains 81% of the variation in filtered methylmercury (FMeHg) concentrations in Fishing Brook, a fourth-order stream in the Adirondack Mountains, New York, a known "hot spot" of Hg bioaccumulation. This model builds on previous observations that wetland-dominated riparian areas are the principal source of MeHg to this stream, and is based on 43 samples collected during a 33-month period in 2007–2009. Explanatory variables include those that represent the effects of water temperature, streamflow, and modeled riparian water table depth on seasonal and annual patterns of FMeHg concentrations. An additional variable represents the effect of an upstream pond in decreasing FMeHg concentrations. Model results suggest that temperature-driven effects on net Hg methylation rates are the principal control on annual FMeHg concentration patterns. Additionally, streamflow dilutes FMeHg concentrations during the cold dormant season. The model further indicates that the depth and persistence of the riparian water table, as simulated by TOPMODEL, are dominant controls on FMeHg concentration patterns during the warm growing season, most evident when concentrations during the dry summer of 2007 were less than half of those in the wetter summers of 2008 and 2009. This modeling approach may help identify the principal factors that control variation in surface water FMeHg concentrations in other settings, which can guide the appropriate application of process-based models.
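
    A sketch of such an inverse empirical model; the predictor names follow the abstract, while the data and coefficients are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Regress log FMeHg on temperature, log streamflow, and water-table depth.
rng = np.random.default_rng(10)
n = 43
temp = rng.uniform(0, 25, n)            # water temperature [degC]
logq = rng.normal(0, 1, n)              # log streamflow
wt_depth = rng.uniform(0, 0.5, n)       # riparian water-table depth [m]
log_fmehg = 0.04 * temp - 0.3 * logq - 1.5 * wt_depth + rng.normal(0, 0.15, n)

X = np.column_stack([temp, logq, wt_depth])
ols = LinearRegression().fit(X, log_fmehg)
print("R^2:", round(ols.score(X, log_fmehg), 2))
print("coefficients (temp, logQ, WT depth):", np.round(ols.coef_, 3))
```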

  10. Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)

    Science.gov (United States)

    Stevens, Robert K.; Pace, Thompson G.

    On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare the results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated the transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets containing up to 6 dissimilar emission sources need to be generated so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning aerosol species such as sulfate and nitrate, formed from SO2 and NOx respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources; and (4) a source signature library may be required to be compiled for each airshed in order to

  11. An empirical model of water quality for use in rapid management strategy evaluation in Southeast Queensland, Australia.

    Science.gov (United States)

    de la Mare, William; Ellis, Nick; Pascual, Ricardo; Tickell, Sharon

    2012-04-01

    Simulation models have been widely adopted in fisheries for management strategy evaluation (MSE). However, in catchment management of water quality, MSE is hampered by the complexity of both the decision space and the hydrological process models. Empirical models based on monitoring data provide a feasible alternative to process models; they run much faster and, by conditioning on data, they can simulate realistic responses to management actions. Using 10 years of water quality indicators from Queensland, Australia, we built an empirical model suitable for rapid MSE that reproduces the water quality variables' mean and covariance structure, adjusts the expected indicators through local management effects, and propagates effects downstream by capturing inter-site regression relationships. Empirical models enable managers to search the space of possible strategies using rapid assessment. They provide not only realistic responses in water quality indicators but also variability in those indicators, allowing managers to assess strategies in an uncertain world.
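
    A minimal sketch of that empirical simulation idea: sample indicators from their historical mean/covariance, apply a local management adjustment, and propagate it downstream through an inter-site regression slope. The two "sites", the effect size, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(11)
# Columns: indicator at an upstream and a downstream site.
history = rng.multivariate_normal([5.0, 4.0], [[1.0, 0.6], [0.6, 0.8]], 200)
mean, cov = history.mean(axis=0), np.cov(history.T)
beta = cov[0, 1] / cov[0, 0]       # downstream response per unit upstream change

def simulate(management_effect_upstream, n=1000):
    sims = rng.multivariate_normal(mean, cov, n)
    sims[:, 0] += management_effect_upstream             # local action upstream
    sims[:, 1] += beta * management_effect_upstream      # propagated downstream
    return sims

base, action = simulate(0.0), simulate(-1.0)   # e.g. a nutrient-load reduction
print("downstream mean shift:",
      round(action[:, 1].mean() - base[:, 1].mean(), 2))
```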

  12. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies, called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, which would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the purpose of the study. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using confirmatory factor analysis. The two separate, single models of the performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  13. An empirical Bayes model using a competition score for metabolite identification in gas chromatography mass spectrometry

    Directory of Open Access Journals (Sweden)

    Kim Seongho

    2011-10-01

    Full Text Available Abstract Background Mass spectrometry (MS) based metabolite profiling has become increasingly popular for scientific and biomedical studies, primarily due to recent technological developments such as comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS). Nevertheless, the identification of metabolites from complex samples is subject to errors, and statistical/computational approaches to improve the accuracy of the identifications and the false positive estimate are in great need. We propose an empirical Bayes model which accounts for a competition score in addition to the similarity score to tackle this problem. The competition score characterizes the propensity of a candidate metabolite to be matched to some spectrum, based on the metabolite's similarity scores with the other spectra in the library searched against. The competition score allows the model to properly assess the evidence on the presence/absence status of a metabolite based on whether or not the metabolite is matched to some sample spectrum. Results With a mixture of metabolite standards, we demonstrate that our method has better identification accuracy than four other existing methods. Moreover, our method has a reliable false discovery rate estimate. We also applied our method to data collected from the plasma of a rat and identified some metabolites from the plasma under control of the false discovery rate. Conclusions We developed an empirical Bayes model for metabolite identification and validated the method through a mixture of metabolite standards and rat plasma. The results show that our hierarchical model improves identification accuracy as compared with methods that do not structurally model the involved variables. The improvement in identification accuracy is likely to facilitate downstream analyses such as peak alignment and biomarker identification. Raw data and result matrices can be found at http

  14. Computational optogenetics: empirically-derived voltage- and light-sensitive channelrhodopsin-2 model.

    Directory of Open Access Journals (Sweden)

    John C Williams

    Full Text Available Channelrhodopsin-2 (ChR2), a light-sensitive ion channel, and its variants have emerged as new excitatory optogenetic tools not only in neuroscience, but also in other areas, including cardiac electrophysiology. An accurate quantitative model of ChR2 is necessary for in silico prediction of the response to optical stimulation in realistic tissue/organ settings. Such a model can guide the rational design of new ion channel functionality tailored to different cell types/tissues. Focusing on one of the most widely used ChR2 mutants (H134R), with enhanced current, we collected a comprehensive experimental data set of the response of this ion channel to different irradiances and voltages, and used these data to develop a model of ChR2 with empirically derived voltage- and irradiance-dependence, where parameters were fine-tuned via simulated annealing optimization. This ChR2 model offers: (1) accurate inward rectification in the current-voltage response across irradiances; (2) empirically derived voltage- and light-dependent kinetics (activation, deactivation and recovery from inactivation); and (3) accurate amplitude and morphology of the response across voltage and irradiance settings. Temperature-scaling factors (Q10) were derived and the model kinetics were adjusted to physiological temperatures. Using optical action potential clamp, we experimentally validated model-predicted ChR2 behavior in guinea pig ventricular myocytes. The model was then incorporated in a variety of cardiac myocytes, including human ventricular, atrial and Purkinje cell models. We demonstrate the ability of ChR2 to trigger action potentials in human cardiomyocytes at relatively low light levels, as well as the differential response of these cells to light, with the Purkinje cells being most easily excitable and ventricular cells requiring the highest irradiance at all pulse durations. This new experimentally validated ChR2 model will facilitate virtual experimentation in neural and

  15. Bridging the gap between empirical and mechanistic models for nitrate in groundwater

    Science.gov (United States)

    Nolan, B. T.; Malone, R. W.; Gronberg, J.; Thorp, K.; Ma, L.

    2011-12-01

    Water-quality models are useful tools for predicting the vulnerability of groundwater to nitrate contamination, and include both empirical and mechanistic approaches. Empirical models commonly are used at regional and national scales. Such models are data-driven and have comparatively few parameters, but their capability to simulate processes is limited. In contrast, mechanistic models are physically based, simulate controlling processes, and can have many parameters. The GroundWAter Vulnerability Assessment model (GWAVA), an example of the first approach, is a national-scale nonlinear regression model (R2=0.80) that predicts areally averaged nitrate concentration in groundwater based on mid-1990s land use. The Root Zone Water Quality Model (RZWQM2) is an example of the second approach and simulates N cycling processes, crop growth, and the fate and transport of agricultural chemicals at the field scale for daily time steps. Thorough accounting by RZWQM2 of key processes can yield more accurate predictions, but application at large spatial scales is difficult because of the numerous parameters. To bridge the gap between these contrasting scales and approaches, we developed metamodels (MMs) to predict nitrate concentrations and N fluxes in the Corn Belt. Metamodels are simplified representations of mechanistic models that map the inputs of the latter onto its outputs. Our MMs consisted of artificial neural networks (ANNs), which are inherently flexible and do not require linearity or normally distributed data. The MMs were based on RZWQM2 models previously calibrated to data from field sites in Nebraska, Iowa, and Maryland. The three sites are in corn-soybean rotation and reflect diverse soil types and climatic conditions as well as different management practices. We calibrated the MMs to RZWQM2 predictions of N in tile drainage and leachate below the root zone of crops. The MMs therefore represent an integrated approach to vulnerability assessment-nitrate leaching
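
    A sketch of the metamodel idea with an ANN: a made-up function stands in for the mechanistic model (RZWQM2 in the abstract), and the network learns the input-to-output mapping; all variable names and ranges are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mechanistic_stub(n_rate, precip, soil_k):
    """Toy nitrate-leaching response standing in for mechanistic-model output."""
    return 0.3 * n_rate * (precip / 500.0) * np.exp(-soil_k)

rng = np.random.default_rng(12)
n = 2000
n_rate = rng.uniform(50, 250, n)      # fertilizer N [kg/ha]
precip = rng.uniform(300, 1200, n)    # annual precipitation [mm]
soil_k = rng.uniform(0.1, 2.0, n)     # soil retention parameter (hypothetical)
leach = mechanistic_stub(n_rate, precip, soil_k) + rng.normal(0, 1.0, n)

X = np.column_stack([n_rate, precip, soil_k])
mm = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                random_state=0))
mm.fit(X, leach)
print("metamodel R^2:", round(mm.score(X, leach), 3))
```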

  16. Open-circuit sensitivity model based on empirical parameters for a capacitive-type MEMS acoustic sensor

    Science.gov (United States)

    Lee, Jaewoo; Jeon, J. H.; Je, C. H.; Lee, S. Q.; Yang, W. S.; Lee, S.-G.

    2016-03-01

    An empirically based open-circuit sensitivity model for a capacitive-type MEMS acoustic sensor is presented. To intuitively evaluate the open-circuit sensitivity characteristic, the empirically based model is proposed and analysed using a lumped spring-mass model and a pad test sample without a parallel-plate capacitor for the parasitic capacitance. The model is composed of three different parameter groups: empirical, theoretical, and mixed data. The residual stress, extracted empirically from the measured pull-in voltage of 16.7 V and the measured surface topology of the diaphragm, was +13 MPa, resulting in an effective spring constant of 110.9 N/m. The parasitic capacitance for the two probing pads, including the substrate part, was 0.25 pF. Furthermore, to verify the proposed model, the modelled open-circuit sensitivity was compared with the measured value. The MEMS acoustic sensor had an open-circuit sensitivity of -43.0 dBV/Pa at 1 kHz with a bias of 10 V, while the modelled open-circuit sensitivity was -42.9 dBV/Pa, showing good agreement in the range from 100 Hz to 18 kHz. This validates the empirically based open-circuit sensitivity model for designing capacitive-type MEMS acoustic sensors.
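
    As a plausibility check on these figures, a generic lumped-parameter estimate of the form S = (Vb/gap)·(A/k_eff)·Cm/(Cm+Cp) can be evaluated. The spring constant and parasitic capacitance below come from the abstract; the gap, diaphragm area, and motional capacitance are invented so the arithmetic lands near the quoted -43 dBV/Pa, which makes this an illustration of the formula, not a validation:

```python
import numpy as np

v_bias = 10.0            # bias voltage [V] (from the abstract)
gap = 3e-6               # air gap [m] (assumed)
k_eff = 110.9            # effective spring constant [N/m] (from the abstract)
area = 0.25e-6           # diaphragm area [m^2] (assumed)
c_m = 4.0e-12            # motional capacitance [F] (assumed)
c_p = 0.25e-12           # parasitic capacitance [F] (from the abstract)

dx_dp = area / k_eff                                 # deflection per pascal [m/Pa]
s_v = (v_bias / gap) * dx_dp * c_m / (c_m + c_p)     # open-circuit sensitivity [V/Pa]
print(f"sensitivity = {s_v*1e3:.2f} mV/Pa = {20*np.log10(s_v):.1f} dBV/Pa")
```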

  17. Empirical Evidence of Fiscal Policy Impact on Endogenous Models of Economic Growth - the Case of Albania

    Directory of Open Access Journals (Sweden)

    Olta Milova

    2014-02-01

    Full Text Available According to Mankiw (2000), fiscal policy in major macroeconomic models adversely affects the behavior of private agents such as consumers and firms, and affects economic growth through investment and saving decisions. Increasing government spending increases the aggregate demand for goods and services and the demand for money in the money market, leading to an increase in interest rates as markets tend toward equilibrium. The increased interest rates negatively affect the level of private investment. To assess the effect of fiscal policy on economic growth, endogenous growth models are generally used, which include technological progress as an integrated part of the model. These models are called endogenous because they account for long-term economic growth and use endogenous mechanisms to explain its main source, technological progress. Endogenous growth models developed by Barro (1990), Mendoza, Milesi-Ferretti and Asea (1997), and others predict that fiscal policy can affect the level of output and long-run economic growth. This conclusion is analysed in the theory of Barro (1990), which extends the model by including fiscal policy. Barro's model is used in this paper to analyse the effect of fiscal policy on economic growth in the case of Albania. The empirical work shows that all the variables, except inflation, which according to theoretical expectations should have a negative effect, positively affect economic growth. This positive relationship can be explained by the investments in infrastructure and other priority sectors that the government made during this period.
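
    For readers who want the mechanics behind this claim, a minimal statement of the Barro (1990) setup in standard textbook notation (σ is the CRRA coefficient, ρ the discount rate; this is the usual summary, not the paper's own derivation):

```latex
% Production with productive public services g, financed by a flat tax \tau:
y = A\,k^{1-\alpha} g^{\alpha}, \qquad g = \tau y \quad (\text{balanced budget})
% Balanced-growth rate implied by the after-tax return on capital:
\gamma = \frac{1}{\sigma}\Big[(1-\tau)(1-\alpha)\,A^{1/(1-\alpha)}\,
         \tau^{\alpha/(1-\alpha)} - \rho\Big]
```

    Maximizing γ over the tax rate gives the well-known growth-maximizing rate τ = α, which is the sense in which fiscal policy affects long-run growth in this class of models.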

  18. Conceptual modeling in systems biology fosters empirical findings: the mRNA lifecycle.

    Directory of Open Access Journals (Sweden)

    Dov Dori

    Full Text Available One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to model the structure and behavior of biological systems concurrently. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems formally and intuitively, both diagrammatically and textually, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle, which involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated accordingly while maintaining consistency.

  19. Multifractals of central place systems: models, dimension spectrums, and empirical analysis

    CERN Document Server

    Chen, Yanguang

    2013-01-01

    Central place systems have been demonstrated to possess self-similar patterns from both theoretical and empirical perspectives. A central place fractal can be treated as a monofractal with a single scaling process. However, in the real world, a system of human settlements is a complex network with multiple scaling processes. Simple fractal central place models are not enough to interpret the spatial patterns and evolutionary processes of urban systems, and it is necessary to construct multi-scaling fractal models of urban places. Based on the postulate of intermittent space filling, two typical multifractal models of central places are proposed in this paper. One model is put forward to reflect the process of spatial convergence (aggregation), for which the generalized correlation dimension varies from 0.7306 to 1.3181; the other is presented to describe the process of spatial divergence (diffusion), for which the generalized correlation dimension ranges from 1.6523 to 1.7118. As a case study, an analogy is drawn between t...

  20. An empirical model to form and evolve galaxies in dark matter halos

    CERN Document Server

    Li, Shijie; Yang, Xiaohu; Wang, Huiyuan; Tweed, Dylan; Liu, Chengze; Yang, Lei; Shi, Feng; Lu, Yi; Luo, Wentao; Wei, Jianwen

    2016-01-01

    Based on the star formation histories (SFH) of galaxies in halos of different masses, we develop an empirical model to grow galaxies in dark matter halos. This model has very few ingredients, any of which can be associated with observational data and thus be efficiently assessed. By applying this model to a very high resolution cosmological N-body simulation, we predict a number of galaxy properties that are a very good match to relevant observational data. Namely, for both centrals and satellites, the galaxy stellar mass function (SMF) up to redshift z ≃ 4 and the conditional stellar mass functions (CSMF) in the local universe are in good agreement with observations. In addition, the 2-point correlation is well predicted in the different stellar mass ranges explored by our model. Furthermore, after applying stellar population synthesis models to our stellar composition as a function of redshift, we find that the luminosity functions in the 0.1u, 0.1g, 0.1r, 0.1i and 0.1z bands agree...

  1. Construction of linear empirical core models for pressurized water reactor in-core fuel management

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Aldemir, T. (The Ohio State Univ., Dept. of Mechanical Engineering, Nuclear Engineering Program, 206 West 18th Ave., Columbus, OH (US))

    1988-06-01

An empirical core model construction procedure for pressurized water reactor (PWR) in-core fuel management problems is presented that (a) incorporates the effect of composition changes in all the control zones in the core of a given fuel assembly, (b) is valid at all times during the cycle for a given range of control variables, (c) allows determining the optimal beginning of cycle (BOC) κ∞ distribution as a single linear programming problem, and (d) provides flexibility in the choice of the material zones used to describe core composition. Although the modeling procedure assumes zero BOC burnup, the predicted optimal κ∞ profiles are also applicable to reload cores. In model construction, assembly power fractions and burnup increments during the cycle are regarded as the state (i.e., dependent) variables. Zone enrichments are the control (i.e., independent) variables. The model construction procedure is validated and implemented for the initial core of a PWR to determine the optimal BOC κ∞ profiles for two three-zone scatter loading schemes. The predicted BOC κ∞ profiles agree with the results of other investigators obtained by different modeling techniques.
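
A sketch of how such a linear empirical core model reduces to a single linear program, here via scipy.optimize.linprog; the coefficient matrix, power-peaking limit and enrichment bounds are hypothetical stand-ins for regression results, not values from the paper.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical linear empirical core model: assembly power fractions are an
    # affine function of the three zone enrichments e, i.e. f = A @ e + b,
    # where A and b would come from regression on depletion-code runs.
    A = np.array([[0.020, -0.005, -0.002],
                  [-0.004, 0.018, -0.003],
                  [-0.002, -0.006, 0.015]])
    b = np.array([0.18, 0.22, 0.26])
    f_limit = 0.285                     # illustrative power-peaking limit per zone

    # Illustrative surrogate objective: maximize total enrichment (linprog
    # minimizes, hence the negated cost) subject to the peaking constraints.
    res = linprog(c=-np.ones(3),
                  A_ub=A, b_ub=f_limit - b,        # A e + b <= f_limit
                  bounds=[(1.8, 4.5)] * 3)         # enrichment range, wt% U-235
    print(res.x if res.success else res.message)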

  2. An empirical model to form and evolve galaxies in dark matter halos

    Science.gov (United States)

    Li, Shi-Jie; Zhang, You-Cai; Yang, Xiao-Hu; Wang, Hui-Yuan; Tweed, Dylan; Liu, Cheng-Ze; Yang, Lei; Shi, Feng; Lu, Yi; Luo, Wen-Tao; Wei, Jian-Wen

    2016-08-01

Based on the star formation histories of galaxies in halos with different masses, we develop an empirical model to grow galaxies in dark matter halos. This model has very few ingredients, any of which can be associated with observational data and thus be efficiently assessed. By applying this model to a very high resolution cosmological N-body simulation, we predict a number of galaxy properties that are a very good match to relevant observational data. Namely, for both centrals and satellites, the galaxy stellar mass functions up to redshift z ≃ 4 and the conditional stellar mass functions in the local universe are in good agreement with observations. In addition, the two-point correlation function is well predicted in the different stellar mass ranges explored by our model. Furthermore, after applying stellar population synthesis models to our stellar composition as a function of redshift, we find that the luminosity functions in the 0.1 u, 0.1 g, 0.1 r, 0.1 i and 0.1 z bands agree quite well with the SDSS observational results down to an absolute magnitude of about -17.0. The SDSS conditional luminosity function itself is predicted well. Finally, the cold gas is derived from the star formation rate to predict the HI gas mass within each mock galaxy. We find a remarkably good match to observed HI-to-stellar mass ratios. These features ensure that such galaxy/gas catalogs can be used to generate reliable mock redshift surveys.

  3. An empirical model of ion plasma in the inner magnetosphere derived from CRRES/MICS measurements

    Science.gov (United States)

    Claudepierre, S. G.; Chen, M. W.; Roeder, J. L.; Fennell, J. F.

    2016-12-01

We describe an empirical model of energetic ion plasma (~20-400 keV/q) that is constructed from measurements taken by the Magnetospheric Ion Composition Spectrometer (MICS) instrument that flew on the CRRES spacecraft. This is a unique data set in that it provides energetic ion composition in the near-equatorial ring current region during a very active solar maximum. The model database is binned by energy, equatorial pitch angle, L shell, and magnetic local time and provides unidirectional, differential number fluxes of the major ionic constituents of the inner magnetosphere, such as protons (H+), singly charged oxygen (O+), and singly charged helium (He+). The H+ and O+ model fluxes are examined in detail and are consistent with well-known particle transport effects (e.g., adiabatic heating). We also validate these model fluxes against a number of other ion plasma models that are available in the literature. The primary finding is the elevated levels of energetic O+ flux during the CRRES era. We attribute this to a solar cycle effect, related to the enhanced upwelling and oxygen outflow from the ionosphere that occurs during solar maximum, driven by elevated solar extreme ultraviolet radiation. We briefly discuss the implications that the enhanced O+ environment during the CRRES era may have for other results derived from CRRES observations (e.g., statistical wave distributions).

  4. Multi-objective optimization of empirical hydrological model for streamflow prediction

    Science.gov (United States)

    Guo, Jun; Zhou, Jianzhong; Lu, Jiazheng; Zou, Qiang; Zhang, Huajie; Bi, Sheng

    2014-04-01

Traditional calibration of hydrological models is performed with a single objective function. Practical experience with the calibration of hydrologic models reveals that single objective functions are often inadequate to properly measure all of the characteristics of the hydrologic system. To circumvent this problem, many studies in recent years have investigated the automatic calibration of hydrological models with multi-objective functions. In this paper, the multi-objective evolution algorithm MODE-ACM is introduced to solve the multi-objective optimization of hydrologic models. Moreover, to improve the performance of the MODE-ACM, an Enhanced Pareto Multi-Objective Differential Evolution algorithm named EPMODE is proposed in this research. The efficacy of the MODE-ACM and EPMODE is compared with two state-of-the-art algorithms, NSGA-II and SPEA2, on two case studies. Five test problems are used as the first case study to generate the true Pareto front. This approach is then tested on a typical empirical hydrological model for monthly streamflow forecasting. The results of these case studies show that the EPMODE, as well as MODE-ACM, is effective in solving multi-objective problems and has great potential as an efficient and reliable algorithm for water resources applications.
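
At the heart of the algorithms compared above is a Pareto-dominance test over vectors of objective values. A minimal sketch, with hypothetical objective values and both objectives minimized:

    import numpy as np

    def pareto_front(F):
        """Boolean mask of the non-dominated rows of F (all objectives minimized)."""
        F = np.asarray(F, dtype=float)
        nondominated = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            # Row j dominates row i if it is no worse everywhere and better somewhere.
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if np.any(dominates_i):
                nondominated[i] = False
        return nondominated

    # Hypothetical calibration trade-offs: (1 - NSE, |volume bias|) per parameter set
    F = np.array([[0.20, 0.15], [0.25, 0.05], [0.30, 0.30], [0.18, 0.25]])
    print(pareto_front(F))   # -> [ True  True False  True]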

  5. Reviewing the effort-reward imbalance model: drawing up the balance of 45 empirical studies.

    Science.gov (United States)

    van Vegchel, Natasja; de Jonge, Jan; Bosma, Hans; Schaufeli, Wilmar

    2005-03-01

The present paper provides a review of 45 studies on the Effort-Reward Imbalance (ERI) Model published from 1986 to 2003 (inclusive). In 1986, the ERI Model was introduced by Siegrist et al. (Biological and Psychological Factors in Cardiovascular Disease, Springer, Berlin, 1986, pp. 104-126; Social Science & Medicine 22 (1986) 247). The central tenet of the ERI Model is that an imbalance between (high) efforts and (low) rewards leads to (sustained) strain reactions. Besides efforts and rewards, overcommitment (i.e., a personality characteristic) is a crucial aspect of the model. Essentially, the ERI Model contains three main assumptions, which could be labeled as (1) the extrinsic ERI hypothesis: high efforts in combination with low rewards increase the risk of poor health, (2) the intrinsic overcommitment hypothesis: a high level of overcommitment may increase the risk of poor health, and (3) the interaction hypothesis: employees reporting an extrinsic ERI and a high level of overcommitment have an even higher risk of poor health. The review showed that the extrinsic ERI hypothesis has gained considerable empirical support. Results for overcommitment remain inconsistent, and the moderating effect of overcommitment on the relation between ERI and employee health has been scarcely examined. Based on these review results, suggestions for future research are proposed.

  6. Empirical model predicting the layer thickness and porosity of p-type mesoporous silicon

    Science.gov (United States)

    Wolter, Sascha J.; Geisler, Dennis; Hensen, Jan; Köntges, Marc; Kajari-Schröder, Sarah; Bahnemann, Detlef W.; Brendel, Rolf

    2017-04-01

Porous silicon is a promising material for a wide range of applications because of its versatile layer properties and its convenient preparation by electrochemical etching. Nevertheless, the quantitative dependence of the layer thickness and porosity on the etching process parameters has so far been unknown. We have developed an empirical model to predict the porosity and layer thickness of p-type mesoporous silicon prepared by electrochemical etching. The impact of process parameters such as current density, etching time and concentration of hydrogen fluoride is evaluated by ellipsometry. The main influences on the porosity of the porous silicon are the current density, the etching time and their product, while the etch rate is dominated by the current density, the concentration of hydrogen fluoride and their product. The developed model predicts the resulting layer properties of a given porosification process and can, for example, be used to enhance the utilization of the employed chemicals.
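
A sketch of the kind of regression behind such a model, fitting porosity to the main effects reported above (current density J, etching time t, and their product) by ordinary least squares; the run data are hypothetical, and an etch-rate model in J, HF concentration and their product would be fitted analogously.

    import numpy as np

    # Hypothetical etching runs: current density J (mA cm^-2) and time t (s)
    J = np.array([10.0, 20.0, 30.0, 40.0, 20.0, 30.0])
    t = np.array([60.0, 60.0, 120.0, 120.0, 180.0, 180.0])
    porosity = np.array([38.0, 45.0, 55.0, 62.0, 52.0, 60.0])   # ellipsometry (%)

    # Porosity ~ intercept + J + t + J*t, the effects named in the abstract
    X = np.column_stack([np.ones_like(J), J, t, J * t])
    coef, *_ = np.linalg.lstsq(X, porosity, rcond=None)
    print("porosity coefficients:", np.round(coef, 4))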

  7. Using Grey Production Functions in the Macroeconomic Modelling: An Empirical Application for Romania

    Directory of Open Access Journals (Sweden)

    Ana Michaela ANDREI

    2014-01-01

The work is a development of our earlier studies containing empirical applications of models with a representative agent. The extensions developed in this paper consist of the following: the introduction of the labor market via the use of labor as the second production factor, the use of the GM(1,1) algorithm to adjust the capital and labor data series and to compute a grey Cobb-Douglas production function, and finally the comparison of the results obtained by applying the model to the actual data and to the grey data. The grey production function is estimated using GM(1,1)-adjusted statistical series of the GDP, capital stock and labor data. For the two variants we computed the predictions of the indicators: real GDP, consumption, government expenditures, trade balance, and burden of debt.
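
A compact implementation of the standard GM(1,1) formulation assumed above (accumulated generating operation, least-squares grey parameters a and b, exponential time response); the input series is hypothetical.

    import numpy as np

    def gm11_fit_predict(x, horizon=0):
        """Fit GM(1,1) to a positive series x; return the smoothed series
        plus `horizon` extrapolated points."""
        x = np.asarray(x, dtype=float)
        x1 = np.cumsum(x)                                # accumulated series (1-AGO)
        z = 0.5 * (x1[1:] + x1[:-1])                     # background (mean) sequence
        B = np.column_stack([-z, np.ones_like(z)])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # grey development/input terms
        k = np.arange(len(x) + horizon)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a # response of dx1/dt + a*x1 = b
        return np.concatenate([[x[0]], np.diff(x1_hat)]) # inverse AGO

    gdp = np.array([116.8, 121.7, 128.4, 135.3, 142.1])  # hypothetical index series
    print(np.round(gm11_fit_predict(gdp, horizon=2), 1))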

  8. A simple empirical model for activated sludge thickening in secondary clarifiers.

    Science.gov (United States)

    Giokas, D L; Kim, Youngchul; Paraskevas, P A; Paleologos, E K; Lekkas, T D

    2002-07-01

A simple empirical model for the thickening function of activated sludge secondary clarifiers is presented. The proposed approach relies on the integration of previous models and is based on the phenomenon of dilution of the incoming activated sludge in the feeding well of the settling tanks. The method provides a satisfactory description of sludge stratification within the clarifier. The only requirements are parameters that are readily obtained from the routine analyses performed in an activated sludge plant, thereby eliminating the need for additional experimental or computational effort. The method was tested in a full-scale activated sludge plant, and it was found to describe fairly well the return sludge concentration, the diluted sludge blanket concentration, the sludge blanket solids concentration and the sludge blanket height of full-scale secondary clarifiers.

  9. β-empirical Bayes inference and model diagnosis of microarray data

    Directory of Open Access Journals (Sweden)

    Hossain Mollah Mohammad

    2012-06-01

Background: Microarray data enable the high-throughput survey of mRNA expression profiles at the genomic level; however, the data present a challenging statistical problem because of the large number of transcripts with small sample sizes that are obtained. To reduce the dimensionality, various Bayesian or empirical Bayes hierarchical models have been developed. However, because of the complexity of the microarray data, no model can explain the data fully. It is generally difficult to scrutinize the irregular patterns of expression that are not expected by the usual statistical gene-by-gene models. Results: As an extension of empirical Bayes (EB) procedures, we have developed the β-empirical Bayes (β-EB) approach based on a β-likelihood measure, which can be regarded as an 'evidence-based' weighted (quasi-)likelihood inference. The weight of a transcript t is described as a power function of its likelihood, f_β(y_t|θ). Genes with low likelihoods have unexpected expression patterns and low weights. By assigning low weights to outliers, the inference becomes robust. The value of β, which controls the balance between robustness and efficiency, is selected by maximizing the predictive β0-likelihood by cross-validation. The proposed β-EB approach identified six significant (p < 10^-5) contaminated transcripts as differentially expressed (DE) in normal/tumor tissues from the head and neck of cancer patients. These six genes were all confirmed to be related to cancer; they were not identified as DE genes by the classical EB approach. When applied to the eQTL analysis of Arabidopsis thaliana, the proposed β-EB approach identified some potential master regulators that were missed by the EB approach. Conclusions: The simulation data and real gene expression data showed that the proposed β-EB method was robust against outliers. The distribution of the weights was used to scrutinize the irregular patterns of expression and diagnose the model.
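
A minimal sketch of the β-weighting idea: each observation is weighted by a power of its likelihood, so outlying (contaminated) observations receive weights near zero. The Gaussian density and all parameter values here are assumptions for illustration only.

    import numpy as np
    from scipy.stats import norm

    def beta_weights(y, mu, sigma, beta):
        """Weights proportional to f(y | theta)^beta; outliers get tiny weights."""
        w = norm.pdf(y, mu, sigma) ** beta
        return w / w.max()

    y = np.array([0.1, -0.2, 0.3, 4.0])   # last value mimics a contaminated transcript
    print(np.round(beta_weights(y, mu=0.0, sigma=0.5, beta=0.3), 4))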

  10. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to the different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data, but due to somewhat different error characteristics below about 10 km and above about 20 km, some parameters have to be adjusted. Overall, the observational error model is easily applicable and
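
A sketch of the error model's stated shape, using the tropopause-region constant quoted above for refractivity; the power-law exponent, exponential scale height and transition altitudes below are placeholders, not the paper's fitted values.

    import numpy as np

    def ro_obs_error(z_km, s0, z_bot=10.0, z_top=20.0, p=1.5, H=10.0):
        """Piecewise observational-error profile: constant s0 between z_bot and
        z_top, inverse height power-law below, exponential growth above.
        p and H are illustrative placeholders only."""
        z = np.asarray(z_km, dtype=float)
        err = np.full_like(z, s0)
        lo, hi = z < z_bot, z > z_top
        err[lo] = s0 * (z_bot / z[lo]) ** p
        err[hi] = s0 * np.exp((z[hi] - z_top) / H)
        return err

    z = np.array([4.0, 8.0, 15.0, 30.0, 35.0])
    print(np.round(ro_obs_error(z, s0=0.35), 3))   # refractivity error in %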

  11. EMPIRICAL WEIGHTED MODELLING ON INTER-COUNTY INEQUALITIES EVOLUTION AND TO TEST ECONOMICAL CONVERGENCE IN ROMANIA

    Directory of Open Access Journals (Sweden)

Natalia MOROIANU-DUMITRESCU

    2015-06-01

During the last decades, the regional convergence process in Europe has attracted considerable interest as a highly significant issue, especially after EU enlargement with the New Member States from Central and Eastern Europe. The most usual empirical approaches use β- and σ-convergence, originally developed in a series of neo-classical models. To date, the EU integration process has been shown to be accompanied by an increase in regional inequalities. In order to determine whether a similar increase in inequalities exists between the administrative counties (NUTS3) included in the NUTS2 and NUTS1 regions of Romania, this paper provides empirical modelling of economic convergence that allows evaluating the level and evolution of inter-regional inequalities over more than a decade, from 1995 to 2011. The paper presents the results of a large cross-sectional study of σ-convergence and the weighted coefficient of variation, using GDP and population data obtained from the National Institute of Statistics of Romania. Both the graphical representations, including non-linear regressions, and the associated tables summarizing the numerical values of the main statistical tests demonstrate the impact of pre-accession policy on the economic development of all Romanian NUTS types. The clearly emphasized convergence in the middle time subinterval can be correlated with the drastic pre-accession changes on the economic, political and social levels, and with the opening of the Schengen borders to the Romanian labor force in 2002.
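
A minimal sketch of the population-weighted coefficient of variation used alongside σ-convergence; a decline between years indicates convergence. The county figures are hypothetical.

    import numpy as np

    def weighted_cv(y, pop):
        """Population-weighted coefficient of variation of GDP per capita."""
        w = pop / pop.sum()
        mean = np.sum(w * y)
        return np.sqrt(np.sum(w * (y - mean) ** 2)) / mean

    pop = np.array([0.6, 0.9, 1.2, 0.3])        # county population, millions
    y_1995 = np.array([2.0, 1.5, 3.1, 1.2])     # GDP per capita, thousand EUR
    y_2011 = np.array([6.2, 5.9, 8.0, 5.1])
    print(round(weighted_cv(y_1995, pop), 3),   # -> 0.344
          round(weighted_cv(y_2011, pop), 3))   # -> 0.161, i.e. convergence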

  12. A model of psychological evaluation of educational environment and its empirical indicators

    Directory of Open Access Journals (Sweden)

    E. B. Laktionova

    2013-04-01

The aim of the study is to identify ways of comprehensive psychological assessment of educational environment quality, and of the conditions that affect the positive personal development of its members. The solution to this problem is to develop science-based content and technology support for psychological evaluation of the educational environment. The purpose of the study was the theoretical rationale and empirical testing of a model of psychological examination of the educational environment. The study is based on the assumption that, in order to assess the quality of the educational environment in terms of its personality-developing potential, we need to create a model of psychological examination as a special developmental system, reflected in the personal characteristics of its subjects. The empirical material is based on a study sample of 717 students and 438 teachers from 28 educational institutions that participated in the program of urban pilot sites of the Department of Education of Moscow. In total, 1,155 people took part in the study.

  13. Discussion on climate oscillations: CMIP5 general circulation models versus a semi empirical harmonic model based on astronomical cycles

    CERN Document Server

    Scafetta, Nicola

    2013-01-01

Power spectra of global surface temperature (GST) records reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC (2013), are analyzed and found unable to reconstruct this variability. From 2000 to 2013.5 a GST plateau is observed, while the GCMs predicted a warming rate of about 2 K/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. The climate sensitivity to CO2 doubling should be reduced by half, e.g. from the IPCC-2007 2.0-4.5 K range to 1.0-2.3 K with a 1.5 °C median. Modern paleoclimatic temperature reconstructions also yield the same conclusion. The observed natural oscillations could be driven by astronomical forcings. Herein I propose a semi-empirical climate model made of six specific astronomical oscillations as constructors of the natural climate variability spanning ...
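
A sketch of the harmonic-regression idea behind such a semi-empirical model: sinusoids at the quoted periods plus a secular trend, fitted by linear least squares; the synthetic "observed" series merely stands in for a GST record.

    import numpy as np

    periods = np.array([9.1, 10.5, 20.0, 60.0])   # years, from the GST spectra
    t = np.arange(1880.0, 2014.0)                 # annual time axis

    # Design matrix: intercept, linear trend, and a cosine/sine pair per period
    cols = [np.ones_like(t), t - t.mean()]
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    X = np.column_stack(cols)

    rng = np.random.default_rng(0)                # synthetic stand-in anomaly series
    y = (0.005 * (t - t.mean()) + 0.1 * np.cos(2 * np.pi * t / 60.0)
         + 0.05 * rng.standard_normal(t.size))

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    amps = np.hypot(coef[2::2], coef[3::2])       # amplitude of each harmonic
    print(dict(zip(periods, np.round(amps, 3))))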

  14. An empirical assessment of exposure measurement error and effect attenuation in bipollutant epidemiologic models.

    Science.gov (United States)

    Dionisio, Kathie L; Baxter, Lisa K; Chang, Howard H

    2014-11-01

    Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from multipollutant models difficult to interpret. We aimed to quantify relationships between multiple pollutants and their associated exposure errors across metrics of exposure and to use empirical values to evaluate potential attenuation of coefficients in epidemiologic models. We used three daily exposure metrics (central-site measurements, air quality model estimates, and population exposure model estimates) for 193 ZIP codes in the Atlanta, Georgia, metropolitan area from 1999 through 2002 for PM2.5 and its components (EC and SO4), as well as O3, CO, and NOx, to construct three types of exposure error: δspatial (comparing air quality model estimates to central-site measurements), δpopulation (comparing population exposure model estimates to air quality model estimates), and δtotal (comparing population exposure model estimates to central-site measurements). We compared exposure metrics and exposure errors within and across pollutants and derived attenuation factors (ratio of observed to true coefficient for pollutant of interest) for single- and bipollutant model coefficients. Pollutant concentrations and their exposure errors were moderately to highly correlated (typically, > 0.5), especially for CO, NOx, and EC (i.e., "local" pollutants); correlations differed across exposure metrics and types of exposure error. Spatial variability was evident, with variance of exposure error for local pollutants ranging from 0.25 to 0.83 for δspatial and δtotal. The attenuation of model coefficients in single- and bipollutant epidemiologic models relative to the true value differed across types of exposure error, pollutants, and space. Under a classical exposure-error framework, attenuation may be
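
A minimal simulation of the classical-error attenuation discussed above: with independent classical error, the observed single-pollutant coefficient shrinks toward zero by the factor lambda = Var(x_true) / (Var(x_true) + Var(delta)). All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    x_true = rng.normal(0.0, 1.0, n)          # "true" exposure (e.g., a PM2.5 z-score)
    delta = rng.normal(0.0, 0.7, n)           # classical exposure error
    x_obs = x_true + delta
    beta_true = 0.05
    y = beta_true * x_true + rng.normal(0.0, 0.2, n)   # simulated health outcome

    beta_obs = np.polyfit(x_obs, y, 1)[0]              # naive regression slope
    lam = x_true.var() / (x_true.var() + delta.var())  # expected attenuation factor
    print(round(beta_obs / beta_true, 3), round(lam, 3))   # both ~ 0.67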

  15. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
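
A sketch of the fitting step, assuming, for illustration only, a saturating-exponential step response in place of the actual EMPD analytical solution; the parameter names and data are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def step_response(t, m_inf, tau):
        """Moisture absorbed after a step change in RH: saturating exponential
        (an illustrative stand-in for the EMPD analytical solution)."""
        return m_inf * (1.0 - np.exp(-t / tau))

    t_hr = np.linspace(0.0, 48.0, 25)           # hours since the RH step
    noise = 0.05 * np.random.default_rng(2).standard_normal(t_hr.size)
    measured = step_response(t_hr, 2.4, 9.0) + noise

    params, _ = curve_fit(step_response, t_hr, measured, p0=[1.0, 5.0])
    print(np.round(params, 2))   # -> roughly [2.4, 9.0] (kg absorbed, hours)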

  16. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hancock, E. [Mountain Energy Partnership, Longmont, CO (United States)

    2014-08-01

Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  17. An Empirical Study on the Determinants of Anxiety in Gift-Giving Behavior: An Expansion of Wooten's Model

    OpenAIRE

    Ke Han; Takahiro Chiba; Shingoh Iketani; Akinori ONO

    2008-01-01

Why do so many givers become anxious in gift giving? Wooten (2000) provided an answer to this question by proposing a model of gifting anxiety. However, his model lacks quantitative analysis to support its empirical adequacy. This study aims to expand Wooten's model with nine additional determinants of gifting anxiety and to verify the expanded model by conducting quantitative analysis based on consumer surveys. The results show that givers' gifting anxiety arises when highly motivated to impre...

  18. On the Data Mining Technology Applied to Active Marketing Model of International Luxury Marketing Strategy in China— An Empirical Analysis

    OpenAIRE

    Qishen Zhou; Shanhui Wang; Zuowei Yin

    2013-01-01

This paper emphasizes the importance of active marketing in customer relationship management. In particular, data mining technology is applied to establish an active marketing model to empirically analyze the condition of the AH Jewelry Company. Michael Porter's Five Forces Model is employed to assess and calculate the similarity in the active marketing model. Then, a questionnaire analysis of the customer relationship management model is carried out to explain the target market and t...

  19. [The Brodsky & Brodsky risk factor model of schizophrenia--an empirical contribution].

    Science.gov (United States)

    Mensching, M; Lamberti, G; Petermann, F

    1996-01-01

In 1981, the Canadian psychologists Patricia and Marvin Brodsky published an article about a model integrating risk variables involved in the development of schizophrenia, a model that is important but not well known in German-speaking countries. On the one hand, this model, which essentially describes the interaction of four risk variables (mental health of the mother, neonatal status, temperament, and mothering style), is presented here for the interested reader. On the other hand, the consequences of the risk variable "mothering style" are empirically examined in a retrospective study. Using a standardized interview containing all essential variables of Brodsky's risk model, 73 patients suffering from schizophrenia who were inpatients at the Rheinische Landesklinik Bonn were examined, and the interview data were evaluated with regard to their content against a control group of 26 healthy persons. With regard to the "schizophrenia spectrum", the proportion of so-called "superphrenics" (defined as artistically and musically gifted offspring of a schizophrenic mother) was not significantly increased in the schizophrenic sample (17.8% vs. 7.7%), but the comparison of mothering styles (empathic-contingent vs. non-empathic and non-contingent) revealed a statistically significant difference between the groups. Finally, the data on the risk variable "mothering style" are discussed in the light of the Brodsky model, and the particular problems of reliably and validly operationalising and assessing the risk variables are pointed out.

  20. Phoneme restoration and empirical coverage of interactive activation and adaptive resonance models of human speech processing.

    Science.gov (United States)

    Magnuson, James S

    2015-03-01

    Grossberg and Kazerounian [(2011). J. Acoust. Soc. Am. 130, 440-460] present a model of sequence representation for spoken word recognition, the cARTWORD model, which simulates essential aspects of phoneme restoration. Grossberg and Kazerounian also include simulations with the TRACE model presented by McClelland and Elman [(1986). Cognit. Psychol. 18, 1-86] that seem to indicate that TRACE cannot simulate phoneme restoration. Grossberg and Kazerounian also claim cARTWORD should be preferred to TRACE because of TRACE's implausible approach to sequence representation (reduplication of time-specific units) and use of non-modulatory feedback (i.e., without position-specific bottom-up support). This paper responds to Grossberg and Kazerounian first with TRACE simulations that account for phoneme restoration when appropriately constructed noise is used (and with minor changes to TRACE phoneme definitions), then reviews the case for reduplicated units and feedback as implemented in TRACE, as well as TRACE's broad and deep coverage of empirical data. Finally, it is argued that cARTWORD is not comparable to TRACE because cARTWORD cannot represent sequences with repeated elements, has only been implemented with small phoneme and lexical inventories, and has been applied to only one phenomenon (phoneme restoration). Without evidence that cARTWORD captures a similar range and detail of human spoken language processing as alternative models, it is premature to prefer cARTWORD to TRACE.

  1. Empirical model of Skeletonema costatum photosynthetic rate, with applications in the San Francisco Bay estuary

    Science.gov (United States)

    Cloern, J.E.

    1978-01-01

An empirical model of Skeletonema costatum photosynthetic rate is developed and fit to measurements of photosynthesis selected from the literature. Because the model acknowledges the existence of: 1) a light-temperature interaction (by allowing optimum irradiance to vary with temperature), 2) light inhibition, 3) temperature inhibition, and 4) a salinity effect, it accurately estimates photosynthetic rates measured over a wide range of temperature, light intensity, and salinity. Integration of the predicted instantaneous rate of photosynthesis with time and depth yields daily net carbon assimilation (pg C cell^-1 day^-1) in a mixed layer of specified depth, when salinity, temperature, daily irradiance and extinction coefficient are known. The assumption of a constant carbon quota (pg C cell^-1) allows for prediction of the mean specific growth rate (day^-1), which can be used in numerical models of Skeletonema costatum population dynamics. Application of the model to northern San Francisco Bay clearly demonstrates the limitation of growth by low light availability, and suggests that the large population densities of S. costatum observed during summer months are not the result of active growth in the central deep channels (where growth rates are consistently predicted to be negative). But predicted growth rates in the lateral shallows are positive during summer and fall, thus offering a testable hypothesis that shoals are the only sites of active population growth by S. costatum (and perhaps other neritic diatoms) in the northern reach of San Francisco Bay. © 1978.
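
A sketch of a photosynthesis model with these qualitative features, using Steele's light-inhibition curve with a temperature-dependent optimum irradiance and a crude salinity factor; the functional forms and coefficients are illustrative stand-ins, not Cloern's fitted model.

    import numpy as np

    def photosynthesis(I, T, S, p_max=1.0):
        """Relative photosynthetic rate: Steele light curve, temperature-dependent
        optimum irradiance, temperature inhibition, and a salinity effect.
        All coefficients are illustrative placeholders."""
        I_opt = 50.0 + 10.0 * T                        # optimum irradiance rises with T
        light = (I / I_opt) * np.exp(1.0 - I / I_opt)  # Steele form, inhibits at high I
        temp = np.exp(-((T - 18.0) / 8.0) ** 2)        # temperature inhibition
        sal = np.clip(S / 25.0, 0.0, 1.0)              # crude salinity limitation
        return p_max * light * temp * sal

    # Rate at 200 uE m^-2 s^-1, 15 degC, salinity 20 psu
    print(round(float(photosynthesis(200.0, 15.0, 20.0)), 3))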

  2. EMPIRICAL MODELS FOR PERFORMANCE OF DRIPPERS APPLYING CASHEW NUT PROCESSING WASTEWATER

    Directory of Open Access Journals (Sweden)

    KETSON BRUNO DA SILVA

    2016-01-01

The objective of this work was to develop empirical models for the hydraulic performance of drippers operating with cashew nut processing wastewater as a function of operating time, operating pressure and effluent quality. The experiment consisted of two factors: types of drippers (D1 = 1.65 L h^-1, D2 = 2.00 L h^-1 and D3 = 4.00 L h^-1) and operating pressures (70, 140, 210 and 280 kPa), with three replications. The flow variation coefficient (FVC), distribution uniformity coefficient (DUC) and the physicochemical and biological characteristics of the effluent were evaluated every 20 hours until completing 160 hours of operation. Data were interpreted through simple and multiple linear stepwise regression models. The regression models fitted to the FVC and DUC as a function of operating time were square root, linear and quadratic, with 17%, 17% and 8%, and 17%, 17% and 0%, respectively. The regression models fitted to the FVC and DUC as a function of operating pressure were square root, linear and quadratic, with 11%, 22% and 0%, and 0%, 22% and 11%, respectively. Multiple linear regressions showed that the dissolved solids content is the main wastewater characteristic interfering with the FVC and DUC values of the drip units D1 (1.65 L h^-1) and D3 (4.00 L h^-1) operating at a working pressure of 70 kPa (P1).
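
The two hydraulic indices can be computed from measured emitter discharges as follows, with the FVC as a coefficient of variation and the DUC in the Christiansen style; the flow readings are hypothetical.

    import numpy as np

    def fvc(q):
        """Flow variation coefficient: standard deviation over mean discharge."""
        q = np.asarray(q, dtype=float)
        return q.std(ddof=1) / q.mean()

    def duc(q):
        """Christiansen-type distribution uniformity coefficient (fraction)."""
        q = np.asarray(q, dtype=float)
        return 1.0 - np.abs(q - q.mean()).mean() / q.mean()

    flows = np.array([1.62, 1.58, 1.65, 1.49, 1.66, 1.60])   # L/h, hypothetical D1 test
    print(round(fvc(flows), 3), round(duc(flows), 3))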

  3. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    Science.gov (United States)

    Imbers, J.; Lopez, A.; Huntingford, C.; Allen, M. R.

    2013-04-01

This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractional differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes.

  4. Empirical evaluation reveals best fit of a logistic mutation model for human Y-chromosomal microsatellites.

    Science.gov (United States)

    Jochens, Arne; Caliebe, Amke; Rösler, Uwe; Krawczak, Michael

    2011-12-01

The rate of microsatellite mutation is dependent upon both the allele length and the repeat motif, but the exact nature of this relationship is still unknown. We analyzed data on the inheritance of human Y-chromosomal microsatellites in father-son duos, taken from 24 published reports and comprising 15,285 directly observable meioses. At the six microsatellites analyzed (DYS19, DYS389I, DYS390, DYS391, DYS392, and DYS393), a total of 162 mutations were observed. For each locus, we employed a maximum-likelihood approach to evaluate several single-step mutation models on the basis of the data. For five of the six loci considered, a novel logistic mutation model was found to provide the best fit according to Akaike's information criterion. This implies that the mutation probability at the loci increases (nonlinearly) with allele length at a rate that differs between upward and downward mutations. For DYS392, the best fit was provided by a linear model in which upward and downward mutation probabilities increase equally with allele length. This is the first study to empirically compare different microsatellite mutation models in a locus-specific fashion.
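
A sketch of a logistic single-step mutation model of the kind favored above, with separate parameter sets for upward and downward mutations so their probabilities can rise at different rates with allele length; all parameter values are hypothetical.

    import numpy as np

    def mutation_prob(length, c, a, b):
        """Logistic mutation probability versus allele length (repeat count):
        p = c / (1 + exp(-(a + b * length)))."""
        length = np.asarray(length, dtype=float)
        return c / (1.0 + np.exp(-(a + b * length)))

    lengths = np.arange(10, 31)
    p_up = mutation_prob(lengths, c=0.01, a=-8.0, b=0.35)    # gains rise faster
    p_down = mutation_prob(lengths, c=0.01, a=-9.0, b=0.30)  # losses rise more slowly
    print(np.round(p_up[::5], 5))
    print(np.round(p_down[::5], 5))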

  5. Testing the robustness of the anthropogenic climate change detection statements using different empirical models

    KAUST Repository

    Imbers, J.

    2013-04-27

This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractional differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.

  6. Investigating the self-organization of debris flows: theory, modelling, and empirical work

    Science.gov (United States)

    von Elverfeldt, Kirsten; Keiler, Margreth; Elmenreich, Wilfried; Fehárvári, István; Zhevzhyk, Sergii

    2014-05-01

Here we present the conceptual framework of an interdisciplinary project on the theory, empirics, and modelling of the self-organisation mechanisms within debris flows. Although debris flows cause severe damage in mountainous regions such as the Alps, their process behaviour is still not well understood. This is mainly due to the process dynamics of debris flows: erosion and material entrainment are essential for their destructive power, and because of this destructiveness it is nearly impossible to measure and observe these mechanisms in action. Hence, the interactions between channel bed and debris flow remain largely unknown, while this knowledge is crucial for the understanding of debris flow behaviour. Furthermore, while these internal parameter interactions change during an event, they at the same time govern the temporal and spatial evolution of a given event. This project aims to answer some of these open questions by bringing together theory, empirical work, and modelling of debris flows. It especially aims at explaining why process types switch along the flow path during an event, e.g. the change from a debris flow to a hyperconcentrated flow and back. A second focus is the question of why debris flows sometimes exhibit strong erosion and sediment mobilisation during an event and at other times do not. A promising theoretical framework for the analysis of these observations is that of self-organizing systems, and especially Haken's theory of synergetics. Synergetics is an interdisciplinary theory of open systems that are characterized by many individual, yet interacting parts, resulting in spatio-temporal structures. We hypothesize that debris flows can successfully be analysed within this theoretical framework. In order to test this hypothesis, an innovative modelling approach is chosen in combination with detailed field work. In self-organising systems the interactions of the system

  7. Reciprocating and Screw Compressor semi-empirical models for establishing minimum energy performance standards

    Science.gov (United States)

    Javed, Hassan; Armstrong, Peter

    2015-08-01

The efficiency bar for a Minimum Energy Performance Standard (MEPS) generally aims to minimize the energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. The performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency of a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model together with a bi-cubic used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting so as to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (from a typical building in the climate of interest) and the specific power for the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and the annual cooling load, together with chiller component cost data, are used to find optimal chiller designs and to perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
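
A sketch of the UOC penalty for an ideal gas: a machine with fixed built-in volume ratio vi compresses internally to vi^k times suction pressure and then corrects to line pressure at constant volume; the flow, friction and electrical losses handled by the bi-cubic are omitted, and vi and k are illustrative.

    import numpy as np

    def isentropic_eff_uoc(pr, vi, k=1.15):
        """Isentropic efficiency from under/over-compression alone
        (ideal gas, work per unit of suction flow work p1*v1)."""
        p2i = vi ** k                                        # pressure at port opening
        w_ideal = k / (k - 1) * (pr ** ((k - 1) / k) - 1.0)  # matched compression
        w_built_in = k / (k - 1) * (vi ** (k - 1) - 1.0)     # internal compression
        w_actual = w_built_in + (pr - p2i) / vi              # constant-volume correction
        return w_ideal / w_actual

    for pr in (2.0, 3.5, 6.0):    # vi**k ~ 3.5: over-, near-matched, under-compression
        print(pr, round(isentropic_eff_uoc(pr, vi=3.0), 3))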

  8. A Semi-empirical Model of the Stratosphere in the Climate System

    Science.gov (United States)

    Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.

    2014-12-01

Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmospheric-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.

  9. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated a positive and significant linear correspondence (R2 = 0.75) between the modeled and empirical estimates, with EEMT expressed in MJ m^-2 yr^-1. Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of the energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was a significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  10. A NEW SEMI-EMPIRICAL AMBIENT TO EFFECTIVE DOSE CONVERSION MODEL FOR THE PREDICTIVE CODE FOR AIRCREW RADIATION EXPOSURE (PCAIRE).

    Science.gov (United States)

    Dumouchel, T; McCall, M; Lemay, F; Bennett, L; Lewis, B; Bean, M

    2016-12-01

    The Predictive Code for Aircrew Radiation Exposure (PCAIRE) is a semi-empirical code that estimates both ambient dose equivalent, based on years of on-board measurements, and effective dose to aircrew. Currently, PCAIRE estimates effective dose by converting the ambient dose equivalent to effective dose (E/H) using a model that is based on radiation transport calculations and on the radiation weighting factors recommended in International Commission on Radiological Protection (ICRP) 60. In this study, a new semi-empirical E/H model is proposed to replace the existing transport calculation models. The new model is based on flight data measured using a tissue-equivalent proportional counter (TEPC). The measured flight TEPC data are separated into a low- and a high-lineal-energy spectrum using an amplitude-weighted (137)Cs TEPC spectrum. The high-lineal-energy spectrum is determined by subtracting the low-lineal-energy spectrum from the measured flight TEPC spectrum. With knowledge of E/H for the low- and high-lineal-energy spectra, the total E/H is estimated for a given flight altitude and geographic location. The semi-empirical E/H model also uses new radiation weighting factors to align the model with the most recent ICRP 103 recommendations. The ICRP 103-based semi-empirical effective dose model predicts that there is a ∼30 % reduction in dose in comparison with the ICRP 60-based model. Furthermore, the ambient dose equivalent is now a more conservative dose estimate for jet aircraft altitudes in the range of 7-13 km (FL230-430). This new semi-empirical E/H model is validated against E/H predicted from a Monte Carlo N-Particle transport code simulation of cosmic ray propagation through the Earth's atmosphere. Its implementation allows PCAIRE to provide an accurate semi-empirical estimate of the effective dose.

  11. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    Science.gov (United States)

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  12. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets, which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed a substantial reduction in the fractional differencing parameters after the inclusion of structural changes during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden changes in volatility performed better in the estimation and specification evaluations.
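
The long-memory ingredient of such models is the fractional difference operator (1 - L)^d, whose binomial weights can be generated recursively; a short sketch (the value of d is illustrative):

    import numpy as np

    def frac_diff_weights(d, n):
        """First n coefficients of (1 - L)^d: pi_0 = 1, pi_k = pi_{k-1}*(k-1-d)/k."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        """Truncated fractional difference of a series."""
        x = np.asarray(x, dtype=float)
        w = frac_diff_weights(d, len(x))
        return np.array([np.dot(w[:k + 1], x[k::-1]) for k in range(len(x))])

    # d in (0, 0.5) implies slowly decaying weights (long memory); ignoring
    # structural breaks biases the estimated d upward, the article's point.
    print(np.round(frac_diff_weights(0.4, 6), 4))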

  13. Numerical modeling study into the climatic impact of deforestation associated with the fall of Mayan Empire

    Science.gov (United States)

    Kongoli, C.; Nair, U. S.; Welch, R. M.; Sever, T. L.; Irwin, D.; Pielke, R. A.

    2002-05-01

The collapse of the Mayan Empire, which flourished from 250 to 900 AD in the Southern Mexico and Central American regions, is one of the greatest demographic disasters in human history. Early studies of the Mayan civilization found a cessation in the dating and inscription of monuments in the ninth century. Later studies suggest a two-thirds decline in a Mayan population numbering millions between 830 and 900 AD. The reason for this population decline and the subsequent collapse of the Mayan Empire in the ninth century is not known. A mass exodus of population has been ruled out, since the population in the surrounding regions remained stable during this time period. Other suggested reasons for this population decline include conflict, disease, warfare, and climate change. However, studies of historical pollen data indicate increased rates of deforestation starting in the fifth century, with most of the trees in the region cut down by the ninth century. Lake core sediments document a major drought around 800 AD that was one of the most intense droughts in an 8000-year record. A recent study on climatic reconstruction from pollen records also indicates that the climate became drier following the collapse of the Mayan Empire, and suggests that this may be due to the cutting down of trees. In the present study, the effect of forest clearing on the regional climate in the Mayan region is examined using the Colorado State University Regional Atmospheric Modeling System (CSU RAMS). RAMS is used to simulate rainfall over the Mayan region for conditions where the surface is assumed to be either completely forested or completely deforested. Simulations are performed for two months, one in the wet season and one in the dry season. Comparison of RAMS-simulated rainfall between the completely forested and deforested scenarios is expected to provide bounds on the regional climate change brought about by deforestation. Further details will be presented at the conference.

  14. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    Science.gov (United States)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-02-01

Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with increasing magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
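
A sketch of the kind of predictive form such studies fit, here a fixed-effects least-squares stand-in for the NLME estimation: ln D = c1 + c2*M + c3*ln R + c4*S, with synthetic records (the negative site coefficient mirrors the rock-versus-soil ordering above only by construction).

    import numpy as np

    # Synthetic records: magnitude M, hypocentral distance R (km), site flag S (1 = soil)
    rng = np.random.default_rng(3)
    M = rng.uniform(3.0, 6.5, 200)
    R = rng.uniform(4.0, 1000.0, 200)
    S = rng.integers(0, 2, 200).astype(float)
    lnD = -1.0 + 0.6 * M + 0.3 * np.log(R) - 0.2 * S + 0.3 * rng.standard_normal(200)

    X = np.column_stack([np.ones_like(M), M, np.log(R), S])
    coef, *_ = np.linalg.lstsq(X, lnD, rcond=None)
    print(np.round(coef, 2))   # recovers roughly [-1.0, 0.6, 0.3, -0.2]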

  15. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    Science.gov (United States)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed) as functions of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with increasing magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the

  16. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    Science.gov (United States)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits, including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, and air pollution mitigation. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications and thus carry inherent assumptions that may only rarely apply to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as a critical yet poorly studied component of urban green space performance prediction, and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small-scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the building's 500 m2 green roof, with a roof membrane, drainage layer, and 10 cm media depth, planted with a variety of Sedum species. Soil moisture readings and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass-balance approach during periods without precipitation and drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have
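
    For context, one of the empirically based PET equations that studies of this kind typically evaluate is the Hargreaves-Samani model; the sketch below implements its standard form, with the extraterrestrial radiation value supplied by the caller. The abstract does not say which equations were compared, so this is an assumed example.

```python
# A minimal example of one widely used empirically based PET model,
# Hargreaves-Samani (1985): ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
# with Ra expressed as equivalent evaporation (mm/day).
import math

def hargreaves_samani_pet(tmax_c: float, tmin_c: float, ra_mm_day: float) -> float:
    """Potential evapotranspiration in mm/day."""
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

print(f"PET ~ {hargreaves_samani_pet(29.0, 18.0, 15.0):.2f} mm/day")
```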

  17. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-09-01

    The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS RO bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between the data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above 35 km the increase of the CHAMP raw bending angle observational error is more pronounced than that of GRACE-A and F3C, leading to a larger observational error of about 1% at 42 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process, particularly under conditions when the ionospheric residual is large. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to the different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters for the altitude range of 4 km to 35 km, and up to 50 km for UCAR raw bending angle and refractivity. In the model, which accounts for vertical, latitudinal, and seasonal variations, a constant error is adopted around the tropopause region, amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases
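
    A minimal sketch of an analytical error model with the shape described above, constant in a band around the tropopause and growing as an inverse-height power law below it, is given below. The band limits, power-law exponent, and the exponential growth assumed above the band are illustrative stand-ins, since the abstract is truncated at that point; only the 0.35% refractivity error is taken from the text.

```python
# Sketch of the analytical error model described above (parameters other
# than s0 are assumptions, not the published values).
import numpy as np

def refractivity_error(z_km, s0=0.35, z_lo=10.0, z_hi=20.0, p=1.5, H=10.0):
    """Percent error in refractivity as a function of altitude (km)."""
    z = np.asarray(z_km, dtype=float)
    err = np.full_like(z, s0)                        # constant in the band
    below = z < z_lo
    err[below] = s0 * (z_lo / z[below]) ** p         # inverse-height power law
    above = z > z_hi
    err[above] = s0 * np.exp((z[above] - z_hi) / H)  # assumed exponential growth
    return err

print(refractivity_error([5.0, 15.0, 30.0]).round(2))
```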

  18. Mathematical method to build an empirical model for inhaled anesthetic agent wash-in

    Directory of Open Access Journals (Sweden)

    Grouls René EJ

    2011-06-01

    Abstract Background The wide range of fresh gas flow - vaporizer setting (FGF - FD) combinations used by different anesthesiologists during the wash-in period of inhaled anesthetics indicates that the selection of FGF and FD is based on habit and personal experience. An empirical model could rationalize FGF - FD selection during wash-in. Methods During model derivation, 50 ASA PS I-II patients received desflurane in O2 with an ADU® anesthesia machine with a random combination of a fixed FGF - FD setting. The resulting course of the end-expired desflurane concentration (FA) was modeled with Excel Solver, with patient age, height, and weight as covariates; NONMEM was used to check for parsimony. The resulting equation was solved for FD and prospectively tested by having the formula calculate the FD to be used by the anesthesiologist after randomly selecting a FGF, a target FA (FAt), and a specified time interval (1 - 5 min after turning on the vaporizer) after which FAt had to be reached. The following targets were tested: desflurane FAt 3.5% after 3.5 min (n = 40), 5% after 5 min (n = 37), and 6% after 4.5 min (n = 37). Results Solving the equation derived during model development for FD yields FD = -(e^(-FGF*-0.23) + FGF*0.24*(e^(FGF*-0.23)*FAt*Ht*0.1 - e^(FGF*-0.23)*FGF*2.55 + 40.46 - e^(FGF*-0.23)*40.46 + e^(FGF*-0.23 + Time/-4.08)*40.46 - e^(Time/-4.08)*40.46)) / ((-1 + e^(FGF*0.24))*(-1 + e^(Time/-4.08))*39.29). Only height (Ht) could be retained as a significant covariate. Median performance error and median absolute performance error were -2.9 and 7.0% in the 3.5% after 3.5 min group, -3.4 and 11.4% in the 5% after 5 min group, and -16.2 and 16.2% in the 6% after 4.5 min group, respectively. Conclusions An empirical model can be used to predict the FGF - FD combinations that attain a target end-expired anesthetic agent concentration with clinically acceptable accuracy within the first 5 min of the start of administration. The sequences are easily calculated in an Excel file and simple to
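
    The inversion step, solving a fitted wash-in model for the FD that reaches a target FA at a chosen time, can be sketched as below. The single-exponential wash-in model used here is a hypothetical stand-in, not the published desflurane equation, whose exact parenthesization is uncertain in this record.

```python
# Illustrative sketch of the paper's strategy: given some fitted wash-in
# model FA(t; FGF, FD), numerically invert it for the vaporizer setting FD
# that reaches a target FA at a chosen time. The model below is a
# hypothetical single-exponential stand-in.
import math
from scipy.optimize import brentq

def fa_model(t_min, fgf_l_min, fd_pct, tau0=4.0):
    """Hypothetical end-expired fraction: faster wash-in at higher FGF."""
    tau = tau0 / max(fgf_l_min, 1e-6)
    return fd_pct * (1.0 - math.exp(-t_min / tau))

def solve_fd(target_fa, t_min, fgf_l_min):
    # Find FD in [0, 18] vol% such that FA(t) hits the target.
    return brentq(lambda fd: fa_model(t_min, fgf_l_min, fd) - target_fa, 0.0, 18.0)

print(f"FD ~ {solve_fd(3.5, 3.5, 2.0):.2f} vol% for FAt 3.5% at 3.5 min, FGF 2 L/min")
```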

  19. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    To highlight the shortcomings of conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) for bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power levels, and drying times. Kinetics was analyzed through empirical modeling of the drying data against 10 conventional thin-layer drying equations, which were further improvised through the incorporation of Arrhenius-, exponential-, and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica gave similar yields despite a drastic reduction in extraction time (210 s, as against an average of 10 h in the Soxhlet apparatus). The extract yield for MAE of C. citratus was higher than for the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s, and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
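
    As an illustration of the kinetics step, the sketch below fits the basic semi-empirical Midilli equation, MR(t) = a*exp(-k*t^n) + b*t, to synthetic moisture-ratio data; the study's 81 derived variants extend this form with Arrhenius-, exponential-, and linear-type expressions.

```python
# Sketch of the drying-kinetics step: fit the Midilli thin-layer model
# MR(t) = a*exp(-k*t**n) + b*t with scipy.optimize.curve_fit.
# The data points here are synthetic stand-ins, not the measured curves.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.1, 10.0, 25)  # drying time, min
mr = midilli(t, 1.0, 0.35, 1.1, -0.004) \
     + np.random.default_rng(1).normal(0, 0.01, t.size)

params, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.1, 1.0, 0.0], maxfev=10000)
a, k, n, b = params
print(f"a={a:.3f}, k={k:.3f}, n={n:.3f}, b={b:.4f}")
```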

  20. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower-level genetic programs are used to optimize coevolving populations in parallel, while the higher-level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower-level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation, or combination of mutations, that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program on a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent.
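
    A toy analogue of the two-level idea, a lower-level evolutionary search whose mutation rate is tuned by a higher-level loop based on realized fitness gains, is sketched below. It uses a plain genetic algorithm on a parameter-fitting task rather than genetic programming over function space, so it is schematic only.

```python
# Toy sketch of the two-level scheme: a lower-level GA fits parameters,
# while a higher-level loop adapts the mutation rate from fitness gains.
# Schematic analogue of the PMLGP, not the authors' implementation.
import random

TARGET = [1.5, -0.7, 2.0]  # "true" parameters to recover

def fitness(ind):
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def evolve(pop, mut_rate, gens=30):
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        children = [[g + random.gauss(0, 1) if random.random() < mut_rate else g
                     for g in p] for p in parents]
        pop = parents + children
    return pop

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
mut_rate, best = 0.5, float("-inf")
for _ in range(10):  # higher-level loop: adapt the mutation probability
    pop = evolve(pop, mut_rate)
    new_best = fitness(max(pop, key=fitness))
    mut_rate = mut_rate * 0.8 if new_best > best else min(1.0, mut_rate * 1.25)
    best = new_best
print("best individual:", [round(g, 2) for g in max(pop, key=fitness)])
```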

  1. Empirical solutions to the high-redshift overproduction of stars in modeled dwarf galaxies

    CERN Document Server

    White, Catherine E; Ferguson, Henry C

    2014-01-01

    Both numerical hydrodynamic and semi-analytic cosmological models of galaxy formation struggle to match observed star formation histories of galaxies in low-mass halos (M$_{\rm H} \lesssim 10^{11}$ M$_\odot$), predicting more star formation at high redshift and less star formation at low redshift than observed. The fundamental problem is that galaxies' gas accretion and star formation rates are too closely coupled in the models: the accretion rate largely drives the star formation rate. Observations point to gas accretion rates that outpace star formation at high redshift, resulting in a buildup of gas and a delay in star formation until lower redshifts. We present three empirical adjustments of standard recipes in a semi-analytic model motivated by three physical scenarios that could cause this decoupling: 1) the mass loading factors of outflows driven by stellar feedback may have a steeper dependence on halo mass at earlier times, 2) the efficiency of star formation may be lower in low mass halos at high redsh...

  2. MERGANSER: an empirical model to predict fish and loon mercury in New England lakes

    Science.gov (United States)

    Shanley, James B.; Moore, Richard; Smith, Richard A.; Miller, Eric K.; Simcox, Alison; Kamman, Neil; Nacci, Diane; Robinson, Keith; Johnston, John M.; Hughes, Melissa M.; Johnston, Craig; Evers, David; Williams, Kate; Graham, John; King, Susannah

    2012-01-01

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes. We modeled lakes larger than 8 ha (4404 lakes), using 3470 fish (12 species) and 253 loon Hg concentrations from 420 lakes. MERGANSER predictor variables included Hg deposition, watershed alkalinity, percent wetlands, percent forest canopy, percent agriculture, drainage area, population density, mean annual air temperature, and watershed slope. The model returns fish or loon Hg for user-entered species and fish length. MERGANSER explained 63% of the variance in fish and loon Hg concentrations. MERGANSER predicted that 32-cm smallmouth bass had a median Hg concentration of 0.53 μg g⁻¹ (root-mean-square error 0.27 μg g⁻¹) and exceeded EPA's recommended fish Hg criterion of 0.3 μg g⁻¹ in 90% of New England lakes. Common loon had a median Hg concentration of 1.07 μg g⁻¹ and was in the moderate or higher risk category of >1 μg g⁻¹ Hg in 58% of New England lakes. MERGANSER can be applied to target fish advisories to specific unmonitored lakes, and for scenario evaluation, such as the effect of changes in Hg deposition, land use, or warmer climate on fish and loon mercury.
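
    The regression machinery involved is ordinary least squares on (log-transformed) concentrations; a minimal sketch with invented placeholder data, not MERGANSER's actual predictors or coefficients, is:

```python
# Minimal analogue of the MERGANSER approach: OLS multiple regression of
# log fish Hg on watershed predictors. All values below are fabricated
# placeholders for illustration only.
import numpy as np

# columns: Hg deposition, % wetlands, watershed alkalinity, fish length (cm)
X = np.array([
    [8.0, 12.0, 4.5, 32.0],
    [6.5, 20.0, 2.1, 28.0],
    [9.2,  5.0, 8.0, 35.0],
    [7.1, 15.0, 3.3, 30.0],
    [8.8, 18.0, 1.9, 33.0],
])
y = np.log([0.53, 0.61, 0.30, 0.49, 0.66])  # fillet Hg, ug/g

A = np.column_stack([np.ones(len(X)), X])   # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = np.exp(A @ coef)                     # back-transform to ug/g
print("coefficients:", coef.round(3))
print("predictions:", pred.round(2))
```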

  3. A Semi-Empirical Model for Tilted-Gun Planar Magnetron Sputtering Accounting for Chimney Shadowing

    Science.gov (United States)

    Bunn, J. K.; Metting, C. J.; Hattrick-Simpers, J.

    2015-01-01

    Integrated computational materials engineering (ICME) approaches to composition and thickness profiles of sputtered thin-film samples are the key to expediting materials exploration for these materials. Here, an ICME-based semi-empirical approach to modeling the thickness of thin-film samples deposited via magnetron sputtering is developed. Using Yamamura's dimensionless differential angular sputtering yield and a measured deposition rate at a point in space for a single experimental condition, the model predicts the deposition profile from planar DC sputtering sources. The model includes corrections for off-center, tilted gun geometries as well as shadowing effects from gun chimneys used in most state-of-the-art sputtering systems. The modeling algorithm was validated by comparing its results with experimental deposition rates obtained from a sputtering system utilizing sources with a multi-piece chimney assembly that consists of a lower ground shield and a removable gas chimney. Simulations were performed for gun-tilts ranging from 0° to 31.3° from the vertical with and without the gas chimney installed. The results for the predicted and experimental angular dependence of the sputtering deposition rate were found to have an average magnitude of relative error of for a 0°-31.3° gun-tilt range without the gas chimney, and for a 17.7°-31.3° gun-tilt range with the gas chimney. The continuum nature of the model renders this approach reverse-optimizable, providing a rapid tool for assisting in the understanding of the synthesis-composition-property space of novel materials.
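
    A schematic of the geometric part of such a calculation, integrating an angular emission lobe from a tilted source over a flat substrate with 1/r² fall-off, is sketched below. A simple cos^n lobe stands in for Yamamura's dimensionless angular yield, and chimney shadowing is omitted, so this is a cartoon of the approach rather than the published model.

```python
# Schematic deposition-profile calculation for an off-center, tilted gun:
# cos^n emission lobe, 1/r^2 fall-off, cosine incidence on the substrate.
import numpy as np

def deposition_profile(x_sub, gun_tilt_deg=20.0, gun_height=0.10, n_lobe=2.0):
    """Relative deposition rate along a line on the substrate (arb. units)."""
    tilt = np.radians(gun_tilt_deg)
    src = np.array([-gun_height * np.tan(tilt), gun_height])  # source position
    axis = np.array([np.sin(tilt), -np.cos(tilt)])            # emission axis
    rates = []
    for x in x_sub:
        r_vec = np.array([x, 0.0]) - src
        r = np.linalg.norm(r_vec)
        cos_emit = max(np.dot(r_vec / r, axis), 0.0)  # angle off emission axis
        cos_inc = max(-r_vec[1] / r, 0.0)             # incidence on substrate
        rates.append(cos_emit**n_lobe * cos_inc / r**2)
    return np.array(rates)

x = np.linspace(-0.05, 0.05, 5)  # positions on the substrate, m
print(deposition_profile(x).round(1))
```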

  4. A new methodology for the development of high-latitude ionospheric climatologies and empirical models

    Science.gov (United States)

    Chisham, G.

    2017-01-01

    Many empirical models and climatologies of high-latitude ionospheric processes, such as convection, have been developed over the last 40 years. One common feature in the development of these models is that measurements from different times are combined and averaged on fixed coordinate grids. This methodology ignores the reality that high-latitude ionospheric features are organized relative to the location of the ionospheric footprint of the boundary between open and closed geomagnetic field lines (OCB). This boundary is in continual motion, and the polar cap that it encloses is continually expanding and contracting in response to changes in the rates of magnetic reconnection at the Earth's magnetopause and in the magnetotail. As a consequence, models that are developed by combining and averaging data in fixed coordinate grids heavily smooth the variations that occur near the boundary location. Here we propose that the development of future models should consider the location of the OCB in order to more accurately model the variations in this region. We present a methodology which involves identifying the OCB from spacecraft auroral images and then organizing measurements in a grid where the bins are placed relative to the OCB location. We demonstrate the plausibility of this methodology using ionospheric vorticity measurements made by the Super Dual Auroral Radar Network radars and OCB measurements from the IMAGE spacecraft FUV auroral imagers. This demonstration shows that this new methodology results in sharpening and clarifying features of climatological maps near the OCB location. We discuss the potential impact of this methodology on space weather applications.
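
    A minimal sketch of the proposed boundary-oriented gridding, with placeholder data and a placeholder OCB model, is given below; the key step is binning each measurement by its latitude relative to the boundary observed at the same magnetic local time, rather than by absolute latitude.

```python
# Sketch of OCB-relative binning: average a quantity in bins of
# (magnetic latitude - OCB latitude at the same MLT). Data are synthetic.
import numpy as np

def bin_relative_to_ocb(mlat, mlt, value, ocb_lat_of, dlat_edges):
    """Average `value` in bins of latitude relative to the OCB."""
    rel_lat = mlat - ocb_lat_of(mlt)
    idx = np.digitize(rel_lat, dlat_edges) - 1
    nbins = len(dlat_edges) - 1
    out = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = idx == b
        if sel.any():
            out[b] = value[sel].mean()
    return out

# Placeholder OCB model: boundary near 72 deg with a weak MLT dependence.
ocb = lambda mlt: 72.0 + 2.0 * np.cos(np.radians(15.0 * mlt))

rng = np.random.default_rng(2)
mlat, mlt = rng.uniform(60, 85, 1000), rng.uniform(0, 24, 1000)
vort = np.where(mlat > ocb(mlt), 1.0, -1.0) + rng.normal(0, 0.3, 1000)
edges = np.arange(-10, 11, 2.0)
print(bin_relative_to_ocb(mlat, mlt, vort, ocb, edges).round(2))
```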

  5. Application of a semi-empirical model for the evaluation of transmission properties of barite mortar.

    Science.gov (United States)

    Santos, Josilene C; Tomal, Alessandra; Mariano, Leandro; Costa, Paulo R

    2015-06-01

    The aim of this study was to estimate barite mortar attenuation curves using X-ray spectra weighted by a workload distribution. A semi-empirical model was used for the evaluation of the transmission properties of this material. Since ambient dose equivalent, H*(10), is the radiation quantity adopted by the IAEA for dose assessment, the variation of H*(10) as a function of barite mortar thickness was calculated using primary experimental spectra. A CdTe detector was used for the measurement of these spectra. The resulting spectra were adopted for estimating the optimized thickness of the protective barrier needed for shielding an area in an X-ray imaging facility.
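
    The transmission calculation itself reduces to attenuating each spectral bin and re-weighting to a dose-like quantity; a sketch with placeholder (not measured) spectrum, attenuation coefficients, and conversion weights is:

```python
# Sketch of a narrow-beam transmission calculation: attenuate each spectral
# bin through increasing barite-mortar thickness and track the drop in a
# dose-like quantity. All numbers below are placeholders.
import numpy as np

energies = np.array([40., 60., 80., 100.])  # keV bins (placeholder)
fluence  = np.array([1.0, 3.0, 2.0, 0.5])   # relative spectrum (placeholder)
mu       = np.array([3.0, 1.2, 0.6, 0.4])   # linear attenuation, 1/cm (placeholder)
dose_w   = np.array([1.0, 0.8, 0.7, 0.6])   # fluence-to-dose weights (placeholder)

def relative_dose(thickness_cm):
    transmitted = fluence * np.exp(-mu * thickness_cm)
    return (transmitted * dose_w).sum() / (fluence * dose_w).sum()

for t in (0.0, 0.5, 1.0, 2.0):
    print(f"{t:4.1f} cm: H*(10) fraction ~ {relative_dose(t):.3f}")
```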

  6. An empirical model for the study of employee participation and its influence on job satisfaction

    Directory of Open Access Journals (Sweden)

    Lucas Joan Pujol Cols

    2015-12-01

    This article analyzes the factors that influence employees' perceived possibilities of triggering meaningful participation at three levels: the intra-group level, the institutional level, and directly within the leadership team of the organization. Twelve (12) interviews were conducted with teachers from the Social and Economic Sciences School of the University of Mar del Plata (Argentina), holding different positions and working in different areas and schedules. Based on qualitative evidence, an empirical model was constructed that seeks to connect the different factors behind each manifestation of participation, establishing hypothetical relations between subgroups. Additionally, the article discusses the implications of participation, its relationship with job satisfaction, and the role of individual expectations regarding the participation opportunities each employee receives. Keywords: Participation, Job satisfaction, University, Expectations, Qualitative Analysis.

  7. A DISTANCE EDUCATION MODEL FOR JORDANIAN STUDENTS BASED ON AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Ahmad SHAHER MASHHOUR

    2007-04-01

    Distance education is expanding worldwide, and the number of students enrolled in distance education is increasing at very high rates. Distance education is said to be the future of education because it addresses the educational needs of the new millennium. This paper presents the findings of an empirical study of a sample of Jordanian distance education students, used to derive a requirements model that addresses the need for such education at the national level. The responses of the sample show that distance education offers a viable and satisfactory alternative for those who cannot enroll in regular residential education. The study also shows that the shortcomings of the regular and the current forms of distance education in Jordan can be overcome by the use of modern information technology.

  8. Establishment of Grain Farmers’ Supply Response Model and Empirical Analysis under Minimum Grain Purchase Price Policy

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Based on farmers' supply behavior theory and price expectations theory, this paper establishes a supply response model for grain farmers covering two major grain varieties (early indica rice and mixed wheat) in the major producing areas, to test whether the minimum grain purchase price policy has a price-oriented effect on grain production and supply in those areas. Empirical analysis shows that the minimum purchase price published annually by the government has a significant positive impact on farmers' grain supply in the major grain producing areas. In recent years, China has steadily raised the minimum grain purchase price, which has played an important role in effectively protecting grain farmers' interests, mobilizing farmers' enthusiasm for grain production, and ensuring the market supply of key grain varieties.

  9. Instrument Fault Detection Sensitivity of an Empirical Model under Accident Condition in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Hwan; Hur, Seop; Cheon, Se Woo; Kim, Jung Taek [KAERI, Daejeon (Korea, Republic of)

    2014-08-15

    After the recent accident in Fukushima, Japan, it has become clear that we cannot obtain fully reliable information from instruments under severe accident conditions. Although the reactor core had melted down, the reactor vessel (RV) water level indicator showed a more optimistic value than the actual conditions. Accordingly, plant operators were under the misapprehension that the core was not exposed, which caused confusion in the incident response. Therefore, it is necessary to be equipped with a function that informs operators of the status of instrument integrity in real time. If plant operators can verify that the instruments are working properly during accident conditions, they are able to make safer decisions. In an effort to address this problem, we considered an empirical model using a Process Equipment Monitoring (PEM) tool as a method of instrument diagnosis in a nuclear power plant.
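
    The abstract does not describe the PEM tool's internals; a generic empirical-model monitoring scheme of the same flavor, predicting one sensor from correlated sensors and flagging large residuals, can be sketched as follows (all data and thresholds are synthetic assumptions):

```python
# Residual-based instrument fault detection: fit a regression on normal
# operation, then flag readings whose residual exceeds k standard deviations.
import numpy as np

rng = np.random.default_rng(3)
# Training: water level inferred from pressure and temperature (synthetic).
press, temp = rng.normal(70, 5, 500), rng.normal(290, 3, 500)
level = 0.8 * press - 0.1 * temp + rng.normal(0, 0.5, 500)

A = np.column_stack([np.ones_like(press), press, temp])
coef, *_ = np.linalg.lstsq(A, level, rcond=None)
sigma = np.std(level - A @ coef)

def check(press_now, temp_now, level_now, k=4.0):
    expected = coef @ np.array([1.0, press_now, temp_now])
    status = "OK" if abs(level_now - expected) < k * sigma else "SUSPECT"
    return status, expected

print(check(72.0, 289.0, 0.8 * 72 - 0.1 * 289))  # healthy reading
print(check(72.0, 289.0, 45.0))                  # stuck/failed indicator
```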

  10. Impact of the chosen turbulent flow empirical model on the prediction of sound radiation and vibration by aircraft panels

    Science.gov (United States)

    Rocha, Joana

    2016-07-01

    A precise definition of the turbulent boundary layer excitation is required to accurately predict the sound radiation and surface vibration levels produced by an aircraft panel excited by turbulent flow during flight. Hence, any inaccuracy in turbulent boundary layer excitation models leads to an inaccurate prediction of the panel response. A number of empirical models have been developed over the years to provide the turbulent boundary layer wall pressure spectral density. However, different empirical models provide dissimilar predictions for the wall pressure spectral density. The objective of the present study is to investigate and quantify the impact of the chosen empirical model on the predicted radiated sound power and on the predicted panel surface acceleration levels. This study provides a novel approach and a detailed analysis of the use of different turbulent boundary layer wall pressure empirical models and their impact on mathematical predictions. Closed-form mathematical relationships are developed, and recommendations are provided for the level of deviation and uncertainty associated with different models, relative to a baseline model, for both panel surface acceleration and radiated sound power.
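
    One widely cited empirical wall-pressure model of the kind compared in such studies is Goody's (2004) spectrum; a sketch of its published form is below. It is chosen here purely as an example, since the abstract does not list which models were analyzed.

```python
# Goody (2004) single-point wall-pressure spectrum, dimensional form:
# Phi(w) * U / (tau_w^2 * delta) = 3 w^2 / [ (w^0.75 + 0.5)^3.7
#                                          + (1.1 R_T^-0.57 w)^7 ],
# with w = omega * delta / U.
import numpy as np

def goody_spectrum(omega, U, delta, tau_w, R_T):
    """Single-sided wall-pressure PSD (Pa^2 s)."""
    w = omega * delta / U  # non-dimensional frequency
    num = 3.0 * w**2
    den = (w**0.75 + 0.5) ** 3.7 + (1.1 * R_T**-0.57 * w) ** 7
    return (tau_w**2 * delta / U) * num / den

omega = np.logspace(1, 5, 5)  # rad/s
print(goody_spectrum(omega, U=200.0, delta=0.02, tau_w=40.0, R_T=100.0))
```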

  11. A semi-empirical model for the prediction of fouling in railway ballast using GPR

    Science.gov (United States)

    Bianchini Ciampoli, Luca; Tosti, Fabio; Benedetto, Andrea; Alani, Amir M.; Loizos, Andreas; D'Amico, Fabrizio; Calvi, Alessandro

    2016-04-01

    The first step in planning the renewal of a railway network consists in gathering information, as effectively as possible, about the state of the railway tracks. Nowadays, this activity is mostly carried out by digging trenches at regular intervals along the whole network to evaluate both the geometrical and geotechnical properties of the railway track bed. This raises issues mainly concerning the invasiveness of the operations, the impact on rail traffic, the high costs, and the low significance of such a discrete data set. Ground-penetrating radar (GPR) can be a useful technique for overcoming these issues, as it can be mounted directly onto a train crossing the railway and collect continuous information along the network. This study aims at defining an empirical model for the prediction of fouling in railway ballast using GPR. For this purpose, a thorough laboratory campaign was implemented within the facilities of Roma Tre University. In more detail, a 1.47 m long × 1.47 m wide × 0.48 m high plexiglass framework, representing the domain of investigation, was laid over a perfect electric conductor and filled with several configurations of railway ballast and fouling material (clayey sand), thereby representing different levels of fouling. The set of fouling configurations was then surveyed with several GPR systems. In particular, a ground-coupled multi-channel radar (600 MHz and 1600 MHz center frequency antennas) and three air-launched radar systems (1000 MHz and 2000 MHz center frequency antennas) were employed for surveying the materials. By observing the results in both the time and frequency domains, interesting insights are highlighted, and an empirical model, relating in particular the shape of the frequency spectrum of the signal to the percentage of fouling characterizing the surveyed material, is finally proposed. Acknowledgement The Authors thank COST, for funding the Action TU1208 "Civil
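
    The signal-processing step behind such a model can be illustrated by extracting a simple spectral shape descriptor from a (here synthetic) GPR trace, which could then be regressed against the known fouling percentage of each laboratory configuration:

```python
# Compute a frequency-spectrum shape feature (spectral centroid) from a
# GPR trace; fouling typically damps high frequencies, lowering the centroid.
# The traces are synthetic stand-ins, not the laboratory data.
import numpy as np

def spectral_centroid(trace, dt_ns):
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, d=dt_ns * 1e-9)  # Hz
    return (freqs * spec).sum() / spec.sum()

# Synthetic 1.6 GHz wavelet, then a "fouled" version with damped highs.
t = np.arange(0, 20e-9, 0.05e-9)
clean = np.sin(2 * np.pi * 1.6e9 * t) * np.exp(-((t - 5e-9) / 2e-9) ** 2)
fouled = np.convolve(clean, np.ones(9) / 9, mode="same")  # crude low-pass

for name, tr in (("clean ballast", clean), ("fouled ballast", fouled)):
    print(f"{name}: centroid ~ {spectral_centroid(tr, 0.05) / 1e9:.2f} GHz")
```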

  12. LLNL Seismic Locations: Validating Improvement Through Integration of Regionalized Models and Empirical Corrections

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, C.A.; Flanagan, M.P.; Myers, S.C.; Pasyanos, M.E.; Swenson, J.L.; Hanley, W.; Ryall, F.; Dodge, D.

    2001-07-27

    The monitoring of nuclear explosions on a global basis requires accurate event locations. As an example, a typical size used for an on-site inspection search area is 1,000 square kilometers, or approximately 17 km accuracy assuming a circular area. This level of accuracy is a significant challenge for small events that are recorded by a sparse regional network. In such cases, the travel time of seismic energy is strongly affected by crustal and upper mantle heterogeneity, and large biases can result. This can lead to large systematic errors in location and, more importantly, to invalid error bounds on location estimates. Calibration data and methods are being developed and integrated to correct for these biases. Our research over the last few years has shown that one of the most effective approaches to generating path corrections is a hybrid technique that combines regionalized models with three-dimensional empirical travel-time corrections. We implement a rigorous and comprehensive uncertainty framework for these hybrid approaches. Qualitative and quantitative validations are presented in the form of single-component consistency checks, sensitivity analysis, robustness measures, and outlier testing, along with end-to-end testing of confidence measures. We focus on screening and validating both empirical and model-based calibrations, as well as the hybrid form that combines these two types of calibration. We demonstrate that the hybrid approach very effectively calibrates both travel-time and slowness attributes for seismic location in the Middle East, North Africa, and Western Eurasia (ME/NAAVE). Furthermore, it provides highly reliable uncertainty estimates. Finally, we summarize the NNSA validated data sets that have been provided to contractors in the last year.

  13. Gold price analysis based on ensemble empirical model decomposition and independent component analysis

    Science.gov (United States)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received growing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches fail to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. This involves three steps: firstly, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Secondly, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their change trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also provide an in-depth account of how these factors affect the gold price. Regression analysis has also been conducted to verify the findings. Results from the empirical studies of the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
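
    A pipeline sketch of the EEMD-ICA methodology is given below, assuming the third-party PyEMD package (distributed as EMD-signal) and scikit-learn are available; a synthetic series stands in for gold prices, and the VIMF regrouping is reduced to a simple split into low- and high-frequency sums.

```python
# Three-step EEMD-ICA pipeline sketch on a synthetic "price" series.
import numpy as np
from PyEMD import EEMD                         # assumed dependency (EMD-signal)
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.arange(500)
price = 1200 + 0.5 * t + 30 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 5, t.size)

imfs = EEMD()(price)                           # step 1: decompose into IMFs
half = len(imfs) // 2                          # step 2: regroup into "virtual" IMFs
vimfs = np.vstack([imfs[:half].sum(axis=0), imfs[half:].sum(axis=0)])

ica = FastICA(n_components=2, random_state=0)  # step 3: independent components
ics = ica.fit_transform(vimfs.T)
print("IMFs:", len(imfs), "| ICs shape:", ics.shape)
```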

  14. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    Science.gov (United States)

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations
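
    The core stochastic step is a Monte Carlo mass balance over sampled flows and concentrations; a minimal sketch is below, with lognormal parameters that are purely illustrative and not SELDM's national input statistics.

```python
# Monte Carlo mixing of highway runoff with upstream streamflow:
# C_down = (Q_r*C_r + Q_s*C_s) / (Q_r + Q_s), sampled many times.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
q_runoff = rng.lognormal(mean=-1.0, sigma=0.8, size=n)  # runoff flow
c_runoff = rng.lognormal(mean=3.0, sigma=0.7, size=n)   # runoff concentration
q_stream = rng.lognormal(mean=1.5, sigma=0.6, size=n)   # upstream flow
c_stream = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # upstream concentration

c_down = (q_runoff * c_runoff + q_stream * c_stream) / (q_runoff + q_stream)
print("downstream EMC percentiles (50/90/99):",
      np.percentile(c_down, [50, 90, 99]).round(1))
```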

  15. Evaluation of Empirical Tropospheric Models Using Satellite-Tracking Tropospheric Wet Delays with Water Vapor Radiometer at Tongji, China

    OpenAIRE

    Miaomiao Wang; Bofeng Li

    2016-01-01

    An empirical tropospheric delay model, together with a mapping function, is commonly used to correct the tropospheric errors in global navigation satellite system (GNSS) processing. As is well-known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models, together with five mapping functions in wet delays calculation. The evaluations are conducted by comparing ...
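
    For reference, the deterministic hydrostatic component that such evaluations build on is commonly computed with the Saastamoinen model; a sketch of its standard form is below (the wet component, which is the focus of the paper's comparison, is not modelled here).

```python
# Saastamoinen zenith hydrostatic delay from surface pressure, latitude,
# and station height.
import math

def saastamoinen_zhd(pressure_hpa: float, lat_deg: float, height_m: float) -> float:
    """Zenith hydrostatic delay in metres."""
    phi = math.radians(lat_deg)
    f = 1.0 - 0.00266 * math.cos(2.0 * phi) - 0.00028 * (height_m / 1000.0)
    return 0.0022768 * pressure_hpa / f

print(f"ZHD ~ {saastamoinen_zhd(1013.25, 31.2, 10.0):.3f} m")  # ~2.31 m near sea level
```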

  16. The IT Impact on the Productivity and the Organizational Performance of Firms in Romania. A model of Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Cristina ENACHE

    2011-11-01

    The paper proposes an analysis, based on an empirical model, of the impact of IT on the performance of firms in Romania. The model, its equations, and the results of the statistical processing are presented. These show that the impact of ICT on firm performance is positive and greater when information technologies are accompanied by a proactive management policy and organizational culture.

  17. An Inventory of Tree and Stand Growth Empirical Modelling Approaches with Potential Application in Coppice Forestry (a Review)

    Directory of Open Access Journals (Sweden)

    Michal Kneifl

    2015-01-01

    We examined currently available empirical growth models that could potentially be applied to coppice growth and production modelling. We compiled a summary of empirical models applied in coppices, high forests, and fast-growing tree plantations, including coppice plantations. The collected growth models were analysed to find out whether they encompassed any of 13 key dendrometric and structural variables that we identified as characteristic of coppices. There is currently no complex growth model available for coppices in Europe. Furthermore, many aspects of the coppice growth process have been entirely ignored or omitted in the most common modelling approaches so far. Within-stool competition, mortality, and stool morphological variability are the most important such parameters. However, some individual empirical submodels, or parts of them, are potentially applicable to coppice growth and production modelling (e.g., a diameter increment model or a model of resprouting probability). As the issue of coppice management gains attention, the need for a decision support tool (e.g., a coppice growth simulator) becomes more pressing.

  18. Modelling solar radiation reached to the Earth using ANFIS, NN-ARX, and empirical models (Case studies: Zahedan and Bojnurd stations)

    Science.gov (United States)

    Piri, Jamshid; Kisi, Ozgur

    2015-02-01

    The amount of incoming solar energy that crosses the Earth's atmosphere is called solar radiation; it spans ultraviolet, visible, and infrared wavelengths. Solar radiation at the Earth's surface is a key factor in water resources, environmental, and agricultural modelling. Solar radiation is rarely measured by weather stations in Iran and other developing countries; as a result, many empirical approaches have been applied to estimate it from other climatic parameters. In this study, two non-linear models, an adaptive neuro-fuzzy inference system (ANFIS) and a neural network auto-regressive model with exogenous inputs (NN-ARX), along with the empirical Angstrom and Hargreaves-Samani models, were used to estimate solar radiation. The data were collected from two synoptic stations with different climatic conditions (Zahedan and Bojnurd) over periods of 5 and 7 years, respectively. These data comprise sunshine hours, maximum temperature, minimum temperature, average relative humidity, and solar radiation. The Angstrom and Hargreaves-Samani empirical models, based respectively on sunshine hours and temperature, were calibrated and evaluated at both stations. To train, test, and validate the ANFIS and NN-ARX models, 60%, 25%, and 15% of the data were used, respectively. The results of the artificial intelligence models were compared with the empirical models. The findings showed that ANFIS (R2 = 0.90 and 0.97 for Zahedan and Bojnurd, respectively) and NN-ARX (R2 = 0.89 and 0.96 for Zahedan and Bojnurd, respectively) performed better than the empirical models in estimating daily solar radiation.
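
    The two empirical baselines named above have standard forms, Angstrom-Prescott, Rs = (a + b*n/N)*Ra, and Hargreaves-Samani, Rs = kRs*sqrt(Tmax - Tmin)*Ra; they are sketched below with common default coefficients rather than the paper's calibrated values.

```python
# The two empirical solar-radiation baselines in their usual forms.
# Coefficients a, b, and kRs are site-calibrated; defaults shown here.
import math

def angstrom(Ra, n_sun, N_daylen, a=0.25, b=0.50):
    """Daily solar radiation: Rs = (a + b * n/N) * Ra."""
    return (a + b * n_sun / N_daylen) * Ra

def hargreaves_samani(Ra, tmax, tmin, kRs=0.16):
    """Daily solar radiation: Rs = kRs * sqrt(Tmax - Tmin) * Ra."""
    return kRs * math.sqrt(tmax - tmin) * Ra

Ra = 35.0  # extraterrestrial radiation, MJ/m2/day
print(f"Angstrom:          {angstrom(Ra, 9.0, 13.0):.1f} MJ/m2/day")
print(f"Hargreaves-Samani: {hargreaves_samani(Ra, 38.0, 22.0):.1f} MJ/m2/day")
```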

  19. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point-mutation speciation, unaffected by gene flow. The former produced more realistic communities (in the shape of the phylogenetic tree and the species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of the empirical communities we studied can, to a large extent, be explained by a purely neutral model under pre-equilibrium conditions.

  20. Integrating theory-driven and empirically-derived models of personality development and psychopathology: a proposal for DSM V.

    Science.gov (United States)

    Luyten, Patrick; Blatt, Sidney J

    2011-02-01

    Although there is growing consensus that the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) should replace the categorical view of mental disorders with a dimensional approach rooted in personality theory, no consensus has emerged about the dimensions that should be the basis of the new classification system. Moreover, recent attempts to bridge the gap between psychiatric nosology and personality theories have primarily relied on empirically-derived dimensional personality models. While this focus on empirically-derived personality theories may result in a psychometrically valid classification system, it may create a classification system that lacks theoretical and empirical comprehensiveness and has limited clinical utility. In this paper, we first argue that research findings increasingly suggest that an integration of theory-driven and empirically-derived models of personality development is not only possible, but also has the potential to provide a more comprehensive and clinically-relevant approach to classification and diagnosis than either approach alone. Next, we propose a comprehensive model of personality development and psychopathology based on an integration of contemporary theory-driven and empirically-derived models of personality. Finally, we outline the implications of this approach for the future development of DSM, and especially its potential for developing research that addresses the interactions between psychosocial and neurobiological processes implicated in personality development and psychopathology.

  1. Models

    DEFF Research Database (Denmark)

    Juel-Christiansen, Carsten

    2005-01-01

    The article highlights visual rotation - images, drawings, models, works - as the privileged medium in the communication of ideas among creative architects.

  2. Factors favouring large organic production in the northern Adriatic: towards the northern Adriatic empirical ecological model

    Directory of Open Access Journals (Sweden)

    R. Kraus

    2015-06-01

    Influenced by the Po, one of the largest Mediterranean rivers, production in the northern Adriatic is highly variable seasonally and interannually. The changes are especially pronounced between winters and seemingly reflect on the total Adriatic bioproduction of certain species (anchovy). We analysed the long-term changes in phytoplankton production at a transect in the region, as derived from monthly oceanographic cruises, in relation to the concomitant geostrophic current distribution in the area and to Po River discharge rates in the days preceding the cruises. In winter and early spring the phyto-abundances depended on the existing circulation fields; in summer and autumn they were related to Po River discharge rates 1-15 days earlier and to concomitant circulation fields, while in late spring phyto-abundances increased 1-3 days after high Po River discharge rates regardless of the circulation fields. Throughout the year the phyto-abundances were dependent on the forcing of the previous 1-12 months of surface fluxes and/or Po River rates. Large February blooms, as well as February circulation patterns, are preconditioned by low evaporation rates in the previous November. From 1990 to 2004 a shift towards large winter bioproduction induced by circulation changes appeared. The investigations performed represent preliminary steps in the building of an empirical ecological model of the northern Adriatic, which can be used in the sustainable economy of the region as well as in the validation of the numerical ecological model of the region that is currently being developed.

  3. EVALUATION OF RUTTING DEPTH IN FLEXIBLE PAVEMENTS BY USING FINITE ELEMENT ANALYSIS AND LOCAL EMPIRICAL MODEL

    Directory of Open Access Journals (Sweden)

    Alaa H. Abed

    2012-01-01

    The objective of this research is to predict rut depth in local flexible pavements. Prediction modelling of pavement performance is the process used to estimate parameter values related to pavement structure, environmental conditions, and traffic loading. Different local empirical models, incorporating environmental and traffic conditions, have been used to calculate permanent deformation. Finite element analysis with the ANSYS software is used to analyze a two-dimensional linear elastic plane-strain problem using Plane82 elements. An equivalent standard axle load (ESAL) of 18 kip (80 kN) on an axle with a dual set of tires is used, with a wheel spacing of 13.5 in (343 mm) and a tire contact pressure of 87 psi (0.6 MPa). The pavement system is assumed to be an elastic multi-layer system, with each layer isotropic and homogeneous with a specified resilient modulus and Poisson ratio. Each layer extends to infinity in the horizontal direction and has a finite thickness, except the bottom layer. The analysis of the results shows that although the stress level decreases by 14% in the levelling course and 27% in the base course, the rut depth increases by 12% and 28% in those layers, respectively, because the material properties change.
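
    Local empirical permanent-deformation models of the kind referred to above are often power laws in load repetitions; a sketch with invented coefficients (not the study's calibrated values) is:

```python
# Layerwise power-law rutting model: eps_p = a * N**b per layer, with rut
# depth taken as the thickness-weighted sum. Coefficients are illustrative.
layers = [  # (name, thickness mm, a, b)
    ("wearing course",   50.0, 2.0e-4, 0.45),
    ("levelling course", 60.0, 1.5e-4, 0.42),
    ("base course",     150.0, 0.8e-4, 0.40),
]

def rut_depth_mm(n_esal: float) -> float:
    return sum(h * a * n_esal**b for _, h, a, b in layers)

for n in (1e4, 1e5, 1e6):
    print(f"N = {n:,.0f} ESALs -> rut ~ {rut_depth_mm(n):.1f} mm")
```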

  4. Factors favouring large organic production in the northern Adriatic: towards the northern Adriatic empirical ecological model

    Science.gov (United States)

    Kraus, R.; Supić, N.; Precali, R.

    2015-06-01

    Influenced by the Po, one of the largest Mediterranean rivers, production in the northern Adriatic is highly variable seasonally and interannually. The changes are especially pronounced between winters and seemingly reflect on the total Adriatic bioproduction of certain species (anchovy). We analysed the long-term changes in phytoplankton production at a transect in the region, as derived from monthly oceanographic cruises, in relation to the concomitant geostrophic current distribution in the area and to Po River discharge rates in the days preceding the cruises. In winter and early spring the phyto-abundances depended on the existing circulation fields; in summer and autumn they were related to Po River discharge rates 1-15 days earlier and to concomitant circulation fields, while in late spring phyto-abundances increased 1-3 days after high Po River discharge rates regardless of the circulation fields. Throughout the year the phyto-abundances were dependent on the forcing of the previous 1-12 months of surface fluxes and/or Po River rates. Large February blooms, as well as February circulation patterns, are preconditioned by low evaporation rates in the previous November. From 1990 to 2004 a shift towards large winter bioproduction induced by circulation changes appeared. The investigations performed represent preliminary steps in the building of an empirical ecological model of the northern Adriatic, which can be used in the sustainable economy of the region as well as in the validation of the numerical ecological model of the region that is currently being developed.

  5. Empirical modelling of the BLASTPol achromatic half-wave plate for precision submillimetre polarimetry

    CERN Document Server

    Moncelsi, Lorenzo; Angile, Francesco Elio; Benton, Steven; Devlin, Mark; Fissel, Laura; Gandilo, Natalie; Gundersen, Joshua; Matthews, Tristan; Netterfield, C Barth; Novak, Giles; Nutter, David; Pascale, Enzo; Poidevin, Frederick; Savini, Giorgio; Scott, Douglas; Soler, Juan; Spencer, Locke; Truch, Matthew; Tucker, Gregory; Zhang, Jin

    2012-01-01

    A cryogenic achromatic half-wave plate (HWP) for submillimetre astronomical polarimetry has been designed, manufactured, tested, and deployed in the Balloon-borne Large-Aperture Submillimeter Telescope for Polarimetry (BLASTPol). The design is based on the five-slab Pancharatnam recipe and works in the wavelength range 200-600 micron, making it the most achromatic HWP built to date at submillimetre wavelengths. The frequency behaviour of the HWP has been fully characterised at room and cryogenic temperatures with incoherent radiation from a polarising Fourier transform spectrometer. We develop a novel empirical model, complementary to the physical and analytical ones available in the literature, that allows us to recover the HWP Mueller matrix and phase shift as a function of frequency, extrapolated to 4 K. We show that most of the HWP non-idealities can be modelled by quantifying one wavelength-dependent parameter, the position of the HWP equivalent axes, which is then readily implemented in a map-makin...
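
    The ideal-retarder reference point for such an empirical fit is the rotated Mueller matrix M = R(-2θ) M_ret(δ) R(2θ); a sketch is below. An empirical model like the one described would let the retardance δ and the equivalent-axis angle θ vary with frequency.

```python
# Mueller matrix of a retarder with retardance delta, fast axis at angle
# theta: M = R(-theta) @ M_ret(delta) @ R(theta), rotations acting on 2*theta.
import numpy as np

def rot(angle):
    c, s = np.cos(2 * angle), np.sin(2 * angle)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def retarder(delta, theta):
    cd, sd = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, cd, sd],
                  [0, 0, -sd, cd]])
    return rot(-theta) @ m @ rot(theta)

hwp = retarder(np.pi, np.radians(22.5))  # ideal HWP with axes at 22.5 deg
print(np.round(hwp, 3))                  # rotates linear polarization by 45 deg
```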

  6. An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling

    Science.gov (United States)

    Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.

    2015-01-01

    In an effort to increase crew survivability from catastrophic explosions of Launch Vehicles (LV), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing such methods as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that had some amount of measured blast wave, thermal, or fragment explosion environment characteristics. Blast wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force was from a high explosive type event. In light of these discoveries, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full scale launch vehicle explosion data. Finally, a brief description of on-going analysis and testing to further refine the launch vehicle explosion environment is discussed.

  7. Stream-power incision model in non-steady-state mountain ranges: An empirical approach

    Institute of Scientific and Technical Information of China (English)

    CHEN Yen-Chieh; SUNG Quocheng; CHEN Chao-Nan

    2006-01-01

    The stream-power incision model is commonly applied to detect whether mountain ranges are in a steady state. Oblique arc-continent collision during the Penglai Orogeny caused the Taiwan mountain belt to develop a landscape with three evolutionary stages, namely a pre-steady-state stage (growing ranges in southern Taiwan), a steady-state stage (ranges in central Taiwan), and a post-steady-state stage (decaying ranges in northern Taiwan). By exploring the relationship between bedrock channel slope (S) and catchment area (A) for streams of the Taiwan mountain belt, the topographic features of the ranges at these three stages are obtained. The S-A plot of the steady-state ranges is linear, revealing that the riverbed height of the bedrock channel does not change over time (dz/dt = 0). The slope and intercept of the S-A line are related to the evolution time of the steady-state topography and the tectonic uplift rate, respectively. The S-A plots of the southern and northern ranges of the Taiwan mountain belt are convex and concave, respectively, implying that the riverbed height of the bedrock channel rises (dz/dt > 0) over time in the former and falls (dz/dt < 0) in the latter. Their tangent intercepts can still reflect the tectonic uplift rate. This study develops an empirical stream-power erosion model of pre-steady-state and post-steady-state topography.
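
    The steady-state prediction underlying S-A analysis follows from the detachment-limited stream-power law dz/dt = U - K A^m S^n: setting dz/dt = 0 gives S = (U/K)^(1/n) A^(-m/n), a straight line in log S versus log A whose intercept carries the uplift rate U. A sketch with illustrative parameter values:

```python
# Steady-state slope-area relation from the stream-power law.
import numpy as np

def steady_state_slope(A_m2, U=1e-3, K=5e-5, m=0.5, n=1.0):
    """Channel slope S = (U/K)^(1/n) * A^(-m/n); values are illustrative."""
    return (U / K) ** (1.0 / n) * A_m2 ** (-m / n)

A = np.logspace(5, 9, 5)  # drainage area, m^2
for a, s in zip(A, steady_state_slope(A)):
    print(f"A = {a:9.0e} m^2 -> S = {s:.4f}")
```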

  8. Going Global: A Model for Evaluating Empirically Supported Family-Based Interventions in New Contexts.

    Science.gov (United States)

    Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W

    2014-06-01

    The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it is comprised of both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third explanation is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that might affect the effects of ESIs when transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts.

  9. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis (FIA) data to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  10. A MACROPRUDENTIAL SUPERVISION MODEL. EMPIRICAL EVIDENCE FROM THE CENTRAL AND EASTERN EUROPEAN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Trenca Ioan

    2013-07-01

    One of the positive effects of the financial crisis is the increasing concern of supervisors regarding the financial system's stability. There is a need to strengthen the links between the different components of the financial system and the macroeconomic environment. Banking systems with adequate capitalization and liquidity levels can weather economic and financial shocks more easily. The purpose of this empirical study is to identify the main determinants of banking system stability and soundness in Central and Eastern European countries. We assess the impact of different macroeconomic variables on the quality of capital and liquidity conditions and examine the behaviour of these financial stability indicators by analyzing a sample of 10 banking systems during 2000-2011. The availability of banking capital signals the banking system's resilience to shocks. The capital adequacy ratio is the main indicator used to assess banking fragility. One of the causes of the 2008-2009 financial crisis was the lack of liquidity in the banking system, which led to the collapse of several banking institutions and to macroeconomic imbalances. Given the importance of liquidity for the banking system, we propose several models to determine the macroeconomic variables that have a significant influence on the ratio of liquid reserves to total assets. We find evidence that GDP growth, inflation, domestic credit to the private sector, and the money and quasi-money aggregate indicator have a significant impact on banking stability. The empirical regressions confirm the high level of interdependence between the real sector and the financial-banking sector. They also demonstrate the necessity of effective macroprudential supervision at the country level, enabling the supervisory authorities to maintain adequate control over macroprudential indicators and to take appropriate decisions at the right time.

  11. Source parameters of intermediate-depth Vrancea (Romania) earthquakes from empirical Green's functions modeling

    Science.gov (United States)

    Oth, Adrien; Wenzel, Friedemann; Radulian, Mircea

    2007-06-01

    Several source parameters (source dimensions, slip, particle velocity, static and dynamic stress drop) are determined for the moderate-size October 27th, 2004 (MW = 5.8), and the large August 30th, 1986 (MW = 7.1) and March 4th, 1977 (MW = 7.4) Vrancea (Romania) intermediate-depth earthquakes. For this purpose, the empirical Green's functions method of Irikura [e.g. Irikura, K. (1983). Semi-Empirical Estimation of Strong Ground Motions during Large Earthquakes. Bull. Dis. Prev. Res. Inst., Kyoto Univ., 33, Part 2, No. 298, 63-104; Irikura, K. (1986). Prediction of strong acceleration motions using empirical Green's function, in Proceedings of the 7th Japan earthquake engineering symposium, 151-156; Irikura, K. (1999). Techniques for the simulation of strong ground motion and deterministic seismic hazard analysis, in Proceedings of the advanced study course seismotectonic and microzonation techniques in earthquake engineering: integrated training in earthquake risk reduction practices, Kefallinia, 453-554] is used to generate synthetic time series from recordings of smaller events (with 4 ≤ MW ≤ 5) in order to estimate several parameters characterizing the so-called strong motion generation area, which is defined as an extended area with homogeneous slip and rise time and, for crustal earthquakes, corresponds to an asperity of about 100 bar stress release [Miyake, H., T. Iwata and K. Irikura (2003). Source characterization for broadband ground-motion simulation: Kinematic heterogeneous source model and strong motion generation area. Bull. Seism. Soc. Am., 93, 2531-2545]. The parameters are obtained by acceleration envelope and displacement waveform inversion for the 2004 and 1986 events and MSK intensity pattern inversion for the 1977 event using a genetic algorithm. The strong motion recordings of the analyzed Vrancea earthquakes as well as the MSK intensity pattern of the 1977 earthquake can be well reproduced using relatively small strong motion
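
    The essence of the empirical Green's function synthesis is summing time-shifted, scaled copies of a small event's record; a toy sketch is below. It omits Irikura's corrections for the slip-velocity ratio and rise time, and the subevent count, scaling factor, and delays are placeholders.

```python
# Toy empirical Green's function synthesis: sum N x N delayed, scaled
# copies of a small-event record to approximate a larger event.
import numpy as np

def egf_synthesis(small_rec, dt, N=4, C=2.0, rupture_delay=0.5):
    """Sum N*N delayed, scaled copies of the small-event record."""
    delays = [rupture_delay * (i + j) for i in range(N) for j in range(N)]
    out = np.zeros(small_rec.size + int(max(delays) / dt) + 1)
    for d in delays:
        k = int(round(d / dt))
        out[k:k + small_rec.size] += C * small_rec
    return out

dt = 0.01
t = np.arange(0, 5, dt)
small = np.exp(-t) * np.sin(2 * np.pi * 2 * t)  # toy small-event record
big = egf_synthesis(small, dt)
print(f"synthetic record: {big.size} samples, peak {big.max():.2f}")
```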

  12. Cycling empirical antibiotic therapy in hospitals: meta-analysis and models.

    Directory of Open Access Journals (Sweden)

    Pia Abel zur Wiesch

    2014-06-01

    Full Text Available The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies came to different conclusions regarding the usefulness of rotating first-line therapy (cycling. Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and find that cycling reduced the incidence rate/1000 patient days of both total infections by 4.95 [9.43-0.48] and resistant infections by 7.2 [14.00-0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate metaregressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing. We make the realistic assumption that therapy is changed when first line treatment is ineffective, which we call "adjustable cycling/mixing". In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that "adjustable cycling" is especially useful to suppress emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that too long cycling periods are detrimental. Our results suggest that

  13. Improving Landslide Susceptibility Modeling Using an Empirical Threshold Scheme for Excluding Landslide Deposition

    Science.gov (United States)

    Tsai, F.; Lai, J. S.; Chiang, S. H.

    2015-12-01

Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, landslide extents extracted by image-based visual interpretation and automatic detection may contain many depositions, which can reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, and distance to fault, river and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results

  14. Empirical Analysis and Modeling of Stop-Line Crossing Time and Speed at Signalized Intersections

    Science.gov (United States)

    Tang, Keshuang; Wang, Fen; Yao, Jiarong; Sun, Jian

    2016-01-01

    In China, a flashing green (FG) indication of 3 s followed by a yellow (Y) indication of 3 s is commonly applied to end the green phase at signalized intersections. Stop-line crossing behavior of drivers during such a phase transition period significantly influences safety performance of signalized intersections. The objective of this study is thus to empirically analyze and model drivers’ stop-line crossing time and speed in response to the specific phase transition period of FG and Y. High-resolution trajectories for 1465 vehicles were collected at three rural high-speed intersections with a speed limit of 80 km/h and two urban intersections with a speed limit of 50 km/h in Shanghai. With the vehicle trajectory data, statistical analyses were performed to look into the general characteristics of stop-line crossing time and speed at the two types of intersections. A multinomial logit model and a multiple linear regression model were then developed to predict the stop-line crossing patterns and speeds respectively. It was found that the percentage of stop-line crossings during the Y interval is remarkably higher and the stop-line crossing time is approximately 0.7 s longer at the urban intersections, as compared with the rural intersections. In addition, approaching speed and distance to the stop-line at the onset of FG as well as area type significantly affect the percentages of stop-line crossings during the FG and Y intervals. Vehicle type and stop-line crossing pattern were found to significantly influence the stop-line crossing speed, in addition to the above factors. The red-light-running seems to occur more frequently at the large intersections with a long cycle length.
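
The two model types named in this abstract, a multinomial logit for crossing pattern and a linear regression for crossing speed, can be sketched as follows. This is a hedged illustration only: the data are synthetic and the predictor names are assumptions, not the Shanghai trajectory data set.

```python
# Sketch of the modeling pair described in the abstract, on placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
speed = rng.uniform(20, 80, n)   # approaching speed at FG onset, km/h
dist = rng.uniform(0, 120, n)    # distance to stop line at FG onset, m
urban = rng.integers(0, 2, n)    # area type: 1 = urban, 0 = rural
X = sm.add_constant(np.column_stack([speed, dist, urban]))

# Hypothetical pattern: 0 = stop, 1 = cross during FG, 2 = cross during Y
pattern = rng.integers(0, 3, n)
mnl = sm.MNLogit(pattern, X).fit(disp=False)   # multinomial logit

# Linear model for the crossing speed of vehicles that crossed
crossed = pattern > 0
cross_speed = speed[crossed] * 0.9 + rng.normal(0, 3, crossed.sum())
ols = sm.OLS(cross_speed, X[crossed]).fit()
print(mnl.params.shape, ols.params)
```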

  15. Evaluation and Improvement of a SVD-Based Empirical Atmospheric Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

An empirical atmospheric model (EAM) based on the singular value decomposition (SVD) method is evaluated using the composite El Niño/Southern Oscillation (ENSO) patterns of sea surface temperature (SST) and wind anomalies as the target scenario. Two versions of the SVD-based EAM were presented for comparison. The first version estimates the wind anomalies in response to SST variations based on modes that were calculated from a pair of global wind and SST fields (i.e., conventional EAM or CEAM). The second version utilizes the same model design but is based on modes that were calculated in a region-wise manner by separating the tropical domain from the remaining extratropical regions (i.e., region-wise EAM or REAM). Our study shows that, while CEAM has shown successful model performance over some tropical areas, such as the equatorial eastern Pacific (EEP), the western North Pacific (WNP), and the tropical Indian Ocean (TIO), its performance over the North Pacific (NP) seems poor. When REAM is used to estimate the wind anomalies instead of CEAM, a marked improvement over the NP readily emerges. Analyses of coupled modes indicate that such an improvement can be attributed to a much stronger coupled variability captured by the first region-wise SVD mode at higher latitudes compared with that captured by the conventional one. The newly proposed way of constructing the EAM (i.e., REAM) can be very useful in coupled studies because it gives the model a wider application beyond the commonly accepted tropical domain.

  16. Empirical models of scalp-EEG responses using non-concurrent intracranial responses

    Science.gov (United States)

    Kaur, Komalpreet; Shih, Jerry J.; Krusienski, Dean J.

    2014-06-01

    Objective. This study presents inter-subject models of scalp-recorded electroencephalographic (sEEG) event-related potentials (ERPs) using intracranially recorded ERPs from electrocorticography and stereotactic depth electrodes in the hippocampus, generally termed as intracranial EEG (iEEG). Approach. The participants were six patients with medically-intractable epilepsy that underwent temporary placement of intracranial electrode arrays to localize seizure foci. Participants performed one experimental session using a brain-computer interface matrix spelling paradigm controlled by sEEG prior to the iEEG electrode implantation, and one or more identical sessions controlled by iEEG after implantation. All participants were able to achieve excellent spelling accuracy using sEEG, four of the participants achieved roughly equivalent performance in the iEEG sessions, and all participants were significantly above chance accuracy for the iEEG sessions. The sERPs were modeled using a linear combination of iERPs using two different optimization criteria. Main results. The results indicate that sERPs can be accurately estimated from the iERPs for the patients that exhibited stable ERPs over the respective sessions, and that the transformed iERPs can be accurately classified with an sERP-derived classifier. Significance. The resulting models provide a new empirical representation of the formation and distribution of sERPs from underlying composite iERPs. These new insights provide a better understanding of ERP relationships and can potentially lead to the development of more robust signal processing methods for noninvasive EEG applications.
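
The core modeling step described above, expressing a scalp ERP as a linear combination of intracranial ERPs, reduces to a least-squares problem. A minimal sketch follows; the arrays are random placeholders for the real sEEG/iEEG averages, and the plain least-squares criterion is one plausible reading of the two optimization criteria mentioned in the abstract.

```python
# Minimal sketch: fit weights w so that sERP ≈ iERP @ w (least squares).
import numpy as np

rng = np.random.default_rng(2)
n_samples = 200      # time samples of the averaged ERP
n_icontacts = 32     # intracranial channels

iERP = rng.normal(size=(n_samples, n_icontacts))   # intracranial ERPs
w_true = rng.normal(size=n_icontacts)
sERP = iERP @ w_true + rng.normal(scale=0.1, size=n_samples)  # scalp ERP

w, *_ = np.linalg.lstsq(iERP, sERP, rcond=None)
print("reconstruction error:", np.linalg.norm(iERP @ w - sERP))
```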

  17. Semi-empirical models for chlorine activation and ozone depletion in the Antarctic stratosphere: proof of concept

    Science.gov (United States)

    Huck, P. E.; Bodeker, G. E.; Kremser, S.; McDonald, A. J.; Rex, M.; Struthers, H.

    2013-03-01

    Two semi-empirical models were developed for the Antarctic stratosphere to relate the shift of species within total chlorine (Cly = HCl + ClONO2 + HOCl + 2 × Cl2 + 2×Cl2O2 + ClO + Cl) into the active forms (here: ClOx = 2×Cl2O2 + ClO), and to relate the rate of ozone destruction to ClOx. These two models provide a fast and computationally inexpensive way to describe the inter- and intra-annual evolution of ClOx and ozone mass deficit (OMD) in the Antarctic spring. The models are based on the underlying physics/chemistry of the system and capture the key chemical and physical processes in the Antarctic stratosphere that determine the interaction between climate change and Antarctic ozone depletion. They were developed considering bulk effects of chemical mechanisms for the duration of the Antarctic vortex period and quantities averaged over the vortex area. The model equations were regressed against observations of daytime ClO and OMD providing a set of empirical fit coefficients. Both semi-empirical models are able to explain much of the intra- and inter-annual variability observed in daily ClOx and OMD time series. This proof-of-concept paper outlines the semi-empirical approach to describing the evolution of Antarctic chlorine activation and ozone depletion.

  18. Semi-empirical models for chlorine activation and ozone depletion in the Antarctic stratosphere: proof of concept

    Directory of Open Access Journals (Sweden)

    P. E. Huck

    2013-03-01

Full Text Available Two semi-empirical models were developed for the Antarctic stratosphere to relate the shift of species within total chlorine (Cly = HCl + ClONO2 + HOCl + 2×Cl2 + 2×Cl2O2 + ClO + Cl) into the active forms (here: ClOx = 2×Cl2O2 + ClO), and to relate the rate of ozone destruction to ClOx. These two models provide a fast and computationally inexpensive way to describe the inter- and intra-annual evolution of ClOx and ozone mass deficit (OMD) in the Antarctic spring. The models are based on the underlying physics/chemistry of the system and capture the key chemical and physical processes in the Antarctic stratosphere that determine the interaction between climate change and Antarctic ozone depletion. They were developed considering bulk effects of chemical mechanisms for the duration of the Antarctic vortex period and quantities averaged over the vortex area. The model equations were regressed against observations of daytime ClO and OMD, providing a set of empirical fit coefficients. Both semi-empirical models are able to explain much of the intra- and inter-annual variability observed in daily ClOx and OMD time series. This proof-of-concept paper outlines the semi-empirical approach to describing the evolution of Antarctic chlorine activation and ozone depletion.

  19. Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space

    Directory of Open Access Journals (Sweden)

    Judith Carolien Peters

    2012-03-01

Full Text Available Both in the field of Computer Vision and in Experimental Neuroscience, recent advances have been made regarding the mechanisms underlying invariant object recognition. However, the differing methodological aims in the two fields caused an independent model evolvement. A tighter integration of simulations and empirical observations may contribute to cross-fertilized development of (1) neurobiologically plausible computational models and (2) computationally defined empirical theories, incrementally merged into a comprehensive brain model. We review recent fMRI findings on object invariance and suggest how they can be quantitatively compared to model simulations by projecting predicted and observed data into one "Common Brain Space". The simultaneous matching of activity patterns within and across multiple processing stages in the simulated and empirical large-scale network may help to clarify how high-order invariant representations are created from low-level features. Given that columnar-level imaging is now within reach, due to the advent of high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence.

  20. A semi-empirical model for mesospheric and stratospheric NOy produced by energetic particle precipitation

    Science.gov (United States)

    Funke, Bernd; López-Puertas, Manuel; Stiller, Gabriele P.; Versick, Stefan; von Clarmann, Thomas

    2016-07-01

    The MIPAS Fourier transform spectrometer on board Envisat has measured global distributions of the six principal reactive nitrogen (NOy) compounds (HNO3, NO2, NO, N2O5, ClONO2, and HNO4) during 2002-2012. These observations were used previously to detect regular polar winter descent of reactive nitrogen produced by energetic particle precipitation (EPP) down to the lower stratosphere, often called the EPP indirect effect. It has further been shown that the observed fraction of NOy produced by EPP (EPP-NOy) has a nearly linear relationship with the geomagnetic Ap index when taking into account the time lag introduced by transport. Here we exploit these results in a semi-empirical model for computation of EPP-modulated NOy densities and wintertime downward fluxes through stratospheric and mesospheric pressure levels. Since the Ap dependence of EPP-NOy is distorted during episodes of strong descent in Arctic winters associated with elevated stratopause events, a specific parameterization has been developed for these episodes. This model accurately reproduces the observations from MIPAS and is also consistent with estimates from other satellite instruments. Since stratospheric EPP-NOy depositions lead to changes in stratospheric ozone with possible implications for climate, the model presented here can be utilized in climate simulations without the need to incorporate many thermospheric and upper mesospheric processes. By employing historical geomagnetic indices, the model also allows for reconstruction of the EPP indirect effect since 1850. We found secular variations of solar cycle-averaged stratospheric EPP-NOy depositions on the order of 1 GM. In particular, we model a reduction of the EPP-NOy deposition rate during the last 3 decades, related to the coincident decline of geomagnetic activity that corresponds to 1.8 % of the NOy production rate by N2O oxidation. As the decline of the geomagnetic activity level is expected to continue in the coming decades, this is
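
The semi-empirical relationship described above, EPP-NOy scaling nearly linearly with the geomagnetic Ap index after a transport lag, can be sketched in a few lines. The coefficient, lag, and Ap series below are illustrative assumptions, not the paper's fitted values.

```python
# Sketch: EPP-NOy proxy as a lagged linear function of the Ap index.
import numpy as np

rng = np.random.default_rng(3)
days = 365
ap = rng.gamma(2.0, 6.0, days)   # synthetic daily Ap index values

lag_days = 90   # assumed transport lag to a given pressure level
k = 0.05        # assumed linear coefficient (arbitrary units per Ap)

# EPP-NOy proxy: linear in the Ap average over the preceding lag window
epp_noy = np.array([
    k * ap[max(0, t - lag_days):t + 1].mean() for t in range(days)
])
print(epp_noy[-5:])
```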

  1. NON-LINEAR DYNAMIC MODEL RETRIEVAL OF SUBTROPICAL HIGH BASED ON EMPIRICAL ORTHOGONAL FUNCTION AND GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ren; HONG Mei; SUN Zhao-bo; NIU Sheng-jie; ZHU Wei-jun; MIN Jin-zhong; WAN Qi-lin

    2006-01-01

Aiming at the difficulty of accurately constructing a dynamic model of the subtropical high, and based on the time series of the 500 hPa geopotential height field from T106 numerical forecast products, the EOF (empirical orthogonal function) temporal-spatial separation technique was used so that the decomposed EOF time-coefficient series could be treated as dynamical model variables. Dynamic system retrieval ideas as well as a genetic algorithm were introduced to perform an optimization search for the dynamical model parameters, and a reasonable non-linear dynamic model of the EOF time coefficients was thereby established. By integrating the dynamic model and reassembling the EOF temporal-spatial components, a mid- to long-term forecast of the subtropical high was carried out. The experimental results show that the forecasts of the dynamic model are superior to those of the general numerical model. A new modeling idea and forecasting technique is presented for diagnosing and forecasting such complicated weather systems as the subtropical high.
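
The EOF temporal-spatial separation step is a standard decomposition and can be sketched with an SVD. The field below is a random placeholder for the 500 hPa product; the dynamic-model fitting by genetic algorithm is not reproduced here.

```python
# Sketch of EOF separation: spatial modes and time coefficients via SVD.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_grid = 120, 400           # time steps, flattened grid points
field = rng.normal(size=(n_time, n_grid))
anom = field - field.mean(axis=0)   # remove the time mean

U, S, Vt = np.linalg.svd(anom, full_matrices=False)
n_modes = 5
time_coeff = U[:, :n_modes] * S[:n_modes]  # variables for the dynamic model
eofs = Vt[:n_modes]                        # spatial patterns

# Reconstruction from the leading modes (used after integrating the
# time-coefficient dynamic model to reassemble the forecast field)
recon = time_coeff @ eofs
print("explained variance:", (S[:n_modes]**2).sum() / (S**2).sum())
```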

  2. The development of empirical models to evaluate energy use and energy cost in wastewater collection

    Science.gov (United States)

    Young, David Morgan

This research introduces a unique data analysis method and develops empirical models to evaluate energy use and energy cost in wastewater collection systems using operational variables. From these models, several Best Management Processes (BMPs) are identified that should benefit utilities and positively impact the operation of existing infrastructure as well as the design of new infrastructure. Further, the conclusions generated herein display high transferability to certain manufacturing processes, so it is anticipated that these findings will also benefit pumping applications outside of the water sector. Wastewater treatment is often the single largest expense at the local government level. Not surprisingly, significant research effort has been expended on examining the energy used in wastewater treatment. However, the energy used in wastewater collection systems remains underexplored despite significant potential for energy savings. Estimates place potential energy savings as high as 60% within wastewater collection, which, if applied across the United States, equates to the energy used by nearly 125,000 American homes. Employing three years of data from Renewable Water Resources (ReWa), the largest wastewater utility in the Upstate of South Carolina, this study aims to develop useful empirical equations that will allow utilities to efficiently evaluate the energy use and energy cost of their wastewater collection systems. ReWa's participation was motivated, in part, by their recent adoption of the United States Environmental Protection Agency "Effective Utility Strategies", within which exists a focus on energy management. The study presented herein identifies two primary variables related to the energy use and cost associated with wastewater collection: Specific Energy (Es) and Specific Cost (Cs). These two variables were found to rely primarily on the volume pumped by the individual pump stations and exhibited similar power functions for the three year
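
Since the abstract reports that specific energy follows a power function of pumped volume, a simple log-log fit illustrates the idea. The data and exponent below are synthetic stand-ins, not ReWa's pump-station records.

```python
# Sketch: fit a power law Es = a * V**b by linear regression in log-log space.
import numpy as np

rng = np.random.default_rng(5)
volume = rng.uniform(1e3, 1e6, 200)                        # pumped volume
es = 2.5 * volume**-0.2 * np.exp(rng.normal(0, 0.1, 200))  # specific energy

b, log_a = np.polyfit(np.log(volume), np.log(es), 1)
print("a =", np.exp(log_a), "b =", b)   # recovered power-law parameters
```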

  3. An Empirical Outdoor-to-Indoor Path Loss Model from below 6 GHz to cm-Wave Frequency Bands

    DEFF Research Database (Denmark)

    Rodriguez Larrad, Ignacio; Nguyen, Huan Cong; Kovács, István Z.;

    2016-01-01

This letter presents an empirical multi-frequency outdoor-to-indoor path loss model. The model is based on measurements performed on the exact same set of scenarios for different frequency bands ranging from traditional cellular allocations below 6 GHz (0.8, 2, 3.5 and 5.2 GHz), up to cm-wave frequency bands.

  4. EFFECT OF EMPIRICAL COEFFICIENTS ON SIMULATION IN TWO-SCALE SECOND-ORDER MOMENT PARTICLE-PHASE TURBULENCE MODEL

    Institute of Scientific and Technical Information of China (English)

    HU Chun-bo; ZENG Zhuo-xiong

    2006-01-01

A two-scale second-order moment two-phase turbulence model accounting for inter-particle collision is developed, based on the concepts of particle large-scale fluctuation due to turbulence and particle small-scale fluctuation due to collision. The proposed model is used to simulate gas-particle flows in a downer reactor. The computational results for both particle volume fraction and mean velocity are in agreement with the experimental results. After analyzing the effects of the empirical coefficients on the prediction results, we conclude that, within the admissible range of the empirical coefficients, the predictions do not reveal a large sensitivity to the coefficient values in the downer reactor, whereas a relatively large change of the constants has an important effect on the predictions.

  5. Empirical Evaluation of the Proposed eXScrum Model: Results of a Case Study

    CERN Document Server

    Qureshi, M Rizwan Jameel

    2012-01-01

Agile models promote fast development. XP and Scrum are the most widely used agile models. This paper investigates the phases of the XP and Scrum models in order to identify their potentials and drawbacks. The XP model has certain drawbacks, such as being unsuitable for maintenance projects and performing poorly on medium- and large-scale development projects. The Scrum model has certain limitations, such as a lack of engineering practices. Both XP and Scrum contain good features and strengths, but there are still possibilities for improving these models. The majority of software development companies are reluctant to switch from traditional methodologies to agile methodologies for the development of industrial projects. A fine integration of the software management of the Scrum model and the engineering practices of the XP model is very much required to accumulate the strengths and remove the limitations of both models. This is achieved by proposing an eXScrum model. The proposed model is validated by conducting a controlled...

  6. Evaluation of Empirical Data and Modeling Studies to Support Soil Vapor Intrusion Screening Criteria for Petroleum Hydrocarbon Compounds

    Science.gov (United States)

    This study is an evaluation of empirical data and select modeling studies of the behavior of petroleum hydrocarbon (PHC) vapors in subsurface soils and how they can affect subsurface-to-indoor air vapor intrusion (VI), henceforth referred to as petroleum vapor intrusion or “PVI” ...

  7. Empirical Modeling of Information Communication Technology Usage Behaviour among Business Education Teachers in Tertiary Colleges of a Developing Country

    Science.gov (United States)

    Isiyaku, Dauda Dansarki; Ayub, Ahmad Fauzi Mohd; Abdulkadir, Suhaida

    2015-01-01

    This study has empirically tested the fitness of a structural model in explaining the influence of two exogenous variables (perceived enjoyment and attitude towards ICTs) on two endogenous variables (behavioural intention and teachers' Information Communication Technology (ICT) usage behavior), based on the proposition of Technology Acceptance…

  8. Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members

    NARCIS (Netherlands)

    Rusman, Ellen; Van Bruggen, Jan; Valcke, Martin

    2009-01-01

    Rusman, E., Van Bruggen, J., & Valcke, M. (2009). Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members. Paper presented at the Trust Workshop at the Eighth International Conference on Autonomous Agents and Multiagent Systems

  9. An empirical model for trip distribution of commuters in the Netherlands: Transferability in time and space reconsidered.

    NARCIS (Netherlands)

    Thomas, T.; Tutert, S.I.A.

    2013-01-01

In this paper, we evaluate the distribution of commute trips in The Netherlands, to assess its transferability in space and time. We used Dutch Travel Surveys from 1995 and 2004–2008 to estimate the empirical distribution from a spatial interaction model as a function of travel time and distance. We f

  10. Empirical Bayes Point Estimates of True Score Using a Compound Binomial Error Model. Research Memorandum 74-11.

    Science.gov (United States)

    Kearns, Jack

    Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test.…
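
For illustration of the empirical Bayes idea (not the memorandum's compound binomial derivation), the classical special case is Kelley's linear shrinkage of observed scores toward the group mean. The reliability value and scores below are assumptions.

```python
# Sketch: Kelley-style empirical Bayes point estimate of true score,
# a simpler linear special case of the shrinkage the memorandum develops.
import numpy as np

rng = np.random.default_rng(6)
scores = rng.binomial(40, 0.7, size=300)   # observed scores on a 40-item test

mean = scores.mean()
reliability = 0.85                         # assumed test reliability
true_score_est = mean + reliability * (scores - mean)  # shrink toward the mean
print(true_score_est[:5])
```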

  11. Development of an empirical kinetic model for sonocatalytic process using neodymium doped zinc oxide nanoparticles.

    Science.gov (United States)

    Khataee, Alireza; Vahid, Behrouz; Saadi, Shabnam; Joo, Sang Woo

    2016-03-01

The degradation of Acid Blue 92 (AB92) solution was investigated using a sonocatalytic process with pure and neodymium (Nd)-doped ZnO nanoparticles. The nanoparticles were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). The 1% Nd-doped ZnO nanoparticles demonstrated the highest sonocatalytic activity for the treatment of AB92 (10 mg/L), with a degradation efficiency (DE%) of 86.20%, compared to pure ZnO (62.92%) and sonication alone (45.73%) after 150 min. The results reveal that the sonocatalytic degradation followed pseudo-first-order kinetics. An empirical kinetic model was developed using nonlinear regression analysis to estimate the pseudo-first-order rate constant (kapp) as a function of the operational parameters, including the initial dye concentration (5-25 mg/L), doped-catalyst dosage (0.25-1 g/L), ultrasonic power (150-400 W), and dopant content (1-6% mol). The results from the kinetic model were consistent with the experimental results (R² = 0.990). Moreover, DE% increases with the addition of potassium periodate, peroxydisulfate, and hydrogen peroxide as radical enhancers, which generate more free radicals, whereas the addition of chloride, carbonate, sulfate, and t-butanol as radical scavengers reduces DE%. Suitable reusability of the doped sonocatalyst was demonstrated over several consecutive runs. Some of the produced intermediates were also detected by GC-MS analysis. A phytotoxicity test using the Lemna minor (L. minor) plant confirmed considerable toxicity removal from the AB92 solution after the treatment process.
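
The pseudo-first-order fit named in the abstract is a one-parameter-family nonlinear regression, sketched below. The decay data are simulated, not the AB92 measurements, and the rate value is an assumption.

```python
# Sketch: estimate k_app from C(t) = C0 * exp(-k_app * t) by curve fitting.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 150, 16)   # minutes
c0_true, k_true = 10.0, 0.013  # mg/L, 1/min (assumed)
noise = 1 + np.random.default_rng(7).normal(0, 0.02, t.size)
c = c0_true * np.exp(-k_true * t) * noise   # simulated concentrations

def model(t, c0, k):
    return c0 * np.exp(-k * t)

popt, pcov = curve_fit(model, t, c, p0=(10.0, 0.01))
print("k_app ≈ %.4f 1/min" % popt[1])
```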

  12. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

    Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.

13. Matrimonios mixtos intraeuropeos: un modelo empírico (Intra-European intermarriage: an empirical model)

    Directory of Open Access Journals (Sweden)

    Alaminos Chica, Antonio Francisco

    2008-06-01

Full Text Available Resumen (translated from the Spanish): The heterogeneity encountered when studying intercultural or mixed couples goes beyond differences in sociocultural origin; factors such as the role each partner adopts within the couple (for example, who contributes more economically), status, educational level, etc., also intervene. This article proposes an empirical model that shows the effect of a set of variables expressing social circumstances on the decision to form an interculturally mixed marriage, as well as the consequences for the individual's social life. Abstract: Intercultural or mixed marriages depend upon several factors, not only the different cultural origins. Other determinants, such as the role of the partner (i.e., economic contribution, status, educational level, etc.) or the type of family (modern, traditional, etc.), influence the outcomes. This paper proposes an empirical model for studying intra-European mixed marriages.

  14. Empirical modeling of plasma clouds produced by the Metal Oxide Space Clouds experiment

    Science.gov (United States)

    Pedersen, Todd R.; Caton, Ronald G.; Miller, Daniel; Holmes, Jeffrey M.; Groves, Keith M.; Sutton, Eric

    2017-05-01

The Advanced Research Projects Agency (ARPA) Long-Range Tracking And Instrumentation Radar (ALTAIR) at Kwajalein Atoll was used in incoherent scatter mode to measure plasma densities within two artificial clouds created by the Air Force Research Laboratory (AFRL) Metal Oxide Space Clouds (MOSC) experiment in May 2013. Optical imager, ionosonde, and ALTAIR measurements were combined to create 3-D empirical descriptions of the plasma clouds as a function of time, which match the radar measurements to within 15%. The plasma clouds closely track the location of the optical clouds, and the best fit plasma cloud widths are generally consistent with isotropic neutral diffusion. Cloud plasma densities decreased as a power of time, with exponents between -0.5 and -1.0, much more slowly than the -1.5 predicted by diffusion. These exponents and estimates of total ion number from integration through the model volume are consistent with a scenario of slow ionization and a gradually increasing total number of ions with time, reaching a net ionization fraction of 20% after approximately half an hour. These robust representations of the plasma density are being used to study impacts of the artificial clouds on the dynamics of the background ionosphere and on RF propagation.
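
The empirical description suggested by the abstract, a diffusively spreading cloud whose peak density decays as a power of time, can be written as a small function. All parameter values below are illustrative assumptions, not the fitted MOSC values.

```python
# Sketch: Gaussian cloud with diffusive width growth and power-law peak decay.
import numpy as np

def cloud_density(r_km, t_s, n0=1e12, t0=10.0, p=0.75, d_km2_s=0.5):
    """Electron density at radius r (km) and time t (s) after release.

    Peak decays as (t/t0)**-p with p between 0.5 and 1.0 (as reported);
    width grows with isotropic diffusion, sigma^2 = 2*D*t.
    """
    sigma2 = 2.0 * d_km2_s * t_s
    return n0 * (t_s / t0) ** (-p) * np.exp(-r_km**2 / (2.0 * sigma2))

print(cloud_density(r_km=5.0, t_s=600.0))
```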

  15. Metabolic cost of neuronal information in an empirical stimulus-response model.

    Science.gov (United States)

    Kostal, Lubomir; Lansky, Petr; McDonnell, Mark D

    2013-06-01

The limits on the maximum information that can be transferred by single neurons may help us to understand how sensory and other information is being processed in the brain. According to the efficient-coding hypothesis (Barlow, Sensory Communication, MIT Press, Cambridge, 1961), neurons are adapted to the statistical properties of the signals to which they are exposed. In this paper we employ methods of information theory to calculate, both exactly (numerically) and approximately, the ultimate limits on reliable information transmission for an empirical neuronal model. We couple information transfer with the metabolic cost of neuronal activity and determine the optimal information-to-metabolic cost ratios. We find that the optimal input distribution is discrete, with only six points of support, both with and without a metabolic constraint. However, we also find that many different input distributions achieve mutual information close to capacity, which implies that the precise structure of the capacity-achieving input is of lesser importance than the value of capacity.

  16. New insight into motor adaptation to pain revealed by a combination of modelling and empirical approaches.

    Science.gov (United States)

    Hodges, P W; Coppieters, M W; MacDonald, D; Cholewicki, J

    2013-09-01

Movement changes in pain. Unlike the somewhat stereotypical response of limb muscles to pain, trunk muscle responses are highly variable when challenged by pain in that region. This has led many to question the existence of a common underlying theory to explain the adaptation. Here, we tested the hypotheses that (1) adaptation in muscle activation in acute pain leads to enhanced spine stability, despite variation in the pattern of muscle activation changes; and (2) individuals would use a similar 'signature' pattern for tasks with different mechanical demands. In 17 healthy individuals, electromyography recordings were made from a broad array of anterior and posterior trunk muscles while participants moved slowly between trunk flexion and extension with and without experimentally induced back pain. Hypotheses were tested by estimating spine stability (Stability Index) with an electromyography-driven spine model and analysis of individual and overall (net) adaptations in muscle activation. The Stability Index increased during pain despite variation in the pattern of muscle activity across individuals. For most, the adaptation was similar between movement directions despite opposite movement demands. These data provide the first empirical confirmation that, in most individuals, acute back pain leads to increased spinal stability and that the pattern of muscle activity is not stereotypical, but instead involves an individual-specific response to pain. This adaptation is likely to provide short-term benefit to enhance spinal protection, but could have long-term consequences for spinal health. © 2013 European Federation of International Association for the Study of Pain Chapters.

  17. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    Science.gov (United States)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-12-01

Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required in order to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics simulations augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for the advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.

  18. A simple mathematical model to determine the ideal empirical antibiotic therapy for bacteremic patients

    Directory of Open Access Journals (Sweden)

    Felipe F. Tuon

    2014-08-01

Full Text Available Background Local epidemiological data are always helpful when choosing the best antibiotic regimen, but the task is more complex than it seems, as it may require the analysis of multiple combinations. The aim of this study was to demonstrate a simplified mathematical calculation to determine the most appropriate antibiotic combination in a scenario where monotherapy is doomed to failure. Methods The susceptibility pattern of 11 antibiotics from 216 positive blood cultures from January 2012 to January 2013 was analyzed based on local policy. The length of hospitalization before bacteremia and the unit (ward or intensive care unit) were the analyzed variables. Bacteremia was classified as early, intermediate or late. The antibiotics were combined according to the combination model presented herein. Results A total of 55 possible combinations were found when combining the antibiotics two at a time, 165 when combining three at a time, and 330 when combining four at a time. In the intensive care unit, monotherapy never reached 80% susceptibility. In the ward, only carbapenems covered more than 90% of early bacteremia. Only combinations of three drugs reached a susceptibility rate higher than 90% anywhere in the hospital. Several regimens using four drugs combined reached 100% susceptibility. Conclusions An association of three drugs is necessary for adequate coverage in the empirical treatment of bacteremia in both the intensive care unit and the ward.
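
The combinatorial calculation behind these counts is easy to reproduce: for each k-drug combination, compute the fraction of isolates susceptible to at least one drug in the combination. The susceptibility matrix below is random, not the study's blood-culture data; note that choosing 2, 3 or 4 of 11 antibiotics yields exactly 55, 165 and 330 combinations, matching the abstract.

```python
# Sketch: coverage of every k-drug combination over a susceptibility matrix.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
n_isolates, n_drugs = 216, 11
suscept = rng.random((n_isolates, n_drugs)) < 0.6   # True = susceptible

for k in (2, 3, 4):
    combos = list(combinations(range(n_drugs), k))
    # coverage = share of isolates susceptible to >= 1 drug in the combo
    cover = {c: suscept[:, c].any(axis=1).mean() for c in combos}
    best = max(cover, key=cover.get)
    print(k, len(combos), "best coverage: %.1f%%" % (100 * cover[best]))
```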

  19. An Empirical Model for Halo Evolution and Global Gas Dynamics of the Fornax Dwarf Spheroidal Galaxy

    CERN Document Server

    Yuan, Zhen; Jing, Y P

    2015-01-01

We present an empirical model for the halo evolution and global gas dynamics of Fornax, the brightest Milky Way (MW) dwarf spheroidal galaxy (dSph). Assuming a global star formation rate ψ(t) = λ_* [M_g(t)/M_⊙]^α consistent with observations of star formation in nearby galaxies, and using the data on Fornax's ψ(t), we derive the evolution of the total mass M_g(t) of cold gas in Fornax's star-forming disk and the rate ΔF(t) of net gas flow to or from the disk. We identify the onset of the transition in ΔF(t) from a net inflow to a net outflow as the time t_sat at which the Fornax halo became an MW satellite and estimate the evolution of its total mass M_h(t) at t
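
The inversion implied by the abstract can be sketched directly: given ψ(t) and the power-law star formation relation, recover M_g(t) and a net flow rate. The star formation history, constants, and the simplified gas balance below (which ignores stellar mass return) are assumptions, not the paper's calibration.

```python
# Sketch: invert psi = lam * (M_g/M_sun)**alpha for M_g(t), then estimate
# the net gas flow Delta F(t) ≈ dM_g/dt + psi (stellar mass return ignored).
import numpy as np

t = np.linspace(0.1, 13.0, 200)                   # Gyr
psi = 1e-3 * np.exp(-((t - 4.0) / 3.0) ** 2)      # toy SFH, M_sun/yr
lam, alpha = 2.5e-12, 1.4                         # assumed constants

m_gas = (psi / lam) ** (1.0 / alpha)              # cold gas mass, M_sun
dmdt = np.gradient(m_gas, t * 1e9)                # per year
delta_f = dmdt + psi                              # net inflow (+) / outflow (-)
print(delta_f[:3])
```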

  20. CO2 capture in amine solutions: modelling and simulations with non-empirical methods.

    Science.gov (United States)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-12-21

Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that do not allow exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required in order to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics simulations augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for the advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.

  1. Sustainable Development of Export: Theoretical Meaning, Evaluation Model, and Empirical Research

    Institute of Scientific and Technical Information of China (English)

    Zhou Nianli

    2008-01-01

With the influence of the human trade doctrine in the field of international trade, almost all countries have paid more attention to the sustainable development of international trade. This article chose the sustainable development of exports as its research object. On the basis of an analysis of the theoretical connotation of export sustainable development, the article establishes an evaluation index system and an evaluation model of the export sustainable development level, and finally carries out empirical research on China. The results indicate that the comprehensive level of export sustainable development in China showed a rising tendency from 1985 to 2003, and that the export sustainable development level of China in these years can be divided into four grades: excellent, good, moderate and poor. In most years, the social and economic benefits of export were obtained at the cost of the deterioration of the environment and the depletion of resources, and the economic profit of export did not increase with the enlargement of the export scale because of the deterioration of the terms of trade. Therefore, China should be careful about the problem of poverty accompanying the increase of export.

  2. Regionally Adaptable Ground Motion Prediction Equation (GMPE) from Empirical Models of Fourier and Duration of Ground Motion

    Science.gov (United States)

    Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin

    2016-04-01

The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications like site-specific (e.g., critical facility) hazard, ground motions obtained from the GMPEs need to be adjusted/corrected to the particular site/site-condition under investigation. This study presents a complete framework for developing a response spectral GMPE within which the issue of adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process: the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvto) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to the one described in Edwards and Faeh (2013). Comparison of the median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
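
The RVT step that turns a FAS plus a duration into a response-spectral ordinate can be sketched in a Boore-style simplified form: compute spectral moments of the oscillator response, then multiply the RMS by a Cartwright–Longuet-Higgins-type peak factor. The toy spectrum and the use of a single duration (no RMS-duration correction) are simplifying assumptions, not the paper's calibrated model.

```python
# Simplified RVT sketch: peak oscillator response from FAS and duration.
import numpy as np

def rvt_psa(freqs, fas, duration, f0, damping=0.05):
    """Approximate peak response of a SDOF oscillator (f0, damping)."""
    h2 = f0**4 / ((f0**2 - freqs**2) ** 2 + (2 * damping * f0 * freqs) ** 2)
    y2 = h2 * fas**2                          # oscillator response power
    m0 = 2 * np.trapz(y2, freqs)              # spectral moments
    m2 = 2 * np.trapz((2 * np.pi * freqs) ** 2 * y2, freqs)
    y_rms = np.sqrt(m0 / duration)
    n_z = max(duration * np.sqrt(m2 / m0) / np.pi, 1.33)  # zero crossings
    peak_factor = np.sqrt(2 * np.log(n_z)) + 0.577 / np.sqrt(2 * np.log(n_z))
    return peak_factor * y_rms

freqs = np.linspace(0.1, 50, 2000)
fas = 0.01 * freqs / (1 + (freqs / 10) ** 2)  # toy Fourier amplitude spectrum
print(rvt_psa(freqs, fas, duration=10.0, f0=2.0))
```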

  3. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H⁻ sources), together with some remarks on beam transport.

  4. Semi-empirical correction of ab initio harmonic properties by scaling factors: a validated uncertainty model for calibration and prediction

    CERN Document Server

    Pernot, Pascal

    2010-01-01

Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio harmonic properties (e.g. vibrational frequencies and zero-point energies). Particular attention is devoted to the evaluation of scaling factor uncertainty and to its effect on the accuracy of scaled properties. We argue that in most cases of interest the standard calibration model is not statistically valid, in the sense that it is not able to fit experimental calibration data within their uncertainty limits. This impairs any attempt to use the results of the standard model for uncertainty analysis and/or uncertainty propagation. We propose to include a stochastic term in the calibration model to account for model inadequacy. This new model is validated in the Bayesian Model Calibration framework. We provide explicit formulae for prediction uncertainty in typical limit cases: large and small calibration sets of data with negligible measurement uncertainty, and datasets with la...
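
The standard calibration the paper revisits reduces to a one-parameter least-squares problem: the scaling factor lambda minimizing the sum of squared residuals between experimental and scaled calculated frequencies. A minimal sketch follows, on made-up frequencies; the Bayesian treatment with a stochastic inadequacy term is not reproduced here.

```python
# Sketch: standard least-squares scaling factor for harmonic frequencies,
# lambda = sum(nu_exp * nu_calc) / sum(nu_calc**2).
import numpy as np

rng = np.random.default_rng(12)
nu_calc = rng.uniform(200, 3500, 50)   # harmonic frequencies, cm^-1 (made up)
nu_exp = 0.96 * nu_calc * (1 + rng.normal(0, 0.01, 50))

lam = np.sum(nu_exp * nu_calc) / np.sum(nu_calc**2)
residuals = nu_exp - lam * nu_calc
print("lambda = %.4f, rms residual = %.1f cm^-1" % (lam, residuals.std()))
```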

  5. A Hybrid Model Based on Ensemble Empirical Mode Decomposition and Fruit Fly Optimization Algorithm for Wind Speed Forecasting

    Directory of Open Access Journals (Sweden)

    Zongxi Qu

    2016-01-01

Full Text Available As a type of clean and renewable energy, the superiority of wind power has increasingly captured the world's attention. Reliable and precise wind speed prediction is vital for wind power generation systems; thus, a more effective and precise prediction model is essentially needed in the field of wind speed forecasting. Most previous forecasting models could adapt to various wind speed series data; however, these models ignored the importance of data preprocessing and model parameter optimization. In view of their importance, a novel hybrid ensemble learning paradigm is proposed. In this model, the original wind speed data are first decomposed into a finite set of signal components by ensemble empirical mode decomposition; each signal is then predicted by several artificial intelligence models whose parameters are optimized by the fruit fly optimization algorithm, and the final prediction values are obtained by reconstructing the refined series. To estimate the forecasting ability of the proposed model, 15 min wind speed data from wind farms in the coastal areas of China were used as a case study. The empirical results show that the proposed hybrid model is superior to some existing traditional forecasting models in terms of forecast performance.
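
The decompose-predict-reconstruct paradigm can be sketched as below. This assumes the PyEMD package ("EMD-signal" on PyPI) for the ensemble empirical mode decomposition, and substitutes a plain autoregressive forecaster for the paper's FOA-optimized artificial intelligence models; the wind series is synthetic.

```python
# Skeleton of EEMD decomposition + per-component forecasting + reconstruction.
import numpy as np
from PyEMD import EEMD                        # pip install EMD-signal (assumed)
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(9)
wind = 8 + 2 * np.sin(np.arange(600) / 20) + rng.normal(0, 0.8, 600)

imfs = EEMD(trials=50).eemd(wind)             # finite set of signal components

horizon = 24
forecast = np.zeros(horizon)
for imf in imfs:                              # predict each component separately
    fit = AutoReg(imf, lags=10).fit()         # stand-in for the FOA-tuned models
    forecast += fit.predict(start=len(imf), end=len(imf) + horizon - 1)
print(forecast[:5])                           # reconstructed wind-speed forecast
```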

  6. model

    African Journals Online (AJOL)

the neural construction of individual and communal identities in ... occurs, including models based on information processing, ... Applying the DSM descriptive approach to dissociation in the ... a personal, narrative path that connects personal to ethnic ..... managed the problem in the context of the community, using a.

  7. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure

  8. Quantifying uncertainty in climatological fields from GPS radio occultation: an empirical-analytical error model

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the small atmospheric variability there, the sampling error is found to be smallest at low latitudes equatorward of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
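
The piecewise vertical structure described above (constant around the tropopause, exponential growth above, inverse power-law growth below) translates directly into a small function. The numeric parameters below are illustrative placeholders, not the paper's fitted coefficients.

```python
# Sketch of the analytical vertical error model: e0 inside [z_bot, z_top],
# exponential increase above, inverse height power-law increase below.
import numpy as np

def climatological_error(z_km, e0=0.15, z_bot=8.0, z_top=18.0,
                         h_scale=10.0, power=2.0):
    """Error profile e(z) for altitudes z in km (all parameters assumed)."""
    z = np.asarray(z_km, dtype=float)
    err = np.full_like(z, e0)
    above = z > z_top
    below = z < z_bot
    err[above] = e0 * np.exp((z[above] - z_top) / h_scale)  # exponential above
    err[below] = e0 * (z_bot / z[below]) ** power           # power-law below
    return err

print(climatological_error([5.0, 12.0, 30.0]))
```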

  9. Cycling Empirical Antibiotic Therapy in Hospitals: Meta-Analysis and Models

    Science.gov (United States)

    Abel, Sören; Viechtbauer, Wolfgang; Bonhoeffer, Sebastian

    2014-01-01

    The rise of resistance together with the shortage of new broad-spectrum antibiotics underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies came to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and find that cycling reduced the incidence rate/1000 patient days of both total infections by 4.95 [9.43–0.48] and resistant infections by 7.2 [14.00–0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate metaregressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first line treatment is ineffective, which we call “adjustable cycling/mixing”. In concordance with earlier theoretical studies, we find that in strict regimens, cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that “adjustable cycling” is especially useful to suppress emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that too long cycling periods are detrimental. Our results suggest that

  10. SWIFT: Semi-empirical and numerically efficient stratospheric ozone chemistry for global climate models

    OpenAIRE

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2015-01-01

The SWIFT model is a fast yet accurate chemistry scheme for calculating the chemistry of stratospheric ozone. It is mainly intended for use in Global Climate Models (GCMs), Chemistry Climate Models (CCMs) and Earth System Models (ESMs). For reasons of computing time, these models often do not employ full stratospheric chemistry modules, but use prescribed ozone instead. This can lead to an insufficient representation of the interactions between stratosphere and troposphere. The SWIFT stratospheric ozone chem...

  11. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2017-08-01

With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, whether for building integrity or for personal security, grows in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.

  12. EFFECT OF DIGITAL ELEVATION MODEL RESOLUTION ON EMPIRICAL ESTIMATION OF SOIL LOSS AND SEDIMENT TRANSPORT WITH GIS

    Institute of Scientific and Technical Information of China (English)

    Simon WU; Jonathan LI; Gordon HUANG; G.M.ZENG

    2004-01-01

    The horizontal accuracy of topographic data represented by digital elevation model (DEM) resolution brings about uncertainties in landscape process modeling with raster GIS. This paper presents a study on the effect of topographic variability on cell-based empirical estimation of soil loss and sediment transport. An original DEM of 10m resolution for a case watershed was re-sampled to three realizations of higher grid sizes for a comparative examination. Equations based on the USLE are applied to the watershed to calculate soil loss from each cell and total sediment transport to streams. The study found that the calculated total soil loss from the watershed decreases with the increasing DEM resolution with a linear correlation as spatial variability is reduced by cell aggregation. The USLE topographic factors (LS) extracted from applied DEMs represent spatial variability, and determine the estimations as shown in the modeling results. The commonly used USGS 30m DEM appears to be able to reflect essential spatial variability and suitable for the empirical estimation. The appropriateness of a DEM resolution is dependent upon specific landscape characteristics, applied model and its parameterization. This work attempts to provide a general framework for the research in the DEM-based empirical modeling.
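
The cell-based empirical estimate discussed here follows the USLE factor product. A minimal raster sketch is given below; the factor grids are random placeholders, and the delivery-ratio treatment of sediment transport is a simplifying assumption rather than the paper's method. Coarsening the DEM would reduce the variability of the LS grid and hence the computed totals, which is the effect the study quantifies.

```python
# Sketch: per-cell USLE soil loss A = R * K * LS * C * P on raster grids.
import numpy as np

rng = np.random.default_rng(10)
shape = (100, 100)                  # raster cells at a given DEM resolution
R = np.full(shape, 300.0)           # rainfall erosivity
K = rng.uniform(0.2, 0.4, shape)    # soil erodibility
LS = rng.uniform(0.5, 8.0, shape)   # topographic factor, derived from the DEM
C = rng.uniform(0.01, 0.3, shape)   # cover management
P = np.ones(shape)                  # support practice

A = R * K * LS * C * P              # soil loss per cell
sdr = 0.3                           # assumed sediment delivery ratio
print("total soil loss:", A.sum(), " sediment to streams:", sdr * A.sum())
```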

  13. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    Science.gov (United States)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

    Hard turning is increasingly employed in machining to replace the time-consuming conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most of them apply to a particular work-tool-environment combination; no aggregate model has been developed that can predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different workpiece hardnesses (HRC40, HRC48 and HRC56) when turning with coated carbide inserts of different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other models, this model uses dummy variables along with the base empirical equation to capture the effect of any change in the input conditions on the response. The base empirical equation for principal flank wear is formulated by fitting an exponential association function to the experimental results. The coefficient of a dummy variable reflects the shift of the response from one set of machining conditions to another, and is determined by simple linear regression. The independent cutting parameters (speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good agreement with experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for the stated conditions.
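
    The exponential association function named above is presumably of the form VB(t) = a(1 - e^(-bt)); a hedged sketch of how a dummy-variable coefficient could shift such a base curve between machining conditions (all numbers illustrative):

```python
import numpy as np

def flank_wear(t, a, b, shift):
    """Base exponential-association wear curve plus a dummy-variable shift.

    VB(t) = a * (1 - exp(-b * t)) + shift, where `shift` is the regression
    coefficient of a 0/1 dummy encoding a change of work-tool-environment set.
    """
    return a * (1.0 - np.exp(-b * t)) + shift

t = np.linspace(0, 40, 9)                             # machining time, min (illustrative)
vb_dry = flank_wear(t, a=0.35, b=0.08, shift=0.00)    # reference condition
vb_hpc = flank_wear(t, a=0.35, b=0.08, shift=-0.05)   # dummy: high-pressure coolant
print(np.round(vb_dry, 3))
print(np.round(vb_hpc, 3))
```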

  14. Empirical modeling of single-wake advection and expansion using full-scale pulsed lidar-based measurements

    DEFF Research Database (Denmark)

    Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels;

    2015-01-01

    In the present paper, single-wake dynamics have been studied both experimentally and numerically. The use of pulsed lidar measurements allows for validation of basic dynamic wake meandering modeling assumptions. Wake center tracking is used to estimate the wake advection velocity experimentally and to obtain an estimate of the wake expansion in a fixed frame of reference. A comparison shows good agreement between the measured average expansion and the Computational Fluid Dynamics (CFD) large eddy simulation–actuator line computations. Frandsen’s expansion model seems to predict the wake expansion fairly well in the far wake but lacks accuracy in the outer region of the near wake. An empirical relationship, relating maximum wake induction and wake advection velocity, is derived and linked to the characteristics of a spherical vortex structure. Furthermore, a new empirical model for single-wake...

  15. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

    Empirical melt (EM) models are often preferred to surface energy balance (SEB) models for calculating melt amounts of snow and ice in the hydrological modelling of high-elevation catchments. The most common reasons for this choice are that, in comparison to SEB models, EM models require less meteorological data, complexity and computational cost. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot be easily reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model at dry high-elevation sites where sublimation amounts, which are not explicitly accounted for by the EM model, represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWS), collected during the 2013-14 ablation season at elevations between 3466 and 4775 m asl, on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied to each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, the results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we

  16. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    Land Evaluation (LE) comprises the procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. In the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome of this situation is great difficulty in spatially extrapolating the LE results and rigidity of the system. Modern techniques instead rely on the application of mechanistic and quantitative simulation models that allow a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules into the LE procedure may make it easier both to extend the results spatially and to change the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other hand, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario, the LE expert is asked to choose the best LE methodology considering costs, complexity of the procedure and benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 employed methods ranged from standard LE approaches to

  17. Empirical Evaluation of the Proposed eXSCRUM Model-Results of a Case Study

    Directory of Open Access Journals (Sweden)

    M Rizwan Jameel Qureshi

    2011-05-01

    Full Text Available Agile models promote fast development. XP and Scrum are the most widely used agile models. This paper investigates the phases of the XP and Scrum models in order to identify their potentials and drawbacks. The XP model has certain drawbacks, such as being unsuitable for maintenance projects and performing poorly on medium- and large-scale development projects. The Scrum model has certain limitations, such as a lack of engineering practices. Both XP and Scrum contain good features and strengths, but there is still room for improvement in these models. The majority of software development companies are reluctant to switch from traditional methodologies to agile methodologies for the development of industrial projects. A fine integration of the software management of the Scrum model and the engineering practices of the XP model is required to accumulate the strengths and remove the limitations of both models. This is achieved by proposing an eXScrum model. The proposed model is validated by conducting a controlled case study. The results of the case study show that the proposed integrated eXScrum model enriches the potentials of both the XP and Scrum models and eliminates their drawbacks.

  18. DC thermal modeling of CNTFETs based on a semi-empirical approach

    CERN Document Server

    Marani, Roberto

    2015-01-01

    A new DC thermal model of Carbon Nanotube Field Effect Transistors (CNTFETs) is proposed. The model is based on a number of fitting parameters whose dependence on bias conditions is described by third-order polynomials. The model includes three thermal parameters describing CNTFET behaviour in terms of saturation drain current, threshold voltage and the M exponent in the knee region, as functions of temperature. To confirm the validity of the proposed thermal model, simulations were performed under very different thermal conditions, obtaining I-V characteristics perfectly coincident with those of other models. The very low CPU calculation time makes the proposed model particularly suitable for implementation in CAD applications.
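
    As an illustration of the stated fitting strategy (model parameters expressed as third-order polynomials of the bias), one might fit a cubic with numpy; the measurements below are invented:

```python
import numpy as np

# Invented example: a fitting parameter measured at several drain biases.
vds   = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.3])     # V
param = np.array([0.82, 0.97, 1.10, 1.28, 1.35, 1.41])

# Third-order polynomial dependence on bias, as in the model description.
coeffs = np.polyfit(vds, param, deg=3)
p = np.poly1d(coeffs)
print("fitted parameter at Vds = 0.6 V:", p(0.6))
```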

  19. An empirical model simulating diurnal and seasonal CO2 flux for diverse vegetation types and climate conditions

    Directory of Open Access Journals (Sweden)

    A. D. Richardson

    2009-04-01

    Full Text Available We present an empirical model for the estimation of diurnal variability in net ecosystem CO2 exchange (NEE) in various biomes. The model is based on the use of a simple saturating function for the photosynthetic response of the canopy, and was constructed using the AmeriFlux network dataset, which contains continuous eddy covariance CO2 flux data obtained at 24 ecosystem sites from seven biomes. The physiological parameters of maximum CO2 uptake rate by the canopy and ecosystem respiration have biome-specific responses to environmental variables. The model uses a simplified empirical expression of the seasonal variability in biome-specific physiological parameters based on air temperature, vapor pressure deficit, and annual precipitation. The model was validated using measurements of NEE derived from 10 AmeriFlux and four AsiaFlux ecosystem sites. The predicted NEE had reasonable magnitude and seasonal variation and gave adequate timing for the beginning and end of the growing season; the model explained 83–95% and 76–89% of the observed diurnal variations in NEE for the AmeriFlux and AsiaFlux validation sites, respectively. The model, however, worked less satisfactorily in two deciduous broadleaf forests, a grassland, a savanna, and a tundra site, where leaf area index changed rapidly. These results suggest that including additional plant physiological parameters may improve the model's simulation performance across biomes.

  20. DEVELOPMENT OF A NOVEL EMPIRICAL MODEL TO ESTIMATE THE KRAFT PULP YIELD OF FAST-GROWING EUCALYPTUS

    Directory of Open Access Journals (Sweden)

    Jing Liu,

    2012-01-01

    Full Text Available In this study, several kraft pulps were produced by kraft pulping of fast-growing Eucalyptus over a wide range of cooking conditions. The dependences between pulp yields and some pulp properties, namely kappa number, HexA content, and cellulose viscosity, were investigated in detail. It was found that kraft pulp yields decreased linearly with the reduction of HexA-free kappa number in two distinct stages, with a transition point at a measured pulp yield of 48.7%. A similar relationship between pulp yield and HexA was also found, with a transition point at a HexA content of 67 μmol/g. Moreover, the logarithm of pulp viscosity was linearly proportional to the reduction of lignin-free pulp yield. A novel empirical model was then developed based on these findings. The parameters of this empirical model were calculated by least-squares estimation using the experimental data from active alkali values of 13.2, 14.7 and 17.8. Another data set was used to verify the effectiveness of this model in predicting pulp yields. Finally, a good agreement between experimental and fitted data (a linear regression coefficient of 90.59%) was obtained, which indicated that the kraft pulp yield of fast-growing Eucalyptus can be accurately predicted by this novel empirical model.

  1. Semi-empirical models for chlorine activation and ozone depletion in the Antarctic stratosphere: proof of concept

    Directory of Open Access Journals (Sweden)

    P. E. Huck

    2012-10-01

    Full Text Available Two semi-empirical models were developed for the Antarctic stratosphere to relate the shift of species within total chlorine (Cly = HCl + ClONO2 + HOCl + 2 × Cl2 + 2 × Cl2O2 + ClO + Cl) into the active forms (here: ClOx = 2 × Cl2O2 + ClO), and to relate the rate of ozone destruction to ClOx. These two models provide a fast and computationally inexpensive way to describe the inter- and intra-annual evolution of ClOx and ozone mass deficit (OMD) in the Antarctic spring. The models are based on the underlying physics/chemistry of the system and capture the key chemical and physical processes in the Antarctic stratosphere that determine the interaction between climate change and Antarctic ozone depletion. They were developed considering bulk effects of chemical mechanisms for the duration of the Antarctic vortex period and quantities averaged over the vortex area. The model equations were regressed against observations of daytime ClO and OMD, providing a set of empirical fit coefficients. Both semi-empirical models are able to explain much of the intra- and inter-annual variability observed in daily ClOx and OMD time series. This proof-of-concept paper outlines the semi-empirical approach to describing the evolution of Antarctic chlorine activation and ozone depletion.

  2. Efficiency test of modeled empirical equations in predicting soil loss from ephemeral gully erosion around Mubi, Northeast Nigeria

    Directory of Open Access Journals (Sweden)

    Ijasini John Tekwa

    2016-03-01

    Full Text Available A field study was carried out to assess soil loss from ephemeral gully (EG) erosion at 6 different locations (Digil, Vimtim, Muvur, Gella, Lamorde and Madanya) around the Mubi area between April 2008 and October 2009. Each location consisted of 3 watershed sites from which data were collected. EG shape, land use, and conservation practices were noted, while EG length, width, and depth were measured. Physico-chemical properties of the soils were studied in the field and laboratory. Soil loss was both measured and predicted using modeled empirical equations. Results showed that the soils are heterogeneous and lie on flat to hilly topographies with sparse grass, shrub and tree vegetation. The soil texture was dominated by sand fractions, with considerable silt and clay contents. The empirical soil loss estimates were generally related to the measured soil loss, and the predictions were widely reliable at all sites, regardless of season. The measured and empirical aggregate soil losses were more closely related in terms of volume of soil loss (VSL, r2=0.93) and mass of soil loss (MSL, r2=0.92) than area of soil loss (ASL, r2=0.27). The empirical estimates of VSL and MSL were consistently higher at Muvur (less vegetation) and lower at Madanya and Gella (denser vegetation) in both years. The maximum efficiency (Mse) of the empirical equation in predicting ASL was between 1.41 (Digil) and 89.07 (Lamorde), while in terms of VSL prediction efficiency the Mse was lowest at Madanya (2.56) and highest at Vimtim (15.66). The Mse also ranged from 1.84 (Madanya) to 15.74 (Vimtim) for the MSL predictions. These results led to the recommendation that soil conservationists, farmers, and private and/or government agencies implement the empirical model in erosion studies around the Mubi area.

  3. Improving the reliability of seasonal climate forecasts through empirical downscaling and multi-model considerations; presentation

    CSIR Research Space (South Africa)

    Landman, WA

    2012-11-01

    Full Text Available -forecasts) have been generated by a statistical model and by state-of-the-art fully coupled ocean-atmosphere general circulation models. Since forecast users generally require well-calibrated probability forecasts we employ a model output statistics approach...

  4. Empirical analysis of cascade deformable models for multi-view face detection

    NARCIS (Netherlands)

    Orozco, J.; Martinez, B.; Pantic, M.

    2013-01-01

    In this paper, we present a face detector based on Cascade Deformable Part Models (CDPM) [1]. Our model is learnt from partially labelled images using Latent Support Vector Machines (LSVM). Recently, Zhu et al. [2] proposed a Tree Structure Model for multi-view face detection trained with facial landm

  5. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    Science.gov (United States)

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
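
    A small sketch of why a plain Poisson is inadequate for such data: simulating zero-inflated counts and checking the variance-to-mean ratio (parameters invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n, pi_zero, lam = 5000, 0.4, 3.0   # invented: 40% structural zeros, Poisson mean 3

# Zero-inflated Poisson: with prob pi_zero the count is a structural zero,
# otherwise it is drawn from Poisson(lam).
structural_zero = rng.random(n) < pi_zero
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

print("mean:", counts.mean())
print("variance:", counts.var())          # variance > mean => overdispersed
print("share of zeros:", (counts == 0).mean())
```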

  6. The EZ diffusion model provides a powerful test of simple empirical effects

    NARCIS (Netherlands)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59–108, 1978), which has been shown to account for data
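
    From memory of the closed-form EZ equations in Wagenmakers et al. (2007), a sketch (not the authors' code; pc must not be exactly 0, 0.5 or 1):

```python
import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates (drift v, boundary a, nondecision Ter)
    from proportion correct, correct-RT variance and mean, after Wagenmakers et al."""
    L = np.log(pc / (1.0 - pc))                       # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25               # drift rate
    a = s**2 * L / v                                  # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - np.exp(y)) / (1.0 + np.exp(y))
    return v, a, mrt - mdt                            # Ter = MRT - mean decision time

print(ez_diffusion(pc=0.85, vrt=0.12, mrt=0.55))      # illustrative values
```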

  7. Process-based soil erodibility estimation for empirical water erosion models

    Science.gov (United States)

    A variety of modeling technologies exist for water erosion prediction, each with specific parameters. It is of interest to scrutinize the parameters of a particular model from the point of view of their compatibility with the datasets of other models. In this research, functional relationships between soil erodibilit...

  8. Improving model fidelity and sensitivity for complex systems through empirical information theory

    Science.gov (United States)

    Majda, Andrew J.; Gershgorin, Boris

    2011-01-01

    In many situations in contemporary science and engineering, the analysis and prediction of crucial phenomena occur often through complex dynamical equations that have significant model errors compared with the true signal in nature. Here, a systematic information theoretic framework is developed to improve model fidelity and sensitivity for complex systems including perturbation formulas and multimodel ensembles that can be utilized to improve both aspects of model error simultaneously. A suite of unambiguous test models is utilized to demonstrate facets of the proposed framework. These results include simple examples of imperfect models with perfect equilibrium statistical fidelity where there are intrinsic natural barriers to improving imperfect model sensitivity. Linear stochastic models with multiple spatiotemporal scales are utilized to demonstrate this information theoretic approach to equilibrium sensitivity, the role of increasing spatial resolution in the information metric for model error, and the ability of imperfect models to capture the true sensitivity. Finally, an instructive statistically nonlinear model with many degrees of freedom, mimicking the observed non-Gaussian statistical behavior of tracers in the atmosphere, with corresponding imperfect eddy-diffusivity parameterization models are utilized here. They demonstrate the important role of additional stochastic forcing of imperfect models in order to systematically improve the information theoretic measures of fidelity and sensitivity developed here. PMID:21646534
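
    Frameworks of this kind typically measure model error by relative entropy; as a self-contained illustration (not the paper's test models), the Kullback-Leibler divergence between two 1-D Gaussians:

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(p||q) between 1-D Gaussians: the information lost
    when the imperfect model q is used in place of the truth p."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

# Perfect equilibrium fidelity (same mean and variance) gives zero model error...
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))
# ...while a biased, over-dispersed model does not.
print(kl_gaussian(0.0, 1.0, 0.5, 1.5))
```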

  9. Time series count data models: an empirical application to traffic accidents.

    Science.gov (United States)

    Quddus, Mohammed A

    2008-09-01

    Count data are primarily categorised as cross-sectional, time series, and panel. Over the past decade, Poisson and Negative Binomial (NB) models have been used widely to analyse cross-sectional and time series count data, and random effect and fixed effect Poisson and NB models have been used to analyse panel count data. However, recent literature suggests that although the underlying distributional assumptions of these models are appropriate for cross-sectional count data, they are not capable of taking into account the effect of serial correlation often found in pure time series count data. Real-valued time series models, such as the autoregressive integrated moving average (ARIMA) model introduced by Box and Jenkins, have been used in many applications over the last few decades. However, when modelling non-negative integer-valued data, such as traffic accidents at a junction over time, Box and Jenkins models may be inappropriate. This is mainly due to the normality assumption of errors in the ARIMA model. Over the last few years, a new class of time series models known as integer-valued autoregressive (INAR) Poisson models has been studied by many authors. This class of models is particularly applicable to the analysis of time series count data, as these models retain the properties of Poisson regression and are able to deal with serial correlation, and therefore offer an alternative to the real-valued time series models. The primary objective of this paper is to introduce the class of INAR models for the time series analysis of traffic accidents in Great Britain. Different types of time series count data are considered: aggregated time series data where both the spatial and temporal units of observation are relatively large (e.g., Great Britain and years) and disaggregated time series data where both the spatial and temporal units are relatively small (e.g., congestion charging zone and months). The performance of the INAR models is compared with the class of Box and
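
    As a sketch of the INAR(1) process referred to above, X_t = alpha o X_{t-1} + eps_t, where "o" is binomial thinning and the innovations are Poisson (parameters invented):

```python
import numpy as np

def simulate_inar1(n, alpha, lam, seed=0):
    """INAR(1): X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning
    (each of the X_{t-1} counts survives with probability alpha) and
    eps_t ~ Poisson(lam). Counts stay integer-valued, unlike a Gaussian ARIMA."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))           # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

accidents = simulate_inar1(n=120, alpha=0.6, lam=2.0)  # e.g. monthly counts
print(accidents[:12])
```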

  10. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries.

  11. Semi-empirical model of the maximum electron concentration in the ionosphere: Comparison with data from Toluca (México)

    Science.gov (United States)

    Arriagada, Manuel; Cipagauta, Carolina; Foppiano, Alberto

    2013-05-01

    A simple semi-empirical model to determine the maximum electron concentration in the ionosphere (NmF2) for South American locations is used to calculate NmF2 for a northern hemisphere station in the same longitude sector. NmF2 is determined as the sum of two terms, one related to photochemical and diffusive processes and the other to transport mechanisms. The model gives diurnal variations of NmF2 representative of winter, summer and equinox conditions, during intervals of high and low solar activity. Model NmF2 results are compared with ionosonde observations made at Toluca, México (19.3°N; 260°E). Differences between model results and observations are similar to those found in comparisons with South American observations. It seems that further improvement of the model could be made by refining the latitude dependencies of the coefficients used for the transport term.

  12. Optimizing irrigation and nitrogen for wheat through empirical modeling under semi-arid environment.

    Science.gov (United States)

    Saeed, Umer; Wajid, Syed Aftab; Khaliq, Tasneem; Zahir, Zahir Ahmad

    2017-04-01

    Nitrogen fertilizer availability to plants is strongly linked with water availability. Excessive or insufficient use of nitrogen can cause reductions in the grain yield of wheat as well as environmental problems. The per capita per annum water availability in Pakistan has fallen below 1000 m(3) and is expected to reach 800 m(3) by 2025. Irrigating crops with 3 in. or more of depth without measuring the volume of water is no longer a feasible option. Water productivity and the economic return of grain yield can be improved by efficient management of water and nitrogen fertilizer. A study was conducted at the post-graduate agricultural research station, University of Agriculture Faisalabad, during 2012-2013 and 2013-2014 to optimize the volume of water per irrigation and nitrogen application. A split-plot design with three replications was used; four irrigation levels (I300 = 300 mm, I240 = 240 mm, I180 = 180 mm, I120 = 120 mm for the whole growing season, applied at critical growth stages) and four nitrogen levels (N60 = 60 kg ha(-1), N120 = 120 kg ha(-1), N180 = 180 kg ha(-1), and N240 = 240 kg ha(-1)) were randomized as main-plot and sub-plot factors, respectively. The recorded grain yield data were used to develop empirical regression models. The results based on quadratic equations and economic analysis showed 164, 162, 158, and 107 kg ha(-1) nitrogen as the economic optimum with I300, I240, I180, and I120 mm water, respectively, during 2012-2013. During 2013-2014, quadratic equations and economic analysis showed 165, 162, 161, and 117 kg ha(-1) nitrogen as the economic optimum with I300, I240, I180, and I120 mm water, respectively. The optimum irrigation level was obtained by fitting the economic optimum nitrogen as a function of total water. The equations predicted 253 mm as the optimum irrigation water for the whole growing season during 2012-2013 and 256 mm for 2013-2014. The results also revealed that reducing irrigation from I300 to
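
    A sketch of how an economic optimum follows from a fitted quadratic yield response Y = a + bN + cN^2; the coefficients and prices below are invented, not the study's regressions:

```python
def economic_optimum_n(b, c, price_n, price_grain):
    """Economic optimum N: set the marginal yield dY/dN = b + 2cN equal to the
    input/output price ratio and solve for N (c must be negative)."""
    return (price_n / price_grain - b) / (2.0 * c)

# Invented quadratic fit Y = 2500 + 18*N - 0.05*N**2 (kg/ha) and prices.
n_opt = economic_optimum_n(b=18.0, c=-0.05, price_n=1.2, price_grain=0.3)
print(round(n_opt, 1), "kg N/ha")   # -> 140.0 with these invented numbers
```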

  13. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data.

    Science.gov (United States)

    Shannon, Graeme; Lewis, Jesse S; Gerber, Brian D

    2014-01-01

    Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data; explicitly recognizing that given a species occupies an area the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km(2) of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10-120 cameras) and occasions (20-120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common species with
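
    A minimal sketch of the simulation logic described, assuming occupancy psi and detection probability p, and using a naive occupancy estimator rather than a full likelihood fit:

```python
import numpy as np

def detection_histories(psi, p, n_sites, n_occasions, rng):
    """Simulate camera detection histories: a site is occupied with prob psi,
    and an occupied site yields a detection on each occasion with prob p."""
    occupied = rng.random(n_sites) < psi
    return (rng.random((n_sites, n_occasions)) < p) & occupied[:, None]

rng = np.random.default_rng(1)
psi, p = 0.3, 0.1                       # a spatially rare, hard-to-detect species
for n_sites, n_occ in [(10, 20), (40, 60), (120, 120)]:
    hist = detection_histories(psi, p, n_sites, n_occ, rng)
    naive = hist.any(axis=1).mean()     # share of sites with >= 1 detection
    print(n_sites, "sites x", n_occ, "days -> naive occupancy:", round(naive, 2))
```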

  14. Implementation of an Empirical Joint Constitutive Model into Finite-Discrete Element Analysis of the Geomechanical Behaviour of Fractured Rocks

    Science.gov (United States)

    Lei, Qinghua; Latham, John-Paul; Xiang, Jiansheng

    2016-12-01

    An empirical joint constitutive model (JCM) that captures the rough wall interaction behaviour of individual fractures associated with roughness characteristics observed in laboratory experiments is combined with the solid mechanical model of the finite-discrete element method (FEMDEM). The combined JCM-FEMDEM formulation gives realistic fracture behaviour with respect to shear strength, normal closure, and shear dilatancy and includes the recognition of fracture length influence as seen in experiments. The validity of the numerical model is demonstrated by a comparison with the experimentally established empirical solutions. A 2D plane strain geomechanical simulation is conducted using an outcrop-based naturally fractured rock model with far-field stresses loaded in two consecutive phases, i.e. take-up of isotropic stresses and imposition of two deviatoric stress conditions. The modelled behaviour of natural fractures in response to various stress conditions illustrates a range of realistic behaviour including closure, opening, shearing, dilatancy, and new crack propagation. With the increase in stress ratio, significant deformation enhancement occurs in the vicinity of fracture tips, intersections, and bends, where large apertures can be generated. The JCM-FEMDEM model is also compared with conventional approaches that neglect the scale dependency of joint properties or the roughness-induced additional frictional resistance. The results of this paper have important implications for understanding the geomechanical behaviour of fractured rocks in various engineering activities.

  15. Estimation of Energy Potential from Organic Fractions of Municipal Solid Waste by Using Empirical Models at Hyderabad, Pakistan.

    Directory of Open Access Journals (Sweden)

    Muhammad Safar Korai

    2016-01-01

    Full Text Available MSW (Municipal Solid Waste) is nowadays considered a precious renewable energy resource for various purposes. In view of this, one hundred samples of MSW were collected from different locations of the study area. The quantity of each organic waste component was determined using a physical balance, and proximate analysis was performed using an oven and a muffle furnace. In this study, nine empirical models were used for estimating the energy value, in terms of heat, from OFMSW (Organic Fractions of Municipal Solid Waste): two of them based on physical composition, four on proximate analysis, and the remaining three on ultimate analysis of OFMSW. A comparison of all energy models showed that the empirical Models No. 3 and No. 4, based on proximate analysis, have higher energy recovery potential than all the others. Moreover, the result of Model No. 3, based on proximate analysis, is closer to the calorific value of mixed OFMSW than the values obtained by the rest of the models; therefore, this is the best model to be used. The outcomes of this study show that energy recovery from OFMSW can play a vital role in the economic growth of the country. On that account, a systematic approach should be carried out in detail before making a decision on such an option.

  16. A Web Application For Visualizing Empirical Models of the Space-Atmosphere Interface Region: AtModWeb

    Science.gov (United States)

    Knipp, D.; Kilcommons, L. M.; Damas, M. C.

    2015-12-01

    We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and the individual-species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and to choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle? 2) How do ionospheric layers change with the solar cycle? 3) How does the composition of the SAIR vary between day and night at a fixed altitude?

  17. Accuracies of the empirical theories of the escape probability based on Eigen model and Braun model compared with the exact extension of Onsager theory.

    Science.gov (United States)

    Wojcik, Mariusz; Tachiya, M

    2009-03-14

    This paper deals with the exact extension of the original Onsager theory of the escape probability to the case of finite recombination rate at nonzero reaction radius. The empirical theories based on the Eigen model and the Braun model, which are applicable in the absence and presence of an external electric field, respectively, are based on a wrong assumption that both recombination and separation processes in geminate recombination follow exponential kinetics. The accuracies of the empirical theories are examined against the exact extension of the Onsager theory. The Eigen model gives the escape probability in the absence of an electric field, which is different by a factor of 3 from the exact one. We have shown that this difference can be removed by operationally redefining the volume occupied by the dissociating partner before dissociation, which appears in the Eigen model as a parameter. The Braun model gives the escape probability in the presence of an electric field, which is significantly different from the exact one over the whole range of electric fields. Appropriate modification of the original Braun model removes the discrepancy at zero or low electric fields, but it does not affect the discrepancy at high electric fields. In all the above theories it is assumed that recombination takes place only at the reaction radius. The escape probability in the case when recombination takes place over a range of distances is also calculated and compared with that in the case of recombination only at the reaction radius.
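
    For orientation, the zero-field escape probability in the Onsager picture with an absorbing sphere of radius R (infinitely fast recombination at R) has a standard closed form; the finite-rate extension treated in the paper is more involved. A sketch:

```python
import math

def onsager_escape(r0, rc, R):
    """Zero-field escape probability for a geminate pair created at separation r0,
    with Onsager radius rc = e^2/(4*pi*eps*eps0*kT) and absorbing radius R.
    Letting R -> 0 recovers Onsager's original exp(-rc/r0)."""
    return (math.exp(-rc / r0) - math.exp(-rc / R)) / (1.0 - math.exp(-rc / R))

print(onsager_escape(r0=10e-9, rc=17e-9, R=1e-9))  # illustrative lengths in metres
```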

  18. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    Science.gov (United States)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
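
    A hedged sketch of the data-driven setup described, using scikit-learn's random forest on invented monthly predictors (not the study's Ethiopian data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 240                                   # 20 years of monthly records (invented)
rain = rng.gamma(2.0, 60.0, n)            # monthly rainfall, mm
temp = 15 + 8 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
month = np.arange(n) % 12
X = np.column_stack([rain, temp, month])
y = 0.6 * rain - 4.0 * temp + rng.normal(0, 20, n)   # synthetic streamflow signal

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:200], y[:200])               # train on the first ~17 years
print("holdout R^2:", round(model.score(X[200:], y[200:]), 2))
```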

  19. PCM-air heat exchangers for free-cooling applications in buildings: Empirical model and application to design

    Energy Technology Data Exchange (ETDEWEB)

    Lazaro, Ana; Dolado, Pablo; Marin, Jose M.; Zalba, Belen [Aragon Institute for Engineering Research (I3A), Thermal Engineering and Energy Systems Group, Torres Quevedo Building, C/Maria de Luna 3, 50018 Zaragoza (Spain)

    2009-03-15

    This paper presents novel design conclusions based on the experimental results and on an empirical model for a real-scale prototype of a PCM-air heat exchanger. From the experimental results, an empirical model was built to simulate the thermal behavior of the tested heat exchanger in different cases. These simulations were used to evaluate the technical viability of its application. Since the thermal properties of PCMs vary with temperature, a PCM heat exchanger works as a transient system, and its design must therefore be based on transient analysis. This work shows that PCM selection criteria must include the power demand. The conclusions obtained for the PCM-air heat exchanger can be useful for selecting PCMs for other heat exchanger applications that use the tested geometry. (author)

  20. Influence of uncertainties of the empirical models for inferring the E-region electric fields at the dip equator

    Science.gov (United States)

    Moro, Juliano; Denardini, Clezio Marcos; Resende, Laysa Cristina Araújo; Chen, Sony Su; Schuch, Nelson Jorge

    2016-06-01

    Daytime E-region electric fields play a crucial role in the ionospheric dynamics at the geomagnetic dip latitudes. Due to their importance, there is an interest in accurately measuring and modeling the electric fields for both climatological and near real-time studies. In this work, we present the daytime vertical (Ez) and eastward (Ey) electric fields for a reference quiet day (February 7, 2001) at the São Luís Space Observatory, Brazil (SLZ, 2.31°S, 44.16°W). The component Ez is inferred from Doppler shifts of type II echoes (gradient drift instability) and the anisotropic factor, which is computed from ion and electron gyro frequencies as well as ion and electron collision frequencies with neutral molecules. The component Ey depends on the ratio of Hall and Pedersen conductivities and on Ez. A magnetic field-line-integrated conductivity model is used to obtain the anisotropic factor for calculating Ez and the ionospheric conductivities for calculating Ey. This model uses the NRLMSISE-00, IRI-2007, and IGRF-11 empirical models as input parameters for the neutral atmosphere, ionosphere, and geomagnetic field, respectively. Consequently, it is worth determining the uncertainties (or errors) in Ey and Ez associated with these empirical model outputs in order to precisely define the confidence limit for the estimated electric field components. For this purpose, errors of ±10 % were artificially introduced in the magnitude of each empirical model output before estimating Ey and Ez. The corresponding uncertainties in the ionospheric conductivity and electric field are evaluated considering the individual and cumulative contribution of the artificial errors. The results show that the neutral densities and temperature may be responsible for the largest changes in Ey and Ez, followed by changes in the geomagnetic field intensity and the electron and ion compositions.

  1. Scattering intensities for a white beam (120 kV) presenting a semi-empirical model to preview scattered beams

    Science.gov (United States)

    Gonçalves, O. D.; Boldt, S.; Kasch, K. U.

    2016-09-01

    This work aims at measuring the scattering cross sections for white beams and verifying a semi-empirical model that predicts the scattered energy spectra of an X-ray beam produced by an industrial X-ray tube (Pantack Sievert, 120 kV, tungsten target) incident on a water sample. Both the theoretical and the semi-empirical results presented are based on the form factor approach, with results corresponding well to the performed measurements. The elastic (Rayleigh) scattering cross sections are based on Thomson scattering with a form factor correction as published by Morin (1982). The inelastic (Compton) contribution is based on the Klein-Nishina equation (Klein and Nishina, 1929) multiplied by the incoherent scattering factors calculated by Hubbel et al. (1975). Two major results are presented: first, the experimental energy-integrated cross sections correspond to the theoretical cross sections obtained at the mean energy of the measured scattered spectrum at a given angle. Second, the measured scattered spectra at a given angle correspond to those obtained using the semi-empirical model proposed here. A good correspondence of experimental results and model predictions can be shown. The latter, therefore, proves to be a useful method to calculate the scattering contributions in a number of applications, for example cone beam tomography.
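
    The Compton term rests on the Klein-Nishina formula; a sketch of its textbook free-electron form (the incoherent scattering factor mentioned above is not included here):

```python
import math

R_E = 2.8179403262e-15          # classical electron radius, m
MEC2 = 0.51099895e6             # electron rest energy, eV

def klein_nishina(energy_ev, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr) for a
    free electron; the incoherent scattering factor S(q, Z) would multiply this."""
    k = energy_ev / MEC2
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)

print(klein_nishina(120e3, math.radians(30)))
```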

  2. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields.

    Science.gov (United States)

    Whitford, Paul C; Noel, Jeffrey K; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y; Onuchic, José N

    2009-05-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Go) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase, and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a C(alpha) structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature, (2) folding mechanisms are robust to variations of the energetic parameters, (3) protein folding free-energy barriers can be manipulated through parametric modifications, (4) the global folding mechanisms in a C(alpha) model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model, and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Because this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function.

  3. An empirically grounded agent based model for modeling directs, conflict detection and resolution operations in Air Traffic Management

    CERN Document Server

    Bongiorno, C; Mantegna, Rosario N

    2016-01-01

    We present an agent-based model of the Air Traffic Management socio-technical complex system that aims at modeling the interactions between aircraft and air traffic controllers at a tactical level. The core of the model is given by the conflict detection and resolution module and by the directs module. Directs are flight shortcuts that are given by air controllers to speed up the passage of an aircraft within a certain airspace and therefore to facilitate airline operations. Conflicts between flight trajectories can arise during the en-route phase of each flight, due either to insufficiently detailed flight trajectory planning or to unforeseen events that perturb the planned flight plan. Our model performs a local conflict detection and resolution procedure. Once a flight trajectory has been made conflict-free, the model searches for possible improvements of the system efficiency by issuing directs. We give an example of model calibration based on real data. We then provide an illustration of the capability of our...

  4. A semi-empirical model to predict the probability of capture of buoyant particles by a cylindrical collector through capillarity

    Science.gov (United States)

    Peruzzo, Paolo; Pietro Viero, Daniele; Defina, Andrea

    2016-11-01

    The seeds of many aquatic plants, as well as many propagules and larvae, are buoyant and are transported at the water surface. These particles are therefore subject to surface tension, which may enhance their capture by emergent vegetation through capillary attraction. In this work, we develop a semi-empirical model that predicts the probability that a floating particle is retained, through capillarity, by plant stems and branches piercing the water surface, against the drag force exerted by the flowing water. Specific laboratory experiments are also performed to calibrate and validate the model.

  5. Modeling rain-driven overland flow: empirical versus analytical friction terms in the shallow water approximation

    CERN Document Server

    Kirstetter, G; Delestre, O; Darboux, F; Lagrée, P -Y; Popinet, S; Fullana, J -M; Josserand, C

    2016-01-01

    Modeling and simulating overland flow fed by rainfall is a common issue in watershed surface hydrology. Modelers have to choose among various friction models when defining their simulation framework. The purpose of this work is to compare the simulation quality of the Manning, Darcy-Weisbach, and Poiseuille friction models in the simple case of a constant rain on a thin experimental flume. Results show that the usual Manning friction law is not suitable for this type of flow. The Poiseuille friction model gave the best results both for the flux at the outlet and for the velocity and depth profiles along the flume. The Darcy-Weisbach model shows good results for laminar flow. Additional testing should be carried out for turbulent cases.
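
    For concreteness, the three friction source terms being compared, in the forms commonly used in the shallow-water literature (the paper's exact coefficients may differ):

```python
def manning(u, h, n=0.025):
    """Manning friction slope: Sf = n^2 u |u| / h^(4/3)."""
    return n**2 * u * abs(u) / h**(4.0 / 3.0)

def darcy_weisbach(u, h, f=0.03, g=9.81):
    """Darcy-Weisbach friction slope: Sf = f u |u| / (8 g h)."""
    return f * u * abs(u) / (8.0 * g * h)

def poiseuille(u, h, nu=1.0e-6, g=9.81):
    """Laminar (Poiseuille) friction slope: Sf = 3 nu u / (g h^2)."""
    return 3.0 * nu * u / (g * h**2)

u, h = 0.05, 0.002    # thin rain-fed sheet flow: 5 cm/s on a 2 mm depth
for law in (manning, darcy_weisbach, poiseuille):
    print(law.__name__, law(u, h))
```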

  6. Improved computational model (AQUIFAS) for activated sludge, integrated fixed-film activated sludge, and moving-bed biofilm reactor systems, Part I: Semi-empirical model development.

    Science.gov (United States)

    Sen, Dipankar; Randall, Clifford W

    2008-05-01

    Research was undertaken to develop a model for activated sludge, integrated fixed-film activated sludge (IFAS), and moving-bed biofilm reactor (MBBR) systems. The model can operate with up to 12 cells (reactors) in series, with biofilm media incorporated into one or more cells, except the anaerobic cells. The process configuration can be any combination of anaerobic, anoxic, aerobic, post-anoxic with or without supplemental carbon, and reaeration; it can also include any combination of step feed and recycles, including recycles for mixed liquor, return activated sludge, nitrates, and membrane bioreactors. This paper presents the structure of the model. The model embeds a biofilm model into a multicell activated sludge model. The biofilm flux rates for organics, nutrients, and biomass can be computed by two methods: a semi-empirical model of the biofilm, which is relatively simple, or a diffusional model, which is computationally intensive. The values of the kinetic parameters for the model were measured using pilot-scale activated sludge, IFAS, and MBBR systems. For the semi-empirical version, a series of Monod equations was developed for chemical oxygen demand, ammonium-nitrogen, and oxidized-nitrogen fluxes to the biofilm. Within the equations, a second Monod expression is used to simulate the effect of changes in biofilm thickness and the fraction of nitrifiers in the biofilm. The biofilm flux model is then linked to the activated sludge model. The diffusional model and the verification of the models are presented in subsequent papers (Sen and Randall, 2008a, 2008b). The model can be used to quantify the amount of media and surface area required to achieve nitrification, identify the best locations for the media, and optimize the dissolved oxygen levels and nitrate recycle rates. Some of the advanced features include the ability to apply different media types and fill fractions in cells; quantify nitrification, denitrification, and biomass production in the biofilm and
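
    A hedged sketch of the kind of Monod-in-Monod flux expression described; all coefficient names and values below are illustrative placeholders, not AQUIFAS parameters:

```python
def monod_flux(s, j_max, k_s, x, k_x):
    """Semi-empirical biofilm flux: a Monod term in the bulk substrate S,
    modulated by a second Monod term in x (e.g. biofilm thickness or
    nitrifier fraction), as described for the semi-empirical version."""
    return j_max * (s / (k_s + s)) * (x / (k_x + x))

# Ammonium-N flux to the biofilm at 5 mg/L bulk NH4-N (invented numbers):
print(monod_flux(s=5.0, j_max=2.5, k_s=1.2, x=0.3, k_x=0.1))
```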

  7. The incorporation and validation of empirical crawling data into the buildingEXODUS model

    OpenAIRE

    Muhdi, Rani; Gwynne, Steve; Davis, Jerry

    2009-01-01

    The deterioration of environmental conditions can influence evacuee decisions and their subsequent behaviors. Simulating evacuee behaviors enhances the robustness of engineering procedural designs, improves the accuracy of egress models, and better evaluates the safety of evacuees. The purpose of this paper is to more accurately incorporate and validate evacuee crawling behavior into the buildingEXODUS egress model. Crawling data were incorporated into the model and tested for accurate repres...

  8. An improved semi-empirical model for the densification of Antarctic firn

    OpenAIRE

    2011-01-01

    A firn densification model is presented that simulates steady-state Antarctic firn density profiles, as well as the temporal evolution of firn density and surface height. The model uses an improved firn densification expression that is tuned to fit depth-density observations. Liquid water processes (meltwater percolation, retention and refreezing) are also included. Two applications are presented. First, the steady-state model version is used to simulate the strong spatial v...

  9. A Model of Trust, Moods, and Emotions in Multiagent Systems and its Empirical Evaluation

    Science.gov (United States)

    2014-05-05

    We develop a general approach representing the relationships among trust, moods, and emotions via a Bayesian network, in settings characterized by commitments among agents. The metrics we use to evaluate alternative Bayesian models are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) scores, which capture the goodness of fit of a model to the data, e.g., of the full model against one that omits trust or mood. Given a Bayesian model, the metrics we use to evaluate our...

  10. Stock Prices and the Monetrary Model of Exchange Rate: An Empirical Investigation.

    OpenAIRE

    Broom, S.; Morley, B.

    2003-01-01

    This paper develops an alternative version of the monetary model of exchange rate determination, which incorporates a stock price measure. This model is then tested using data from Canada and the USA, applying the cointegration and error correction methodology. In contrast to many previous tests of the monetary model, this version produces evidence of cointegration and stock prices have a highly significant effect on the exchange rate in both the short and long run. In addition the restricted...

  11. Cellular automaton model with dynamical 2D speed-gap relation reproduces empirical and experimental features of traffic flow

    CERN Document Server

    Tian, Junfang; Ma, Shoufeng; Zhu, Chenqiang; Jiang, Rui; Ding, YaoXian

    2015-01-01

    This paper proposes an improved cellular automaton traffic flow model based on the brake light model, which takes into account that the desired time gap of vehicles is remarkably larger than one second. Although the hypothetical steady state of vehicles in the deterministic limit corresponds to a unique relationship between speeds and gaps in the proposed model, the traffic states of vehicles dynamically span a two-dimensional region in the plane of speed versus gap, due to the various randomizations. It is shown that the model is able to reproduce well (i) free flow, synchronized flow and jams, as well as the transitions among the three phases; (ii) the evolution features of disturbances and the spatiotemporal patterns in a car-following platoon; and (iii) the empirical time series of traffic speed obtained from NGSIM data. Therefore, we argue that a model can potentially reproduce the empirical and experimental features of traffic flow, provided that the traffic states are able to dynamically span a 2D speed-gap...
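
    As a minimal illustration of a stochastic CA update in this family (a Nagel-Schreckenberg-style rule; the brake light and 2D speed-gap mechanisms of the proposed model are not reproduced here):

```python
import numpy as np

def ca_step(pos, vel, vmax, p_slow, road_len, rng):
    """One parallel update on a ring road: accelerate, brake to the gap,
    randomize, move."""
    n = len(pos)
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len   # free cells to the leader
    vel = np.minimum(vel + 1, vmax)                  # acceleration
    vel = np.minimum(vel, gaps)                      # safety braking
    vel = np.where(rng.random(n) < p_slow, np.maximum(vel - 1, 0), vel)
    return (pos + vel) % road_len, vel

rng = np.random.default_rng(3)
road_len, n_cars = 200, 40
pos = rng.choice(road_len, n_cars, replace=False)
vel = np.zeros(n_cars, dtype=int)
for _ in range(100):
    pos, vel = ca_step(pos, vel, vmax=5, p_slow=0.2, road_len=road_len, rng=rng)
print("mean speed:", vel.mean())
```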

  12. Applied welfare economics with discrete choice models: implications of theory for empirical specification

    DEFF Research Database (Denmark)

    Batley, Richard; Ibáñez Rivas, Juan Nicolás

    2013-01-01

    The apparatus of the Random Utility Model (RUM) first emerged in the early 1960s, with Marschak (1960) and Block and Marschak (1960) translating models originally developed for discriminant analysis in psychophysics (Thurstone, 1927) to the alternative domain of discrete choice analysis in econom...

  13. Semi-empirical model for fluorescence lines evaluation in diagnostic x-ray beams.

    Science.gov (United States)

    Bontempi, Marco; Andreani, Lucia; Labanti, Claudio; Costa, Paulo Roberto; Rossi, Pier Luca; Baldazzi, Giuseppe

    2016-01-01

    Diagnostic x-ray beams are composed of bremsstrahlung and discrete fluorescence lines. The aim of this study is the development of an efficient model for the evaluation of the fluorescence lines. The most important electron ionization models are analyzed and implemented. The model results were compared with experimental data and with other independent spectra presented in the literature. The implemented peak models allow the discrimination between direct and indirect radiation emitted from tungsten anodes. The comparison with the independent literature spectra indicated a good agreement.

  14. A semi-empirical model for pressurised air-blown fluidized-bed gasification of biomass.

    Science.gov (United States)

    Hannula, Ilkka; Kurkela, Esa

    2010-06-01

    A process model for pressurised fluidized-bed gasification of biomass was developed using Aspen Plus simulation software. Eight main blocks were used to model the fluidized-bed gasifier, complemented with FORTRAN subroutines nested in the programme to simulate hydrocarbon and NH3 formation as well as carbon conversion. The model was validated with experimental data derived from a PDU-scale test rig operated with various types of biomass. The model was shown to be suitable for simulating the gasification of pine sawdust, pine and eucalyptus chips as well as forest residues, but not for pine bark or wheat straw.

  15. Modeling earthquake ground motion with an earthquake simulation program (EMPSYN) that utilizes empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.

    1992-01-01

    This report outlines a method of using empirical Green's functions in an earthquake simulation program EMPSYN that provides realistic seismograms from potential earthquakes. The theory for using empirical Green's functions is developed, implementation of the theory in EMPSYN is outlined, and an example is presented where EMPSYN is used to synthesize observed records from the 1971 San Fernando earthquake. To provide useful synthetic ground motion data from potential earthquakes, synthetic seismograms should model frequencies from 0.5 to 15.0 Hz, the full wave-train energy distribution, and absolute amplitudes. However, high-frequency arrivals are stochastically dependent upon the inhomogeneous geologic structure and irregular fault rupture. The fault rupture can be modeled, but the stochastic nature of faulting is largely an unknown factor in the earthquake process. The effect of inhomogeneous geology can readily be incorporated into synthetic seismograms by using small earthquakes to obtain empirical Green's functions. Small earthquakes with source corner frequencies higher than the site recording limit f_max, or much higher than the frequency of interest, effectively have impulsive point-fault dislocation sources, and their recordings are used as empirical Green's functions. Since empirical Green's functions are actual recordings at a site, they include the effects on seismic waves from all geologic inhomogeneities and include all recordable frequencies, absolute amplitudes, and all phases. They scale only in amplitude with differences in seismic moment. They can provide nearly the exact integrand to the representation relation. Furthermore, since their source events have spatial extent, they can be summed to simulate fault rupture without loss of information, thereby potentially computing the exact representation relation for an extended source earthquake.
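
    The central operation described — scaling small-event recordings by a moment ratio and summing them with rupture-time lags — can be sketched as follows. Synthetic arrays stand in for real recordings, and the single-lag, single-scale summation is a simplification of EMPSYN's actual scheme.

    ```python
    import numpy as np

    def synthesize(egfs, moment_ratios, delays_s, dt):
        """Sum empirical Green's functions (EGFs), each scaled by the ratio of
        subevent moment to EGF moment and lagged by its rupture delay.
        A simplification of EMPSYN's actual summation."""
        out = np.zeros(len(egfs[0]) + int(max(delays_s) / dt))
        for egf, ratio, delay in zip(egfs, moment_ratios, delays_s):
            k = int(delay / dt)
            out[k:k + len(egf)] += ratio * egf  # amplitude scaling only
        return out

    # Toy demonstration: decaying sinusoids stand in for recorded small events.
    dt = 0.01  # 100 samples per second
    t = np.arange(0, 2, dt)
    egfs = [np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t) for _ in range(3)]
    seismogram = synthesize(egfs, moment_ratios=[10.0, 8.0, 12.0],
                            delays_s=[0.0, 0.4, 0.9], dt=dt)
    print(len(seismogram), "samples")
    ```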

  16. An alternative empirical model for the relationship between the bond valence and the thermal expansion rate of chemical bonds.

    Science.gov (United States)

    Sidey, Vasyl

    2015-08-01

    The relationship between the bond valence s and the thermal expansion rate of chemical bonds (dr/dT) has been closely approximated by using the alternative three-parameter empirical model (dr/dT) = (u + vs)^(-1/w), where u, v and w are the refinable parameters. Unlike the s-(dr/dT) model developed by Brown et al. [(1997), Acta Cryst. B53, 750-761], this alternative model can be optimized for particular s-(dr/dT) datasets in the least-squares refinement procedure. For routine calculations of the thermal expansion rates of chemical bonds, the alternative model with the parameters u = -63.9, v = 2581.0 and w = 0.647 can be recommended.
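
    Since the abstract states the model and its recommended parameters explicitly, evaluating it is a one-liner; a small sketch (the output units are those used in the paper, which are not restated here):

    ```python
    # Evaluate (dr/dT) = (u + v*s)**(-1/w) with the recommended parameters.
    u, v, w = -63.9, 2581.0, 0.647

    def thermal_expansion_rate(s):
        """Thermal expansion rate of a chemical bond of valence s."""
        return (u + v * s) ** (-1.0 / w)

    for s in (0.25, 0.5, 1.0, 2.0):
        print(f"s = {s:4.2f} -> dr/dT = {thermal_expansion_rate(s):.3e}")
    ```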

  17. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested.

  18. A semi-empirical library of galaxy spectra for Gaia classification based on SDSS data and PÉGASE models

    Science.gov (United States)

    Tsalmantza, P.; Karampelas, A.; Kontizas, M.; Bailer-Jones, C. A. L.; Rocca-Volmerange, B.; Livanou, E.; Bellas-Velidis, I.; Kontizas, E.; Vallenari, A.

    2012-01-01

    Aims: This paper is the third in a series implementing a classification system for Gaia observations of unresolved galaxies. The system makes use of template galaxy spectra in order to determine spectral classes and estimate intrinsic astrophysical parameters. In previous work we used synthetic galaxy spectra produced by the PÉGASE.2 code to simulate Gaia observations and to test the performance of support vector machine (SVM) classifiers and parametrizers. Here we produce a semi-empirical library of galaxy spectra by fitting SDSS spectra with the previously produced synthetic libraries. We present (1) the semi-empirical library of galaxy spectra; (2) a comparison between the observed and synthetic spectra; and (3) first results of classification and parametrization experiments with simulated Gaia spectrophotometry of this library. Methods: We use χ²-fitting to fit SDSS galaxy spectra with the synthetic library in order to construct a semi-empirical library of galaxy spectra in which (1) the real spectra are extended by the synthetic ones in order to cover the full wavelength range of Gaia; and (2) astrophysical parameters are assigned to the SDSS spectra by the best-fitting synthetic spectrum. The SVM models were trained with and applied to semi-empirical spectra. Tests were performed for the classification of spectral types and the estimation of the most significant galaxy parameters (in particular redshift, mass-to-light ratio and star formation history). Results: We produce a semi-empirical library of 33 670 galaxy spectra covering the wavelength range 250 to 1050 nm at a sampling of 1 nm or less. Using the results of the fitting of the SDSS spectra with our synthetic library, we investigate the range of the input model parameters that produces spectra which are in good agreement with observations. In general the results are very good for the majority of the synthetic spectra of early type, spiral and irregular galaxies, while they reveal problems in the models
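
    The χ²-fitting step — choosing, for each observed spectrum, the synthetic template that minimizes χ² — can be sketched generically. Toy arrays stand in for the SDSS and PÉGASE spectra, uniform errors are assumed, and a free amplitude scale per template is solved analytically:

    ```python
    import numpy as np

    def best_template(observed, errors, templates):
        """Index and chi^2 of the best-fitting template, with a free
        multiplicative scale per template (analytic least-squares optimum)."""
        w = 1.0 / errors**2
        best_i, best_chi2 = -1, np.inf
        for i, tmpl in enumerate(templates):
            scale = np.sum(w * observed * tmpl) / np.sum(w * tmpl**2)
            chi2 = np.sum(w * (observed - scale * tmpl) ** 2)
            if chi2 < best_chi2:
                best_i, best_chi2 = i, chi2
        return best_i, best_chi2

    rng = np.random.default_rng(0)
    templates = [rng.random(800) + 0.5 for _ in range(50)]  # stand-in synthetic spectra
    obs = 2.0 * templates[17] + rng.normal(0, 0.05, 800)    # noisy, scaled copy of one
    print(best_template(obs, np.full(800, 0.05), templates))  # expected index: 17
    ```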

  19. Empirical-Statistical Methodology and Methods for Modeling and Forecasting of Climate Variability of Different Temporal Scales

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main problem of modern climatology is to assess present as well as future climate change. Two approaches are commonly used for this: physico-mathematical modeling based on GCMs, and palaeoclimatic analogues. A third approach, based on empirical-statistical methodology, is developed in this paper. It addresses two main problems: providing a realistic assessment of climate changes from observed data, for climate monitoring and for extrapolating the detected tendencies into the near future (10-15 years), and supplying an empirical basis for the further development of physico-mathematical models. The basic theory and methodology of the empirical-statistical approach are developed, together with a common model for describing space-time climate variations that takes into account processes on different time scales. A way of reducing present and future uncertainty is suggested: extracting the long-term components of climate change from individual time series and spatially generalizing common climate tendencies over the resulting homogeneous regions. Algorithms and methods for implementing the empirical-statistical methodology are developed, including methods for generalizing intra-annual fluctuations, methods for extracting homogeneous components on different time scales (interannual, decadal, century), methodology and methods for spatial generalization and modeling, and extrapolation methods based on two main kinds of time models: stochastic and deterministic-stochastic. Applications of the developed methodology and methods are given for the longest temperature and precipitation time series worldwide and for spatial generalization over the European area.

  20. Empirical analysis of cascade deformable models for multi-view face detection

    NARCIS (Netherlands)

    Orozco, Javier; Martinez, Brais; Pantic, Maja

    2015-01-01

    We present a multi-view face detector based on Cascade Deformable Part Models (CDPM). Over the last decade, there have been several attempts to extend the well-established Viola&Jones face detector algorithm to solve the problem of multi-view face detection. Recently a tree structure model for multi

  1. Vertical Equating: An Empirical Study of the Consistency of Thurstone and Rasch Model Approaches.

    Science.gov (United States)

    Schratz, Mary K.

    To explore the appropriateness of the Rasch model for the vertical equating of a multi-level, multi-form achievement test series, both the Rasch model and the traditional Thurstone procedures were applied to the Listening Comprehension subtest scores of the Stanford Achievement Test. Two adjacent levels of these tests were administered in 1981 to…

  2. Safewards: The empirical basis of the model and a critical appraisal

    NARCIS (Netherlands)

    Bowers, L.; Alexander, J.; Bilgin, H. dr.; Botha, M.; Dack, C.; James, K.; Jarrett, M.; Jeffery, D.; Nijman, H.L.I.; Owiti, J.A.; Papadopoulos, C.; Ross, J.; Wright, S.; Stewart, D.

    2014-01-01

    In a previous paper, we described a proposed model explaining differences in rates of conflict (aggression, absconding, self-harm, etc.) and containment (seclusion, special observation, manual restraint, etc.). The Safewards Model identified six originating domains as sources of conflict and contain

  3. A semi-empirical model to assess uncertainty of spatial patterns of erosion

    NARCIS (Netherlands)

    Sterk, G.; Vigiak, O.; Romanowicz, R.J.; Beven, K.J.

    2006-01-01

    Distributed erosion models are potentially good tools for locating soil sediment sources and guiding efficient Soil and Water Conservation (SWC) planning, but the uncertainty of model predictions may be high. In this study, the distribution of erosion within a catchment was predicted with a

  4. An improved semi-empirical model for the densification of Antarctic firn

    NARCIS (Netherlands)

    Ligtenberg, S.R.M.; Helsen, M.M.; van den Broeke, M.R.

    2011-01-01

    A firn densification model is presented that simulates steady-state Antarctic firn density profiles, as well as the temporal evolution of firn density and surface height. The model uses an improved firn densification expression that is tuned to fit depth-density observations. Liquid water processes

  5. Modeling mortality : Empirical studies on the effect of mortality on annuity markets

    NARCIS (Netherlands)

    Hari, N.

    2007-01-01

    Chapter 3 of the dissertation models the macro-longevity risk and introduces a stochastic model for human mortality rates. Chapter 4 analyzes the importance of mortality improvement and mortality risk (macro- and micro-longevity risk and parameter uncertainty) on solvency positions of pension funds

  6. Using Empirical Data to Refine a Model for Information Literacy Instruction for Elementary School Students

    Science.gov (United States)

    Nesset, Valerie

    2015-01-01

    Introduction: As part of a larger study in 2006 of the information-seeking behaviour of third-grade students in Montreal, Quebec, Canada, a model of their information-seeking behaviour was developed. To further improve the model, an extensive examination of the literature into information-seeking behaviour and information literacy was conducted…

  7. The social networking application success model : An Empirical Study of Facebook and Twitter

    NARCIS (Netherlands)

    Ou, Carol; Davison, R.M.; Huang, Q.

    2016-01-01

    Social networking applications (SNAs) are among the fastest growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded on DeLone and McLean’s updated information systems (IS) success model. In addition to their original three dimensions

  8. A New Empirical Sewer Water Quality Model for the Prediction of WWTP Influent Quality

    NARCIS (Netherlands)

    Langeveld, J.G.; Schilperoort, R.P.S.; Rombouts, P.M.M.; Benedetti, L.; Amerlinck, Y.; de Jonge, J.; Flameling, T.; Nopens, I.; Weijers, S.

    2014-01-01

    Modelling of the integrated urban water system is a powerful tool to optimise wastewater system performance or to find cost-effective solutions for receiving water problems. One of the challenges of integrated modelling is the prediction of water quality at the inlet of a WWTP. Recent applications

  10. Methodological and empirical developments for the Ratcliff diffusion model of response times and accuracy

    NARCIS (Netherlands)

    Wagenmakers, E.-J.

    2009-01-01

    The Ratcliff diffusion model for simple two-choice decisions (e.g., Ratcliff, 1978; Ratcliff & McKoon, 2008) has two outstanding advantages. First, the model generally provides an excellent fit to the observed data (i.e., response accuracy and the shape of RT distributions, both for correct and erro
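
    The core diffusion process is straightforward to simulate by Euler-Maruyama; a minimal sketch with illustrative parameter values (the full Ratcliff model additionally includes across-trial variability in drift rate, starting point, and non-decision time):

    ```python
    import numpy as np

    def simulate_ddm(drift, boundary, start, ndt, n_trials, dt=0.001, sigma=0.1):
        """Two-choice diffusion: evidence starts at `start` and diffuses until it
        hits 0 or `boundary`; RT = first-passage time + non-decision time `ndt`."""
        rng = np.random.default_rng(0)
        rts = np.empty(n_trials)
        choices = np.empty(n_trials, dtype=int)
        for i in range(n_trials):
            x, t = start, 0.0
            while 0.0 < x < boundary:
                x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts[i] = t + ndt
            choices[i] = int(x >= boundary)  # 1 = upper boundary, 0 = lower
        return rts, choices

    rts, choices = simulate_ddm(drift=0.2, boundary=0.1, start=0.05, ndt=0.3, n_trials=500)
    print(f"accuracy: {choices.mean():.2f}, mean RT: {rts.mean():.2f} s")
    ```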

  12. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible un...

  13. Asset Pricing Model and the Liquidity Effect: Empirical Evidence in the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    Otávio Ribeiro de Medeiros

    2011-09-01

    This paper aims to analyze whether a liquidity premium exists in the Brazilian stock market. As a second goal, we include liquidity as an extra risk factor in asset pricing models and test whether this factor is priced and whether stock returns are explained not only by systematic risk, as proposed by the CAPM, by Fama and French's (1993) three-factor model, and by Carhart's (1997) momentum-factor model, but also by liquidity, as suggested by Amihud and Mendelson (1986). To achieve this, we used stock portfolios and five measures of liquidity. Among the asset pricing models tested, the CAPM was the least capable of explaining returns. We found that the inclusion of size and book-to-market factors in the CAPM, a momentum factor in the three-factor model, and a liquidity factor in the four-factor model improves their explanatory power for portfolio returns. In addition, we found that the five-factor model is marginally superior to the other asset pricing models tested.
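
    Tests of this kind reduce to time-series regressions of excess portfolio returns on the candidate factors. The sketch below runs such a regression on simulated placeholder data; the factor names, loadings, and sample size are assumptions, not the paper's Brazilian dataset:

    ```python
    import numpy as np

    # OLS time-series regression of excess returns on pricing factors.
    # Simulated placeholder data; the five columns are hypothetical stand-ins
    # for MKT, SMB, HML, MOM and a liquidity factor.
    rng = np.random.default_rng(1)
    T = 240                                     # months
    factors = rng.normal(0.0, 0.05, (T, 5))
    betas_true = np.array([1.0, 0.4, 0.3, -0.1, 0.5])
    r_excess = factors @ betas_true + rng.normal(0.0, 0.02, T)

    X = np.column_stack([np.ones(T), factors])  # intercept = pricing error (alpha)
    coef, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
    print("alpha:", round(float(coef[0]), 4))
    print("betas:", np.round(coef[1:], 3))
    ```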

  14. A Semi-Empirical Airborne Particle Erosion Model for Polyesteric Matrix Fiberglass Composites

    Directory of Open Access Journals (Sweden)

    Valeriu DRAGAN

    2013-12-01

    The paper deals with the mathematical modeling of the airborne solid-particle erosion rate of composite materials, in particular non-oriented fiberglass-reinforced polyesteric matrices. Using the mathematical tool of non-linear regression, based on experimental data available in the state of the art, an algebraic equation has been determined to estimate the relative erosion rate of such composites. The formulation is tailored so that it relates to classical erosion models such as Finnie's, Bitter's or the Tulsa angle-dependent model, which can be implemented in commercial computational fluid dynamics software. Although the implementation per se is not described herein, the proposed model can be useful in estimating the global effect of solid-particle erosion on composite materials of this class. Further theoretical developments may add to the model the capacity to evaluate the erosion rate for a wider class of matrices as well as more types of weavings.
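
    The fitting procedure described — nonlinear least-squares regression of erosion rate against impact conditions — can be sketched with SciPy. The functional form and the synthetic data below are placeholders, not the paper's fitted equation:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def erosion_rate(X, k, n, a, b):
        """Illustrative angle- and velocity-dependent erosion law (placeholder
        functional form, not the paper's fitted equation)."""
        V, theta = X
        return k * V**n * np.sin(theta)**a * (1 + b * (1 - np.sin(theta)))**2

    # Synthetic "experimental" data generated from known parameters plus noise.
    rng = np.random.default_rng(2)
    V = rng.uniform(20, 120, 200)             # particle impact speed, m/s
    theta = rng.uniform(0.1, np.pi / 2, 200)  # impact angle, rad
    y = erosion_rate((V, theta), 3e-6, 2.3, 0.8, 1.5) * rng.normal(1.0, 0.05, 200)

    params, _ = curve_fit(erosion_rate, (V, theta), y, p0=[1e-6, 2.0, 1.0, 1.0])
    print("fitted k, n, a, b:", params)
    ```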

  15. Development of nonlinear empirical models to forecast daily PM2.5 and ozone levels in three large Chinese cities

    Science.gov (United States)

    Lv, Baolei; Cobourn, W. Geoffrey; Bai, Yuqi

    2016-12-01

    Empirical regression models for next-day forecasting of PM2.5 and O3 air pollution concentrations have been developed and evaluated for three large Chinese cities, Beijing, Nanjing and Guangzhou. The forecast models are empirical nonlinear regression models designed for use in an automated data retrieval and forecasting platform. The PM2.5 model includes an upwind air quality variable, PM24, to account for regional transport of PM2.5, and a persistence variable (previous-day PM2.5 concentration). The models were evaluated in the hindcast mode with a two-year air quality and meteorological data set using a leave-one-month-out cross validation method, and in the forecast mode with a one-year air quality and forecasted weather dataset that included forecasted air trajectories. The PM2.5 models performed well in the hindcast mode, with coefficient of determination (R²) values of 0.54, 0.65 and 0.64, and normalized mean error (NME) values of 0.40, 0.26 and 0.23 respectively, for the three cities. The O3 models also performed well in the hindcast mode, with R² values of 0.75, 0.55 and 0.73, and NME values of 0.29, 0.26 and 0.24 in the three cities. The O3 models performed better in summertime than in winter in Beijing and Guangzhou, and captured the O3 variations well all year round in Nanjing. The overall forecast performance of the PM2.5 and O3 models during the test year varied from fair to good, depending on location. The forecasts were somewhat degraded compared with hindcasts from the same year, depending on the accuracy of the forecasted meteorological input data. For the O3 models, the model forecast accuracy was strongly dependent on the maximum temperature forecasts. For the critical forecasts, involving air quality standard exceedances, the PM2.5 model forecasts were fair to good, and the O3 model forecasts were poor to fair.
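
    The leave-one-month-out evaluation generalizes easily: hold out each calendar month in turn, fit on the remaining data, and score the held-out predictions. The sketch below uses a stand-in linear model and synthetic data (the paper's models are nonlinear regressions with meteorological predictors), and one common definition of NME:

    ```python
    import numpy as np

    def leave_one_month_out(X, y, months, fit, predict):
        """Hold out each distinct month in turn, fit on the remainder, and
        collect the out-of-sample predictions."""
        y_hat = np.empty_like(y)
        for m in np.unique(months):
            test = months == m
            model = fit(X[~test], y[~test])
            y_hat[test] = predict(model, X[test])
        return y_hat

    rng = np.random.default_rng(3)
    X = rng.normal(size=(730, 4))                        # two years of daily predictors
    y = X @ np.array([1.0, -0.5, 0.2, 0.8]) + rng.normal(0, 0.3, 730)
    months = np.arange(730) // 30                        # ~24 pseudo-months

    fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
    predict = lambda coef, A: A @ coef
    y_hat = leave_one_month_out(X, y, months, fit, predict)
    nme = np.sum(np.abs(y_hat - y)) / np.sum(np.abs(y))  # one common NME definition
    print(f"NME: {nme:.3f}")
    ```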

  16. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    Science.gov (United States)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is examined, (ii) these parameters are produced by a specially designed set of computer runs that sample realistic statistical distributions of critical solar wind parameters, and (iii) the results are compared to observation-based empirical relationships for these parameters. Tested under approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is supposed to be a valid approach. At the same time, the models show systematic differences in other parameters, especially in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. The new approach suggested for evaluating the models' performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively

  17. SWIFT: Semi-empirical and numerically efficient stratospheric ozone chemistry for global climate models

    Science.gov (United States)

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2015-04-01

    The SWIFT model is a fast yet accurate chemistry scheme for calculating the chemistry of stratospheric ozone. It is mainly intended for use in Global Climate Models (GCMs), Chemistry Climate Models (CCMs) and Earth System Models (ESMs). For reasons of computing time, these models often do not employ full stratospheric chemistry modules, but use prescribed ozone instead. This can lead to an insufficient representation of the interaction between stratosphere and troposphere. The SWIFT stratospheric ozone chemistry model focuses on the major reaction mechanisms of ozone production and loss in order to reduce the computational costs. SWIFT consists of two sub-models. 1) Inside the polar vortex, the model calculates polar-vortex-averaged ozone loss by solving a set of coupled differential equations for the key species in polar ozone chemistry. 2) In the extra-polar regime, which this poster focuses on, the complex system of differential equations of a full stratospheric chemistry model is replaced by an explicit algebraic polynomial, which can be solved in a fraction of the time needed by the full-scale model. The approach used to construct the polynomial is also referred to as repro-modeling and has been successfully applied to chemical models (Turanyi (1993), Lowe & Tomlin (2000)). The procedure uses data from the Lagrangian stratospheric chemistry and transport model ATLAS and yields one high-order polynomial per month for global ozone loss and production rates over 24 h. The stratospheric ozone change rates can be sufficiently described by 9 variables: latitude, altitude, temperature, the overhead ozone abundance, 4 mixing ratios of ozone-depleting chemical families (chlorine, bromine, nitrogen oxides and hydrogen oxides), and the ozone concentration itself. The ozone change rates in the lower stratosphere as a function of these 9 variables yield a sufficiently compact 9-D hyper-surface, which we can approximate with a polynomial. In the upper
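
    The repro-modeling idea — replacing an expensive chemistry integration with a polynomial fitted to its input-output behavior — can be shown in miniature. The sketch below uses 2 state variables and degree 6 against a toy "full model", versus 9 variables and a high-order polynomial in SWIFT:

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features(X, order):
        """All monomials of the columns of X up to a given total order."""
        n, d = X.shape
        cols = [np.ones(n)]
        for k in range(1, order + 1):
            for idx in combinations_with_replacement(range(d), k):
                cols.append(np.prod(X[:, list(idx)], axis=1))
        return np.column_stack(cols)

    # Toy "full model": an ozone-tendency-like function of two state variables.
    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, (2000, 2))
    rate = np.sin(2 * X[:, 0]) * np.exp(-X[:, 1] ** 2)

    A = poly_features(X, order=6)
    coef, *_ = np.linalg.lstsq(A, rate, rcond=None)  # least-squares polynomial fit
    print(f"max |error| of the degree-6 repro-model: {np.abs(A @ coef - rate).max():.2e}")
    ```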

  18. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing's flow-weighted transit route network (TRN), and we found that Beijing's TRNs in both 2011 and 2015 exhibit scale-free properties. We therefore propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers' dynamical behaviors triggered by topological change. In the model, the evolution of a TRN is an iterative process: at each time step, a certain number of new routes are generated, driven by travel demand, which leads to a dynamical evolution of the new routes' flow and triggers perturbations in nearby routes that in turn affect the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as a numerical simulation of this model. The results agree well with our empirical analysis, indicating that the model can simulate TRN evolution with scale-free properties in the distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.
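
    A crude flow-driven growth sketch conveys the spirit of such models: new routes attach to stations preferentially by accumulated flow (strength). The station-level bookkeeping, attachment rule, and parameters below are simplifications invented for illustration; the paper works in R-space, where nodes are routes:

    ```python
    import random

    # Toy flow-driven growth: new routes pick stations preferentially by
    # accumulated flow ("strength"). Rules and parameters are invented for
    # illustration; the paper's model (and its R-space analysis) is richer.
    random.seed(5)
    strength = {0: 1.0, 1: 1.0, 2: 1.0}       # seed stations

    def pick_station():
        """Roulette-wheel selection proportional to station strength."""
        total = sum(strength.values())
        r, acc = random.uniform(0, total), 0.0
        for s, w in strength.items():
            acc += w
            if acc >= r:
                return s
        return s

    for _ in range(200):                      # open 200 new routes
        new_station = max(strength) + 1
        strength[new_station] = 1.0           # each route also opens one new station
        stops = {new_station} | {pick_station() for _ in range(4)}
        flow = random.expovariate(1 / 10)     # demand attracted by the new route
        for s in stops:
            strength[s] += flow               # flow accumulates on served stations

    top = sorted(strength.values(), reverse=True)[:5]
    print("top-5 station strengths:", [round(x, 1) for x in top])
    ```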

  19. Response in the water quality of the Salton Sea, California, to changes in phosphorus loading: An empirical modeling approach

    Science.gov (United States)

    Robertson, D.M.; Schladow, S.G.

    2008-01-01

    Salton Sea, California, like many other lakes, has become eutrophic because of excessive nutrient loading, primarily phosphorus (P). A Total Maximum Daily Load (TMDL) is being prepared for P to reduce the input of P to the Sea. In order to better understand how P-load reductions should affect the average annual water quality of this terminal saline lake, three different eutrophication programs (BATHTUB, WiLMS, and the Seepage Lake Model) were applied. After verifying that specific empirical models within these programs were applicable to this saline lake, each model was calibrated using water-quality and nutrient-loading data for 1999 and then used to simulate the effects of specific P-load reductions. Model simulations indicate that a 50% decrease in external P loading would decrease near-surface total phosphorus concentrations (TP) by 25-50%. Application of other empirical models demonstrated that this decrease in loading should decrease near-surface chlorophyll a concentrations (Chl a) by 17-63% and increase Secchi depths (SD) by 38-97%. The wide range in estimated responses in Chl a and SD was primarily caused by uncertainty in how non-algal turbidity would respond to P-load reductions. If only the models most applicable to the Salton Sea are considered, a 70-90% P-load reduction is required for the Sea to be classified as moderately eutrophic (trophic state index of 55). These models simulate steady-state conditions in the Sea; therefore, it is difficult to ascertain how long it would take for the simulated changes to occur after load reductions. © 2008 Springer Science+Business Media B.V.
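
    A steady-state Vollenweider-type input-output relation illustrates the class of empirical submodels such programs embed: in-lake TP equals the areal P load divided by mean depth times the sum of the flushing and net loss rates. The coefficients below are generic textbook values, not Salton Sea calibrations, and this simple version is linear in load, whereas the calibrated models above are not:

    ```python
    # Steady-state Vollenweider-type input-output relation (generic textbook
    # coefficients, not Salton Sea calibrations): TP = L / (z * (1/tau + sigma)).
    def steady_state_tp(load_mg_m2_yr, z_mean_m, tau_yr, sigma_per_yr=1.0):
        """In-lake total P (mg/m^3) for areal load L, mean depth z, residence
        time tau, and net P loss rate sigma."""
        return load_mg_m2_yr / (z_mean_m * (1.0 / tau_yr + sigma_per_yr))

    base_load = 800.0  # mg P m^-2 yr^-1, illustrative
    for cut in (0.0, 0.5, 0.7, 0.9):
        tp = steady_state_tp(base_load * (1 - cut), z_mean_m=8.0, tau_yr=5.0)
        print(f"{int(cut * 100):2d}% load reduction -> TP = {tp:5.1f} mg/m^3")
    ```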

  20. Modelling time-varying volatility in the Indian stock returns: Some empirical evidence

    Directory of Open Access Journals (Sweden)

    Trilochan Tripathy

    2015-12-01

    This paper models time-varying volatility in one of India's main stock markets, namely the National Stock Exchange (NSE), located in Mumbai, investigating whether it has been affected by the recent global financial crisis. A Chow test indicates the presence of a structural break. Both symmetric and asymmetric GARCH models suggest that the volatility of NSE returns is persistent and asymmetric and has increased as a result of the crisis. The model under the Generalized Error Distribution appears to be the most suitable one. However, its out-of-sample forecasting performance is relatively poor.
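
    A GARCH(1,1) fit with a generalized error distribution, as in the paper's preferred specification, is a few lines with the Python arch package; synthetic returns stand in for the NSE data, and an asymmetric (GJR) variant is obtained with o=1:

    ```python
    import numpy as np
    from arch import arch_model

    # GARCH(1,1) with a generalized error distribution; synthetic returns stand
    # in for NSE data. An asymmetric (GJR) variant is obtained with o=1.
    rng = np.random.default_rng(6)
    returns = rng.standard_t(df=6, size=1500) * 1.2   # placeholder daily % returns

    am = arch_model(returns, vol="GARCH", p=1, q=1, dist="ged")
    res = am.fit(disp="off")
    print(res.params)  # mu, omega, alpha[1], beta[1], nu
    ```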