WorldWideScience

Sample records for model performance model

  1. Modeling typical performance measures

    NARCIS (Netherlands)

    Weekers, Anke Martine

    2009-01-01

    In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as those used in maximum performance measurement are applied.

  2. Photovoltaic array performance model.

    Energy Technology Data Exchange (ETDEWEB)

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics for photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
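
    The full Sandia model uses dozens of empirically fitted coefficients per module; the core scaling it captures can be illustrated with a much simpler first-order sketch. The NOCT approximation, the 300 W rating, and the temperature coefficient below are generic illustrative values, not parameters of the Sandia model.

```python
# Simplified PV module power estimate: output scales with plane-of-array
# irradiance and derates linearly with cell temperature above 25 C.
# This is a first-order sketch, not the full Sandia performance model.

def cell_temperature(g_poa, t_ambient, noct=45.0):
    """Estimate cell temperature (C) from irradiance (W/m^2) and ambient
    temperature (C) using the common NOCT approximation."""
    return t_ambient + (noct - 20.0) * g_poa / 800.0

def module_power(g_poa, t_ambient, p_stc=300.0, gamma=-0.004):
    """DC power (W) for a module rated p_stc at STC, with temperature
    coefficient gamma (1/C)."""
    t_cell = cell_temperature(g_poa, t_ambient)
    return p_stc * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))
```

    Even this toy form shows why field output at full sun falls below the nameplate rating: the cell runs well above 25 C, so the temperature derate applies.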

  3. Hadoop Performance Models

    OpenAIRE

    Herodotou, Herodotos

    2011-01-01

    Hadoop MapReduce is now a popular choice for performing large-scale data analytics. This technical report describes a detailed set of mathematical performance models for describing the execution of a MapReduce job on Hadoop. The models describe dataflow and cost information at the fine granularity of phases within the map and reduce tasks of a job execution. The models can be used to estimate the performance of MapReduce jobs as well as to find the optimal configuration settings to use when running the jobs.

  4. Hadoop Performance Models

    CERN Document Server

    Herodotou, Herodotos

    2011-01-01

    Hadoop MapReduce is now a popular choice for performing large-scale data analytics. This technical report describes a detailed set of mathematical performance models for describing the execution of a MapReduce job on Hadoop. The models describe dataflow and cost information at the fine granularity of phases within the map and reduce tasks of a job execution. The models can be used to estimate the performance of MapReduce jobs as well as to find the optimal configuration settings to use when running the jobs.
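
    The phase-level structure such models capture can be illustrated with a much-simplified wave-based cost sketch. The slot counts and per-phase times below are hypothetical inputs, and the equations mirror only the overall structure, not Herodotou's actual models.

```python
import math

# Illustrative phase-based cost model for a MapReduce job: tasks run in
# "waves" limited by the number of available slots, and the job time is the
# sum of the map waves plus the shuffle+reduce waves.

def job_time(n_map_tasks, n_reduce_tasks, map_slots, reduce_slots,
             t_map, t_shuffle, t_reduce):
    """Estimated job runtime given per-task phase durations (seconds)."""
    map_waves = math.ceil(n_map_tasks / map_slots)
    reduce_waves = math.ceil(n_reduce_tasks / reduce_slots)
    return map_waves * t_map + reduce_waves * (t_shuffle + t_reduce)
```

    A model in this shape already exposes the configuration trade-off the report targets: changing slot counts or task granularity changes the wave counts, and hence the predicted runtime.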

  5. NIF capsule performance modeling

    OpenAIRE

    Weber S.; Callahan D.; Cerjan C.; Edwards M.; Haan S.; Hicks D.; Jones O.; Kyrala G.; Meezan N.; Olson R; Robey H.; Spears B.; Springer P.; Town R.

    2013-01-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties.

  6. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  7. Performance modeling of Beamlet

    Energy Technology Data Exchange (ETDEWEB)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-06-27

    Detailed modeling of beam propagation in Beamlet has been performed to predict system performance. New software allows extensive use of real optical component characteristics, which has resulted in close agreement between calculated and measured beam distributions.

  8. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
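
    Look-up tables are named above as one of the APM building blocks. A minimal sketch of that idea: quantized operating conditions index a table of report distributions. The OC categories, range threshold, and probabilities below are invented for illustration.

```python
# Look-up-table APM sketch: explicit, quantized operating conditions (OCs)
# map to a distribution over ATR reports. All table values are hypothetical.

OC_TABLE = {
    ("clear", "near"):  {"detect": 0.95, "miss": 0.05},
    ("clear", "far"):   {"detect": 0.80, "miss": 0.20},
    ("cloudy", "near"): {"detect": 0.85, "miss": 0.15},
    ("cloudy", "far"):  {"detect": 0.60, "miss": 0.40},
}

def quantize_range(range_km):
    """Quantize a continuous OC (target range) into a coarse bin."""
    return "near" if range_km < 5.0 else "far"

def predict_report_distribution(weather, range_km):
    """Map explicit OCs to a distribution over ATR reports."""
    return OC_TABLE[(weather, quantize_range(range_km))]
```

    The quantization step is exactly the "less resolution than the real OCs" point in the abstract: the APM sees only the bin, not the exact range.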

  9. A Model Performance

    Science.gov (United States)

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  10. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, and sonar signal processing.

  11. Digital Troposcatter Performance Model

    Science.gov (United States)

    1983-12-01

    These performance measures require a complete statistical description of the components of the detection variable, which we ... BER threshold Pthr. Let us denote by Γ the region of the 5-dimensional space (y, ŷ) in which the BER exceeds Pthr: Γ = {(y, ŷ): Pe(y, ŷ) > Pthr} ... by solving the nonlinear equation Pe(y, ŷ) = Pthr. A closed-form expression for Pout(y) cannot be obtained. Instead we developed an ...
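
    The computation the abstract describes (outage probability as the probability that the instantaneous BER exceeds a threshold, with no closed form available) can be illustrated with a generic Monte Carlo estimate. The DPSK-like BER curve and Rayleigh-fading assumption below are illustrative stand-ins, not the report's 5-dimensional channel model.

```python
import random

# Monte Carlo outage estimate: draw fading states, evaluate the BER in each,
# and count how often the BER exceeds the threshold p_thr.

def ber(snr):
    """Toy BER curve with a 1/(4*snr) high-SNR slope (DPSK-like), capped at 0.5."""
    return min(0.5, 1.0 / (4.0 * snr)) if snr > 0 else 0.5

def outage_probability(mean_snr, p_thr, n=100_000, seed=1):
    """Fraction of fading states whose BER exceeds p_thr. Under Rayleigh
    fading the instantaneous SNR is exponentially distributed."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n):
        snr = rng.expovariate(1.0 / mean_snr)
        if ber(snr) > p_thr:
            exceed += 1
    return exceed / n
```

    For this toy model the answer is checkable analytically (BER > p_thr exactly when snr < 1/(4*p_thr)), which is a useful sanity test before trusting the method on a channel with no closed form.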

  12. Statistical modeling of program performance

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    The task of evaluating program performance often arises in the design of computer systems and in iterative compilation. The traditional way to solve this problem is to emulate program execution on the target system. A modern alternative is to model program performance statistically on the computer under investigation. This work introduces such a statistical modeling method, called Velocitas, and presents its implementation in the Adaptor framework. An investigation of the method's effectiveness showed that it predicts program performance accurately.

  13. Air Conditioner Compressor Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After these tests, several modeling approaches were proposed; among them, a performance modeling approach proved to be one of the three favored for its simplicity and its ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task of analyzing the motor testing data to derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor: because the motor's inertia is very small (H < 0.05), it moves from one steady state to another within a few cycles. The performance model is therefore a fair representation of the motor's behavior in both running and stalling states.
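
    The curve-fitting step described above can be sketched with an ordinary least-squares quadratic fit of one of the relationships, say P versus V for the running state. The data points below are invented; the point is the mechanics of fitting a P-V curve, not the actual SPAC parameters.

```python
# Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations,
# solved with Gaussian elimination (partial pivoting). Suitable for fitting
# a measured P-V curve to a quadratic.

def fit_quadratic(xs, ys):
    """Return [a, b, c] minimizing sum((a + b*x + c*x^2 - y)^2)."""
    s = [sum(x**k for x in xs) for k in range(5)]          # moment sums
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],                          # augmented matrix
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):                                      # forward elimination
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                     # back substitution
        coef[i] = (m[i][3] - sum(m[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / m[i][i]
    return coef
```

    Fitting separate curves for the running and stalling regimes, as the abstract describes, would simply mean calling this on the two data subsets.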

  14. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance, and a causal relationship among these variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables over a period of 10 years on the performance of a case supply chain in the auto business.

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new field.

  16. Modeling road-cycling performance.

    Science.gov (United States)

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effects of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89. The most important determinants of road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
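
    The first-principles structure of such a model can be sketched as a power-demand balance: aerodynamic drag, rolling resistance, and climbing, divided by drivetrain efficiency. The coefficient values below are typical textbook numbers, not the paper's fitted values, and the sketch omits the paper's corrections for altitude, humidity, and rotational kinetic energy.

```python
# Power (W) required to ride at ground speed v (m/s): drag acts against the
# air speed, rolling resistance and gravity against the ground speed.

RHO = 1.225   # air density at sea level, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def cycling_power(v, mass, cda=0.3, crr=0.004, grade=0.0, wind=0.0, eff=0.97):
    """Mechanical power demand for rider+bike mass (kg), drag area cda (m^2),
    rolling coefficient crr, road grade (rise/run), headwind (m/s)."""
    v_air = v + wind
    p_aero = 0.5 * RHO * cda * v_air**2 * v    # drag force * ground speed
    p_roll = crr * mass * G * v
    p_climb = mass * G * grade * v
    return (p_aero + p_roll + p_climb) / eff
```

    Because the aerodynamic term grows with the cube of speed, this form immediately explains the paper's practical findings on drafting and drag reduction: small changes in effective CdA dominate at racing speeds.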

  17. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc., are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.

  18. Performance modeling of optical refrigerators

    Energy Technology Data Exchange (ETDEWEB)

    Mills, G.; Mord, A. [Ball Aerospace and Technologies Corp., Boulder, CO (United States). Cryogenic and Thermal Engineering

    2006-02-15

    Optical refrigeration using anti-Stokes fluorescence in solids has several advantages over more conventional techniques, including low mass, low volume, low cost, and no vibration. It also has the potential to enable miniature cryocoolers on the scale of a few cubic centimeters. It has been the topic of analysis and experimental work by several organizations. In 2003, we demonstrated the first optical refrigerator. We have developed a comprehensive system-level performance model of optical refrigerators. The current version models the refrigeration cycle based on the fluorescent material's emission and absorption data at ambient and reduced temperature for the Ytterbium-ZBLAN glass (Yb:ZBLAN) cooling material. It also includes the heat transfer into the refrigerator cooling assembly due to radiation and conduction. In this paper, we report on modeling results which reveal the interplay between size, power input, and cooling load. This interplay results in practical size limitations using Yb:ZBLAN. (author)

  19. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", together with a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model using an example taken from a management study.
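
    Quantitative performance measures for a logistic model typically include a discrimination statistic such as the AUC. As an illustration of the kind of measure the abstract refers to (not necessarily the one the paper uses), here is a minimal AUC computed in its Mann-Whitney form from predicted probabilities and binary outcomes:

```python
# AUC as a rank statistic: the probability that a randomly chosen positive
# case is scored above a randomly chosen negative case (ties count half).

def auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 means the model discriminates no better than chance, and 1.0 means perfect separation, which is why it is a natural ingredient in a "good model" checklist.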

  20. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure.

  1. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  2. A medal share model for Olympic performance

    OpenAIRE

    Ang Sun; Rui Wang; Zhaoguo Zhan

    2015-01-01

    A sizable empirical literature relates a nation's Olympic performance to socioeconomic factors by adopting linear regression or a Tobit approach suggested by Bernard and Busse (2004). We propose an alternative model where a nation's medal share depends on its competitiveness relative to other nations and the model is logically consistent. Empirical evidence shows that our model fits data better than the existing linear regression and Tobit model. Besides Olympic Games, the proposed model and ...

  3. Performance of Information Criteria for Spatial Models.

    Science.gov (United States)

    Lee, Hyeyoung; Ghosh, Sujit K

    2009-01-01

    Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are just an approximation to the true data generating process, but among such model approximations it is our goal to select the "best" one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence model comparison is required to identify the "best" model among several such candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature by the use of traditional information criteria based methods, even though such criteria have been developed based on the assumption of independent observations. We evaluate the performance of some of the popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria such as Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Corrected AIC (AICc) in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.
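
    The criteria compared above can be written out explicitly: each trades off the maximized log-likelihood logL against the number of parameters k (and, for BIC and AICc, the sample size n).

```python
import math

# Standard information criteria; lower values indicate a preferred model.

def aic(log_l, k):
    """Akaike Information Criterion: -2*logL + 2k."""
    return -2.0 * log_l + 2.0 * k

def bic(log_l, k, n):
    """Bayesian Information Criterion: -2*logL + k*ln(n)."""
    return -2.0 * log_l + k * math.log(n)

def aicc(log_l, k, n):
    """Corrected AIC, a small-sample adjustment; requires n > k + 1."""
    return aic(log_l, k) + 2.0 * k * (k + 1) / (n - k - 1)
```

    The article's point is that these formulas assume independent observations, so their penalty terms are not obviously calibrated for spatially correlated data, which is what the simulation study probes.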

  4. Summary of photovoltaic system performance models

    Energy Technology Data Exchange (ETDEWEB)

    Smith, J. H.; Reiter, L. J.

    1984-01-15

    The purpose of this study is to provide a detailed overview of photovoltaics (PV) performance modeling capabilities that have been developed during recent years for analyzing PV system and component design and policy issues. A set of 10 performance models has been selected, spanning a representative range of capabilities from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Next, each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. Then each of the issues is discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. Finally, the models are grouped into categories to illustrate their purposes and perspectives.

  5. Intern Performance in Three Supervisory Models

    Science.gov (United States)

    Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy

    2011-01-01

    Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in traditional-triad…

  6. Performance modeling of automated manufacturing systems

    Science.gov (United States)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment of modeling methodologies and analysis techniques for the performance evaluation of automated manufacturing systems is presented. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.
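
    The queueing paradigm mentioned above in its simplest form is the M/M/1 station: a single machine with Poisson job arrivals (rate lam) and exponential processing times (rate mu). Its closed-form metrics make a compact illustration of how such models predict manufacturing performance.

```python
# Steady-state metrics of a stable M/M/1 queue, a single-machine workstation
# model: utilization, mean jobs in system, and mean time in system.

def mm1_metrics(lam, mu):
    """Return (utilization, mean number in system, mean time in system)
    for arrival rate lam and service rate mu, with lam < mu."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu
    l = rho / (1.0 - rho)    # mean jobs in system; Little's law gives l = lam * w
    w = 1.0 / (mu - lam)     # mean time in system
    return rho, l, w
```

    The blow-up of l and w as lam approaches mu captures, in miniature, why highly utilized workstations dominate cycle time in automated manufacturing lines.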

  7. Performance-oriented Organisation Modelling

    NARCIS (Netherlands)

    Popova, V.; Sharpanskykh, A.

    2006-01-01

    Each organisation exists or is created for the achievement of one or more goals. To ensure continued success, the organisation should monitor its performance with respect to the formulated goals. In practice the performance of an organisation is often evaluated by estimating its performance indicators.

  8. Modeling Performance of Plant Growth Regulators

    Directory of Open Access Journals (Sweden)

    W. C. Kreuser

    2017-03-01

    Growing degree day (GDD) models can predict the performance of plant growth regulators (PGRs) applied to creeping bentgrass (Agrostis stolonifera L.). The goal of this letter is to describe experimental design strategies and modeling approaches to create PGR models for different PGRs, application rates, and turf species. Results from testing the models indicate that clipping yield should be measured until the growth response has diminished. This is in contrast to reapplication of a PGR at preselected intervals. During modeling, inclusion of an amplitude-dampening coefficient in the sinewave model allows the PGR effect to dissipate with time.
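
    A GDD-based response model of the kind described can be sketched as a sine wave in accumulated growing degree days multiplied by an exponential dampening term, so the PGR effect dissipates over time. The parameter values below are illustrative assumptions, not fitted values from the letter.

```python
import math

# Relative clipping yield vs. accumulated GDD: 1.0 means no PGR effect.
# The exp(-damp*gdd) factor is the amplitude-dampening coefficient.

def relative_yield(gdd, amplitude=0.3, damp=0.005, period=400.0):
    """Clipping yield relative to untreated turf as a dampened sine wave
    of accumulated growing degree days."""
    decay = math.exp(-damp * gdd)
    return 1.0 - amplitude * decay * math.sin(2.0 * math.pi * gdd / period)
```

    The dampening term is what lets the modeled suppression (and the later rebound phase of the sine wave) fade instead of repeating forever, matching the observation that the growth response eventually diminishes.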

  9. METAPHOR (version 1): Users guide. [performability modeling

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  10. Assembly line performance and modeling

    National Research Council Canada - National Science Library

    Rane, Arun B; Sunnapwar, Vivek K

    2017-01-01

    The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations...

  11. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.
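
    The first step of the approach described above, evaluating performance measures over a moving window to get a time-resolved error signal, can be sketched for a single measure. Windowed RMSE stands in here for the "large set of classically used performance measures" the paper computes.

```python
# Time-resolved performance assessment, step one: compute an error measure
# in each sliding window of the simulated vs. observed series, yielding one
# value per window start position instead of a single global number.

def windowed_rmse(obs, sim, window):
    """RMSE of sim against obs within each length-`window` sliding window."""
    out = []
    for i in range(len(obs) - window + 1):
        sq = [(o - s) ** 2
              for o, s in zip(obs[i:i + window], sim[i:i + window])]
        out.append((sum(sq) / window) ** 0.5)
    return out
```

    Repeating this for many measures produces the high-dimensional performance matrix that the paper then compresses with self-organizing maps and cluster analysis.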

  12. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.

  13. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, code input, and calculation results.

  14. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed-tilt cases and below 8% for all one-axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one-axis tracking issues are discussed in detail.
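
    One plausible reading of the annualized prediction error quoted above is a signed percent error of total modeled generation against total measured generation over the year; the report may define it differently, so treat this as an illustrative definition only.

```python
# Annualized prediction error: signed percent difference between total
# modeled and total measured energy over matching intervals.

def annualized_error(modeled_kwh, measured_kwh):
    """Percent error of summed modeled vs. summed measured energy."""
    total_meas = sum(measured_kwh)
    return 100.0 * (sum(modeled_kwh) - total_meas) / total_meas
```

    Aggregating before dividing is deliberate: monthly over- and under-predictions (e.g. seasonal irradiation bias) partially cancel in an annualized figure, which is why the report also examines the seasonal deviations separately.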

  15. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated, keeping pace with rapid changes in basic PV technology and with the extensive empirical data compiled on such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of both their subsystem and full-system elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate-collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  16. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. 
Making the results from routine
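The routine benchmarking described above ultimately reduces to computing standard skill metrics of a model field against reference observations. As an illustration only (not the DECK tooling itself), a minimal sketch of two such metrics, mean bias and RMSE, using hypothetical surface-temperature values:

```python
import math

def bias_and_rmse(model, obs):
    """Mean bias and root-mean-square error between a model field and observations."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return bias, rmse

# hypothetical model vs. observed surface temperatures (K) at four grid points
model_t = [288.1, 289.4, 290.0, 287.6]
obs_t   = [287.9, 289.0, 290.5, 287.8]
b, r = bias_and_rmse(model_t, obs_t)
print(b, r)
```

In an actual benchmarking workflow the same metric would be computed over area-weighted global fields and tracked across model versions.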

  17. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-03-01

The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this thesis, a methodology is proposed to reduce cycle time and the time lost to important factors such as equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection and fatigue, in order to improve output within given cost constraints. Various relationships between these factors, the corresponding costs and output are established using a scientific approach. This methodology is validated in three different vehicle assembly plants. The proposed methodology may help practitioners optimize the assembly line using lean techniques.

  18. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  19. Performance Engineering in the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-05-30

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years.

  20. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we consid

  1. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)]

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.
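The transition-state-theory rate laws referenced above share a common form: a forward dissolution rate scaled by a chemical affinity term that vanishes as the solution approaches saturation. A minimal sketch with hypothetical parameter values (not one of the reviewed models):

```python
def glass_dissolution_rate(k_forward, q_over_k):
    """TST-style affinity rate law: rate = k * (1 - Q/K).

    Q/K is the ion activity product over the equilibrium constant for the
    rate-controlling reaction; the rate goes to zero at saturation (Q = K).
    """
    return k_forward * (1.0 - q_over_k)

# hypothetical forward rate of 1e-6 mol/m2/s with the solution at 75% of saturation
print(glass_dissolution_rate(1e-6, 0.75))  # 2.5e-07 mol/m2/s
```

The long-term-rate controversy noted in the review concerns precisely what happens to this expression as Q/K approaches 1.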

  2. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
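The statistical multiplexing phenomenon the monograph analyzes can be illustrated with the simplest queueing model. Under an M/M/1 assumption (a toy stand-in for the book's sample-path machinery), pooling two lightly loaded links into one link of twice the capacity halves the mean delay:

```python
def mm1_mean_delay(lam, mu):
    """Mean sojourn time W = 1/(mu - lam) for a stable M/M/1 queue."""
    assert lam < mu, "queue must be stable (lambda < mu)"
    return 1.0 / (mu - lam)

# two separate links, each with arrival rate 0.4 and service rate 1.0
separate = mm1_mean_delay(0.4, 1.0)
# one shared link carrying both streams at double the service rate
multiplexed = mm1_mean_delay(0.8, 2.0)
print(separate, multiplexed)  # multiplexing halves the mean delay
```

The rates here are hypothetical; the qualitative conclusion (pooling traffic onto faster shared resources improves delay at equal utilization) is the textbook multiplexing gain.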

  3. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

As a continuation of the published work on a model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data, and the handling of discrepancies. Simulations show that discrepancies in the glass data can be identified but not quantified, so accurate glass data, which can be obtained from the glass manufacturers, is important. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. We discuss the accuracy of the model predictions, their wavelength calibration compared with conventional empirical fitting, the behaviour of the open parameters during optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres. The optimized model is able to predict to high accuracy the drifts in the spectrum caused by environmental fluctuations. The pattern of spectral drifts across the 2D spectrum, which varies from image to image, is also predictable with the optimized model. We also discuss the possible science cases where the model can contribute.

  4. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  5. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  6. PV performance modeling workshop summary report.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Tasca, Coryne Adelle (SRA International, Inc., Fairfax, VA); Cameron, Christopher P.

    2011-05-01

    During the development of a solar photovoltaic (PV) energy project, predicting expected energy production from a system is a key part of understanding system value. System energy production is a function of the system design and location, the mounting configuration, the power conversion system, and the module technology, as well as the solar resource. Even if all other variables are held constant, annual energy yield (kWh/kWp) will vary among module technologies because of differences in response to low-light levels and temperature. A number of PV system performance models have been developed and are in use, but little has been published on validation of these models or the accuracy and uncertainty of their output. With support from the U.S. Department of Energy's Solar Energy Technologies Program, Sandia National Laboratories organized a PV Performance Modeling Workshop in Albuquerque, New Mexico, September 22-23, 2010. The workshop was intended to address the current state of PV system models, develop a path forward for establishing best practices on PV system performance modeling, and set the stage for standardization of testing and validation procedures for models and input parameters. This report summarizes discussions and presentations from the workshop, as well as examines opportunities for collaborative efforts to develop objective comparisons between models and across sites and applications.

  7. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    Lifetime cost and performance (LCP) model assists in assessment of design options for photovoltaic systems. LCP is simulation of performance, cost, and revenue streams associated with photovoltaic power systems connected to electric-utility grid. LCP provides user with substantial flexibility in specifying technical and economic environment of application.
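The energy-production side of a lifetime cost and performance calculation starts from a first-order annual yield estimate. A sketch with hypothetical system parameters (the LCP model itself simulates performance, cost and revenue streams in far more detail):

```python
def annual_energy_yield(p_stc_kw, irradiation_kwh_m2, performance_ratio):
    """First-order yield: kWh/yr = kWp * (annual irradiation / 1 kW/m2 STC) * PR.

    The performance ratio lumps together temperature, low-light, wiring,
    inverter and soiling losses.
    """
    return p_stc_kw * irradiation_kwh_m2 * performance_ratio

# hypothetical 5 kWp array, 1800 kWh/m2/yr plane-of-array irradiation, PR = 0.8
print(annual_energy_yield(5.0, 1800.0, 0.8))  # about 7200 kWh/yr (1440 kWh/kWp)
```

Dividing by the installed kWp gives the specific yield (kWh/kWp) that, as the abstract notes, varies between module technologies via the performance ratio.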

  8. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  9. A Procurement Performance Model for Construction Frameworks

    Directory of Open Access Journals (Sweden)

    Terence Y M Lam

    2015-07-01

Full Text Available Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer term relationships between clients and suppliers in order to improve project outcomes. Research undertaken into highways maintenance set within a major county council has confirmed that such collaborative procurement methods can improve the time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers in the whole project delivery process, including the pre- and post-contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central tendency statistics from the questionnaires, as well as content analysis of the interview transcripts, were conducted. It was confirmed that long term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively and collaboratively manage contractor performance through procurement measures, including the use of longer contract terms and KPIs, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering. It is suggested that building-based projects, or other projects that share these characteristics, are grouped together and used for further research of the phenomena discovered.

  10. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  11. Human visual performance model for crewstation design

    Science.gov (United States)

    Larimer, James; Prevost, Michael; Arditi, Aries; Azueta, Steven; Bergen, James; Lubin, Jeffrey

    1991-01-01

    An account is given of a Visibility Modeling Tool (VMT) which furnishes a crew-station designer with the means to assess configurational tradeoffs, with a view to the impact of various options on the unambiguous access of information to the pilot. The interactive interface of the VMT allows the manipulation of cockpit geometry, ambient lighting, pilot ergonomics, and the displayed symbology. Performance data can be displayed in the form of 3D contours into the crewstation graphic model, thereby yielding an indication of the operator's visual capabilities.

  12. A conceptual model for manufacturing performance improvement

    Directory of Open Access Journals (Sweden)

    M.A. Karim

    2009-07-01

Full Text Available Purpose: The important performance objectives that manufacturers seek can be achieved through adopting the appropriate manufacturing practices. This paper presents a conceptual model proposing relationships between advanced quality practices, perceived manufacturing difficulties and manufacturing performance.Design/methodology/approach: A survey-based approach was adopted to test the hypotheses proposed in this study. The selection of research instruments for inclusion in this survey was based on a literature review, the pilot case studies and the relevant industrial experience of the author. A sample of 1000 manufacturers across Australia was randomly selected. Quality managers were requested to complete the questionnaire, as the task of dealing with quality and reliability issues is a quality manager’s major responsibility.Findings: Evidence indicates that product quality and reliability is the main competitive factor for manufacturers. Design and manufacturing capability and on-time delivery came second. Price is considered the least important factor for the Australian manufacturers. Results show that, collectively, the advanced quality practices proposed in this study neutralize the difficulties manufacturers face and contribute to most of the performance objectives of the manufacturers. The companies that put more emphasis on the advanced quality practices have fewer problems in manufacturing and better performance on most manufacturing performance indices. The results validate the proposed conceptual model and lend credence to the hypothesized relationships between quality practices, manufacturing difficulties and manufacturing performance.Practical implications: The model shown in this paper provides a simple yet highly effective approach to achieving significant improvements in product quality and manufacturing performance. This study introduces a relationship based ‘proactive’ quality management approach and provides great

  13. High temperature furnace modeling and performance verifications

    Science.gov (United States)

    Smith, James E., Jr.

    1992-01-01

Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the performance of the zirconia furnace as a directional solidification furnace element. The third objective was to establish a database on the materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. 1-D and 2-D spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  14. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.
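The hybrid-model monitoring idea, comparing measured equipment output against what a design model predicts and alarming on the drift, can be sketched as a simple residual check. This is an illustration of the general approach only; I-Trend's actual trending and diagnostic logic is proprietary, and all values below are hypothetical:

```python
def health_alarm(predicted, measured, threshold):
    """Flag possible deterioration when the measured-minus-model residual
    exceeds an alarm band; returns (alarm_flag, residual)."""
    residual = measured - predicted
    return abs(residual) > threshold, residual

# hypothetical engine exhaust gas temperature: model predicts 650 C,
# the sensor reads 668 C, and the alarm band is +/- 15 C
print(health_alarm(650.0, 668.0, 15.0))  # (True, 18.0)
```

Trending the residual over time, rather than the raw measurement, is what lets such systems detect deterioration before it affects operational capability.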

  15. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment.Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan.Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities.Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems.Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk.Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, the stock index and gross domestic product.Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
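The predictive-power comparisons described above rest on receiver operating characteristics. The area under the ROC curve (AUC) can be computed rank-wise as the probability that a randomly chosen defaulter receives a higher risk score than a randomly chosen non-defaulter. A sketch with hypothetical scores, not the study's data:

```python
def roc_auc(scores, labels):
    """Rank-based AUC: fraction of (defaulter, non-defaulter) pairs
    where the defaulter's risk score is higher (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical default probabilities from a fitted scoring model (1 = defaulted)
scores = [0.9, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(roc_auc(scores, labels))  # 5 of 6 pairs correctly ordered -> 0.8333...
```

An AUC of 0.5 is chance-level discrimination; adding informative macroeconomic covariates, as the study reports, should push it toward 1.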

  16. Computer modeling of thermoelectric generator performance

    Science.gov (United States)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

Features of the DEGRA 2 computer code for simulating the operation of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
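For a single couple, the electrical side of such a simulation reduces to a Seebeck voltage driving a load circuit. A minimal sketch with hypothetical couple parameters (DEGRA 2 models far more, including Thomson heating, segmented legs and sublimation):

```python
def teg_power(seebeck_v_per_k, t_hot, t_cold, r_internal, r_load):
    """Electrical output of one thermoelectric couple:
    open-circuit voltage V = S * dT, current I = V / (R_int + R_load),
    delivered power P = I^2 * R_load."""
    v_oc = seebeck_v_per_k * (t_hot - t_cold)
    i = v_oc / (r_internal + r_load)
    return i * i * r_load

# hypothetical couple: S = 400 uV/K, hot side 773 K, cold side 273 K,
# internal resistance 10 mOhm with a matched 10 mOhm load
print(teg_power(400e-6, 773.0, 273.0, 0.010, 0.010))  # about 1.0 W
```

Matching the load to the internal resistance, as in this example, maximizes delivered power for fixed junction temperatures.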

  17. Reactive puff model SCICHEM: Model enhancements and performance studies

    Science.gov (United States)

    Chowdhury, B.; Karamchandani, P. K.; Sykes, R. I.; Henn, D. S.; Knipping, E.

    2015-09-01

    The SCICHEM model incorporates complete gas phase, aqueous and aerosol phase chemistry within a state-of-the-science Gaussian puff model SCIPUFF (Second-order Closure Integrated Puff). The model is a valuable tool that can be used to calculate the impacts of a single source or a small number of sources on downwind ozone and PM2.5. The model has flexible data requirements: it can be run with routine surface and upper air observations or with prognostic meteorological model outputs and source emissions are specified in a simple text format. This paper describes significant advances to the dispersion and chemistry components of the model in the latest release, SCICHEM 3.0. Some of the major advancements include modeling of skewed turbulence for convective boundary layer and updated chemistry schemes (CB05 gas phase chemical mechanism; AERO5 aerosol and aqueous modules). The results from SCICHEM 3.0 are compared with observations from a tracer study as well as aircraft measurements of reactive species in power plant plumes from two field studies. The results with the tracer experiment (Copenhagen study) show that the incorporation of skewed turbulence improves the calculation of tracer dispersion and transport. The comparisons with the Cumberland and Dolet Hills power plume measurements show good correlation between the observed and predicted concentrations of reactive gaseous species at most downwind distances from the source.
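The dispersion core of any puff model is the Gaussian concentration profile carried by each puff. A minimal single-puff sketch, free space with no ground reflection, chemistry, or closure corrections, and with hypothetical spread parameters:

```python
import math

def gaussian_puff_concentration(q_kg, x, y, z, sigma_x, sigma_y, sigma_z):
    """Concentration (kg/m3) at offset (x, y, z) from the centre of a single
    instantaneous Gaussian puff of mass q_kg, ignoring ground reflection."""
    norm = q_kg / ((2.0 * math.pi) ** 1.5 * sigma_x * sigma_y * sigma_z)
    return norm * math.exp(
        -0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2 + (z / sigma_z) ** 2)
    )

# hypothetical 1 kg release sampled at the puff centre with 50 m spreads
print(gaussian_puff_concentration(1.0, 0.0, 0.0, 0.0, 50.0, 50.0, 50.0))
```

SCIPUFF's second-order closure replaces the fixed sigmas here with prognostic puff moments, and SCICHEM adds the gas, aqueous and aerosol chemistry on top.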

  18. Optical Performance Modeling of FUSE Telescope Mirror

    Science.gov (United States)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument - designed and developed by the Johns Hopkins University and launched in June 1999 - is an astrophysics satellite which provides high resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal incidence, off-axis parabolic mirrors; four Rowland circle spectrograph channels with holographic gratings; and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence
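Describing low-frequency surface errors with Zernike polynomials, as OSAC does for normal-incidence mirrors, amounts to summing weighted polynomial terms over the unit pupil. A sketch evaluating a few of the lowest-order terms at one pupil point, with hypothetical coefficients (OSAC's actual basis and normalization may differ):

```python
import math

def zernike_surface(rho, theta, coeffs):
    """Surface error at polar pupil coordinate (rho, theta), 0 <= rho <= 1,
    as a weighted sum of a few low-order Zernike terms."""
    terms = {
        "piston":  1.0,
        "tilt_x":  rho * math.cos(theta),
        "tilt_y":  rho * math.sin(theta),
        "defocus": 2.0 * rho ** 2 - 1.0,
        "astig":   rho ** 2 * math.cos(2.0 * theta),
    }
    return sum(coeffs.get(name, 0.0) * value for name, value in terms.items())

# hypothetical low-order figure error (coefficients in, e.g., nm of surface)
print(zernike_surface(0.5, 0.0, {"defocus": 10.0, "astig": 5.0}))  # -3.75
```

Fitting such coefficients to metrology data (METDAT's role) separates the correctable low-order figure from the roughness handled by the PSD/ACV scatter models.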

  19. CASTOR detector Model, objectives and simulated performance

    CERN Document Server

    Angelis, Aris L S; Bartke, Jerzy; Bogolyubsky, M Yu; Chileev, K; Erine, S; Gladysz-Dziadus, E; Kharlov, Yu V; Kurepin, A B; Lobanov, M O; Maevskaya, A I; Mavromanolakis, G; Nicolis, N G; Panagiotou, A D; Sadovsky, S A; Wlodarczyk, Z

    2001-01-01

    We present a phenomenological model describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them can be naturally explained. We describe the CASTOR calorimeter, a subdetector of the ALICE experiment dedicated to the search for Centauro in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented. (22 refs).

  20. CASTOR detector. Model, objectives and simulated performance

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D. [University of Athens, Nuclear and Particle Physics Division, Athens (Greece); Aslanoglou, X.; Nicolis, N. [Ioannina Univ., Ioannina (Greece). Dept. of Physics; Bartke, J.; Gladysz-Dziadus, E. [Institute of Nuclear Physics, Cracow (Poland); Lobanov, M.; Erine, S.; Kharlov, Y.V.; Bogolyubsky, M.Y. [Institute for High Energy Physics, Protvino (Russian Federation); Kurepin, A.B.; Chileev, K. [Institute for Nuclear Research, Moscow (Russian Federation); Wlodarczyk, Z. [Pedagogical University, Institute of Physics, Kielce (Poland)

    2001-10-01

A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event, and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described, a subdetector of the ALICE experiment dedicated to the search for Centauro in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented.

  1. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    April 2014. 103 engineering and development including ... formal model management team must rely on guess work. ... model provides a systematic method for comparing ...... In 18th Annual Software Engineering and Knowledge. Engineering ...

  2. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels;

    2003-01-01

A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in 2: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for resp. the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full scale boiler plant.
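The dynamic sub-model idea can be illustrated with the simplest possible lumped energy balance for the water side, integrated explicitly. This is a toy version only; the paper's DAE formulation and MatLab/Simulink setup are far more detailed, and all parameter values below are hypothetical:

```python
def simulate_water_temperature(t_end, dt, q_in_w, m_kg, cp, t0, ua, t_amb):
    """Explicit Euler integration of the lumped energy balance
    m * cp * dT/dt = Q_in - UA * (T - T_amb)."""
    t, temp = 0.0, t0
    while t < t_end:
        dT_dt = (q_in_w - ua * (temp - t_amb)) / (m_kg * cp)
        temp += dT_dt * dt
        t += dt
    return temp

# hypothetical: 1000 kg of water heated at 100 kW for 10 minutes,
# with a small loss coefficient UA = 50 W/K to 293 K surroundings
final_temp = simulate_water_temperature(600.0, 1.0, 100e3, 1000.0, 4186.0, 293.0, 50.0, 293.0)
print(final_temp)
```

A full boiler model couples several such balances (furnace, convection zones, steel, steam) plus algebraic constraints, which is what makes the system a DAE rather than a plain ODE.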

  3. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, Kim; Karstensen, Claus; Condra, Thomas Joseph;

    2003-01-01

A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in 2: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for resp. the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, an experiment has been carried out on a full scale boiler plant.

  4. Probability and Statistics in Sensor Performance Modeling

    Science.gov (United States)

    2010-12-01

    transformed Rice-Nakagami distribution ... acoustic or electromagnetic waves are scattered by both objects and turbulent wind. A version of the Rice-Nakagami model (specifically with a ... Gaussian, lognormal, exponential, gamma, and the 2XX → transformed Rice-Nakagami) as well as a discrete model. (Other examples of statistical models

  5. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Full Text Available Focusing on the identification of the role played by Industry in the relations between Corporate Strategic Factors and Performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was identified, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by means of return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that the organization's choices in terms of corporate strategy have a considerable influence and play a key role in the determination of the performance level, but that industry should be considered when analyzing the performance variation, whether or not it acts as a moderator of the relations between corporate strategic factors and performance.

  6. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS within the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  7. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
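
The energy-dissipation property the abstract emphasizes can be seen even in a toy setting. The sketch below integrates the 1-D Allen-Cahn equation with plain explicit Euler on a uniform periodic grid and records the discrete free energy; this is illustrative only and uses neither the paper's energy-stable Taylor-series time integrator nor its isogeometric spatial discretization.

```python
import math

def allen_cahn_energy_demo(n=64, steps=200, dt=1e-3, eps=0.01):
    """1-D Allen-Cahn u_t = eps*u_xx - (u^3 - u) on [0, 1], periodic.
    Explicit Euler on a uniform grid (illustrative only; the cited
    work uses energy-stable integrators and isogeometric analysis).
    Returns the history of the discrete free energy
      E(u) = sum_i (0.5*eps*|du/dx|^2 + 0.25*(u_i^2 - 1)^2) * h."""
    h = 1.0 / n
    u = [math.sin(2 * math.pi * i * h) for i in range(n)]

    def energy(u):
        e = 0.0
        for i in range(n):
            du = (u[(i + 1) % n] - u[i]) / h          # forward difference
            e += (0.5 * eps * du * du + 0.25 * (u[i] ** 2 - 1.0) ** 2) * h
        return e

    hist = [energy(u)]
    for _ in range(steps):
        lap = [(u[(i + 1) % n] - 2 * u[i] + u[i - 1]) / (h * h)
               for i in range(n)]
        u = [u[i] + dt * (eps * lap[i] - (u[i] ** 3 - u[i])) for i in range(n)]
        hist.append(energy(u))
    return hist
```

For this grid and time step the scheme is stable and the recorded free energy decreases, which is exactly the property the paper's schemes guarantee by construction rather than by a small-step-size accident.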

  8. Models

    DEFF Research Database (Denmark)

    Juel-Christiansen, Carsten

    2005-01-01

    The article highlights visual rotation - images, drawings, models, works - as the privileged medium in the communication of ideas between creative architects.

  9. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. To fill this gap, the COD agent-based simulation model is here presented and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of being based on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.
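
The complexity ordering of the three interdependence modes (parallel < sequential < reciprocal, following Thompson's typology) can be illustrated with a toy count of the interactions each mode requires. This is not the COD model itself; the step formulas below are illustrative assumptions chosen only to make the ordering concrete.

```python
def coordination_steps(n_tasks, mode, rounds=2):
    """Toy count of interactions to complete n_tasks (NOT the COD
    model; an illustration of why reciprocal interdependence is the
    costliest mode).
    - parallel:   tasks are independent, one step each
    - sequential: each task works, then hands off to its successor
    - reciprocal: tasks exchange intermediate results for `rounds`
                  all-to-all rounds (assumed interaction pattern)
    """
    if mode == "parallel":
        return n_tasks
    if mode == "sequential":
        return n_tasks + (n_tasks - 1)      # work + hand-offs
    if mode == "reciprocal":
        return n_tasks * n_tasks * rounds   # repeated mutual exchanges
    raise ValueError("unknown mode: " + mode)
```

With 5 tasks this gives 5, 9, and 50 steps, reproducing in miniature the ordering the abstract reports as an algorithmic demonstration.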

  10. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories; as well as those that are generally available within the industry and universities. The first listing of programs are in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs are divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste; but several or most of them may be so used.

  11. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is the expectation from IT service providers to stay differentiated. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active, too late in the life cycle; root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we pro-actively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  12. Detailed Performance Model for Photovoltaic Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tian, H.; Mancilla-David, F.; Ellis, K.; Muljadi, E.; Jenkins, P.

    2012-07-01

    This paper presents a modified current-voltage relationship for the single-diode model. The single-diode model has been derived from the well-known equivalent circuit for a single photovoltaic cell. The modification presented in this paper accounts for both parallel and series connections in an array.
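
A standard way to account for series and parallel connections is to scale the single-diode cell equation to an Ns-series by Np-parallel array. The sketch below is that textbook scaling, not the paper's specific modification, and every numeric parameter is an illustrative assumption; the implicit I-V relation is solved by bisection, exploiting that the residual is monotone in I.

```python
import math

def array_current(V, Ns=36, Np=2, Iph=5.0, I0=1e-9, n=1.3,
                  Rs=0.01, Rsh=100.0, Vt=0.02585):
    """Current of an Ns-series x Np-parallel array built from the
    standard single-diode cell model (illustrative parameters, not
    from the cited paper). Solves the implicit equation
      I = Np*Iph - Np*I0*(exp(v_cell/(n*Vt)) - 1) - Np*v_cell/Rsh,
      v_cell = V/Ns + I*Rs/Np
    for I by bisection (residual is monotone decreasing in I)."""
    def residual(I):
        v_cell = V / Ns + I * Rs / Np          # per-cell junction voltage
        return (Np * Iph
                - Np * I0 * (math.exp(v_cell / (n * Vt)) - 1.0)
                - Np * v_cell / Rsh) - I
    lo, hi = -Np * Iph, 2 * Np * Iph           # bracket around the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At short circuit the array current approaches Np*Iph (here about 10 A, less small Rs/Rsh losses), and the current falls monotonically toward zero as V approaches the array open-circuit voltage.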

  13. Complex Systems and Human Performance Modeling

    Science.gov (United States)

    2013-12-01

    constitute a cognitive architecture or decomposing the work flows and resource constraints that characterize human-system interactions, the modeler ... also explored the generation of so-called "fractal" series from simple task network models where task times are calculated by way of a moving

  14. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    West African Journal of Industrial & Academic Research Vol.10 No.1 April ... sketches out a model of proactive and reactive mitigation response model for individuals ... valuable asset to all firms. ... information is shared only among ... ability to ensure that a party to a contract or ... organizations only react to security threats,.

  15. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  16. Performance Appraisal: A New Model for Academic Advisement.

    Science.gov (United States)

    Hazleton, Vincent; Tuttle, George E.

    1981-01-01

    Presents the performance appraisal model for student advisement, a centralized developmental model that focuses on the content and process of advisement. The model has three content objectives: job definition, performance assessment, and goal setting. Operation of the model is described. Benefits and potential limitations are identified. (Author)

  17. Hydrologic Evaluation of Landfill Performance (HELP) Model

    Science.gov (United States)

    The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.
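
The water pathways HELP tracks can be caricatured with a single soil "bucket" stepped daily. This is a heavily simplified sketch, not the HELP algorithms (HELP uses SCS curve-number runoff, vegetation and evapotranspiration routines, and Darcian drainage through layered profiles); the coefficients are illustrative assumptions.

```python
def daily_water_balance(rain_mm, capacity_mm=150.0, storage_mm=50.0,
                        runoff_coeff=0.2, drain_rate=0.05):
    """Heavily simplified daily soil-water bucket (NOT the HELP model's
    algorithms). For each day's rain: a fixed fraction runs off,
    the rest infiltrates until the bucket is full (overflow becomes
    extra runoff), and a fraction of storage percolates toward the
    liner. Returns per-day (runoff, drainage, storage) in mm."""
    out = []
    for r in rain_mm:
        runoff = runoff_coeff * r                  # fast surface loss
        infiltration = min(r - runoff, capacity_mm - storage_mm)
        runoff += (r - runoff) - infiltration      # bucket overflow
        storage_mm += infiltration
        drainage = drain_rate * storage_mm         # percolation to liner
        storage_mm -= drainage
        out.append((runoff, drainage, storage_mm))
    return out
```

Whatever the coefficients, the scheme conserves mass: total rain equals total runoff plus total drainage plus the change in storage, which is the basic bookkeeping any landfill water-balance model must satisfy.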

  18. Integrated thermodynamic model for ignition target performance

    Directory of Open Access Journals (Sweden)

    Springer P.T.

    2013-11-01

    Full Text Available We have derived a 3-dimensional synthetic model for NIF implosion conditions, by predicting and optimizing fits to a broad set of x-ray and nuclear diagnostics obtained on each shot. By matching x-ray images, burn width, neutron time-of-flight ion temperature, yield, and fuel ρr, we obtain nearly unique constraints on conditions in the hotspot and fuel in a model that is entirely consistent with the observables. This model allows us to determine hotspot density, pressure, areal density (ρr), total energy, and other ignition-relevant parameters not available from any single diagnostic. This article describes the model and its application to National Ignition Facility (NIF) tritium–hydrogen–deuterium (THD) and DT implosion data, and provides an explanation for the large yield and ρr degradation compared to numerical code predictions.

  19. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique since computer became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires the knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. Used solver routines are briefly described and some bench-marking is shown to estimate necessary computing times for different problems. Different types of charged particle sources will be shown together with a suitable model to describe the physical model. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$-sources) together with some remarks on beam transport.

  20. Performance evaluation of quality monitor models in spot welding

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhongdian; Li Dongqing; Wang Kai

    2005-01-01

    The performance of quality monitor models in spot welding directly determines the monitoring precision, so it is crucial to evaluate it. Previously, mean square error (MSE) has often been used to evaluate the performance of models, but it can only show the total error over a finite set of specimens and cannot show whether the quality information inferred from a model is sufficiently accurate and reliable. For this reason, by means of measurement error theory, a new way to evaluate the performance of models according to their error distributions is developed: only if the error distribution of a model is sufficiently correct and precise is the quality information inferred from the model accurate and reliable.
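
The abstract's point that MSE alone can hide distributional differences is easy to demonstrate. The sketch below (an illustration of the idea, not the paper's measurement-error-theory method) summarizes an error set by bias, spread, and the empirical fraction of errors within a tolerance; two error sets with identical MSE can then look very different.

```python
import math

def mse(errors):
    """Mean square error over a finite set of specimens."""
    return sum(e * e for e in errors) / len(errors)

def error_distribution_summary(errors, tol):
    """Distribution view of the same errors: bias (mean error),
    spread (standard deviation), and the fraction of errors within
    +/- tol. A sketch of the abstract's argument, not its exact
    evaluation procedure."""
    n = len(errors)
    bias = sum(errors) / n
    var = sum((e - bias) ** 2 for e in errors) / n
    within = sum(1 for e in errors if abs(e) <= tol) / n
    return {"bias": bias, "std": math.sqrt(var), "within_tol": within}
```

For example, errors [1, -1, 1, -1] and [√2, 0, -√2, 0] have the same MSE (1.0), yet with a tolerance of 1 the first model is always acceptable and the second only half the time, which is precisely the information MSE throws away.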

  1. A unified tool for performance modelling and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Gilmore, Stephen [Laboratory for Foundations of Computer Science, University of Edinburgh, King's Buildings, Mayfield Road, Edinburgh, Scotland EH9 3JZ (United Kingdom)]. E-mail: stg@inf.ed.ac.uk; Kloul, Leila [Laboratory for Foundations of Computer Science, University of Edinburgh, King's Buildings, Mayfield Road, Edinburgh, Scotland EH9 3JZ (United Kingdom)

    2005-07-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony.

  2. model

    African Journals Online (AJOL)

    the neural construction of individual and communal identities in ... occurs, including models based on information processing, ... Applying the DSM descriptive approach to dissociation in the ... a personal, narrative path that connects personal to ethnic ... managed the problem in the context of the community, using a.

  3. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  4. Performance of turbulence models for transonic flows in a diffuser

    Science.gov (United States)

    Liu, Yangwei; Wu, Jianuo; Lu, Lipeng

    2016-09-01

    Eight turbulence models frequently used in aerodynamics have been employed in detailed numerical investigations of transonic flows in the Sajben diffuser, to assess the predictive capabilities of the turbulence models for shock wave/turbulent boundary layer interactions (SWTBLI) in internal flows. The eight turbulence models include: the Spalart-Allmaras model, the standard k-ε model, the RNG k-ε model, the realizable k-ε model, the standard k-ω model, the SST k-ω model, the v2-f model and the Reynolds stress model. The performance of the different turbulence models adopted has been systematically assessed by comparing the numerical results with the available experimental data. The comparisons show that the predictive performance becomes worse as the shock wave becomes stronger. The v2-f model and the SST k-ω model perform much better than the other models; the SST k-ω model predicts a little better than the v2-f model for pressure on walls and velocity profiles, whereas the v2-f model predicts a little better than the SST k-ω model for separation location, reattachment location and separation length in the strong-shock case.

  5. An Outline Course on Human Performance Modeling

    Science.gov (United States)

    2006-01-01

    complementary or competing tasks: Dario Salvucci, ... 46. Bonnie John, David Kieras 47. ecological interface design 48. More into modeling human ... Alarcon 70. Ben Knott 71. Evelyn Rozanski 7. Pete Khooshabeh. Optional: If you would like to be on a mailing list for further seminars please enter your email

  6. Persistence Modeling for Assessing Marketing Strategy Performance

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique)

    2003-01-01

    textabstractThe question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

  8. Performance Improvement/HPT Model: Guiding the Process

    Science.gov (United States)

    Dessinger, Joan Conway; Moseley, James L.; Van Tiem, Darlene M.

    2012-01-01

    This commentary is part of an ongoing dialogue that began in the October 2011 special issue of "Performance Improvement"--Exploring a Universal Performance Model for HPT: Notes From the Field. The performance improvement/HPT (human performance technology) model represents a unifying process that helps accomplish successful change, create…

  9. Determinants of business model performance in software firms

    OpenAIRE

    Rajala, Risto

    2009-01-01

    The antecedents and consequences of business model design have gained increasing interest among information system (IS) scholars and business practitioners alike. Based on an extensive literature review and empirical research, this study investigates the factors that drive business model design and the performance effects generated by the different kinds of business models in software firms. The main research question is: “What are the determinants of business model performance in the softwar...

  10. Compound fuzzy model for thermal performance of refrigeration compressors

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The fuzzy method is introduced to the calculation of the thermal performance of refrigeration compressors. A compound model combining classical thermodynamic theory and fuzzy theory is presented and compared with a simple fuzzy model without classical thermodynamic fundamentals. A case study of refrigeration compressors shows that the compound fuzzy model and the simple fuzzy model are both more efficient than the classical thermodynamic method. However, the compound fuzzy model offers better precision and adaptability.

  11. Performance Management: A model and research agenda

    NARCIS (Netherlands)

    D.N. den Hartog (Deanne); J.P.P.E.F. Boselie (Paul); J. Paauwe (Jaap)

    2004-01-01

    textabstractPerformance Management deals with the challenge organizations face in defining, measuring and stimulating employee performance with the ultimate goal to improve organizational performance. Thus, Performance Management involves multiple levels of analysis and is clearly linked to the topi

  12. High Performance Geostatistical Modeling of Biospheric Resources

    Science.gov (United States)

    Pedelty, J. A.; Morisette, J. T.; Smith, J. A.; Schnase, J. L.; Crosier, C. S.; Stohlgren, T. J.

    2004-12-01

    We are using parallel geostatistical codes to study spatial relationships among biospheric resources in several study areas. For example, spatial statistical models based on large- and small-scale variability have been used to predict species richness of both native and exotic plants (hot spots of diversity) and patterns of exotic plant invasion. However, broader use of geostatistics in natural resource modeling, especially at regional and national scales, has been limited due to the large computing requirements of these applications. To address this problem, we implemented parallel versions of the kriging spatial interpolation algorithm. The first uses the Message Passing Interface (MPI) in a master/slave paradigm on an open source Linux Beowulf cluster, while the second is implemented with the new proprietary Xgrid distributed processing system on an Xserve G5 cluster from Apple Computer, Inc. These techniques are proving effective and provide the basis for a national decision support capability for invasive species management that is being jointly developed by NASA and the US Geological Survey.
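
Spatial interpolation parallelizes well because each prediction point is independent: the master splits the prediction grid into chunks and workers interpolate each chunk. The sketch below shows that pattern in miniature, using inverse-distance weighting as a short stand-in for the kriging predictor and a serial `map` as a stand-in for the MPI master/slave dispatch; it is an illustration of the decomposition, not the authors' codes.

```python
def idw(x, pts, power=2.0):
    """Inverse-distance-weighted estimate at x from (xi, zi) samples.
    Stands in for the kriging predictor to keep the sketch short."""
    num = den = 0.0
    for xi, zi in pts:
        d = abs(x - xi)
        if d < 1e-12:
            return zi            # exactly at a sample point
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

def predict_grid(grid, pts, n_chunks=4):
    """Master/slave pattern in miniature: split the prediction grid
    into chunks and map a worker over each chunk. `map` here is
    serial; an MPI master or a process pool would distribute the
    chunks across nodes, which is the embarrassingly parallel step."""
    size = (len(grid) + n_chunks - 1) // n_chunks
    chunks = [grid[i:i + size] for i in range(0, len(grid), size)]
    results = map(lambda chunk: [idw(x, pts) for x in chunk], chunks)
    return [z for chunk in results for z in chunk]
```

Because chunks share no state, the speedup is limited mainly by broadcasting the sample data and gathering the results, which is why the master/slave layout works well for kriging large national grids.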

  13. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of the performance of simulation models for floor heating with different levels of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging only the floor construction, the differences can be directly compared. In this comparison, a two-dimensional model of a slab-on-grade floor including foundation is used as reference. The other models include a one-dimensional model and a thermal network model including the linear thermal transmittance of the foundation. The result can also be found in the energy consumption of the building, since up to half the energy consumption is lost through the ground. Looking at the different implementations it is also found that including a 1 m ground volume below the floor construction under a one-dimensional model...

  14. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov Chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is incorporating the advantages and disadvantages of different models to obtain better accuracy.
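
The Markov Chain approach the paper tests works by propagating a distribution over discrete condition states through a transition matrix, one inspection interval at a time. The sketch below shows that mechanic with three states and an illustrative (uncalibrated) transition matrix; real agencies estimate the matrix from visual-inspection histories.

```python
def evolve(dist, P, years):
    """Propagate a pavement condition-state distribution through a
    Markov transition matrix P for `years` one-year steps.
    dist[i] is the probability of being in state i."""
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]
    return dist

# States: 0 = good, 1 = fair, 2 = poor (absorbing without repair).
# Transition probabilities are illustrative assumptions, not
# calibrated values from the cited survey data.
P = [[0.85, 0.15, 0.00],
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]]
```

Starting from an all-good network, the good-state share decays geometrically (0.85 per year here) while probability mass accumulates in the poor state, which is why the MC model remains usable when only sparse, categorical inspection data are available.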

  15. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  16. Performance of Modeling wireless networks in realistic environment

    CERN Document Server

    Siraj, M

    2012-01-01

    A wireless network is realized by mobile devices which communicate over radio channels. Since experiments on real-life problems with real devices are very difficult, simulation is used very often. Among the many important properties that have to be defined for simulative experiments, the mobility model and the radio propagation model have to be selected carefully. Both have a strong impact on the performance of mobile wireless networks; e.g., the performance of routing protocols varies with these models. There are many mobility and radio propagation models proposed in the literature. Each of them was developed with different objectives and is not suited for every physical scenario. In the radio propagation models used in common wireless network simulators, researchers generally consider simple radio propagation models and neglect obstacles in the propagation environment. In this paper, we study the performance of wireless network simulation by considering different radio propagation models with obstacles i...
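
A minimal way to move beyond the obstacle-free propagation models the abstract criticizes is to add a per-obstacle penalty to the standard log-distance path-loss model. The sketch below does exactly that; the path-loss exponent, reference loss, and per-wall penalty are illustrative assumptions, not values from the cited paper.

```python
import math

def path_loss_db(d, n=3.0, pl0=40.0, d0=1.0, walls=0, wall_loss_db=6.0):
    """Log-distance path loss with a simple per-wall penalty:
        PL(d) = PL(d0) + 10*n*log10(d/d0) + walls*wall_loss_db
    d  : transmitter-receiver distance [m], d > 0
    n  : path-loss exponent (2 = free space; 3-4 typical indoors)
    All constants here are illustrative assumptions."""
    return pl0 + 10.0 * n * math.log10(d / d0) + walls * wall_loss_db
```

With these constants, moving from 1 m to 10 m adds 30 dB of loss, and each intervening wall adds a further 6 dB; a simulator using such a model will therefore predict very different routing-protocol behaviour in a cluttered environment than an obstacle-free model would.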

  17. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominant processes between reality and model and to better understanding when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into
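
The core idea of temporally resolved performance evaluation, as opposed to one lumped score over the whole period, can be sketched by computing a metric such as the Nash-Sutcliffe efficiency per time window. This is an illustration of the principle only, not the paper's finger-print clustering, and the window width is an arbitrary assumption.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect fit; values below 0 mean the model is worse than
    simply predicting the observed mean."""
    m = sum(obs) / len(obs)
    denom = sum((o - m) ** 2 for o in obs)
    if denom == 0:
        return float("nan")      # flat observations: NSE undefined
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / denom

def windowed_nse(obs, sim, width):
    """One NSE value per non-overlapping time window, exposing *when*
    the model fails rather than reporting a single lumped score."""
    return [nse(obs[i:i + width], sim[i:i + width])
            for i in range(0, len(obs) - width + 1, width)]
```

A series that fits perfectly in one window and is anti-correlated in the next yields window scores of 1.0 and a negative value, whereas the period-lumped NSE would blur the failure into one moderate number.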

  18. Emerging Carbon Nanotube Electronic Circuits, Modeling, and Performance

    OpenAIRE

    Yao Xu; Ashok Srivastava; Sharma, Ashwani K.

    2010-01-01

    Current transport and dynamic models of carbon nanotube field-effect transistors are presented. A model of single-walled carbon nanotube as interconnect is also presented and extended in modeling of single-walled carbon nanotube bundles. These models are applied in studying the performances of circuits such as the complementary carbon nanotube inverter pair and carbon nanotube as interconnect. Cadence/Spectre simulations show that carbon nanotube field-effect transistor circuits can operate a...

  19. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, Rick [ICF International, Fairfax, VA (United States); Bluestein, Joel [ICF International, Fairfax, VA (United States); Rodriguez, Nick [ICF International, Fairfax, VA (United States); Knoke, Stu [ICF International, Fairfax, VA (United States)

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
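
To make the LCOE evaluation concrete, here is a minimal sketch that levelizes capital cost with a capital recovery factor and adds O&M and fuel; the function and the example plant parameters are hypothetical, not values from the report.

```python
def lcoe(capital_per_kw, fixed_om_per_kw_yr, var_om_per_mwh, fuel_per_mmbtu,
         heat_rate_btu_per_kwh, capacity_factor, discount_rate, lifetime_yr):
    """Simple levelized cost of energy in $/MWh."""
    # Capital recovery factor annualizes the up-front investment
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr
           / ((1 + discount_rate) ** lifetime_yr - 1))
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0   # annual energy per kW installed
    fixed = (capital_per_kw * crf + fixed_om_per_kw_yr) / mwh_per_kw_yr
    fuel = fuel_per_mmbtu * heat_rate_btu_per_kwh / 1000.0  # Btu/kWh -> MMBtu/MWh
    return fixed + var_om_per_mwh + fuel

# Hypothetical gas combined-cycle plant
gas_cc = lcoe(capital_per_kw=1000, fixed_om_per_kw_yr=15, var_om_per_mwh=3,
              fuel_per_mmbtu=5, heat_rate_btu_per_kwh=7000,
              capacity_factor=0.85, discount_rate=0.07, lifetime_yr=30)
```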

  1. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
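
The core of such an hourly output calculation can be approximated by interpolation on a turbine power curve. The curve points, cut-out rule, and `turbine_output` helper below are illustrative assumptions, not SAM's actual algorithm.

```python
import numpy as np

# Hypothetical power curve: wind speed (m/s) -> turbine output (kW)
CURVE_WS = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 25.0])   # cut-in 3 m/s, rated 12 m/s
CURVE_KW = np.array([0.0, 0.0, 350.0, 1200.0, 2000.0, 2000.0])
CUT_OUT = 25.0                                           # turbine shuts down here

def turbine_output(wind_speed_ms):
    """Hourly electrical output via linear interpolation on the power curve."""
    ws = np.asarray(wind_speed_ms, float)
    p = np.interp(ws, CURVE_WS, CURVE_KW)
    return np.where(ws >= CUT_OUT, 0.0, p)   # zero output at/above cut-out speed

hourly_ws = np.array([2.0, 7.5, 12.0, 26.0])  # four sample hours of wind resource
hourly_kw = turbine_output(hourly_ws)
```

Summing `hourly_kw` over a year of wind-resource data gives the annual energy figure the financial models would consume.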

  2. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: Markov property, discrete time Markov chain (DTMe and continuous time Markov chain (CTMe. We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
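
The steady-state computation the book introduces can be sketched directly: solve the balance equations pi P = pi together with the normalization sum(pi) = 1. The two-state chain below is a hypothetical example, not one from the book.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a DTMC: solve pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalization row
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical two-state on/off channel model
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = steady_state(P)   # long-run fraction of time in each state
```

From `pi`, performance metrics such as long-run throughput or blocking probability follow by weighting per-state rewards.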

  3. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development, and particularly the verification, of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die and laboratory experiments performed at Ohio State.

  4. New Metacognitive Model for Human Performance Technology

    Science.gov (United States)

    Turner, John R.

    2011-01-01

    Addressing metacognitive functions has been shown to improve performance at the individual, team, group, and organizational levels. Metacognition is beginning to surface as an added cognate discipline for the field of human performance technology (HPT). Advances from research in the fields of cognition and metacognition offer a place for HPT to…

  6. Building performance modelling for sustainable building design

    Directory of Open Access Journals (Sweden)

    Olufolahan Oduyemi

    2016-12-01

    The output revealed that BPM delivers information needed for enhanced design and building performance. Recommendations such as the establishment of proper mechanisms to monitor the performance of BPM related construction are suggested to allow for its continuous implementation. This research consolidates collective movements towards wider implementation of BPM and forms a base for developing a sound BIM strategy and guidance.

  7. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight into the performance of the three computers when used to solve large-scale scientific problems...

  8. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when the model fails. Only then can model performance be improved, and models be used as prognostic tools and to generate further hypotheses that can be tested by new experiments and measurements.
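
A simplified, FFT-based analogue of such a spectral evaluation (the paper itself uses wavelets) can be sketched as follows: split the energy of the model-observation residual into low- and high-frequency bands to see at which time scales a model fails. The signals and the half-spectrum band split are illustrative assumptions.

```python
import numpy as np

def band_error(obs, sim):
    """Energy of the residual (sim - obs), split into low/high frequency halves."""
    resid = np.asarray(sim, float) - np.asarray(obs, float)
    spec = np.abs(np.fft.rfft(resid)) ** 2        # residual power spectrum
    half = len(spec) // 2
    return spec[:half].sum(), spec[half:].sum()   # (low-band, high-band) energy

# A model that misses half the seasonal amplitude fails at low frequencies
t = np.arange(365)
obs = np.sin(2 * np.pi * t / 365)     # one seasonal cycle of daily data
sim = 0.5 * obs                       # correct phase, wrong amplitude
low, high = band_error(obs, sim)
```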

  9. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    To support an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework is simulation based and allows performance estimation to be carried out throughout all design phases, ranging from early functional to cycle accurate and bit true descriptions of the system, modelling both hardware and software components in a unified way. Design space exploration and performance estimation is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation.

  10. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in grid environments. In the ServiceBSP model, agents carrying computing tasks are dispatched to the local domains of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service). The performance of a ServiceBSP application can then be predicted with the performance prediction model based on the QoS of the selected services. The model helps users analyze their applications and improve them by optimizing the factors that affect performance. Experiments show that the Service Selection Agent provides ServiceBSP users with satisfactory application QoS.

  11. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing is used judiciously for improving the accuracy of predictions.

  12. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. There are other models included in the discussion that are not used by or were not adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description that includes where to find the model, whether it is currently maintained and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  13. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    Prior information about the initial state parameters may be acquired by other means, though; for instance, actigraphy could be used to track sleep. (Excerpt; performing organization: Sleep and Performance Research Center, Washington State University, Spokane, WA. Cited: Saper C. B., "Neurobiology of the sleep-wake cycle: sleep architecture, circadian regulation, and regulatory feedback," J. Biol. Rhythms 21, 482.)

  14. Manufacturing Excellence Approach to Business Performance Model

    Directory of Open Access Journals (Sweden)

    Jesus Cruz Alvarez

    2015-03-01

    Full Text Available Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools for enhancing productivity in organizations. There is some research that outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and to propose a manufacturing excellence approach that links productivity tools into a broader context of business performance.

  15. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models of differing complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's recovery rates did not improve substantially as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
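
The criteria being compared have simple closed forms. The sketch below computes AIC and BIC from a Gaussian log-likelihood for a hypothetical "standard" vs. "complex" model fit, showing BIC's heavier complexity penalty; the residual sums of squares and parameter counts are made up for illustration.

```python
import math

def gaussian_ll(rss, n):
    """Maximized Gaussian log-likelihood given a residual sum of squares."""
    return -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)

def aic(ll, k):
    return 2 * k - 2 * ll           # Akaike information criterion

def bic(ll, k, n):
    return k * math.log(n) - 2 * ll  # Bayesian criterion: penalty grows with n

# Hypothetical fits on n = 100 observations: the complex model (k = 6) fits only
# slightly better (lower RSS) than the standard model (k = 3)
n = 100
ll_std, ll_cpx = gaussian_ll(52.0, n), gaussian_ll(50.0, n)
aic_std, aic_cpx = aic(ll_std, 3), aic(ll_cpx, 6)
bic_std, bic_cpx = bic(ll_std, 3, n), bic(ll_cpx, 6, n)
```

Here both criteria prefer the standard model, but BIC penalizes the extra parameters far more heavily, which is consistent with its stronger preference for the standard data generating process.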

  16. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  17. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  18. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. 
droughts).

  19. Space Station Freedom electrical performance model

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Green, Robert D.; Kerslake, Thomas W.; Mckissock, David B.; Trudell, Jeffrey J.

    1993-01-01

    The baseline Space Station Freedom electric power system (EPS) employs photovoltaic (PV) arrays and nickel hydrogen (NiH2) batteries to supply power to housekeeping and user electrical loads via a direct current (dc) distribution system. The EPS was originally designed for an operating life of 30 years through orbital replacement of components. As the design and development of the EPS continues, accurate EPS performance predictions are needed to assess design options, operating scenarios, and resource allocations. To meet these needs, NASA Lewis Research Center (LeRC) has, over a 10 year period, developed SPACE (Station Power Analysis for Capability Evaluation), a computer code designed to predict EPS performance. This paper describes SPACE, its functionality, and its capabilities.

  1. Performance model for grid-connected photovoltaic inverters.

    Energy Technology Data Exchange (ETDEWEB)

    Boyson, William Earl; Galbraith, Gary M.; King, David L.; Gonzalez, Sigifredo

    2007-09-01

    This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers specification sheets, and the accuracy of the model can be further refined using measurements from either well-instrumented field measurements in operational systems or using detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
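
The general shape of such an empirical inverter curve can be sketched as follows. The functional form mirrors a Sandia-style model at its reference DC voltage, but the parameter names and the 5 kW example values are illustrative assumptions, not entries from the report's database.

```python
def inverter_ac_power(pdc, paco, pdco, pso, c0=0.0):
    """AC output power as a function of DC input power (simplified sketch).
    paco: rated AC power; pdco: DC power at which paco is reached;
    pso: DC power required to start inversion; c0: curvature (0 = linear)."""
    if pdc <= pso:
        return 0.0                                   # below start-up threshold
    p = ((paco / (pdco - pso) - c0 * (pdco - pso)) * (pdc - pso)
         + c0 * (pdc - pso) ** 2)
    return min(p, paco)                              # clip at the AC rating

# Hypothetical 5 kW inverter: 5000 W AC at 5250 W DC, 25 W start-up threshold
p_half = inverter_ac_power(2625, 5000, 5250, 25)     # part-load operating point
p_full = inverter_ac_power(5250, 5000, 5250, 25)     # rated operating point
```

Fitting the curvature and threshold parameters to laboratory efficiency measurements is what turns this shape into a usable performance model.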

  2. Performance Modeling for Heterogeneous Wireless Networks with Multiservice Overflow Traffic

    DEFF Research Database (Denmark)

    Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk

    2009-01-01

    Performance modeling is important for the purpose of developing efficient dimensioning tools for large complicated networks. But it is difficult to achieve in heterogeneous wireless networks, where different networks have different statistical characteristics in service and traffic models. Multiservice loss analysis based on multi-dimensional Markov chains becomes intractable in these networks due to the intensive computations required. This paper focuses on performance modeling for heterogeneous wireless networks based on a hierarchical overlay infrastructure. A method based on decomposition of the correlated traffic is used to achieve an approximate performance model for multiservice in hierarchical heterogeneous wireless networks with overflow traffic. The accuracy of the approximate performance obtained by our proposed modeling is verified by simulations.

  3. Hierarchical Bulk Synchronous Parallel Model and Performance Optimization

    Institute of Scientific and Technical Information of China (English)

    HUANG Linpeng; SUN Yongqiang; YUAN Wei

    1999-01-01

    Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance in local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.

  4. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  5. Impact of reactive settler models on simulated WWTP performance.

    Science.gov (United States)

    Gernaey, K V; Jeppsson, U; Batstone, D J; Ingildsen, P

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takács settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takács settler. The second is a fully reactive ASM1 Takács settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  6. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
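
A model of this latency-bandwidth-compute type can be sketched in one line: predicted time is computation at the processor's flop rate, plus data movement at the memory/interconnect bandwidth, plus a per-message latency cost. The workload and machine numbers below are hypothetical, not the paper's POWER3 measurements.

```python
def predicted_time(flops, words, messages, flop_rate, bandwidth, latency):
    """Three-term machine model: compute + contiguous transfer + per-message cost."""
    return flops / flop_rate + words / bandwidth + messages * latency

# Hypothetical per-processor factorization workload and machine parameters
t = predicted_time(flops=2e9,       # floating-point operations
                   words=5e7,       # words communicated
                   messages=1e4,    # number of messages
                   flop_rate=1e9,   # flop/s
                   bandwidth=1e8,   # words/s
                   latency=1e-5)    # s per message
```

Comparing the three terms shows whether a given run is compute-, bandwidth-, or latency-bound, which is the practical use of such a model.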

  7. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short time resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. In order of increasing complexity, the compared models are: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
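
The baseline constant disaggregation model against which the others are compared is easy to state: each hourly amount is split into six equal 10-minute amounts. The sketch below contrasts it with a toy mass-conserving stochastic split; both helpers are illustrative, not the Hydram implementations.

```python
import numpy as np

def constant_disaggregation(hourly_mm):
    """Baseline model: split each hourly amount into six equal 10-min amounts."""
    hourly = np.asarray(hourly_mm, float)
    return np.repeat(hourly / 6.0, 6)

def random_weights_disaggregation(hourly_mm, rng):
    """Toy stochastic alternative: random weights that still conserve each hour's total."""
    out = []
    for h in np.asarray(hourly_mm, float):
        w = rng.random(6)
        out.append(h * w / w.sum())   # six 10-min amounts summing to h
    return np.concatenate(out)

hourly = [6.0, 12.0]                        # two rainy hours (mm)
const = constant_disaggregation(hourly)
stoch = random_weights_disaggregation(hourly, np.random.default_rng(1))
```

Both schemes conserve the hourly totals; they differ precisely in the 10-min variance and extremes that the evaluation criteria above are designed to test.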

  8. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis and optimize the network's performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective to minimize total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
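
For a toy instance, the minimum-cost plan of such a network model can even be found by exhaustive search, which makes the objective and constraints explicit; the plants, customers, and costs below are hypothetical, and a real model would use an LP/MIP solver such as CPLEX or LINDO, as the abstract notes.

```python
from itertools import product

# Tiny single-product transportation instance (hypothetical data)
supply = {"P1": 3, "P2": 2}                    # plant capacities
demand = {"C1": 2, "C2": 3}                    # customer demands
cost = {("P1", "C1"): 4, ("P1", "C2"): 6,      # unit shipping cost per route
        ("P2", "C1"): 5, ("P2", "C2"): 3}

def best_plan():
    """Exhaustive search over integral shipments (fine only at toy scale)."""
    routes = list(cost)
    best, best_cost = None, float("inf")
    for qty in product(range(6), repeat=len(routes)):
        plan = dict(zip(routes, qty))
        ok_supply = all(sum(q for (p, _), q in plan.items() if p == s) <= supply[s]
                        for s in supply)       # do not exceed plant capacity
        ok_demand = all(sum(q for (_, c), q in plan.items() if c == d) == demand[d]
                        for d in demand)       # meet every customer's demand
        if ok_supply and ok_demand:
            total = sum(cost[r] * q for r, q in plan.items())
            if total < best_cost:
                best, best_cost = plan, total
    return best, best_cost

plan, total = best_plan()   # minimum total logistics cost for this instance
```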

  10. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary path modeling error is carried out in both the time and frequency domains. It is shown first that the effects of secondary path modeling error on the performance of the FXLMS algorithm are determined by the distribution of the relative error of the secondary path model along frequency. When the distribution of the relative error is uniform, the modeling error of the secondary path has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
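
    The filtered-reference update at the heart of FXLMS can be sketched as follows. This is a minimal illustration under assumed values (a 1-tap secondary path s, a deliberately mismatched model s_hat, and a sinusoidal disturbance), not the configuration analyzed in the paper.

```python
import math

def fxlms(n_samples=4000, mu=0.05, taps=4):
    """Toy FXLMS: cancel a delayed sinusoidal disturbance through a 1-tap
    secondary path s while the controller only knows a mismatched model
    s_hat of that path (values are illustrative assumptions)."""
    s, s_hat = 0.5, 0.6            # true secondary path vs. its (erroneous) model
    w = [0.0] * taps               # adaptive FIR filter weights
    x_hist = [0.0] * taps          # recent reference samples
    errors = []
    for n in range(n_samples):
        x = math.sin(0.2 * math.pi * n)                # reference signal
        x_hist = [x] + x_hist[:-1]
        d = 0.9 * math.sin(0.2 * math.pi * (n - 2))    # primary disturbance
        y = sum(wi * xi for wi, xi in zip(w, x_hist))  # control output
        e = d - s * y                                  # residual at the error sensor
        xf = [s_hat * xi for xi in x_hist]             # reference filtered by the model
        w = [wi + mu * e * xfi for wi, xfi in zip(w, xf)]
        errors.append(abs(e))
    return errors

errs = fxlms()
```

    Because s and s_hat share the same sign, the residual still decays despite the modeling error, illustrating that a moderate secondary-path modeling error need not prevent convergence.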

  11. Performance evaluation of quantum well infrared phototransistor instrumentation through modeling

    Science.gov (United States)

    El-Tokhy, Mohamed S.; Mahmoud, Imbaby I.

    2014-05-01

    This paper presents a theoretical analysis for the characteristics of quantum well infrared phototransistors (QWIPTs). A mathematical model describing this device is introduced under nonuniformity distribution of quantum wells (QWs). MATLAB environment is used to devise this model. Furthermore, block diagram models through the VisSim environment were used to describe the device characteristics. The developed models are used to investigate the behavior of the device with different values of performance parameters such as bias voltage, spacing between QWs, and temperature. These parameters are tuned to enhance the performance of these quantum phototransistors through the presented modeling. Moreover, the resultant performance characteristics and comparison between both QWIPTs and quantum wire infrared phototransistors are investigated. Also, the obtained results are validated against experimental published work and full agreements are obtained.

  12. Planetary Suit Hip Bearing Model for Predicting Design vs. Performance

    Science.gov (United States)

    Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar

    2011-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit's hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that using solid models of planetary suit hard segments as a performance design tool is feasible.
From a general trend perspective

  13. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  14. Null Objects in Second Language Acquisition: Grammatical vs. Performance Models

    Science.gov (United States)

    Zyzik, Eve C.

    2008-01-01

    Null direct objects provide a favourable testing ground for grammatical and performance models of argument omission. This article examines both types of models in order to determine which gives a more plausible account of the second language data. The data were collected from second language (L2) learners of Spanish by means of four oral…

  15. Modelling the Performance of Product Integrated Photovoltaic (PIPV) Cells Indoors

    NARCIS (Netherlands)

    Apostolou, G.; Verwaal, M.; Reinders, Angelina H.M.E.

    2014-01-01

    In this paper we present a model which has been developed for the estimation of the performance of PV products' cells in an indoor environment. The model computes the efficiency and power production of PV technologies as a function of distance from natural and artificial light sources. It intends

  16. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    Science.gov (United States)

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-01

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  17. Inconsistent strategies to spin up models in CMIP5: implications for ocean biogeochemical model performance assessment

    Science.gov (United States)

    Seferian, R.; Gehlen, M.; Bopp, L.; Resplandy, L.; Orr, J. C.; Marti, O.

    2016-12-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were made to systematically assess the skills of Earth system models against available modern observations. However, most of these skill-assessment approaches can be considered "blind" given that they were applied without considering models' specific characteristics and treat models a priori as independent of observations. Indeed, since these models are typically initialized from observations, the spin-up procedure (e.g. the length of time for which the model has been run since initialization, and therefore the degree to which it has approached its own equilibrium) has the potential to exert a significant control over the skill-assessment metrics calculated for each model. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We focus on the amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) as a function of spin-up duration in a dedicated 500-year-long spin-up simulation performed with IPSL-CM5A-LR as well as an ensemble of 24 CMIP5 ESMs. We demonstrate that a relationship between spin-up duration and skill-assessment metrics emerges from the results of a single model and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift in biogeochemical fields has implications for performance assessment in addition to possibly influencing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.
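
    The qualitative link between spin-up duration and residual bias described above can be sketched with a simple exponential-relaxation toy. All numbers here are invented for illustration and are not CMIP5 results.

```python
import math

# Illustrative sketch: a biogeochemical tracer bias relaxing exponentially
# toward the model's own equilibrium with e-folding time tau (in years).
# initial_bias, equilibrium_bias, and tau are assumed values, not data.
def bias_after_spinup(years, initial_bias=10.0, equilibrium_bias=2.0, tau=150.0):
    """Bias relative to observations after a given spin-up duration."""
    return equilibrium_bias + (initial_bias - equilibrium_bias) * math.exp(-years / tau)
```

    Skill metrics computed after a short spin-up inherit more of the initial-condition bias than metrics computed after a long one, which is one way spin-up protocol diversity can masquerade as model-to-model skill differences.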

  18. Inconsistent strategies to spin up models in CMIP5: implications for ocean biogeochemical model performance assessment

    Science.gov (United States)

    Séférian, Roland; Gehlen, Marion; Bopp, Laurent; Resplandy, Laure; Orr, James C.; Marti, Olivier; Dunne, John P.; Christian, James R.; Doney, Scott C.; Ilyina, Tatiana; Lindsay, Keith; Halloran, Paul R.; Heinze, Christoph; Segschneider, Joachim; Tjiputra, Jerry; Aumont, Olivier; Romanou, Anastasia

    2016-05-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically representative marine biogeochemical tracer distributions could be reproduced by models. In routine assessments model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.

  19. Inconsistent Strategies to Spin up Models in CMIP5: Implications for Ocean Biogeochemical Model Performance Assessment

    Science.gov (United States)

    Seferian, Roland; Gehlen, Marion; Bopp, Laurent; Resplandy, Laure; Orr, James C.; Marti, Olivier; Dunne, John P.; Christian, James R.; Doney, Scott C.; Ilyina, Tatiana; Romanou, Anastasia

    2015-01-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically representative marine biogeochemical tracer distributions could be reproduced by models. In routine assessments model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.

  20. Inconsistent strategies to spin up models in CMIP5: implications for ocean biogeochemical model performance assessment

    Science.gov (United States)

    Séférian, R.; Gehlen, M.; Bopp, L.; Resplandy, L.; Orr, J. C.; Marti, O.; Dunne, J. P.; Christian, J. R.; Doney, S. C.; Ilyina, T.; Lindsay, K.; Halloran, P.; Heinze, C.; Segschneider, J.; Tjiputra, J.

    2015-10-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were devoted to the systematic assessment of the skill of Earth system models. One goal was to check how realistically representative marine biogeochemical tracer distributions could be reproduced by models. Mean-state assessments routinely compared model hindcasts to available modern biogeochemical observations. However, these assessments considered neither the extent of equilibrium in modeled biogeochemical reservoirs nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500 year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide realistic ESM results on marine biogeochemistry and carbon cycle feedbacks.

  1. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  2. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  3. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance level introduced by the change of business model. The selected case is a small family business undergoing substantial changes in response to structural changes in its markets. The authors used the concept of business model to describe value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the change introduced they demonstrate the role of the business model as the performance differentiator. This is illustrated with the use of business model canvases constructed on the basis of interviews, observations, and document analysis. The two business model canvases allow for explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by financial analysis of the business conducted over the period 2006–2012, which demonstrates changes in performance (ROA, ROE, and ROS reached their lowest levels before the change of business model was introduced and grew after its introduction), with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  4. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    Science.gov (United States)

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.

  5. Performance modeling of data dissemination in vehicular ad hoc networks

    DEFF Research Database (Denmark)

    Chaqfeh, Moumena; Lakas, Abderrahmane; Lazarova-Molnar, Sanja

    2013-01-01

    ad hoc nature which does not require fixed infrastructure or centralized administration. However, designing scalable information dissemination techniques for VANET applications remains a challenging task due to the inherent nature of such highly dynamic environments. Existing dissemination techniques...... often resort to simulation for performance evaluation and there are only few studies that offer mathematical modeling. In this paper we provide a comparative study of existing performance modeling approaches for data dissemination techniques designed for different VANET applications....

  6. Modeling radial flow ion exchange performance for condensate polisher conditions

    Energy Technology Data Exchange (ETDEWEB)

    Shallcross, D. [University of Melbourne, Melbourne, VIC (Australia). Department of Chemical Engineering; Renouf, P.

    2001-11-01

    A theoretical model is developed which simulates ion exchange performance within an annular resin bed. Flow within the mixed ion exchange bed is diverging, with the solution flowing outwards away from the bed's axis. The model is used to simulate performance of a mixed annular bed operating under condensate polisher conditions. The simulation predictions are used to develop design envelope curves for practical radial flow beds and to estimate potential cost savings flowing from less expensive polisher vessels. (orig.)

  7. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...... that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  8. A Formal Comparison of Model Variants for Performance Prediction

    Science.gov (United States)

    2009-12-01

    [Figure/table residue: performance-score plots comparing human and model team performance in a UAS Predator simulation (CERI, 2005) and in F-16 simulator missions (DMO Testbed, Mesa); Table 2 reports cross-validation RMSD.] Warfighter Readiness Research Division. The authors would like to thank the Cognitive Engineering Research Institute (CERI) and researchers from Mesa’s

  9. Evaluating performances of simplified physically based models for landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2015-12-01

    Full Text Available Rainfall-induced shallow landslides cause loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and different model performance indicators in order to identify and select the models whose behavior is more reliable for a certain case study. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, the optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.

  10. Evaluating performances of simplified physically based models for landslide susceptibility

    Science.gov (United States)

    Formetta, G.; Capparelli, G.; Versace, P.

    2015-12-01

    Rainfall-induced shallow landslides cause loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and different model performance indicators in order to identify and select the models whose behavior is more reliable for a certain case study. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, the optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.
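
    One common definition of the D2PC index mentioned above is the Euclidean distance from a classifier's point in the ROC plane to the perfect classifier at (0, 1). The sketch below assumes a standard confusion-matrix parameterization; the package's exact convention may differ.

```python
import math

def d2pc(tp, fp, tn, fn):
    """Distance to perfect classification in the ROC plane:
    sqrt((1 - TPR)^2 + FPR^2). Lower is better; 0 is a perfect classifier."""
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)  # false positive rate (1 - specificity)
    return math.sqrt((1.0 - tpr) ** 2 + fpr ** 2)
```

    Minimizing D2PC during calibration pushes the model's (FPR, TPR) point toward the top-left corner of the ROC plane.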

  11. Kinetic models in industrial biotechnology - Improving cell factory performance.

    Science.gov (United States)

    Almquist, Joachim; Cvijovic, Marija; Hatzimanikatis, Vassily; Nielsen, Jens; Jirstrand, Mats

    2014-07-01

    An increasing number of industrial bioprocesses capitalize on living cells by using them as cell factories that convert sugars into chemicals. These processes range from the production of bulk chemicals in yeasts and bacteria to the synthesis of therapeutic proteins in mammalian cell lines. One of the tools in the continuous search for improved performance of such production systems is the development and application of mathematical models. To be of value for industrial biotechnology, mathematical models should be able to assist in the rational design of cell factory properties or in the production processes in which they are utilized. Kinetic models are particularly suitable towards this end because they are capable of representing the complex biochemistry of cells in a more complete way compared to most other types of models. They can, at least in principle, be used to understand, predict, and evaluate in detail the effects of adding, removing, or modifying molecular components of a cell factory and for supporting the design of the bioreactor or fermentation process. However, several challenges still remain before kinetic modeling will reach the degree of maturity required for routine application in industry. Here we review the current status of kinetic cell factory modeling. Emphasis is on modeling methodology concepts, including model network structure, kinetic rate expressions, parameter estimation, optimization methods, identifiability analysis, model reduction, and model validation, but several applications of kinetic models for the improvement of cell factories are also discussed.
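
    A minimal example of the kind of kinetic rate expression such models are built from, assuming illustrative Michaelis-Menten parameters rather than any system discussed in the review:

```python
# Minimal kinetic-model sketch (parameters are illustrative assumptions):
# substrate S is converted to product P by a single Michaelis-Menten
# reaction, integrated with forward Euler.
def simulate(s0=10.0, vmax=1.0, km=0.5, dt=0.01, t_end=30.0):
    s, p, t = s0, 0.0, 0.0
    while t < t_end:
        rate = vmax * s / (km + s)   # Michaelis-Menten rate law
        s -= rate * dt               # substrate consumed
        p += rate * dt               # product formed
        t += dt
    return s, p
```

    Full cell-factory models couple many such rate laws through a stoichiometric network, which is where the parameter-estimation and identifiability issues discussed in the review arise.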

  12. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
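
    The AUC compared across models above can be computed directly from labels and risk scores via the Mann-Whitney formulation; a minimal sketch with hypothetical data shapes, not NHATS variables:

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    outranks a randomly chosen negative case, counting ties as half
    (the Mann-Whitney formulation)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.69 as reported for the parsimonious any-fall model means that 69% of random faller/non-faller pairs are ranked correctly by the risk score.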

  13. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  14. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
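The AUC compared across models in this abstract can be computed directly from a risk score via the rank-sum (Mann-Whitney) statistic. A minimal sketch, using synthetic scores and labels rather than NHATS data:

```python
# Hedged sketch: AUC for a fall-risk score via the Mann-Whitney statistic.
# The scores and labels below are made up for illustration, not from NHATS.

def auc(scores, labels):
    """Probability that a randomly chosen positive outranks a negative
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: higher scores should indicate fallers (label 1).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,    1,   0,   0,   0,   0]
print(round(auc(scores, labels), 3))
```

An AUC of 0.5 would indicate a score no better than chance; values near the abstract's 0.69-0.77 indicate moderate discrimination.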

  15. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers depended highly on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
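One of the standard concordance metrics the abstract refers to is Harrell's concordance index, which handles censoring by only scoring comparable pairs. A minimal sketch on synthetic data (not the breast cancer dataset used in the paper):

```python
# Hedged sketch of Harrell's concordance index for a survival risk score.
# Data below is synthetic and purely illustrative.

def concordance_index(times, events, risks):
    """Fraction of comparable pairs ordered correctly by the risk score.
    A pair (i, j) is comparable if times[i] < times[j] and subject i
    experienced the event (events[i] == 1, i.e. was not censored)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [2, 4, 5, 7, 9]             # survival times
events = [1, 1, 0, 1, 0]             # 1 = event observed, 0 = censored
risks  = [0.9, 0.6, 0.7, 0.4, 0.1]   # higher = predicted worse prognosis
print(round(concordance_index(times, events, risks), 3))
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ranking, which is why conclusions can differ from those of a thresholded binary classifier evaluated on the same data.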

  16. Comparison of Predictive Models for PV Module Performance (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Marion, B.

    2008-05-01

    This paper examines three models used to estimate the maximum power (P{sub m}) of PV modules when the irradiance and PV cell temperature are known: (1) the power temperature coefficient model, (2) the PVFORM model, and (3) the bilinear interpolation model. A variation of the power temperature coefficient model is also presented that improved model accuracy. For modeling values of P{sub m}, an 'effective' plane-of-array (POA) irradiance (E{sub e}) and the PV cell temperature (T) are used as model inputs. Using E{sub e} essentially removes the effects of variations in solar spectrum and reflectance losses, and permits the influence of irradiance and temperature on model performance for P{sub m} to be more easily studied. Eq. 1 is used to determine E{sub e} from T and the PV module's measured short-circuit current (I{sub sc}). Zero subscripts denote performance at Standard Reporting Conditions (SRC).
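The two relations the abstract describes can be sketched as follows. This is a hedged reconstruction, not a reproduction of the paper's Eq. 1, and the coefficient values are illustrative:

```python
# Hedged sketch: effective irradiance from measured short-circuit current,
# and the power temperature coefficient model for P_m. Coefficients are
# illustrative, not measured module values; zero subscripts denote SRC.

def effective_irradiance(isc, t_cell, isc0, alpha_isc, t0=25.0):
    """E_e = Isc / (Isc0 * (1 + alpha_Isc * (T - T0))), in suns."""
    return isc / (isc0 * (1 + alpha_isc * (t_cell - t0)))

def power_temp_coeff_model(e_e, t_cell, pm0, gamma, t0=25.0):
    """P_m = E_e * Pm0 * (1 + gamma * (T - T0))."""
    return e_e * pm0 * (1 + gamma * (t_cell - t0))

ee = effective_irradiance(isc=4.8, t_cell=45.0, isc0=5.0, alpha_isc=0.0005)
pm = power_temp_coeff_model(ee, 45.0, pm0=200.0, gamma=-0.004)
print(round(ee, 4), round(pm, 1))
```

Using E_e rather than raw plane-of-array irradiance is what removes spectral and reflectance effects, as the abstract notes, so the remaining model error isolates the irradiance and temperature dependence.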

  17. Comparative Performance of Volatility Models for Oil Price

    Directory of Open Access Journals (Sweden)

    Afees A. Salisu

    2012-07-01

    Full Text Available In this paper, we compare the performance of volatility models for oil price using daily returns of WTI. The innovations of this paper are twofold: (i) we analyse the oil price across three subsamples, namely the periods before, during, and after the global financial crisis; (ii) we also analyse the comparative performance of both symmetric and asymmetric volatility models for the oil price. We find that the oil price was most volatile during the global financial crisis compared to the other subsamples. Based on the appropriate model selection criteria, the asymmetric GARCH models appear superior to the symmetric ones in dealing with oil price volatility. This finding indicates evidence of leverage effects in the oil market, and ignoring these effects in oil price modelling will lead to serious biases and misleading results.
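The symmetric baseline in such comparisons is the GARCH(1,1) conditional-variance recursion. A minimal sketch with illustrative parameter values (not estimates from WTI data):

```python
# Hedged sketch: the conditional-variance recursion of a symmetric GARCH(1,1)
# model, the baseline class compared against asymmetric variants.
# Parameter values are illustrative, not fitted to oil returns.

def garch11_variances(returns, omega, alpha, beta, sigma2_0):
    """sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}"""
    sigma2 = [sigma2_0]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

returns = [0.01, -0.03, 0.02, -0.05, 0.01]
sig2 = garch11_variances(returns, omega=1e-5, alpha=0.08, beta=0.9, sigma2_0=4e-4)
# Volatility clustering: the large shock (-0.05) raises the next-day variance.
print([round(s, 6) for s in sig2])
```

An asymmetric variant (e.g. GJR-GARCH) would add a term that loads only on negative shocks, which is exactly the leverage effect the abstract finds evidence for.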

  18. Modeling and performance analysis of QoS data

    Science.gov (United States)

    Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.

    2016-09-01

    The article presents the results of modeling and analysis of data transmission performance in systems that support quality of service. Models are designed and tested for a multiservice network architecture, i.e. one supporting the transmission of data belonging to different traffic classes. We study traffic-shaping mechanisms based on Priority Queuing, with both an integrated data source and various generated data sources, and discuss the basic problems of QoS-supporting architectures and queuing systems. Models based on Petri nets, supported by temporal logics, were designed and built, and simulation tools were used to verify the traffic-shaping mechanisms with the applied queuing algorithms. It is shown that temporal Petri net models can be used effectively in modeling and analyzing the performance of computer networks.
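The Priority Queuing discipline studied here can be illustrated with a tiny single-server simulation. This is a hedged sketch with made-up arrival and service times, not the article's Petri-net model:

```python
# Hedged sketch: non-preemptive Priority Queuing at a single server.
# Lower priority value = higher traffic class. Times are synthetic.
import heapq

def simulate(packets):
    """packets: list of (arrival_time, priority, service_time).
    Returns waiting time per packet, keyed by (priority, arrival_time)."""
    packets = sorted(packets)                 # by arrival time
    queue, waits, clock, i, seq = [], {}, 0.0, 0, 0
    while i < len(packets) or queue:
        # admit everything that has arrived by `clock`
        while i < len(packets) and packets[i][0] <= clock:
            arr, prio, srv = packets[i]
            heapq.heappush(queue, (prio, arr, seq, srv))
            seq += 1
            i += 1
        if not queue:                         # server idle: jump to next arrival
            clock = packets[i][0]
            continue
        prio, arr, _, srv = heapq.heappop(queue)
        waits[(prio, arr)] = clock - arr      # time spent waiting in queue
        clock += srv
    return waits

# Two classes queue up while the server is busy: class 0 jumps ahead.
waits = simulate([(0.0, 1, 2.0), (0.5, 1, 2.0), (0.6, 0, 1.0)])
print(waits)
```

Even though the class-0 packet arrives last, it is served first once the server frees up, so its waiting time is lower than that of the earlier class-1 packet. This preferential treatment (and the resulting starvation risk for low classes) is the core behavior such QoS models must capture.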

  19. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  20. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

    Full Text Available It is essential to consider an acceptable forecast error threshold in the assessment of a hydrological model, both because research on this question is scarce in the hydrology community and because errors do not necessarily cause risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, are studied based on reliability theory. The first-order second-moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework incorporating the acceptable forecast error threshold and the correlation among multiple errors can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of its output.
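The FOSM idea and the effect of correlation can be sketched with two correlated errors and an acceptable total-error threshold. The numbers are illustrative, not the Dahuofang case-study values:

```python
# Hedged sketch of the first-order second-moment (FOSM) reliability index
# for two correlated forecast errors against an acceptable threshold.
# All means, standard deviations, and the threshold are made up.
import math

def reliability_index(threshold, mu1, sd1, mu2, sd2, rho):
    """beta = (threshold - mean of combined error) / std of combined error,
    for the margin Z = threshold - (e1 + e2) with correlation rho."""
    mu = mu1 + mu2
    var = sd1 ** 2 + sd2 ** 2 + 2 * rho * sd1 * sd2
    return (threshold - mu) / math.sqrt(var)

betas = [reliability_index(3.0, 0.5, 0.8, 0.4, 0.6, rho) for rho in (0.0, 0.5, 0.9)]
print([round(b, 3) for b in betas])
```

Because positive correlation inflates the variance of the combined error, the reliability index falls as the correlation grows, matching the trend the abstract reports for the DHF model.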

  1. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
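The Latin hypercube sampling step described above can be sketched in a few lines: each parameter range is split into equal-probability strata, and exactly one sample is drawn per stratum per dimension. The parameter ranges below are invented stand-ins, not the paper's:

```python
# Hedged sketch of Latin hypercube sampling for generating regression
# sample points. Parameter bounds are illustrative placeholders.
import random

def latin_hypercube(n, bounds, seed=0):
    """One sample per equal-probability stratum in each dimension."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(strata)   # decorrelate the dimensions
        dims.append(strata)
    return list(zip(*dims))   # n points, one coordinate per dimension

bounds = [(200.0, 1000.0),   # e.g. solar flux in W/m^2 (illustrative)
          (0.0, 40.0)]       # e.g. ambient temperature in C (illustrative)
pts = latin_hypercube(8, bounds)
print(pts[:2])
```

Compared with plain random sampling, this guarantees coverage of every marginal stratum, which is why it is a common choice for building response surfaces from relatively few simulator runs.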

  3. Software life cycle dynamic simulation model: The organizational performance submodel

    Science.gov (United States)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.

  4. Evaluating Performance of the DGM(2,1) Model and Its Modified Models

    Directory of Open Access Journals (Sweden)

    Ying-Fang Huang

    2016-03-01

    Full Text Available The direct grey model (DGM(2,1)) is considered for fluctuation characteristics of the sampling data in Grey system theory. However, its applications are quite uncommon in the past literature, and the improvement of the precision of the DGM(2,1) is presented in only a few previous researches. Moreover, the evaluation of the forecasting performance of the DGM(2,1) model and its applications was not conducted in previous studies. As a result, this study aims to evaluate the forecasting performance of the DGM(2,1) and its three modified models, including the Markov direct grey model MDGM(2,1), the Fourier direct grey model FDGM(2,1), and the Fourier Markov direct grey model FMDGM(2,1), in order to determine the applicability of the DGM(2,1) model in practical applications and academic research. The results demonstrate that the DGM(2,1) model has lower precision than its modified models, while the forecasting precision of the FDGM(2,1) is better than that of the MDGM(2,1). Additionally, the FMDGM(2,1) model presents the best performance among all of the modified models of DGM(2,1), as it can effectively overcome fluctuation in the data sample and minimize the prediction error of the DGM(2,1) model. The findings indicate that the FMDGM(2,1) model not only has advantages with regard to the sample size requirement, but can also be flexibly applied to large-fluctuation and random sequences with a high quality of estimation.

  5. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Full Text Available Educational data mining (EDM) is a new and growing research area in which data mining concepts are used in the educational field to extract useful information about student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data to predict student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study the effectiveness of the student performance model in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase the predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.
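A filter-style feature selection step of the kind compared in such studies can be sketched by ranking features on their correlation with the outcome and keeping the top few. The tiny dataset is synthetic, not real student records:

```python
# Hedged sketch of filter-based feature selection: rank features by
# absolute Pearson correlation with the pass/fail outcome. Data is made up.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_top_k(features, target, k):
    """features: {name: column}; keep the k names most correlated with target."""
    ranked = sorted(features, key=lambda f: -abs(pearson(features[f], target)))
    return ranked[:k]

features = {
    "attendance": [90, 60, 85, 40, 95, 50],
    "shoe_size":  [40, 42, 39, 41, 43, 38],   # deliberately irrelevant
    "quiz_score": [8, 4, 9, 3, 9, 5],
}
passed = [1, 0, 1, 0, 1, 0]
print(select_top_k(features, passed, 2))
```

The irrelevant feature is dropped, shrinking the model's input set; wrapper and embedded selection methods differ only in how the ranking interacts with the classifier itself.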

  6. Thermal performance modeling of NASA's scientific balloons

    Science.gov (United States)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach, or by simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the required model resolution to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons. The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed

  7. Performance modeling and prediction for linear algebra algorithms

    OpenAIRE

    Iakymchuk, Roman

    2012-01-01

    This dissertation incorporates two research projects: performance modeling and prediction for dense linear algebra algorithms, and high-performance computing on clouds. The first project is focused on dense matrix computations, which are often used as computational kernels for numerous scientific applications. To solve a particular mathematical operation, linear algebra libraries provide a variety of algorithms. The algorithm of choice depends, obviously, on its performance. Performance of su...

  8. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is conducted because of the big gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realization of the collected zakat fund, which reaches only about three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and the quality of zakat management organizations have to be improved, which means a performance evaluation model is needed as a tool to evaluate performance. The construct is a performance evaluation model that can be implemented by zakat management organizations. The organizational performance evaluation with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO, and TQM. This research uses an explanatory method and the data analysis tool SEM/PLS. Data collection techniques are questionnaires, interviews, and documentation. The result of this research shows that PI, BO, and TQM simultaneously and partially have a significant effect on organizational performance.

  9. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
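The form-free dependence measure this abstract builds on, mutual information, can be estimated from experimental data with a simple plug-in estimator. A hedged sketch (the paper uses kernel density estimation; a histogram keeps the sketch short):

```python
# Hedged sketch: a histogram (plug-in) estimate of mutual information
# between two performance factors. Data below is synthetic.
import math

def mutual_information(xs, ys, bins=4):
    def bin_of(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    lx, hx, ly, hy = min(xs), max(xs), min(ys), max(ys)
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        bx, by = bin_of(x, lx, hx), bin_of(y, ly, hy)
        pxy[(bx, by)] = pxy.get((bx, by), 0) + 1
        px[bx] = px.get(bx, 0) + 1
        py[by] = py.get(by, 0) + 1
    return sum(c / n * math.log((c / n) / ((px[bx] / n) * (py[by] / n)))
               for (bx, by), c in pxy.items())

xs = [i / 99 for i in range(100)]
dependent = mutual_information(xs, xs)   # y = x: strong dependence
print(round(dependent, 3))
```

Mutual information is zero only for (binned) independence and makes no linearity assumption, which is what lets a causal structure learner detect non-linear direct relations among performance factors.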

  10. Direct-Steam Linear Fresnel Performance Model for NREL's System Advisor Model

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M. J.; Zhu, G.

    2012-09-01

    This paper presents the technical formulation and demonstrated model performance results of a new direct-steam-generation (DSG) model in NREL's System Advisor Model (SAM). The model predicts the annual electricity production of a wide range of system configurations within the DSG Linear Fresnel technology by modeling hourly performance of the plant in detail. The quasi-steady-state formulation allows users to investigate energy and mass flows, operating temperatures, and pressure drops for geometries and solar field configurations of interest. The model includes tools for heat loss calculation using either empirical polynomial heat loss curves as a function of steam temperature, ambient temperature, and wind velocity, or a detailed evacuated tube receiver heat loss model. Thermal losses are evaluated using a computationally efficient nodal approach, where the solar field and headers are discretized into multiple nodes where heat losses, thermal inertia, steam conditions (including pressure, temperature, enthalpy, etc.) are individually evaluated during each time step of the simulation. This paper discusses the mathematical formulation for the solar field model and describes how the solar field is integrated with the other subsystem models, including the power cycle and optional auxiliary fossil system. Model results are also presented to demonstrate plant behavior in the various operating modes.
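The nodal heat-loss idea described above can be sketched as a per-node evaluation of an empirical polynomial loss curve, summed over the discretized field. The polynomial coefficients are illustrative placeholders, not SAM's fitted curves:

```python
# Hedged sketch: empirical polynomial receiver heat loss evaluated per node
# of a discretized solar field. Coefficients and wind scaling are invented.

def heat_loss_per_node(t_steam, t_amb, wind, coeffs=(10.0, 0.5, 0.006)):
    """W per meter of receiver: c0 + c1*dT + c2*dT^2, with a simple
    linear wind factor (illustrative, not SAM's correlation)."""
    dt = t_steam - t_amb
    c0, c1, c2 = coeffs
    return (c0 + c1 * dt + c2 * dt * dt) * (1 + 0.01 * wind)

def field_heat_loss(node_temps, t_amb, wind, node_len_m):
    return sum(heat_loss_per_node(t, t_amb, wind) * node_len_m for t in node_temps)

# Ten nodes warming along the flow path from 250 C to 400 C.
temps = [250 + 150 * k / 9 for k in range(10)]
total = field_heat_loss(temps, t_amb=25.0, wind=3.0, node_len_m=40.0)
print(round(total))
```

Evaluating losses node by node rather than at a single average temperature matters here because the loss curve is non-linear in the steam-to-ambient temperature difference.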

  11. The Use of Neural Network Technology to Model Swimming Performance

    Science.gov (United States)

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers; to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons); and to assess the neural network models' precision in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry-land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (Multilayer Perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. 
Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed us the development of four performance models. The mean difference between the true and estimated results performed by each one of the four neural network models constructed was low. The neural network tool can be a good approach in the resolution of performance modeling as an alternative to the standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks for sports
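The model family used here, a feed-forward perceptron with a single hidden layer of three neurons, fits in a few dozen lines of plain Python. A hedged sketch trained on a toy function (not the swimmers' test-battery data), showing only that backpropagation reduces the training error:

```python
# Hedged sketch: a multilayer perceptron with one hidden layer of three
# tanh neurons, trained by stochastic gradient descent on squared error.
# The training data is a toy linear function, purely illustrative.
import math
import random

def mlp_init(n_in, n_hidden=3, seed=1):
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    return w1, w2

def forward(params, x):
    w1, w2 = params
    h = [math.tanh(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
    return sum(w * v for w, v in zip(w2, h + [1.0])), h

def train_step(params, x, y, lr=0.05):
    """One backpropagation step; returns the squared error before the update."""
    w1, w2 = params
    out, h = forward(params, x)
    err = out - y
    for j, hv in enumerate(h):
        grad_h = err * w2[j] * (1 - hv * hv)   # backprop through tanh
        w2[j] -= lr * err * hv                 # output-layer weight
        for k, v in enumerate(x + [1.0]):
            w1[j][k] -= lr * grad_h * v        # hidden-layer weight
    w2[-1] -= lr * err                         # output bias
    return err * err

data = [([a, b], 0.5 * a - 0.3 * b) for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
params = mlp_init(2)
first = sum(train_step(params, x, y) for x, y in data)
for _ in range(200):
    last = sum(train_step(params, x, y) for x, y in data)
print(first, last)
```

In the study the inputs would be the selected kinanthropometric and functional variables and the output the event time; the point of the sketch is only the architecture and the weight-update rule.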

  12. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
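The search loop described above, a genetic algorithm tuning regression hyperparameters, can be sketched against a stand-in objective. Everything here is hypothetical: the parameter names (`C`, `epsilon`) echo common SVR hyperparameters, and the "validation error" is a synthetic function rather than actual SVR training:

```python
# Hedged sketch: a genetic algorithm minimizing a stand-in validation-error
# surface over two hyperparameters. Not the paper's GA or its SVR fitness.
import random

def genetic_search(fitness, bounds, pop_size=20, gens=30, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # crossover
            for d, (lo, hi) in enumerate(bounds):             # mutation
                child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Synthetic validation error with its optimum at C=10, epsilon=0.1.
def val_error(p):
    c, eps = p
    return (c - 10.0) ** 2 / 100 + (eps - 0.1) ** 2

best = genetic_search(val_error, bounds=[(0.1, 100.0), (0.01, 1.0)])
print([round(v, 2) for v in best])
```

In the actual model, `fitness` would train an SVR with the candidate parameters and return its validation error; the GA wrapper is unchanged.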

  13. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
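The parametric correspondence idea can be sketched in a much-simplified form: fit each target parameter from the corresponding source parameter by ordinary least squares. The paper learns a full multivariate linear (or nonlinear) mapping between AAM parameter vectors; this per-dimension version, on synthetic parameter pairs, only illustrates the learn-then-transfer structure:

```python
# Hedged, simplified sketch of learning a parametric correspondence between
# two models' parameter vectors. Training pairs below are synthetic.

def fit_line(xs, ys):
    """Ordinary least squares for y = b*x + a; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def learn_correspondence(src_params, dst_params):
    """src_params/dst_params: lists of parameter vectors from paired frames."""
    dims = len(src_params[0])
    return [fit_line([p[d] for p in src_params], [q[d] for q in dst_params])
            for d in range(dims)]

def transfer(mapping, p):
    return [b * v + a for (b, a), v in zip(mapping, p)]

# Synthetic training pairs: target parameters are an affine warp of source ones.
src = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0], [3.0, 5.0]]
dst = [[0.5 + 2 * x, -1 + 0.5 * y] for x, y in src]
mapping = learn_correspondence(src, dst)
print(transfer(mapping, [1.5, 4.0]))
```

Once the mapping is learned from paired frames, every new source parameter vector can be pushed through `transfer` in real time, which is what makes the approach attractive for live performance transfer.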

  14. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next generation programming paradigm, mobile agent technology has attracted extensive interests in recent years. However, up to now, limited research efforts have been devoted to the performance study of mobile agent system and most of these researches focus on agent behavior analysis resulting in that models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide the system performance such as response time of platform to mobile agent. Further investigation is followed on the determination of model parameters. Finally comparison is made between the model-based simulation results and measurement-based real performance of mobile agent systems. The results show that the proposed model and designed software are effective in evaluating performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  15. Bounding SAR ATR performance based on model similarity

    Science.gov (United States)

    Boshra, Michael; Bhanu, Bir

    1999-08-01

    Similarity between model targets plays a fundamental role in determining the performance of target recognition. We analyze the effect of model similarity on the performance of a vote-based approach for target recognition from SAR images. In such an approach, each model target is represented by a set of SAR views sampled at a variety of azimuth angles and a specific depression angle. Both model and data views are represented by locations of scattering centers, which are peak features. The model hypothesis (view of a specific target and associated location) corresponding to a given data view is chosen to be the one with the highest number of data-supported model features (votes). We address three issues in this paper. Firstly, we present a quantitative measure of the similarity between a pair of model views. Such a measure depends on the degree of structural overlap between the two views, and the amount of uncertainty. Secondly, we describe a similarity-based framework for predicting an upper bound on recognition performance in the presence of uncertainty, occlusion and clutter. Thirdly, we validate the proposed framework using MSTAR public data, which are obtained under different depression angles, configurations and articulations.
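The vote-based matching described above can be sketched by counting how many data scattering centers fall within an uncertainty radius of some model center, then picking the hypothesis with the most votes. The point sets are made up, not MSTAR features:

```python
# Hedged sketch: vote-based hypothesis selection over scattering-center
# locations, with a tolerance for positional uncertainty. Points are synthetic.

def votes(model_view, data_view, tol=1.0):
    """Count data scattering centers within `tol` of some model center."""
    return sum(
        any(abs(mx - dx) <= tol and abs(my - dy) <= tol for mx, my in model_view)
        for dx, dy in data_view)

def recognize(models, data_view, tol=1.0):
    """Pick the model hypothesis with the highest vote count."""
    return max(models, key=lambda name: votes(models[name], data_view, tol))

models = {
    "target_A_az030": [(0, 0), (4, 1), (8, 3), (2, 7)],
    "target_B_az030": [(1, 5), (6, 6), (9, 0), (3, 2)],
}
# Data view: target A with one peak occluded and one clutter point added.
data = [(0.3, -0.2), (4.1, 1.4), (7.8, 3.1), (12.0, 12.0)]
print(recognize(models, data))
```

This also illustrates why similarity between model views bounds performance: if two stored views share many scattering centers within the tolerance, their vote counts converge and the winner becomes ambiguous under occlusion and clutter.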

  16. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  17. Configuration of Distributed Message Converter Systems using Performance Modeling

    NARCIS (Netherlands)

    Aberer, Karl; Risse, Thomas; Wombacher, Andreas

    2001-01-01

    To find a configuration of a distributed system satisfying performance goals is a complex search problem that involves many design parameters, like hardware selection, job distribution and process configuration. Performance models are powerful tools for analysing potential system configurations, howe

  18. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  19. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current...

  20. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey cum experimental methodology was adopted to generate a database and it was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values in one form into another and relevant attribute/variable selection. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules were extracted from the CHAID prediction model and the efficiency of the generated CHAID prediction model was found. The accuracy of the present model was compared with other models and it has been found to be satisfactory.
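    CHAID builds its prediction tree by splitting, at each node, on the predictor whose cross-tabulation with the outcome gives the most significant chi-squared statistic. A minimal sketch of that selection step, using invented toy records rather than the study's 772 student records:

```python
# Illustrative CHAID-style attribute selection (not the study's implementation):
# pick the categorical predictor with the largest Pearson chi-squared statistic
# against the class label.
from collections import Counter

def chi2_stat(xs, ys):
    """Pearson chi-squared statistic for two categorical sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (joint[(a, b)] - px[a] * py[b] / n) ** 2 / (px[a] * py[b] / n)
        for a in px for b in py
    )

# Hypothetical student records: predictors vs. pass/fail outcome.
medium = ["en", "en", "ta", "ta", "en", "ta"]   # medium of instruction
attend = ["hi", "lo", "hi", "lo", "hi", "lo"]   # attendance level
result = ["p",  "f",  "p",  "f",  "p",  "f"]    # pass / fail

# Attendance separates pass/fail perfectly, so it yields the larger statistic.
best = max([("medium", medium), ("attendance", attend)],
           key=lambda kv: chi2_stat(kv[1], result))
print(best[0])  # -> attendance
```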

  1. An ambient agent model for analyzing managers' performance during stress

    Science.gov (United States)

    ChePa, Noraziah; Aziz, Azizi Ab; Gratim, Haned

    2016-08-01

    Stress at work has been reported everywhere. Work-related performance during stress is a pattern of reactions that occurs when managers are presented with work demands that are not matched with their knowledge, skills, or abilities, and which challenge their ability to cope. Although there are many prior findings explaining the development of manager performance during stress, less attention has been given to explaining the same concept through computational models. In this way, the descriptive nature of psychological theories about managers' performance during stress can be transformed into a causal-mechanistic stage that explains the relationship between a series of observed phenomena. This paper proposes an ambient agent model for analyzing managers' performance during stress. A set of properties and variables is identified through past literature to construct the model. Differential equations have been used in formalizing the model, and the set of equations reflecting the relations involved in the proposed model is presented. The proposed model is essential and can be encapsulated within intelligent agents or robots that can be used to support managers during stress.
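    As a flavour of this kind of formalization, the minimal sketch below (the dynamics and parameters are invented for illustration and are not the authors' equations) integrates a single performance equation with Euler steps and compares a low-demand and a high-demand scenario:

```python
# Toy differential model: performance P erodes when work demand D exceeds
# coping capacity C, and recovers otherwise; Euler integration over time.

def simulate(demand, capacity, p0=1.0, gamma=0.1, dt=1.0):
    p = p0
    trace = [p]
    for d in demand:
        # dP/dt = gamma * (capacity - d) * P * (1 - P) keeps P within (0, 1)
        p += dt * gamma * (capacity - d) * p * (1 - p)
        trace.append(p)
    return trace

calm = simulate(demand=[0.2] * 10, capacity=0.5, p0=0.5)
stressed = simulate(demand=[0.9] * 10, capacity=0.5, p0=0.5)
print(calm[-1] > stressed[-1])  # -> True: sustained overload degrades performance
```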

  2. Human performance modeling for system of systems analytics :soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). Toward this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network-centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  3. Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model

    Directory of Open Access Journals (Sweden)

    Cornelia Gyorödi

    2016-04-01

    Full Text Available Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, thus highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using an adjacency list model compared to a nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
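    The trade-off between the two models can be sketched with toy in-memory tables standing in for the relational schema: the adjacency list resolves a subtree by recursive parent lookups (one query per level), while the nested set answers the same question with a single interval scan, at the cost of renumbering intervals on insert.

```python
# Adjacency list: each row stores (id, parent_id); subtree = recursive walk.
adjacency = {1: None, 2: 1, 3: 1, 4: 2}   # category id -> parent id

def subtree_adj(root):
    kids = [i for i, p in adjacency.items() if p == root]
    return [root] + [d for k in kids for d in subtree_adj(k)]

# Nested set: each row stores a (left, right) interval; subtree = one range scan.
nested = {1: (1, 8), 2: (2, 5), 3: (6, 7), 4: (3, 4)}  # id -> (lft, rgt)

def subtree_nested(root):
    lft, rgt = nested[root]
    return sorted(i for i, (l, r) in nested.items() if lft <= l and r <= rgt)

print(subtree_adj(2))      # -> [2, 4]
print(subtree_nested(2))   # -> [2, 4]
```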

  4. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming correlated with national competitiveness, the issue of port performance evaluation has gained prominence. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port’s performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is developed using MATLAB and Simulink based on queuing theory.
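    A single-berth queueing sketch conveys the stochastic idea (the arrival and service rates below are assumed for illustration; the paper's MATLAB/Simulink model covers far more port parameters):

```python
# Toy port microsimulation: ships arrive with exponential inter-arrival times
# and occupy one berth; mean waiting time is a basic service-level indicator.
import random

def simulate_port(n_ships=10000, arrival_rate=0.8, service_rate=1.0, seed=1):
    random.seed(seed)
    t, berth_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_ships):
        t += random.expovariate(arrival_rate)   # next ship arrival time
        start = max(t, berth_free)              # wait if the berth is busy
        total_wait += start - t
        berth_free = start + random.expovariate(service_rate)
    return total_wait / n_ships

# For these rates, M/M/1 theory predicts Wq = rho / (mu - lambda) = 4 time units,
# so the simulated mean wait should come out in that neighbourhood.
print(round(simulate_port(), 1))
```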

  5. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on two high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed greater improvement in pass ratings on the second rotation, but this did not reach statistical significance. Most of the realism domains were rated higher for the VR model than for the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of their VR status.

  6. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire-hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using both logistic and probit regression. Probit regression was found to yield superior fits and showed that models including not only 2nd-order but also 3rd-order parameter interactions performed best.
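    The regression step can be illustrated with a single image-quality factor (the contrast/detection data below are toy values, not the NIST perception data): fit a logistic curve by gradient ascent on the log-likelihood and check that the predicted detection probability rises with contrast.

```python
# Toy logistic model: P(detect) = 1 / (1 + exp(-(a + b * contrast))).
import math

contrast = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
detected = [0,   0,   0,   1,   0,   1,   1,   1]

def fit_logistic(x, y, lr=0.5, steps=5000):
    a, b = 0.0, 0.0
    for _ in range(steps):
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(a + b * xi)))
            a += lr * (yi - p)          # gradient ascent on log-likelihood
            b += lr * (yi - p) * xi
    return a, b

a, b = fit_logistic(contrast, detected)
p_low = 1 / (1 + math.exp(-(a + b * 0.1)))
p_high = 1 / (1 + math.exp(-(a + b * 0.8)))
print(p_low < 0.5 < p_high)  # -> True: detection probability grows with contrast
```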

  7. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field-oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach to monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
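    The observer idea can be sketched on a toy system (this is not the paper's elevator model): a constant-velocity Kalman filter that recovers velocity from noisy position samples, analogous to estimating car motion from the encoder signal alone.

```python
# Minimal Kalman observer for state x = [position, velocity] with a
# constant-velocity model and noisy position measurements.
import random

def kalman_velocity(measurements, dt=0.1, q=1e-4, r=0.04):
    x = [0.0, 0.0]                      # state estimate [pos, vel]
    P = [[1.0, 0.0], [0.0, 1.0]]        # estimate covariance
    for z in measurements:
        # predict step: pos += vel * dt, covariance via F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update step with position measurement z (H = [1, 0])
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]  # Kalman gain
        y = z - x[0]                    # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x

random.seed(0)
true_vel = 1.5
zs = [true_vel * 0.1 * i + random.gauss(0, 0.2) for i in range(200)]
est = kalman_velocity(zs)
print(round(est[1], 1))  # estimated velocity, close to the true 1.5
```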

  8. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.
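    The limiting-axis computation can be sketched as follows (the axis limits and displacements are hypothetical, not measured machine data): the reachable feedrate on a segment is set by the joint that needs the most time at its own maximum speed.

```python
# For one trajectory segment: each joint must cover its displacement, so the
# minimum segment time is dictated by the slowest (most limiting) axis.

def max_feedrate(displacements, axis_vmax, seg_length, programmed_feed):
    t_min = max(abs(d) / v for d, v in zip(displacements, axis_vmax))
    reachable = seg_length / t_min        # feed achievable on this segment
    return min(programmed_feed, reachable)

# 5 joint displacements (3 linear axes in mm, 2 rotary axes in deg) for a
# 10 mm tool-path segment, with each axis's maximum speed per minute:
disp = [8.0, 2.0, 1.0, 30.0, 5.0]
vmax = [30000, 30000, 30000, 6000, 6000]
print(round(max_feedrate(disp, vmax, seg_length=10.0, programmed_feed=5000)))
# -> 2000: the first rotary axis is limiting, so the programmed 5000 is not reached
```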

  9. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  10. Models for the energy performance of low-energy houses

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff

    such as mechanical ventilation, floor heating, and control of the lighting effect, the heat dynamics must be taken into account. Hence, this thesis provides methods for data-driven modeling of heat dynamics of modern buildings. While most of the work in this thesis is related to characterization of heat dynamics...... - referred to as “grey-box” modeling - one-step predictions can be generated and used for model validation by testing statistically whether the model describes all variation and dynamics observed in the data. The possibility of validating the model dynamics is a great advantage from the use of stochastic......-building. The building is well-insulated and features large modern energy-efficient windows and floor heating. These features lead to increased non-linear responses to solar radiation and longer time constants. The building is equipped with advanced control and measuring equipment. Experiments are designed and performed...

  11. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Science.gov (United States)

    Jeon, Soohong; Kim, Daehwan; Hong, Chinsuk; Jeong, Weuibong

    2014-12-01

    This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. CAE modeling and simulation is therefore required, incorporating commercial software packages: CATIA for geometry modeling, MSC/PATRAN for FE meshing and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  12. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Directory of Open Access Journals (Sweden)

    Jeon Soohong

    2014-12-01

    Full Text Available This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. CAE modeling and simulation is therefore required, incorporating commercial software packages: CATIA for geometry modeling, MSC/PATRAN for FE meshing and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  13. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  14. New Mechanical Model for the Transmutation Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
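    The three fuel/clad interaction situations enumerated above amount to a small state selection; a schematic sketch (the threshold value and units here are hypothetical, not the code's actual parameters):

```python
# Schematic selection of the fuel/clad interface condition that the mechanics
# solver would apply, based on gap size and contact pressure.

def contact_state(gap, contact_pressure, threshold=1.0):
    if gap > 0:
        return "open gap: no contact"
    if contact_pressure < threshold:
        return "soft contact: axial slippage allowed"
    return "locked contact: axial slippage prevented"

print(contact_state(gap=0.05, contact_pressure=0.0))  # -> open gap: no contact
print(contact_state(gap=0.0, contact_pressure=0.4))   # -> soft contact: axial slippage allowed
print(contact_state(gap=0.0, contact_pressure=2.5))   # -> locked contact: axial slippage prevented
```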

  15. Performance analysis of IP QoS provision model

    Institute of Scientific and Technical Information of China (English)

    SUN Danning; Moonsik Kang

    2006-01-01

    The performance of a heterogeneous IP QoS provision service model was analyzed. This model utilized the RSVP technique to set up a dynamic resource reservation interface between the user and the network, while the DiffServ technique was utilized to transmit class-based packets with different per-hop behaviors. Furthermore, corresponding queue management and packet scheduling mechanisms were presented for end-to-end QoS guarantees and appropriate cooperation of network elements.
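    The class-based forwarding idea can be sketched with a strict-priority scheduler over DiffServ-style classes (these are toy per-hop behaviours; the paper's queue management and scheduling mechanisms are more elaborate):

```python
# Strict-priority scheduling over per-hop-behaviour classes: the highest
# non-empty class is always served first.
from collections import deque

queues = {"EF": deque(), "AF": deque(), "BE": deque()}  # expedited, assured, best-effort

def enqueue(cls, pkt):
    queues[cls].append(pkt)

def dequeue():
    for cls in ("EF", "AF", "BE"):      # strict priority order
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("BE", "bulk-1")
enqueue("EF", "voice-1")
enqueue("AF", "video-1")
print(dequeue())  # -> voice-1 (EF is served before AF and BE)
```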

  16. Does better rainfall interpolation improve hydrological model performance?

    Science.gov (United States)

    Bàrdossy, Andràs; Kilsby, Chris; Lewis, Elisabeth

    2017-04-01

    High spatial variability of precipitation is one of the main sources of uncertainty in rainfall/runoff modelling. Spatially distributed models require detailed space-time information on precipitation as input. In the past decades much effort has been spent on improving precipitation interpolation from point observations. Different geostatistical methods like Ordinary Kriging, External Drift Kriging or Copula-based interpolation can be used to find the best estimators for unsampled locations. The purpose of this work is to investigate to what extent more sophisticated precipitation estimation methods can improve model performance. For this purpose the Wye catchment in Wales was selected. The physically-based spatially-distributed hydrological model SHETRAN is used to describe the hydrological processes in the catchment. 31 raingauges with 1-hourly temporal resolution are available for a time period of 6 years. In order to avoid the effect of model uncertainty, model parameters were not altered in this study. Instead, 100 random subsets consisting of 14 stations each were selected. For each of the configurations, precipitation was interpolated for each time step using nearest neighbour (NN), inverse distance (ID) and Ordinary Kriging (OK). The variogram was obtained using the temporal correlation of the time series measured at different locations. The interpolated data were used as input for the spatially distributed model. Performance was evaluated for daily mean discharges using the Nash-Sutcliffe coefficient, temporal correlations, flow volumes and flow duration curves. The results show that the simplest method (NN) and the sophisticated OK perform practically equally well, while ID performed worse. NN was often better for high flows. The reason for this is that NN does not reduce the variance, while OK and ID yield smooth precipitation fields. The study points out the importance of precipitation variability and suggests the use of conditional spatial simulation as
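    Two of the compared interpolators, and the Nash-Sutcliffe score used to judge the resulting simulations, fit in a few lines (the gauge layout and values below are toy data, not the Wye catchment records):

```python
# Nearest-neighbour keeps the raw station variance; inverse distance smooths it.
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def nearest(stations, target):
    return min(stations, key=lambda s: dist(s[0], target))[1]

def idw(stations, target, power=2):
    ws = [(1 / dist(p, target) ** power, v) for p, v in stations]
    return sum(w * v for w, v in ws) / sum(w for w, _ in ws)

# Nash-Sutcliffe efficiency used to score the simulated discharge:
def nse(obs, sim):
    mean = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mean) ** 2 for o in obs)

stations = [((0, 0), 10.0), ((0, 2), 2.0), ((2, 0), 4.0)]  # ((x, y), rainfall)
print(nearest(stations, (0.4, 0.4)))        # -> 10.0 (no variance reduction)
print(round(idw(stations, (0.4, 0.4)), 2))  # smoothed toward the other gauges
print(round(nse([3, 4, 5, 6], [3.1, 3.9, 5.2, 5.8]), 2))  # -> 0.98, near-perfect fit
```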

  17. Performance Analysis of MANET Routing Protocols in Different Mobility Models

    Directory of Open Access Journals (Sweden)

    Anuj K. Gupta

    2013-05-01

    Full Text Available A mobile ad-hoc network (MANET) is a network without any central administration or fixed infrastructure. It consists of a number of mobile nodes that send data packets through a wireless medium. A good routing protocol is always needed to establish connections between mobile nodes, since the network topology changes dynamically. Further, in all existing routing protocols, node mobility has always been one of the important characteristics determining the overall performance of the ad hoc network. Thus, it is essential to know about the various mobility models and their effect on the routing protocols. In this paper, we compare different mobility models and provide an overview of their current research status. The main focus is on Random Mobility Models and Group Mobility Models. First, we present a survey of the characteristics, drawbacks and research challenges of mobility modeling. Finally, we present simulation results that illustrate the importance of choosing a mobility model in the simulation of an ad hoc network protocol, and show how the performance results of an ad hoc network protocol change drastically when the simulated mobility model changes.
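    One of the Random Mobility Models surveyed, the random waypoint model, can be sketched directly (the area size and speed range below are assumed for illustration): each node repeatedly picks a random destination and speed, moves there in straight-line increments, then repeats.

```python
# Random waypoint mobility for a single node in a square simulation area.
import random

def random_waypoint(steps, area=100.0, vmin=1.0, vmax=5.0, seed=7):
    random.seed(seed)
    x, y = random.uniform(0, area), random.uniform(0, area)
    path = [(x, y)]
    for _ in range(steps):
        tx, ty = random.uniform(0, area), random.uniform(0, area)
        speed = random.uniform(vmin, vmax)
        d = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        n = max(1, int(d / speed))        # time ticks needed at the chosen speed
        for i in range(1, n + 1):         # straight-line movement increments
            path.append((x + (tx - x) * i / n, y + (ty - y) * i / n))
        x, y = tx, ty
    return path

path = random_waypoint(steps=3)
print(all(0 <= px <= 100 and 0 <= py <= 100 for px, py in path))  # -> True
```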

  18. Developing a model of forecasting information systems performance

    Directory of Open Access Journals (Sweden)

    G. N. Isaev

    2017-01-01

    Full Text Available Research aim: to develop a model for forecasting the performance of information systems, as a mechanism for preliminary assessment of information system effectiveness before financing of the information system project begins. Materials and methods: the starting material was the results of a study of the parameters of the statistical structure of information system data processing defects. Methods of cluster analysis and regression analysis were applied. Results: in order to reduce financial risks, information system customers try to make decisions on the basis of preliminary calculations of the effectiveness of future information systems. However, a techno-economic justification of the project can only be obtained once funding for design work is already open. An evaluation can be made before project development starts by using a model forecasting information system performance. The model is developed using regression analysis in the form of a multiple linear regression. The value of information system performance is the predicted variable in the regression equation; the values of data processing defects in the classes of accuracy, completeness and timeliness are the predictor variables. Measurement and evaluation of the parameters of the statistical structure of defects were done with cluster analysis and regression analysis programs. Calculations to determine the actual and forecast values of information system performance were conducted. Conclusion: to implement the model, a study of information systems was carried out and the forecasting model of information system performance was developed. The experimental work showed the adequacy of the model. The model is implemented in the complex task of designing information systems in education and industry.
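    The forecasting step reduces to fitting a regression of performance on defect rates. A one-predictor sketch with fabricated numbers (the paper uses a multiple linear regression over accuracy, completeness and timeliness defects):

```python
# Least-squares line relating past systems' defect rates to their measured
# performance, then a forecast for a planned system's expected defect rate.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b               # intercept, slope

defect_rate = [0.02, 0.05, 0.08, 0.11]  # share of records with accuracy defects
performance = [0.96, 0.90, 0.84, 0.78]  # measured performance ratio

a, b = fit_line(defect_rate, performance)
forecast = a + b * 0.06                 # planned system's expected defect rate
print(round(forecast, 2))  # -> 0.88
```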

  19. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  1. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all...

  2. A multiserver multiqueue network:modeling and performance analysis

    Institute of Scientific and Technical Information of China (English)

    Shan, Zhiguang; Yang, Yang; et al.

    2002-01-01

    A new category of system model, the multiserver multiqueue network (MSMQN), is proposed for distributed systems such as geographically distributed web-server clusters. An MSMQN comprises multiple multiserver multiqueue (MSMQ) nodes distributed over the network, and every node consists of a number of servers that each contain multiple priority queues for waiting customers. An incoming request can be distributed to a waiting queue of any server in any node, according to the routing policy integrated from the node-selection policy at network level, the request-dispatching policy at node level, and the request-scheduling policy at server level. The model is investigated using stochastic high-level Petri net (SHLPN) modeling and performance analysis techniques. The performance metrics concerned include the delay time of requests in the MSMQ node and the response time perceived by the users. A numerical example shows the efficiency of the performance analysis technique.

  3. Mantis: Predicting System Performance through Program Analysis and Modeling

    CERN Document Server

    Chun, Byung-Gon; Lee, Sangmin; Maniatis, Petros; Naik, Mayur

    2010-01-01

    We present Mantis, a new framework that automatically predicts program performance with high accuracy. Mantis integrates techniques from programming languages and machine learning for performance modeling, and is a radical departure from traditional approaches. Mantis extracts program features (information about program execution runs) through program instrumentation. It uses machine learning techniques to select features relevant to performance and creates prediction models as a function of the selected features. Through program analysis, it then generates compact code slices that compute these feature values for prediction. Our evaluation shows that Mantis can achieve more than 93% accuracy with less than 10% of the data set used for training, a significant improvement over models that are oblivious to program features. The generated code slices are cheap to execute, making feature values inexpensive to compute at prediction time.

  4. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of a data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account flight-to-flight and vehicle-to-vehicle variability as fixed effects. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  5. A Bibliometric Analysis and Review on Performance Modeling Literature

    Directory of Open Access Journals (Sweden)

    Barbara Livieri

    2015-04-01

    Full Text Available In management practice, performance indicators are considered a prerequisite for making informed decisions in line with the organization's goals. On the other hand, indicators summarize compound phenomena in a few digits, which can lead to inadequate decisions, biased by information loss and conflicting values. Model-driven approaches in enterprise engineering can be very effective in avoiding these pitfalls, or in keeping them under control. For that reason, "performance modeling" has the potential to play a primary role in the "model-driven enterprise" scenario, together with process, information and other enterprise-related aspects. In this perspective, we propose a systematic review of the literature on performance modeling in order to retrieve, classify, and summarize existing research, identify the core authors, and define areas and opportunities for future research.

  6. Testing a Model of Work Performance in an Academic Environment

    Directory of Open Access Journals (Sweden)

    B. Charles Tatum

    2012-04-01

    Full Text Available In modern society, people both work and study. The intersection between organizational and educational research suggests that a common model should apply to both academic and job performance. The purpose of this study was to apply a model of work and job performance (based on general expectancy theory to a classroom setting, and test the predicted relationships using a causal/path model methodology. The findings revealed that motivation and ability predicted student expectations and self-efficacy, and that expectations and efficacy predicted class performance. Limitations, implications, and future research directions are discussed. This study showed how the research in industrial and organizational psychology is relevant to education. It was concluded that greater effort should be made to integrate knowledge across a wider set of domains.

  7. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers; to model performance in those events using non-linear mathematical methods, namely artificial neural networks (multi-layer perceptrons); and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry-land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modelling and talent identification in swimming and, possibly, in a wide variety of sports.
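
The architecture described (a multilayer perceptron with three neurons in one hidden layer) can be sketched as a single forward pass. Everything numeric below is invented for illustration; the predictor names, weights, and output scale are not taken from the study.

```python
import math

def mlp_predict(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron with tanh activation, mirroring the
    3-hidden-neuron architecture; weights here are illustrative only."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(W1, b1)]
    return sum(v * h for v, h in zip(W2, hidden)) + b2

# Hypothetical normalized predictors (e.g. height, arm span, stroke index)
x = [0.2, -0.5, 0.8]
W1 = [[0.4, -0.1, 0.3], [-0.2, 0.5, 0.1], [0.1, 0.2, -0.4]]  # 3 hidden neurons
b1 = [0.0, 0.1, -0.1]
W2 = [1.2, -0.7, 0.5]   # hidden-to-output weights
b2 = 150.0              # bias near a plausible 200 m IM time in seconds
predicted_time = mlp_predict(x, W1, b1, W2, b2)
```

In practice the weights would be fitted by backpropagation on the test-battery data rather than set by hand.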

  8. Performance Comparison of the European Storm Surge Models and Chaotic Model in Forecasting Extreme Storm Surges

    Science.gov (United States)

    Siek, M. B.; Solomatine, D. P.

    2009-04-01

    Storm surge modeling has developed considerably over the past 30 years. A number of significant advances in operational storm surge models have been implemented and tested, including refined computational grids, model calibration, better numerical schemes (i.e., more realistic model physics for air-sea interaction), data assimilation, and ensemble model forecasts. This paper addresses the performance comparison between existing European storm surge models and recently developed methods from nonlinear dynamics and chaos theory in forecasting storm surge dynamics. The chaotic model is built using adaptive local models based on dynamical neighbours in the reconstructed phase space of observed time series data. The comparison focused on model accuracy in forecasting a recent extreme storm surge in the North Sea on November 9th, 2007 that hit the coastlines of several European countries. The combination of a high tide, north-westerly winds exceeding 50 mph and low pressure produced an exceptional storm tide; the tidal level exceeded normal sea levels by 3 meters. Flood warnings were issued for the east coast of Britain and the entire Dutch coast. The Maeslant barrier's two arc-shaped steel doors in Rotterdam, Europe's biggest port, were closed for the first time since its construction in 1997 due to this storm surge. For comparison with the chaotic model's performance, forecast data from several European physically-based storm surge models were provided by: BSH Germany, DMI Denmark, DNMI Norway, KNMI Netherlands and MUMM Belgium. The performance comparison was made over testing datasets for two periods/conditions: a non-stormy period (1-Sep-2007 till 14-Oct-2007) and a stormy period (15-Oct-2007 till 20-Nov-2007). A scalar chaotic model with optimized parameters was developed by utilizing an hourly training dataset of observations (11-Sep-2005 till 31-Aug-2007). The comparison results indicated the chaotic

  9. Circuit modeling and performance analysis of photoconductive antenna

    Science.gov (United States)

    Prajapati, Jitendra; Bharadwaj, Mrinmoy; Chatterjee, Amitabh; Bhattacharjee, Ratnajit

    2017-07-01

    In recent years, several experimental and simulation studies have been reported on terahertz (THz) generation using a photoconductive antenna (PCA). The major problem with the PCA is its low overall efficiency, which depends on several parameters related to the semiconductor material, the antenna geometry, and the characteristics of the laser beam. To analyze the effect of different parameters on PCA efficiency, accurate circuit modeling, based on the physics underlying the device, is necessary. Although a few equivalent circuit models have been proposed in the literature, these models do not adequately capture the semiconductor physics in the PCA. This paper presents an equivalent electrical circuit model of the PCA incorporating basic semiconductor device physics. The proposed equivalent circuit model is validated using the Sentaurus TCAD device-level modeling tool as well as against experimental results available in the literature. The results obtained from the proposed circuit model are in close agreement with the TCAD results as well as the available experimental results. The proposed circuit model is expected to contribute towards future research efforts aimed at optimizing the performance of the PCA system.

  10. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    Consider a hydrological model Y(t) = M(X(t), P), where X is the vector of inputs, P the vector of parameters, Y the model output (typically flow), and t time. When there is enough past data on the performance of model M, it is possible to use this data to build a (data-driven) model EC of the error of model M. This model EC will be able to forecast the error E when a new input X is fed into model M; subtracting E from the model prediction Y then gives a better estimate of Y. Model EC is usually called an error corrector (in meteorology, a bias corrector). However, we may go further in characterizing model deficiencies and, instead of the error (a real value), consider a more sophisticated, probabilistic characterization. So instead of a model EC of the error of model M, it is also possible to build a model U of the uncertainty of model M; if uncertainty is described by the model error distribution D, this model will calculate its properties: mean, variance, other moments, and quantiles. The general form of this model could be D = U(RV), where RV is a vector of relevant variables influencing model uncertainty (to be identified, e.g., by mutual information analysis) and D is a vector of variables characterizing the error distribution (typically two or more quantiles). One aspect is not always explicitly mentioned in uncertainty analysis work: in our view it is important to distinguish the following main types of model uncertainty. 1. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. 
Here the following methods can be mentioned: (a) quantile regression (QR
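
The error-corrector (EC) and uncertainty-model (U) ideas in this abstract can be sketched in a few lines. All flow values below are fabricated, and the quantile regression mentioned at the end is stood in for by plain empirical error quantiles (i.e., a model U conditioned on nothing), which is a simplification of the approach described.

```python
# Past model runs: observed flows and model M predictions (made-up values)
observed  = [10.2, 11.5, 9.8, 12.1, 10.9, 11.7, 10.4, 12.5]
predicted = [ 9.7, 11.0, 9.1, 11.4, 10.3, 11.0,  9.9, 11.8]

errors = [p - o for p, o in zip(predicted, observed)]  # E = Y_model - Y_obs
bias = sum(errors) / len(errors)                       # trivial EC: mean error

def quantile(xs, q):
    """Linear-interpolation empirical quantile of past errors (crude model U)."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

new_prediction = 10.6              # model M output for a new input X
corrected = new_prediction - bias  # subtract the forecast error E
# Quantiles of the error distribution D give an uncertainty band:
lower = new_prediction - quantile(errors, 0.9)
upper = new_prediction - quantile(errors, 0.1)
```

A real error corrector would condition E (or the quantiles of D) on the relevant variables RV, e.g. via quantile regression or a machine learning model.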

  11. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: In evolving an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing different models of development. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate its performance.

  12. Performance based Ranking Model for Cloud SaaS Services

    Directory of Open Access Journals (Sweden)

    Sahar Abdalla Elmubarak

    2017-01-01

    Full Text Available Cloud computing systems provide virtualized resources that can be provisioned on demand. An enormous number of cloud providers offer a diverse set of services. The performance of these services is a critical factor for clients in determining which cloud provider to choose. However, identifying a provider with efficient and effective services is a challenging task, so there is a need for an efficient model that helps clients select the best provider based on performance attributes and measurements. Cloud service ranking is a standard method for this task: it is the process of arranging and classifying several cloud services within the cloud and then computing their relative ranking values based on the quality of service required by clients and the features of the cloud services. The objective of this study is to propose an enhanced performance-based ranking model to help users choose the best service for their needs. The proposed model combines attributes and measurements from the cloud computing field with those of the well-defined and established software engineering field. The SMICloud Toolkit has been used to test the applicability of the proposed model, and the experimental results were promising.

  13. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.

  14. New performance evaluation models for character detection in images

    Science.gov (United States)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detection of character regions is meaningful research for both highlighting regions of interest and recognition for further information processing. Much research has been performed on character localization and extraction, and this creates a great need for performance evaluation schemes to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications. For highlighting regions of interest, a Gaussian probability model, which simulates the low-pass Gaussian filter property of the human vision system (HVS), is constructed to allocate different weights to different character parts. It shows the greatest potential to describe the performance of detectors, especially when the detected result is an incomplete character, where other methods cannot work effectively. For the recognition objective, we also introduce a weighted probability model to give an appropriate description of the contribution of detection results to final recognition results. The validity of the performance evaluation models proposed in this paper is proved by experiments on web images and natural scene images. These models may also be applicable to evaluating algorithms that locate other objects, such as faces, although wider experiments are needed to examine this assumption.

  15. Ecological niche modeling in Maxent: the importance of model complexity and the performance of model selection criteria.

    Science.gov (United States)

    Warren, Dan L; Seifert, Stephanie N

    2011-03-01

    Maxent, one of the most commonly used methods for inferring species distributions and environmental tolerances from occurrence data, allows users to fit models of arbitrary complexity. Model complexity is typically constrained via a process known as L1 regularization, but at present little guidance is available for setting the appropriate level of regularization, and the effects of inappropriately complex or simple models are largely unknown. In this study, we demonstrate the use of information criterion approaches to setting regularization in Maxent, and we compare models selected using information criteria to models selected using other criteria that are common in the literature. We evaluate model performance using occurrence data generated from a known "true" initial Maxent model, using several different metrics for model quality and transferability. We demonstrate that models that are inappropriately complex or inappropriately simple show reduced ability to infer habitat quality, reduced ability to infer the relative importance of variables in constraining species' distributions, and reduced transferability to other time periods. We also demonstrate that information criteria may offer significant advantages over the methods commonly used in the literature.
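
The information-criterion approach described in this abstract can be illustrated with a small AICc comparison across candidate regularization settings. The settings, log-likelihoods, and coefficient counts below are hypothetical, not drawn from the study; they are chosen so that an intermediate-complexity model wins, as the abstract argues it should.

```python
def aicc(log_likelihood, k, n):
    """Corrected Akaike Information Criterion. Here k is taken to be the
    number of non-zero model coefficients and n the number of occurrence
    records (both assumptions of this sketch); requires n > k + 1."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidates: regularization setting -> (log-likelihood, k)
candidates = {
    "beta=0.5": (-480.0, 40),   # complex: fits better, many coefficients
    "beta=1.0": (-492.0, 18),   # intermediate complexity
    "beta=2.0": (-510.0, 8),    # simple: worse fit, few coefficients
}
n = 120  # occurrence records
scores = {name: aicc(ll, k, n) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
```

The complex model's higher likelihood is outweighed by its parameter penalty, and the simplest model pays for its poor fit, so the intermediate setting minimizes AICc.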

  16. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  17. A Real-Time Performance Analysis Model for Cryptographic Protocols

    Directory of Open Access Journals (Sweden)

    Amos Olagunju

    2012-12-01

    Full Text Available Several encryption algorithms exist today for securing data in storage and transmission over network systems. The choice of encryption algorithms must weigh performance requirements against the call for protection of sensitive data. This research investigated the processing times of alternative encryption algorithms under specific conditions. The paper presents the architecture of a model multiplatform tool for the evaluation of candidate encryption algorithms based on different data and key sizes. The model software was used to appraise the real-time performance of DES, AES, 3DES, MD5, SHA1, and SHA2 encryption algorithms.
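
A minimal stand-in for the kind of evaluation tool this abstract describes is a timing harness over candidate algorithms. The sketch below times the standard-library implementations of three of the listed digest algorithms; the paper's tool also covers DES, AES, and 3DES, which Python's standard library does not provide, and its actual architecture is not reproduced here.

```python
import hashlib
import os
import time

def time_digest(algorithm, data, runs=50):
    """Mean wall-clock time (seconds) to digest `data` with `algorithm`."""
    start = time.perf_counter()
    for _ in range(runs):
        hashlib.new(algorithm, data).digest()
    return (time.perf_counter() - start) / runs

payload = os.urandom(1 << 20)  # 1 MiB of random input data
results = {alg: time_digest(alg, payload) for alg in ("md5", "sha1", "sha256")}
```

Varying the payload size and key/digest parameters, as the paper's model tool does, turns this into a table of processing times per algorithm and input size.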

  18. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinlly, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of higher-order cortical tissue is reduced to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional-control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of the G(z) profile. The results, reported here as value ranges, showed high agreement between the measured and predicted values for the rSo2 model (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and the cognitive performance models (motion inference task: correlation coefficient 0.7103-0.9451, linear best-fit slope 0.7416-0.9144, mean percent error 6.35-38.21; precision timing task: correlation coefficient 0.6856-0.9726, linear best-fit slope 0.5795-1.027, mean percent error 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance on simple tasks under high acceleration stress.
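
The three agreement measures reported in this abstract (correlation coefficient, linear best-fit slope, and mean percent error) can be computed as below. The measured and predicted rSo2 values here are invented for illustration; they are not data from the study.

```python
def validation_metrics(measured, predicted):
    """Pearson correlation, least-squares best-fit slope (predicted vs.
    measured), and mean absolute percent error."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(predicted) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    syy = sum((y - my) ** 2 for y in predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    r = sxy / (sxx * syy) ** 0.5
    slope = sxy / sxx
    mpe = 100 * sum(abs(y - x) / abs(x)
                    for x, y in zip(measured, predicted)) / n
    return r, slope, mpe

# Hypothetical measured vs. model-predicted rSo2 (%) during a Gz profile
measured  = [68.0, 65.0, 62.0, 60.0, 58.0]
predicted = [67.0, 64.5, 62.5, 59.0, 57.5]
r, slope, mpe = validation_metrics(measured, predicted)
```

A slope near 1 and low mean percent error together indicate the predictions track the measurements without systematic scaling bias, which is why the study reports all three measures rather than correlation alone.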

  19. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

    Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation of up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system using various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes below a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.

  20. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to the physicists and computer scientists developing the simulation codes and runtimes, respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC Applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership-computing-class systems and to assist the HPC community in making the most effective use of these resources.

  1. Product Data Model for Performance-driven Design

    Science.gov (United States)

    Hu, Guang-Zhong; Xu, Xin-Jian; Xiao, Shou-Ne; Yang, Guang-Wu; Pu, Fan

    2017-09-01

    When designing large complex machinery products, the design focus is always on overall performance; however, no performance-driven design theory and method exists. In view of this deficiency in existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional "Requirement-Function-Structure" design theory to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established using performance indices and the mapping relationships between them and the other four domains. When the product data model is applied to high-speed train design, combined with existing research results and relevant standards, the corresponding data model and its structure involving the five domains of high-speed trains are established, which can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly producing a high-speed train scheme design. The five domains provide a reference for the design specification and evaluation criteria of high-speed trains and a new idea for the train's parameter design.

  2. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING (translated): There has been a shift in emphasis from maintenance management to asset management, with the focus on reliable and operational equipment, as well as effective assets at optimum life-cycle cost. A challenge in the manufacturing industry is the development of a performance model for assets that is integrated with business processes and strategies. The authors developed the APM2 model to meet this need. The model has a generic reference structure, supported by operational instructions that promote operations management. It facilitates performance management, business integration and continuous improvement, while also exposing the industry to the latest developments in asset performance management.

  3. The performance of FLake in the Met Office Unified Model

    Directory of Open Access Journals (Sweden)

    Gabriel Gerard Rooney

    2013-12-01

    Full Text Available We present results from the coupling of FLake to the Met Office Unified Model (MetUM. The coupling and initialisation are first described, and the results of testing the coupled model in local and global model configurations are presented. These show that FLake has a small statistical impact on screen temperature, but has the potential to modify the weather in the vicinity of areas of significant inland water. Examination of FLake lake ice has revealed that the behaviour of lakes in the coupled model is unrealistic in some areas of significant sub-grid orography. Tests of various modifications to ameliorate this behaviour are presented. The results indicate which of the possible model changes best improve the annual cycle of lake ice. As FLake has been developed and tuned entirely outside the Unified Model system, these results can be interpreted as a useful objective measure of the performance of the Unified Model in terms of its near-surface characteristics.

  4. Performance modeling of a feature-aided tracker

    Science.gov (United States)

    Goley, G. Steven; Nolan, Adam R.

    2012-06-01

    In order to provide actionable intelligence in a layered sensing paradigm, exploitation algorithms should produce a confidence estimate in addition to the inference variable. This article presents a methodology and the results of one such algorithm for feature-aided tracking of vehicles in wide area motion imagery. To perform experiments, a synthetic environment was developed, which provided explicit knowledge of ground truth, tracker prediction accuracy, and control of operating conditions. This synthetic environment leveraged physics-based modeling simulations to re-create traffic flow, vehicle reflectance, obscuration, and shadowing. With the ability to control operating conditions as well as the availability of ground truth, several experiments were conducted to test both the tracker and its expected performance. The results show that the performance model produces a meaningful estimate of tracker performance over the subset of operating conditions.

  5. On Performance Modeling of Ad Hoc Routing Protocols

    Directory of Open Access Journals (Sweden)

    Khayam SyedAli

    2010-01-01

    Full Text Available Simulation studies have been the predominant method of evaluating ad hoc routing algorithms. Despite their wide use and merits, simulations are generally time consuming. Furthermore, several prominent ad hoc simulations report inconsistent and unrepeatable results. We, therefore, argue that simulation-based evaluation of ad hoc routing protocols should be complemented with mathematical verification and comparison. In this paper, we propose a performance evaluation framework that can be used to model two key performance metrics of an ad hoc routing algorithm, namely, routing overhead and route optimality. We also evaluate derivatives of the two metrics, namely, total energy consumption and route discovery latency. Using the proposed framework, we evaluate the performance of four prominent ad hoc routing algorithms: DSDV, DSR, AODV-LL, and Gossiping. We show that the modeled metrics not only allow unbiased performance comparison but also provide interesting insight about the impact of different parameters on the behavior of these protocols.

  6. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a

  7. A personality trait-based interactionist model of job performance.

    Science.gov (United States)

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  8. PHARAO Laser Source Flight Model: Design and Performances

    CERN Document Server

    Lévèque, Thomas; Esnault, François-Xavier; Delaroche, Christophe; Massonnet, Didier; Grosjean, Olivier; Buffe, Fabrice; Torresi, Patrizia; Bomer, Thierry; Pichon, Alexandre; Béraud, Pascal; Lelay, Jean-Pierre; Thomin, Stéphane; Laurent, Philippe

    2015-01-01

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  9. Performance Comparison of Sub Phonetic Model with Input Signal Processing

    Directory of Open Access Journals (Sweden)

    Dr E. Ramaraj

    2006-01-01

    Full Text Available The quest to arrive at a better model for signal transformation for speech has resulted in striving to develop better signal representations and algorithms. This article explores the word model, a concatenation of state-dependent senones, as an alternative to the phoneme. The research aims to combine the senone with Input Signal Processing (ISP), an algorithm that has been tried with phonemes with considerable success, to compare the performance of senones with ISP against phonemes with ISP, and to supply the result analysis. The research model uses the SPHINX IV [4] speech engine for its implementation owing to its flexibility towards the new algorithm, its robustness, and performance considerations.

  10. A multiserver multiqueue network: modeling and performance analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new category of system model, the multiserver multiqueue network (MSMQN), is proposed for distributed systems such as geographically distributed Web-server clusters. A MSMQN comprises multiple multiserver multiqueue (MSMQ) nodes distributed over the network, and every node consists of a number of servers that each contain multiple priority queues for waiting customers. An incoming request can be distributed to a waiting queue of any server in any node, according to the routing policy integrated by the node-selection policy at the network level, the request-dispatching policy at the node level, and the request-scheduling policy at the server level. The model is investigated using stochastic high-level Petri net (SHLPN) modeling and performance analysis techniques. The performance metrics concerned include the delay time of requests in the MSMQ node and the response time perceived by the users. A numerical example shows the efficiency of the performance analysis technique.

  11. Frequency modulated continuous wave lidar performance model for target detection

    Science.gov (United States)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-05-01

    The desire to provide the warfighter both ranging and reflected intensity information is increasing to meet expanding operational needs. LIDAR imaging systems can provide the user with intensity, range, and even velocity information of a scene. The ability to predict the performance of LIDAR systems is critical for the development of future designs without the need to conduct time consuming and costly field studies. Performance modeling of a frequency modulated continuous wave (FMCW) LIDAR system is challenging due to the addition of the chirped laser source and waveform mixing. The FMCW LIDAR model is implemented in the NV-IPM framework using the custom component generation tool. This paper presents an overview of the FMCW Lidar, the customized LIDAR components, and a series of trade studies using the LIDAR model.
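
    The core FMCW ranging relation underlying such a model can be illustrated with a short sketch (not taken from NV-IPM or the paper; the chirp parameters below are invented): a linear chirp of bandwidth B swept over duration T turns a target at range R into a beat tone at f_b = 2RB/(cT) after mixing.

    ```python
    C = 299_792_458.0  # speed of light, m/s

    # Beat frequency produced by a target at range_m for a linear chirp.
    def beat_frequency(range_m, bandwidth_hz, sweep_s):
        return 2.0 * range_m * bandwidth_hz / (C * sweep_s)

    # Invert the relation: recover range from the measured beat tone.
    def range_from_beat(f_beat_hz, bandwidth_hz, sweep_s):
        return f_beat_hz * C * sweep_s / (2.0 * bandwidth_hz)

    # Example: 1 GHz chirp over 1 ms, target at 150 m.
    fb = beat_frequency(150.0, 1e9, 1e-3)
    print(round(range_from_beat(fb, 1e9, 1e-3), 6))  # recovers 150.0
    ```

    A performance model then layers noise, waveform mixing, and detection statistics on top of this deterministic mapping, which is where the modeling difficulty noted above comes in.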

  12. Tiling for Performance Tuning on Different Models of GPUs

    CERN Document Server

    Xu, Chang; Jenkins, Samantha

    2010-01-01

    The strategy of using CUDA-compatible GPUs as a parallel computation solution to improve program performance has been increasingly widely adopted during the two years since the CUDA platform was released, and its benefits extend from the graphics domain to many other computationally intensive domains. Tiling, the most general and important technique, is widely used for optimization in CUDA programs. However, new models of GPUs with better compute capabilities have since been released, along with new versions of the CUDA SDK. These updated compute capabilities must be considered when optimizing with the tiling technique. In this paper, we implement image interpolation algorithms as a test case to discuss how different tiling strategies affect a program's performance. We especially focus on how different models of GPUs affect the tiling's effectiveness, by executing the same program on two test platforms equipped with different models of GPUs. The results demonstrate that an optimized tiling...

  13. An improved model for TPV performance predictions and optimization

    Science.gov (United States)

    Schroeder, K. L.; Rose, M. F.; Burkhalter, J. E.

    1997-03-01

    Previously, a model has been presented for calculating the performance of a TPV system. This model has been revised into a general-purpose algorithm, improved in fidelity, and is presented here. The basic model is an energy-based formulation and evaluates both the radiant and heat source elements of a combustion-based system. Improvements in the radiant calculations include the use of ray tracing formulations and view factors for evaluating various flat plate and cylindrical configurations. Calculation of photocell temperature and performance parameters as a function of position and incident power has also been incorporated. Heat source calculations have been fully integrated into the code by the incorporation of a modified version of the NASA Complex Chemical Equilibrium Compositions and Applications (CEA) code. Additionally, coding has been incorporated to allow optimization of various system parameters and configurations. Several example cases are presented and compared, and an optimum flat plate emitter/filter/photovoltaic configuration is also described.

  14. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  15. Performance and Prediction: Bayesian Modelling of Fallible Choice in Chess

    Science.gov (United States)

    Haworth, Guy; Regan, Ken; di Fatta, Giuseppe

    Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
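
    As a rough, hypothetical illustration of the benchmark-space idea (the agent definitions and all numbers below are invented, not taken from the paper): reference agents can be modeled as softmax choosers over engine evaluations of the available moves, and Bayes' rule then concentrates the posterior on the reference agent that best explains a player's observed choices.

    ```python
    import math

    # Each reference agent is a softmax chooser with sensitivity c:
    # higher c means stronger, more deterministic play.
    def choice_probs(evals, c):
        z = [math.exp(c * e) for e in evals]
        s = sum(z)
        return [x / s for x in z]

    # Bayesian update over agents given (option_evals, chosen_index) pairs.
    def posterior(agents, observations, prior=None):
        p = list(prior) if prior else [1.0 / len(agents)] * len(agents)
        for evals, chosen in observations:
            likes = [choice_probs(evals, c)[chosen] for c in agents]
            p = [pi * li for pi, li in zip(p, likes)]
            total = sum(p)
            p = [pi / total for pi in p]
        return p

    # A player who keeps picking the best-evaluated move shifts mass
    # toward the strong (high-c) reference agents.
    agents = [0.5, 2.0, 8.0]
    obs = [([1.0, 0.6, 0.2], 0)] * 10
    post = posterior(agents, obs)
    print(post.index(max(post)))  # -> 2 (the most deterministic agent)
    ```

    The same machinery supports the forensic questions mentioned above: a posterior that sits implausibly far above a player's rating-implied agent is evidence worth examining.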

  16. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques developed for modeling unique and new performance evaluation issues. Presents new DEA methodology and techniques via discussions on how to solve managerial problems. Provides easy-to-use DEA software, DEAFrontier (www.deafrontier.com), which is an excellent tool for both DEA researchers and practitioners.

  17. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion Modelling and Simulation are very challenging and the High Performance Computing issues are addressed here. Toolset for jobs launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  18. Range-dependent sonar performance modelling during Battlespace Preparation 2007

    NARCIS (Netherlands)

    Raa, L.A. te; Lam, F.P.A.; Schouten M.W.; Janmaat, J.

    2009-01-01

    Spatial and temporal variations in sound speed can have substantial effects on sound propagation and hence sonar performance. Operational oceanographic models can provide forecasts of oceanographic variables as temperature, salinity and sound speed up to several days ahead. These four-dimensional fo

  19. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  20. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...

  1. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  2. Performance evaluation:= (process algebra + model checking) x Markov chains

    NARCIS (Netherlands)

    Hermanns, H.; Katoen, J.P.; Larsen, Kim G.; Nielsen, Mogens

    2001-01-01

    Markov chains are widely used in practice to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This tutorial paper shows how successful model specification and analysis techniques from concurrency theory c

  3. Performance in model transformations: experiments with ATL and QVT

    NARCIS (Netherlands)

    van Amstel, Marcel; Bosems, S.; Ivanov, Ivan; Ferreira Pires, Luis; Cabot, Jordi; Visser, Eelco

    Model transformations are increasingly being incorporated in software development processes. However, as systems being developed with transformations grow in size and complexity, the performance of the transformations tends to degrade. In this paper we investigate the factors that have an impact on

  4. An e-Procurement Model for Logistic Performance Increase

    NARCIS (Netherlands)

    Toma, Cristina; Vasilescu, Bogdan; Popescu, Catalin; Soliman, KS

    2009-01-01

    This paper discusses the suitability of an e-procurement system for increasing logistic performance, given the growth in fast Internet availability. In consequence, a model is derived and submitted for analysis. The scope of the research is limited to the intermediary goods importing sector for a be

  5. Performance Analysis of OFDM with Frequency Offset and Correction Model

    Institute of Scientific and Technical Information of China (English)

    QIN Sheng-ping; YIN Chang-chuan; LUO Tao; YUE Guang-xin

    2003-01-01

    The performance of OFDM with frequency offset is analyzed and simulated in this paper. It is concluded that frequency offset strongly degrades the SIR and the BER of an OFDM system. A BER calculation method is introduced and simulated. Assuming that the frequency offset is known, a frequency-offset correction model is discussed.
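
    A toy numerical sketch of the effect (not the paper's analytical method; all parameters here are invented for illustration): place energy on a single subcarrier, apply a fractional carrier-frequency offset, and measure how much power leaks into the other DFT bins as inter-carrier interference.

    ```python
    import cmath, math

    # Toy ICI model: only subcarrier 0 is modulated; a carrier frequency
    # offset of eps subcarrier spacings smears its energy across the other
    # DFT bins. SIR = power left in bin 0 / power leaked into other bins.
    def ofdm_sir_db(n_sub, eps):
        # time-domain samples of the single active subcarrier with offset
        x = [cmath.exp(2j * math.pi * eps * n / n_sub) for n in range(n_sub)]
        # DFT of the received symbol
        bins = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_sub)
                    for n in range(n_sub)) / n_sub
                for k in range(n_sub)]
        sig = abs(bins[0]) ** 2
        ici = sum(abs(b) ** 2 for b in bins[1:])
        return 10 * math.log10(sig / ici)

    # SIR degrades monotonically as the offset grows.
    print(ofdm_sir_db(64, 0.05) > ofdm_sir_db(64, 0.20))  # True
    ```

    This reproduces the qualitative conclusion above: even a modest uncorrected offset costs tens of dB of SIR, which is why a correction model is needed.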

  7. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  8. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

    This paper presents a new model to simulate energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided, but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperatures in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode, and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with different specifications of each indoor unit, so that the modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system, in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline systems across the three climates. The largest energy savings are in the Fresno climate, followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. Actual performance of the VRF systems for real
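
    The capacity-modifier idea described above can be sketched roughly as follows. The biquadratic form and every coefficient here are illustrative assumptions, not the EnergyPlus implementation or the paper's curves; the point is only that rated capacity is scaled by a curve whose inputs are, in the new model, evaporating and condensing temperatures.

    ```python
    # Hypothetical capacity modifier: a biquadratic curve in evaporating
    # temperature Te and condensing temperature Tc (coefficients invented).
    def capacity_modifier(t_evap, t_cond, coeffs):
        """Return a + b*Te + c*Te^2 + d*Tc + e*Tc^2 + f*Te*Tc."""
        a, b, c, d, e, f = coeffs
        return (a + b * t_evap + c * t_evap ** 2
                  + d * t_cond + e * t_cond ** 2
                  + f * t_evap * t_cond)

    ILLUSTRATIVE = (1.0, 0.02, 0.0, -0.01, 0.0, 0.0)  # made-up coefficients

    rated_kw = 10.0  # hypothetical rated capacity at reference conditions
    cap = rated_kw * capacity_modifier(t_evap=6.0, t_cond=45.0, coeffs=ILLUSTRATIVE)
    print(round(cap, 2))  # available capacity at off-reference temperatures
    ```

    In a real model each indoor unit would carry its own curve, which is what lets the approach respect unit-specific specifications.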

  9. Forecasting Performance of Asymmetric GARCH Stock Market Volatility Models

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-12-01

    Full Text Available We investigate the asymmetry between positive and negative returns in their effect on the conditional variance of the stock market index and incorporate the characteristics to form an out-of-sample volatility forecast. Contrary to prior evidence, however, the results in this paper suggest that no asymmetric GARCH model is superior to the basic GARCH(1,1) model. It is our prior knowledge that, for equity returns, it is unlikely that positive and negative shocks have the same impact on volatility. In order to reflect this intuition, we implement three diagnostic tests for volatility models: the Sign Bias Test, the Negative Size Bias Test, and the Positive Size Bias Test, as well as tests against the alternatives of QGARCH and GJR-GARCH. The asymmetry test results indicate that the sign and the size of the unexpected return shock do not influence current volatility differently, which contradicts our presumption that there are asymmetric effects in stock market volatility. This result is in line with various diagnostic tests which are designed to determine whether the GARCH(1,1) volatility estimates adequately represent the data. The diagnostic tests in section 2 indicate that the GARCH(1,1) model for weekly KOSPI returns is robust to the misspecification test. We also investigate two representative asymmetric GARCH models, the QGARCH and GJR-GARCH models, for out-of-sample forecasting performance. The out-of-sample forecasting ability test reveals that no single model clearly outperforms. It is seen that the GJR-GARCH and QGARCH models give mixed results in forecasting ability on all four criteria across all forecast horizons considered. Also, the predictive accuracy test of Diebold and Mariano based on both absolute and squared prediction errors suggests that the forecasts from the linear and asymmetric GARCH models need not be significantly different from each other.
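
    The symmetric and asymmetric variance recursions being compared can be sketched as follows; the coefficient values are invented for illustration, not estimated from KOSPI data.

    ```python
    # GARCH(1,1): h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}
    def garch11_var(returns, omega, alpha, beta, h0):
        h = [h0]
        for r in returns[:-1]:
            h.append(omega + alpha * r * r + beta * h[-1])
        return h

    # GJR-GARCH(1,1): negative shocks contribute an extra gamma*r^2 term,
    # encoding the asymmetry the diagnostic tests look for.
    def gjr_var(returns, omega, alpha, gamma, beta, h0):
        h = [h0]
        for r in returns[:-1]:
            extra = gamma * r * r if r < 0 else 0.0
            h.append(omega + alpha * r * r + extra + beta * h[-1])
        return h

    rets = [0.01, -0.02, 0.015, -0.01]  # made-up weekly returns
    sym = garch11_var(rets, omega=1e-6, alpha=0.05, beta=0.90, h0=1e-4)
    asym = gjr_var(rets, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.90, h0=1e-4)
    print(asym[2] > sym[2])  # True: variance jumps more after the -2% return
    ```

    The paper's finding is that, for the data studied, this extra asymmetric term does not yield significantly better out-of-sample forecasts than the symmetric recursion.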

  10. Performance Comparison of Two Meta-Model for the Application to Finite Element Model Updating of Structures

    Institute of Scientific and Technical Information of China (English)

    Yang Liu; DeJun Wang; Jun Ma; Yang Li

    2014-01-01

    To investigate the application of meta-models for finite element (FE) model updating of structures, the performance of two popular meta-models, i.e., the Kriging model and the response surface model (RSM), was compared in detail. Firstly, the two kinds of meta-model are introduced briefly. Secondly, some key issues of applying meta-models to FE model updating of structures are proposed and discussed, and advice is presented for selecting a reasonable meta-model for the purpose of updating the FE model of structures. Finally, the procedure of FE model updating based on a meta-model is demonstrated by updating the FE model of a truss bridge model with measured modal parameters. The results showed that the Kriging model was more suitable for FE model updating of complex structures.
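
    A minimal sketch of the response-surface idea (one-dimensional, with invented data; real RSM or Kriging surrogates for FE updating are multivariate): fit a cheap quadratic to a few expensive model evaluations, then query the surrogate instead of the model during updating.

    ```python
    # Fit y ~ a + b*x + c*x^2 by solving the 3x3 least-squares normal
    # equations with Gaussian elimination (partial pivoting).
    def fit_quadratic(xs, ys):
        A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, 3):
                f = A[r][col] / A[col][col]
                for c in range(col, 3):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        coef = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):
            coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
        return coef  # [a, b, c]

    expensive = lambda x: 2.0 + 0.5 * x - 0.3 * x * x  # stand-in for an FE run
    xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
    a, b1, c = fit_quadratic(xs, [expensive(x) for x in xs])
    print(round(a, 6), round(b1, 6), round(c, 6))  # -> 2.0 0.5 -0.3
    ```

    Kriging adds a spatially correlated residual on top of such a trend, which is one reason it tends to track complex, non-polynomial responses better.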

  11. Evaluation of the performance of DIAS ionospheric forecasting models

    Directory of Open Access Journals (Sweden)

    Tsagouri Ioanna

    2011-08-01

    Full Text Available Nowcasting and forecasting ionospheric products and services for the European region have been provided regularly since August 2006 through the European Digital upper Atmosphere Server (DIAS, http://dias.space.noa.gr). Currently, DIAS ionospheric forecasts are based on the online implementation of two models: (i) the solar wind driven autoregression model for ionospheric short-term forecast (SWIF), which combines historical and real-time ionospheric observations with solar-wind parameters obtained in real time at the L1 point from the NASA ACE spacecraft, and (ii) the geomagnetically correlated autoregression model (GCAM), which is a time series forecasting method driven by a synthetic geomagnetic index. In this paper we investigate the operational ability and the accuracy of both DIAS models, carrying out a metrics-based evaluation of their performance under all possible conditions. The analysis was established on the systematic comparison between the models’ predictions and actual observations obtained over almost one solar cycle (1998–2007) at four European ionospheric locations (Athens, Chilton, Juliusruh and Rome), and on the comparison of the models’ performance against two simple prediction strategies, the median- and persistence-based predictions, during storm conditions. The results verify operational validity for both models and quantify their prediction accuracy under all possible conditions, in support of operational applications but also of comparative studies in assessing or expanding the current ionospheric forecasting capabilities.
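
    The comparison against a persistence-based reference can be sketched generically (invented numbers and hypothetical function names, not the DIAS evaluation code): a forecast adds value when its skill score against persistence is positive.

    ```python
    # Skill score of a model forecast relative to persistence
    # (the forecast for time t is simply the observation at t-1).
    def mse(pred, obs):
        return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

    def skill_vs_persistence(model_fc, obs):
        persistence = obs[:-1]   # yesterday's value as today's forecast
        target = obs[1:]
        return 1.0 - mse(model_fc[1:], target) / mse(persistence, target)

    foF2_obs = [5.0, 5.4, 6.1, 5.8, 6.5, 7.0]   # hypothetical ionosonde series
    model_fc = [5.1, 5.5, 6.0, 5.9, 6.4, 6.9]   # hypothetical model forecasts
    print(skill_vs_persistence(model_fc, foF2_obs) > 0)  # True: beats persistence
    ```

    A skill score of 1 would mean perfect forecasts; 0 means no better than persistence, which is the bar storm-time forecasts must clear.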

  12. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF thermal-vacuum tests, static-load gravity deformations were measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  13. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation through dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models, such as the COST-based models, 3GPP SCM, WINNER and ITU, have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performance of advanced signal processing techniques over realistic channels. Given that the existing channels are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  14. Ranking streamflow model performance based on Information theory metrics

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for evaluating hydrologic model performance.
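
    The symbolization-and-entropy pipeline described above can be sketched as follows; this is a simplified reading with invented data, and the exact alphabet size and metric definitions used in the study may differ. Here mean information gain is computed as the entropy increase from length-L to length-(L+1) symbol words.

    ```python
    import bisect, math

    # Map a series to quantile symbols 0..n_symbols-1.
    def symbolize(series, n_symbols=4):
        ranked = sorted(series)
        def symbol(v):
            q = bisect.bisect_left(ranked, v) / len(ranked)
            return min(int(q * n_symbols), n_symbols - 1)
        return [symbol(v) for v in series]

    # Shannon entropy (bits) of sliding words of length L.
    def block_entropy(symbols, L):
        counts = {}
        for i in range(len(symbols) - L + 1):
            w = tuple(symbols[i:i + L])
            counts[w] = counts.get(w, 0) + 1
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def mean_information_gain(symbols, L=1):
        return block_entropy(symbols, L + 1) - block_entropy(symbols, L)

    flow = [3, 4, 6, 9, 7, 5, 4, 3, 5, 8, 9, 6, 4, 3, 4, 7]  # made-up streamflow
    syms = symbolize(flow, 4)
    print(0.0 <= mean_information_gain(syms) <= 2.0)  # bounded by log2(#symbols)
    ```

    A more random series (e.g. precipitation) would show a larger information gain; a more patterned one (streamflow) a smaller gain with higher complexity.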

  15. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times-mutually dependent in successive steps in the chain-cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
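As a toy illustration of the point about logistic mechanisms, the sketch below (hypothetical parameters, not the article's QMRA model) shows how FIFO retrieval and a simple reorder rule make storage times emerge from the chain's planning mechanics rather than from an independently sampled distribution:

```python
import random
from collections import deque

def simulate_storage_times(days=365, batch=40, demand_mean=10, seed=1):
    """Toy discrete-event model of one storage step in a logistic chain:
    products arrive in batches triggered by a reorder rule, are sold
    first-in-first-out, and each unit's storage time is an emergent
    property of the queue, not a fixed input distribution."""
    rng = random.Random(seed)
    shelf = deque()          # arrival day of each unit, oldest first
    storage_times = []
    for day in range(days):
        if len(shelf) < demand_mean * 2:   # simple reorder rule (assumed)
            shelf.extend([day] * batch)    # batch delivery
        sold = min(len(shelf), rng.randint(0, demand_mean * 2))
        for _ in range(sold):
            storage_times.append(day - shelf.popleft())  # FIFO retrieval
    return storage_times
```

Because successive steps share the same queue state, storage times in consecutive steps become mutually dependent, which is exactly the effect the authors argue shapes the tails of the risk distribution.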

  16. Gender consequences of a national performance-based funding model

    DEFF Research Database (Denmark)

    Nielsen, Mathias Wullum

    2015-01-01

    This article investigates the extent to which the Danish Bibliometric Research Indicator (BRI) reflects the performance of men and women differently. The model is based on a differentiated counting of peer-reviewed publications, awarding three and eight points for contributions to ‘well-regarded’ and highly selective journals and book publishers, and one and five points for equivalent scientific contributions via ‘normal level’ channels. On the basis of bibliometric data, the study shows that the BRI considerably widens the existing gender gap in researcher performance, since men on average receive more points than women. The model also privileges collaborative research, which disadvantages women due to gender differences in collaborative network relations.

  17. Performance optimization of Jatropha biodiesel engine model using Taguchi approach

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)

    2009-11-15

    This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Wiebe heat release function has been employed to simulate the Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, allowing for interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L16 orthogonal array was used to lay out the engine test trials. In order to maximize the performance of the Jatropha biodiesel engine, the signal-to-noise ratio (SNR) for higher-the-better (HTB) quality characteristics was used. The present methodology correctly predicted the compression ratio, the Wiebe heat release constants, and the combustion zone duration as the critical parameters that affect the performance of the engine compared to other parameters. (author)
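The higher-the-better signal-to-noise ratio used in Taguchi analysis has a standard closed form; a minimal sketch:

```python
import math

def snr_higher_the_better(responses):
    """Taguchi signal-to-noise ratio for higher-the-better (HTB)
    quality characteristics, in decibels:
        SNR = -10 * log10( (1/n) * sum(1 / y_i^2) )
    Larger responses (e.g. engine power or efficiency) give larger SNR."""
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in responses) / n)
```

In a Taguchi study, this SNR is computed for each row of the orthogonal array, and the parameter levels that maximize the mean SNR are selected.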

  18. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters, and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization-based approach determines the best possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  19. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real time.
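Task timing from Fitts' Law, as mentioned in the abstract, reduces to a one-line formula; a sketch with hypothetical regression coefficients (in practice, a and b are fit to measured human performance data):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law estimate of the time (seconds) to reach a target:
        MT = a + b * log2(2D / W)
    where D is the distance to the target, W is its width, and the
    coefficients a, b here are illustrative assumptions."""
    return a + b * math.log2(2.0 * distance / width)
```

A farther or narrower target raises the index of difficulty log2(2D/W), and with it the predicted movement time used to drive the animation.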

  20. Modeling the seakeeping performance of luxury cruise ships

    Science.gov (United States)

    Cao, Yu; Yu, Bao-Jun; Wang, Jian-Fang

    2010-09-01

    The seakeeping performance of a luxury cruise ship was evaluated during the concept design phase. By comparing numerical predictions based on 3-D linear potential flow theory in the frequency domain with the results of model tests, it was shown that the 3-D method predicted the seakeeping performance of the luxury cruise ship well. Based on the model, the seakeeping features of the luxury cruise ship were analyzed, and the influence of changes to the primary design parameters (center of gravity, inertial radius, etc.) was examined. Based on the results, suggestions were proposed to improve the choice of parameters for luxury cruise ships during the concept design phase, which should improve seakeeping performance.

  1. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  2. Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.

    Directory of Open Access Journals (Sweden)

    Greg Jensen

    Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.
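The three ingredients enumerated in the abstract can be caricatured in a few lines. This is a loose sketch under assumed update magnitudes, not the published betasort implementation:

```python
import random

class BetasortSketch:
    """Loose sketch of the betasort ingredients described above: each
    stimulus position is a Beta(a, b) distribution on the unit span,
    negative feedback acts more strongly than positive feedback, and
    every stimulus is nudged on every trial. Update sizes are assumptions."""
    def __init__(self, stimuli):
        self.params = {s: [1.0, 1.0] for s in stimuli}  # Beta(a, b) per stimulus

    def mean(self, s):
        a, b = self.params[s]
        return a / (a + b)

    def choose(self, s1, s2, rng=random):
        # pick the stimulus whose sampled position is higher
        x1 = rng.betavariate(*self.params[s1])
        x2 = rng.betavariate(*self.params[s2])
        return s1 if x1 > x2 else s2

    def feedback(self, chosen, other, correct):
        if correct:   # positive feedback: reinforce the current ordering
            self.params[chosen][0] += 1.0   # push chosen upward
            self.params[other][1] += 1.0    # push other downward
        else:         # negative feedback weighted more heavily (asymmetry)
            self.params[chosen][1] += 2.0
            self.params[other][0] += 2.0
        # implicit update: every stimulus not shown on this trial is
        # consolidated slightly toward its current position estimate
        for s, (a, b) in self.params.items():
            if s not in (chosen, other):
                m = a / (a + b)
                self.params[s][0] += m * 0.1
                self.params[s][1] += (1.0 - m) * 0.1
```

Training only on adjacent pairs (A>B, B>C, C>D) separates the position estimates, so the never-trained pair B vs D is answered correctly, which is the transitive inference that plain RPE models miss.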

  3. Towards Modeling Realistic Mobility for Performance Evaluations in MANET

    Science.gov (United States)

    Aravind, Alex; Tahir, Hassan

    Simulation modeling plays a crucial role in conducting research on complex dynamic systems like mobile ad hoc networks, and is often the only way to do so. Simulation has been successfully applied in MANET research for more than two decades. In several recent studies, it has been observed that the credibility of simulation results in the field has decreased while the use of simulation has steadily increased. Part of this credibility crisis has been attributed to the simulation of the mobility of the nodes in the system. Mobility has a fundamental influence on the behavior and performance of mobile ad hoc networks. Accurate modeling and knowledge of the mobility of the nodes in the system is not only helpful but essential for understanding and interpreting the performance of the system under study. Several ideas, mostly in isolation, have been proposed in the literature to infuse realism into the mobility of nodes. In this paper, we attempt a holistic analysis of creating realistic mobility models and then demonstrate the creation and analysis of realistic mobility models using a software tool we have developed. Using our tool, the desired mobility of the nodes in the system can be specified, generated, and analyzed, and the trace can then be exported for use in performance studies of proposed algorithms or systems.
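As a minimal example of generating a mobility trace, here is a random-waypoint sketch, the classic baseline whose lack of realism motivates work like this; all parameters are assumptions:

```python
import random

def random_waypoint_trace(steps=100, area=(1000.0, 1000.0),
                          speed=(1.0, 20.0), seed=7):
    """Toy random-waypoint mobility trace for one node: repeatedly pick a
    uniformly random destination and speed, then move toward it in unit
    time steps. Area (m), speed range (m/s), and seed are illustrative."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
    v = rng.uniform(*speed)
    trace = []
    for _ in range(steps):
        dx, dy = dest[0] - x, dest[1] - y
        d = (dx * dx + dy * dy) ** 0.5
        if d <= v:  # arrived: choose a new waypoint and speed
            x, y = dest
            dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
            v = rng.uniform(*speed)
        else:       # advance one time step toward the waypoint
            x, y = x + v * dx / d, y + v * dy / d
        trace.append((x, y))
    return trace
```

More realistic generators of the kind the paper advocates replace the uniform waypoint choice with obstacle maps, attraction points, or pause-time distributions, but export traces in the same positional form.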

  4. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10^6) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10^6 processor cores, paving the way for more realistic simulations in the next model generation.
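Weak- and strong-scaling efficiency, as used in benchmarks like these, are simple ratios of core-hours; a sketch:

```python
def strong_scaling_efficiency(base_cores, base_time, cores, time):
    """Strong scaling: the problem size is fixed while cores increase.
    Efficiency relative to a baseline run is
        E = (base_cores * base_time) / (cores * time),
    i.e. 1.0 for ideal speedup."""
    return (base_cores * base_time) / (cores * time)

def weak_scaling_efficiency(base_time, time):
    """Weak scaling: the problem size grows proportionally with cores,
    so the ideal wall-clock time is constant and E = base_time / time."""
    return base_time / time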

  5. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0 (CPI without memory effects) and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise for code characterization and empirical/analytical modeling.
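The idea of a memory-free CPI_0 can be illustrated with a simplified decomposition; the paper derives more detailed, architecture-specific formulas, and the plain stall-subtraction form below is an assumption for illustration:

```python
def cpi(total_cycles, instructions):
    """Measured cycles per instruction from hardware counter totals."""
    return total_cycles / instructions

def cpi_without_memory(total_cycles, mem_stall_cycles, instructions):
    """Simplified estimate of CPI_0, the CPI with memory effects removed:
    subtract counted memory stall cycles before dividing by the retired
    instruction count. (Illustrative decomposition, not the paper's.)"""
    return (total_cycles - mem_stall_cycles) / instructions
```

Comparing CPI against CPI_0 then indicates how much of an application's cycle budget the memory hierarchy consumes versus the core pipeline itself.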

  6. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    This research focuses on the analysis of board role evolution, which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide an understanding of the key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between board structure and processes on the one hand, and board role performance on the other.

  7. Coupled Atmosphere-Fire Simulations of Fireflux: Impacts of Model Resolution on Model Performance

    CERN Document Server

    Kochanski, Adam K; Jenkins, M A; Mandel, J; Beezley, J D

    2011-01-01

    The ability to forecast grass fire spread could be of great importance for agencies making decisions about prescribed burns. However, the usefulness of the models used for fire-spread predictions is limited by the time required to complete the coupled atmosphere-fire simulations. In this study we analyze the sensitivity of a coupled model with respect to the vertical resolution of the atmospheric grid and the resolution of the fire mesh, both of which affect the computational performance of the model. Based on the observations of the plume properties recorded during the FireFlux experiment (Clements et al., 2007), we try to establish the optimal model configuration that provides realistic results at the least computational expense.

  8. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A. (University of Oregon, Eugene, OR); Shende, Sameer (University of Oregon, Eugene, OR); Trebon, Nicholas D.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.

  9. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    Energy storage technologies such as Lithium-ion (Li-ion) batteries are widely used in the present effort to move towards more ecological solutions in sectors like transportation or renewable-energy integration. However, today's Li-ion batteries are reaching their limits, and not all demands of the industry are met yet. Therefore, researchers focus on alternative battery chemistries such as Lithium-Sulfur (Li-S), which have huge potential due to their high theoretical specific capacity (approx. 1675 Ah/kg) and theoretical energy density of almost 2600 Wh/kg. To analyze the suitability of this new emerging technology for various applications, there is a need for a Li-S battery performance model; however, developing such models represents a challenging task due to the batteries' complex ongoing chemical reactions. Therefore, a literature review was performed to summarize electrical circuit models (ECMs) ...

  10. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy by evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.
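The "clipping" that the conventional clipped-optimal baseline applies to a controllable damper can be sketched as follows; sign conventions and the damping coefficient bounds are assumptions:

```python
def clipped_damper_force(desired_force, velocity, c_min, c_max):
    """Clipped-optimal command for a semi-active (controllable) damper.
    The device can only dissipate energy, so the active design force is
    realized only when it opposes the relative velocity (f * v < 0);
    otherwise the damper falls back to its minimum damping. The realized
    coefficient is clipped to the device range [c_min, c_max]."""
    if velocity == 0.0 or desired_force * velocity >= 0.0:
        # non-dissipative request: cannot add energy, apply minimum damping
        return -c_min * velocity
    c = min(max(-desired_force / velocity, c_min), c_max)
    return -c * velocity
```

It is exactly the non-dissipative branch, where the requested force is silently replaced, that the hybrid model predictive formulation handles explicitly with its binary switching states.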

  11. Model Checking for a Class of Performance Properties of Fluid Stochastic Models

    NARCIS (Netherlands)

    Bujorianu, L.M.; Bujorianu, M.C.; Horváth, A.; Telek, M.

    2006-01-01

    Recently, there has been an explosive development of fluid approaches to computer and distributed systems. These approaches are inherently stochastic and generate continuous state space models. Usually, the performance measures for these systems are defined using probabilities of reaching certain sets of states.

  12. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    Science.gov (United States)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of the cryogenic propellants, pressurization rate, and performance of pressure control techniques in cryogenic tanks are required for development of cryogenic fluid long-duration storage technology and planning for future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant for long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with the test data and examples of multiphase mission calculations.

  13. Performance Evaluation of 3D Modeling Software for UAV Photogrammetry

    Science.gov (United States)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs with freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of such 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet, and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  14. Toward a high performance distributed memory climate model

    Energy Technology Data Exchange (ETDEWEB)

    Wehner, M.F.; Ambrosiano, J.J.; Brown, J.C.; Dannevik, W.P.; Eltgroth, P.G.; Mirin, A.A. [Lawrence Livermore National Lab., CA (United States); Farrara, J.D.; Ma, C.C.; Mechoso, C.R.; Spahr, J.A. [Univ. of California, Los Angeles, CA (US). Dept. of Atmospheric Sciences

    1993-02-15

    As part of a long range plan to develop a comprehensive climate systems modeling capability, the authors have taken the Atmospheric General Circulation Model originally developed by Arakawa and collaborators at UCLA and have recast it in a portable, parallel form. The code uses an explicit time-advance procedure on a staggered three-dimensional Eulerian mesh. The authors have implemented a two-dimensional latitude/longitude domain decomposition message passing strategy. Both dynamic memory management and interprocessor communication are handled with macro constructs that are preprocessed prior to compilation. The code can be moved about a variety of platforms, including massively parallel processors, workstation clusters, and vector processors, with a mere change of three parameters. Performance on the various platforms as well as issues associated with coupling different models for major components of the climate system are discussed.

  15. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of research issues and challenges in designing spectrum-energy-efficient CCRN. As an effort to shed light on the design of spectrum-energy-efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy-efficient CCRN are analyzed.

  16. Modelling of green roof hydrological performance for urban drainage applications

    DEFF Research Database (Denmark)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen

    2014-01-01

    Green roofs are being widely implemented for stormwater management, and their impact on the urban hydrological cycle can be evaluated by incorporating them into urban drainage models. This paper presents a model of green roof long-term and single-event hydrological performance. The model includes … from 3 different extensive sedum roofs in Denmark. These data consist of high-resolution measurements of runoff, precipitation and atmospheric variables in the period 2010–2012. The hydrological response of green roofs was quantified based on statistical analysis of the results of a 22-year (1989… and that the mean annual runoff is not linearly related to the storage. Green roofs therefore have the potential to be important parts of future urban stormwater management plans.

  17. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  18. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  19. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
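The trial-level stability statistics used in this study (standard deviation, coefficient of variation, confidence interval of AUC and Kappa over repeated runs) can be computed with a short helper; a sketch assuming a normal-approximation 99% interval for the mean:

```python
import math

def stability_summary(scores, confidence_z=2.576):
    """Summarize repeated-trial performance scores (e.g. AUC or Kappa over
    100 model runs): mean, sample standard deviation, coefficient of
    variation, and an approximate 99% confidence interval for the mean
    (z = 2.576 is the normal-approximation critical value)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    half = confidence_z * sd / math.sqrt(n)
    return {"mean": mean, "sd": sd, "cv": sd / mean,
            "ci99": (mean - half, mean + half)}
```

Under this scheme, a "high stability" model is one whose score list yields a small sd and cv and a narrow ci99 across trials.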

  20. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, known commercially as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and to investigate the effects of its geometrical parameters on its dynamic stiffness, free displacement and blocked force. Among the literature dealing with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in presenting a mathematical model able to predict the actuator characteristics and capture phenomena such as resonances, mode shapes, phase shifts and dips. For model validation, measurements of the free dynamic response per unit voltage and the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the model predictions; the results reveal good agreement between model and experiment. Another experiment tests the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies; from the results, it can be concluded that the actuator behaves approximately as a linear system at frequencies up to 1000 Hz. A parametric study applies the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle have great effects on the frequency dynamic

  1. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-04-01

    Full Text Available Background: The vast improvement in communication technologies and in sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment, where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams; there are no clear guidelines on performance criteria for managing such teams. Method: A qualitative research methodology was used. The purpose of the content analysis was to explore the literature, understand the concept of performance in virtual project teams, and summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for virtual project teams: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can also be analysed individually to determine its impact on overall performance. Knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and in taking a different approach to managing and coordinating them.

  2. Modeling time-lagged reciprocal psychological empowerment-performance relationships.

    Science.gov (United States)

    Maynard, M Travis; Luciano, Margaret M; D'Innocenzo, Lauren; Mathieu, John E; Dean, Matthew D

    2014-11-01

    Employee psychological empowerment is widely accepted as a means for organizations to compete in increasingly dynamic environments. Previous empirical research and meta-analyses have demonstrated that employee psychological empowerment is positively related to several attitudinal and behavioral outcomes including job performance. While this research positions psychological empowerment as an antecedent influencing such outcomes, a close examination of the literature reveals that this relationship is primarily based on cross-sectional research. Notably, evidence supporting the presumed benefits of empowerment has failed to account for potential reciprocal relationships and endogeneity effects. Accordingly, using a multiwave, time-lagged design, we model reciprocal relationships between psychological empowerment and job performance using a sample of 441 nurses from 5 hospitals. Incorporating temporal effects in a staggered research design and using structural equation modeling techniques, our findings provide support for the conventional positive correlation between empowerment and subsequent performance. Moreover, accounting for the temporal stability of variables over time, we found support for empowerment levels as positive influences on subsequent changes in performance. Finally, we also found support for the reciprocal relationship, as performance levels were shown to relate positively to changes in empowerment over time. Theoretical and practical implications of the reciprocal psychological empowerment-performance relationships are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. Safety performance models for urban intersections in Brazil.

    Science.gov (United States)

    Barbosa, Heloisa; Cunto, Flávio; Bezerra, Bárbara; Nodari, Christine; Jacques, Maria Alice

    2014-09-01

    This paper presents a modeling effort for developing safety performance models (SPM) for urban intersections in three major Brazilian cities. The proposed methodology for calibrating SPM has been divided into the following steps: defining the safety study objective, choosing predictive variables and sample size, data acquisition, defining model expression and model parameters, and model evaluation. Among the predictive variables explored in the calibration phase were exposure variables (AADT), number of lanes, number of approaches and central median status. SPMs were obtained for three cities: Fortaleza, Belo Horizonte and Brasília. The SPMs developed for signalized intersections in Fortaleza and Belo Horizonte had the same structure and the same most significant independent variables, AADT entering the intersection and number of lanes; in addition, the coefficients of the best models were in the same range of values. For Brasília, because of the sample size, the signalized and unsignalized intersections were grouped, and the AADT was split into minor and major approaches, which were the most significant variables. This paper also evaluated SPM transferability to other jurisdictions. The SPMs for signalized intersections from Fortaleza and Belo Horizonte were recalibrated (in terms of the calibration factor Cx) to the city of Porto Alegre. The models were adjusted following the Highway Safety Manual (HSM) calibration procedure and yielded Cx values of 0.65 and 2.06 for the Fortaleza and Belo Horizonte SPMs, respectively. This paper also discusses the experience and future challenges of developing SPMs in Brazil, which can serve as a guide for other countries at the same stage in this subject.
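
The HSM-style recalibration mentioned above reduces to a single calibration factor: the ratio of total observed to total predicted crashes at the new jurisdiction's sites. A minimal sketch, assuming an SPM of the common functional form N = a * AADT^b with made-up coefficients (not the paper's fitted values):

```python
def predicted_crashes(aadt, a=4.1e-4, b=0.60):
    """Hypothetical safety performance function of the common form
    N = a * AADT^b; the coefficients are illustrative stand-ins."""
    return a * aadt ** b

def calibration_factor(observed, aadt_values):
    """HSM-style calibration factor Cx: total observed crashes divided by
    the total predicted by the (uncalibrated) SPM at the same sites."""
    predicted = sum(predicted_crashes(v) for v in aadt_values)
    return sum(observed) / predicted
```

A Cx above 1 (like Belo Horizonte's 2.06) means the transferred model under-predicts crashes in the new city; below 1 (like Fortaleza's 0.65) means it over-predicts.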

  4. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min, thereby decreasing pollution, flooding and erosion during precipitation events. However, effectiveness can vary substantially with design characteristics, making performance predictions difficult. Evaluation of the most recently published findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the Stormwater Management Model, Soil Water Atmosphere and Plant, SWMS-2D, HYDRUS, and other models shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reducing runoff volume and peak flow and delaying peak flow; however, no tool or model is yet available to predict the expected performance of a given planned system from the design parameters that directly affect green roof hydrology.

  5. A Fluid Model for Performance Analysis in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Coupechoux Marceau

    2010-01-01

    Full Text Available We propose a new framework to study the performance of cellular networks using a fluid model and we derive from this model analytical formulas for interference, outage probability, and spatial outage probability. The key idea of the fluid model is to consider the discrete base station (BS entities as a continuum of transmitters that are spatially distributed in the network. This model allows us to obtain simple analytical expressions to reveal main characteristics of the network. In this paper, we focus on the downlink other-cell interference factor (OCIF, which is defined for a given user as the ratio of its outer cell received power to its inner cell received power. A closed-form formula of the OCIF is provided in this paper. From this formula, we are able to obtain the global outage probability as well as the spatial outage probability, which depends on the location of a mobile station (MS initiating a new call. Our analytical results are compared to Monte Carlo simulations performed in a traditional hexagonal network. Furthermore, we demonstrate an application of the outage probability related to cell breathing and densification of cellular networks.
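
As a rough sketch of the fluid idea: replacing the discrete interfering base stations by a continuum of density rho turns the interference sum into an integral with a closed form. The expression below illustrates that kind of approximation; the exact constants and the parameter values are assumptions for illustration, not the paper's derivation:

```python
import math

def ocif_fluid(r, rc=1.0, rnw=20.0, eta=3.5):
    """Illustrative fluid-model OCIF for a mobile at distance r from its
    serving BS: interfering BSs are smeared into a continuum of density
    rho = 1/(pi*rc^2) between distance (2*rc - r) and the network radius
    rnw, with path-loss exponent eta. Parameter values are illustrative."""
    rho = 1.0 / (math.pi * rc ** 2)   # one BS per cell disc of radius rc
    return (2 * math.pi * rho / (eta - 2)) * (
        (2 * rc - r) ** (2 - eta) - rnw ** (2 - eta)) * r ** eta
```

As expected, the interference factor grows as the mobile moves from cell centre to cell edge, which is what drives the spatial outage probability.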

  6. Mixing Model Performance in Non-Premixed Turbulent Combustion

    Science.gov (United States)

    Pope, Stephen B.; Ren, Zhuyin

    2002-11-01

    In order to shed light on their qualitative and quantitative performance, three different turbulent mixing models are studied in application to non-premixed turbulent combustion. In previous works, PDF model calculations with detailed kinetics have been shown to agree well with experimental data for non-premixed piloted jet flames. The calculations from two different groups using different descriptions of the chemistry and turbulent mixing are capable of producing the correct levels of local extinction and reignition. The success of these calculations raises several questions, since it is not clear that the mixing models used contain an adequate description of the processes involved. To address these questions, three mixing models (IEM, modified Curl and EMST) are applied to a partially-stirred reactor burning hydrogen in air. The parameters varied are the residence time and the mixing time scale. For small relative values of the mixing time scale (approaching the perfectly-stirred limit) the models yield the same extinction behavior. But for larger values, the behavior is distinctly different, with EMST being the most resistant to extinction.
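
Of the three mixing models compared, IEM (interaction by exchange with the mean) is the simplest to write down: every notional particle's composition relaxes toward the ensemble mean at a rate set by the mixing time scale. A minimal sketch with mixture fraction only and no chemistry (so only the mixing term of the full PDF method):

```python
def iem_step(phis, dt, tau_mix, c_phi=2.0):
    """One IEM step: d(phi_i)/dt = -0.5 * C_phi / tau_mix * (phi_i - <phi>).
    The ensemble mean is conserved while the scatter about it decays."""
    mean = sum(phis) / len(phis)
    decay = -0.5 * c_phi * dt / tau_mix
    return [p + decay * (p - mean) for p in phis]

# Mixture fraction of 1000 particles: half pure oxidizer, half pure fuel
parts = [0.0] * 500 + [1.0] * 500
for _ in range(100):
    parts = iem_step(parts, dt=0.01, tau_mix=0.5)
# The variance decays toward zero while the mean stays at 0.5
```

IEM's weakness, which motivates models like EMST, is visible even here: every particle relaxes toward the same global mean regardless of where it sits in composition space.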

  7. Modeling impact of environmental factors on photovoltaic array performance

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie; Sun, Yize; Xu, Yang [College of Mechanical Engineering, Donghua University NO.2999, North Renmin Road, Shanghai (China)

    2013-07-01

    This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance and the wind speed, on the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8 m/s, and for irradiance values between 200 and 1000 W/m². The root mean square error (RMSE) between modeled operating temperature and measured values is 1.19% and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum power output of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established, and a segmented-iteration method is adopted to solve the I-V curve expression and generate model I-V curves. The model I-V and P-V curves coincide well with measured data points. The RMSE between numerically calculated maximum power outputs and experimentally measured ones is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between calculated and measured maximum power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
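
A correlation of the validated form, operating temperature as a function of ambient temperature, irradiance and wind speed, can be sketched with the well-known Sandia exponential model; the coefficients below are generic glass/cell/polymer module values, not the regression fitted in this paper:

```python
import math

def module_temperature(t_amb, irradiance, wind, a=-3.47, b=-0.0594):
    """Sandia-style module temperature correlation (illustrative coefficients):
    T_m = E * exp(a + b * WS) + T_a, with T in degC, E in W/m^2, WS in m/s.
    Temperature rises with irradiance and falls with wind speed."""
    return irradiance * math.exp(a + b * wind) + t_amb

# 25 degC ambient, 800 W/m^2, wind swept over the 2-8 m/s range studied:
temps = [module_temperature(25.0, 800.0, w) for w in (2, 4, 6, 8)]
```

The monotonic cooling with wind speed is the qualitative behaviour any such correlation must reproduce before being fitted to site data.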

  8. Evaluation of multidimensional models of WAIS-IV subtest performance.

    Science.gov (United States)

    McFarland, Dennis J

    2017-04-21

    The present study examined the extent to which the covariance structure of the WAIS-IV is best accounted for by models that assume that test performance is the result of group-level factors and multiple independent general factors. Structural models with one to four general factors were evaluated with either four or five group-level factors. Simulations based on four general factors were run to clarify the adequacy of the estimates of the allocation of covariance by the models. Four independent general factors provided better fit than a single general factor for either model with four or five group-level factors. While one of the general factors had much larger loadings than all other factors, simulation results suggested that this might be an artifact of the statistical procedure rather than a reflection of the nature of individual differences in cognitive abilities. These results argue against the contention that clinical interpretation of cognitive test batteries should primarily be at the level of general intelligence. It is a fallacy to assume that factor analysis can reveal the structure of human abilities. Test validity should not be based solely on the results of modeling the covariance of test batteries.

  9. Urban Modelling Performance of Next Generation SAR Missions

    Science.gov (United States)

    Sefercik, U. G.; Yastikli, N.; Atalay, C.

    2017-09-01

    In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) missions, in operation since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas using interferometric SAR (InSAR) technology. With the advantage of acquisition independent of seasonal weather conditions, TSX and CSK DSMs are much in demand among scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend on the imaging geometry. In this study, the potential of DSMs derived from suitable 1 m high-resolution spotlight (HS) InSAR pairs from CSK and TSX is validated by model-to-model absolute and relative accuracy estimation in an urban area. For verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are comparable in open, built-up and forested landforms, with an absolute accuracy of 8-10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.
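
Model-to-model accuracy of the kind reported here comes down to the bias and RMSE of height differences between the SAR DSM and the ALS reference on a common grid; a minimal sketch over co-registered height samples:

```python
def dsm_accuracy(dsm, ref):
    """Vertical model-to-model accuracy of a DSM against a reference DSM
    sampled on the same grid: mean height difference (bias) and root mean
    square error of the differences."""
    diffs = [a - b for a, b in zip(dsm, ref)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = (sum(d * d for d in diffs) / n) ** 0.5
    return bias, rmse
```

In practice the comparison is run per land class (open, built-up, forest), since layover and shadow inflate the errors very differently in each.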

  10. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
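
The charge/discharge behaviour described above can be caricatured with a lumped circuit: a photocurrent charges the electrode capacitance during each light pulse, and the shunt resistor drains it between pulses. This sketch assumes a constant current source and a fixed capacitance, unlike the paper's voltage-dependent capacitance and Faradaic resistance:

```python
def pixel_voltage(i_photo, c_dl, r_shunt, pulse_on, period, n_periods, dt=1e-6):
    """Toy trace of electrode voltage for a photovoltaic stimulation pixel:
    constant photocurrent i_photo flows while the light pulse is on, and a
    shunt resistor r_shunt discharges the double-layer capacitance c_dl
    between pulses. All elements are idealized for illustration."""
    v, t, trace = 0.0, 0.0, []
    for _ in range(round(n_periods * period / dt)):
        on = (t % period) < pulse_on
        i = i_photo if on else 0.0
        v += (i - v / r_shunt) / c_dl * dt   # forward-Euler circuit update
        trace.append(v)
        t += dt
    return trace
```

The design trade-off the paper optimizes is visible here: a smaller shunt resistance speeds up discharge between pulses (enabling high-frequency stimulation) but also leaks current during the pulse itself.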

  11. Performance model for Micro Tunnelling Boring Machines (MTBM)

    Directory of Open Access Journals (Sweden)

    J. Gallo

    2017-06-01

    Full Text Available Since the last decades of the 20th century, various formulae have been proposed to estimate the tunnelling performance of disc cutters, mainly employed in Tunnel Boring Machines (TBM). Nevertheless, their suitability has not been verified for Micro Tunnelling Boring Machines (MTBM), which have smaller excavation diameters, between 1,000 and 2,500 mm, and smaller cutter tools, and in which parameters like joint spacing may have a different influence. This paper analyzes the models proposed for TBM. After observing very low correlation with data obtained in 15 microtunnels, a new performance model is developed, adapted to the geomechanical data available in this type of work. Moreover, a method is proposed to calculate the total number of hours necessary to carry out a microtunnel, including all the tasks of the excavation cycle as well as installation and uninstallation.

  12. Performance potential for simulating spin models on GPU

    CERN Document Server

    Weigel, Martin

    2011-01-01

    Graphics processing units (GPUs) have recently been used to an increasing degree for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the involved algorithms to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples are discussed, ranging from Metropolis simulations of ferromagnetic Ising models, through continuous Heisenberg and disordered spin-glass systems, to parallel-tempering simulations. Significant speed-ups, by factors of up to 1000 compared to serial CPU code, as well as over previous GPU implementations, are observed.
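
For reference, the core algorithm being ported is the Metropolis update of the Ising model; a plain serial sketch is below (the GPU versions gain their speed-ups by updating the two sublattices of a checkerboard decomposition in parallel, since spins on one sublattice interact only with the other):

```python
import math
import random

def metropolis_sweep(spins, size, beta):
    """One Metropolis sweep of the 2D ferromagnetic Ising model (J = 1,
    periodic boundaries): size*size single-spin flip attempts, each accepted
    with probability min(1, exp(-beta * dE))."""
    for _ in range(size * size):
        i, j = random.randrange(size), random.randrange(size)
        nn = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
              + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
        d_e = 2 * spins[i][j] * nn        # energy cost of flipping spin (i, j)
        if d_e <= 0 or random.random() < math.exp(-beta * d_e):
            spins[i][j] *= -1
    return spins
```

The serial random-site loop above is exactly the part that does not map to a GPU; the architecture-specific tailoring the abstract refers to replaces it with massively parallel sublattice updates and per-thread random number generators.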

  13. Performance potential for simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2012-04-01

    Graphics processing units (GPUs) have recently been used to an increasing degree for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the involved algorithms to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples are discussed, ranging from Metropolis simulations of ferromagnetic Ising models, through continuous Heisenberg and disordered spin-glass systems, to parallel-tempering simulations. Significant speed-ups, by factors of up to 1000 compared to serial CPU code, as well as over previous GPU implementations, are observed.

  14. Model for magnetostrictive performance in soft/hard coupled bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Jianjun, Li, E-mail: ljj8081@gmail.com [National Key Laboratory of Science and Technology on Advanced Composites in Special Environments, Harbin Institute of Technology, Harbin 150080 (China); Laboratoire de Magnétisme de Bretagne, Université de Bretagne Occidentale, 29238 Brest Cedex 3 (France); Beibei, Duan; Minglun, Li [National Key Laboratory of Science and Technology on Advanced Composites in Special Environments, Harbin Institute of Technology, Harbin 150080 (China)

    2015-11-01

    A model is set up to investigate the magnetostrictive performance and spin response in soft/hard magnetostrictive coupled bilayers. Direct coupling between soft ferromagnet and hard TbFe{sub 2} at the interface is assumed. The magnetostriction results from the rotation of ferromagnetic vector and TbFe{sub 2} vectors from the easy axis driven by applied magnetic field. Dependence of magnetostriction on TbFe{sub 2} layer thickness and interfacial exchange interaction is studied. The simulated results reveal the compromise between interfacial exchange interaction and anisotropy of TbFe{sub 2} hard layer. - Highlights: • A model for magnetostrictive performance in soft/hard coupled bilayers. • Simulated magnetostriction loop and corresponding spin response. • Competition and compromise between interfacial interaction and TbFe{sub 2} anisotropy. • Dependence of saturated magnetostriction on different parameters.

  15. Thermal performance modeling of cross-flow heat exchangers

    CERN Document Server

    Cabezas-Gómez, Luben; Saíz-Jabardo, José Maria

    2014-01-01

    This monograph introduces a numerical computational methodology for thermal performance modeling of cross-flow heat exchangers, with applications in the chemical, refrigeration and automotive industries. The methodology yields effectiveness-number of transfer units (ε-NTU) data and has been used to simulate several standard and complex flow-arrangement configurations of cross-flow heat exchangers. Simulated results have been validated through comparisons with results from available exact and approximate analytical solutions. Very accurate results have been obtained over wide ranges
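
For arrangements where a closed form does exist, simulated ε-NTU data can be checked against standard correlations, such as the widely used approximation for a single-pass cross-flow exchanger with both fluids unmixed (the numerical method's value lies in the many arrangements with no such closed form):

```python
import math

def effectiveness_crossflow_unmixed(ntu, cr):
    """Approximate effectiveness of a single-pass cross-flow heat exchanger,
    both fluids unmixed: the standard correlation
    eps = 1 - exp((NTU^0.22 / Cr) * (exp(-Cr * NTU^0.78) - 1)),
    where Cr = Cmin/Cmax is the heat capacity rate ratio."""
    return 1.0 - math.exp(
        (ntu ** 0.22 / cr) * (math.exp(-cr * ntu ** 0.78) - 1.0))
```

Effectiveness increases monotonically with NTU and stays below 1, which makes simple sanity checks on any simulated ε-NTU table straightforward.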

  16. Towards an Improved Performance Measure for Language Models

    CERN Document Server

    Ueberla, J P

    1997-01-01

    In this paper a first attempt at deriving an improved performance measure for language models, the probability ratio measure (PRM), is described. In a proof-of-concept experiment, it is shown that PRM correlates better with recognition accuracy and can lead to better recognition results when used as the optimisation criterion of a clustering algorithm. In spite of the approximations and limitations of this preliminary work, the results are very encouraging and should justify more work along the same lines.

  17. A Fuzzy Knowledge Representation Model for Student Performance Assessment

    DEFF Research Database (Denmark)

    Badie, Farshad

    Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

  18. Human Engineering Modeling and Performance Lab Study Project

    Science.gov (United States)

    Oliva-Buisson, Yvette J.

    2014-01-01

    The HEMAP (Human Engineering Modeling and Performance) Lab is a joint effort between the Industrial and Human Engineering group and the KAVE (Kennedy Advanced Visualization Environment) group. The lab consists of a sixteen-camera system that is used to capture human motions and operational tasks, through the use of a Velcro suit equipped with sensors, and then to simulate these tasks in an ergonomic software package known as Jack. The Jack software is able to identify potential risk hazards.

  19. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2014-04-30

    Develop modeling and simulation tools, using Depth of Penetration (DOP) as the metric for 7.62 APM2 projectiles; evaluate SiC tile on aluminum with material properties from the literature; develop seam designs to improve performance, demonstrated with DOP experiments. Subject terms: 5083, SiC, DOP experiments, AutoDyn.

  20. Towards Accreditation of Diagnostic Models for Improved Performance

    Science.gov (United States)

    2004-10-02

    analysis. Secondly, while performing testability analysis, the diagnostic algorithm is not included to assess the diagnosis (Sheppard & Simpson, 1998). Considering these factors, the Interactive Diagnostic Modeling Evaluator (i-DME) (Kodali, Robinson, ...) ... requirements set before to suit practical compulsions. This may lead to changing the basic principles and to refining the existing methods continuously

  1. An integrative modeling approach to elucidate suction-feeding performance.

    Science.gov (United States)

    Holzman, Roi; Collar, David C; Mehta, Rita S; Wainwright, Peter C

    2012-01-01

    Research on suction-feeding performance has mostly focused on measuring individual underlying components such as suction pressure, flow velocity, ram or the effects of suction-induced forces on prey movement during feeding. Although this body of work has advanced our understanding of aquatic feeding, no consensus has yet emerged on how to combine all of these variables to predict prey-capture performance. Here, we treated the aquatic predator-prey encounter as a hydrodynamic interaction between a solid particle (representing the prey) and the unsteady suction flows around it, to integrate the effects of morphology, physiology, skull kinematics, ram and fluid mechanics on suction-feeding performance. We developed the suction-induced force-field (SIFF) model to study suction-feeding performance in 18 species of centrarchid fishes, and asked what morphological and functional traits underlie the evolution of feeding performance on three types of prey. Performance gradients obtained using SIFF revealed that different trait combinations contribute to the ability to feed on attached, evasive and (strain-sensitive) zooplanktonic prey because these prey types impose different challenges on the predator. The low overlap in the importance of different traits in determining performance also indicated that the evolution of suction-feeding ability along different ecological axes is largely unconstrained. SIFF also yielded estimates of feeding ability that performed better than kinematic traits in explaining natural patterns of prey use. When compared with principal components describing variation in the kinematics of suction-feeding events, SIFF output explained significantly more variation in centrarchid diets, suggesting that the inclusion of more mechanistic hydrodynamic models holds promise for gaining insight into the evolution of aquatic feeding performance.

  2. Performance of fire behavior fuel models developed for the Rothermel Surface Fire Spread Model

    Science.gov (United States)

    Robert Ziel; W. Matt Jolly

    2009-01-01

    In 2005, 40 new fire behavior fuel models were published for use with the Rothermel Surface Fire Spread Model. These new models are intended to augment the original 13 developed in 1972 and 1976. As a compiled set of quantitative fuel descriptions that serve as input to the Rothermel model, the selected fire behavior fuel model has always been critical to the resulting...

  3. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative density function (CDF) of the mutual information (MI) for systems with a single receive antenna and a finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D massive MIMO systems.

  4. Modelling the Progression of Male Swimmers’ Performances through Adolescence

    Directory of Open Access Journals (Sweden)

    Shilo J. Dormehl

    2016-01-01

    Full Text Available Insufficient data on adolescent athletes is contributing to the challenges facing youth athletic development and accurate talent identification. The purpose of this study was to model the progression of male sub-elite swimmers’ performances during adolescence. The performances of 446 males (12–19 years old) competing in seven individual events (50, 100 and 200 m freestyle; 100 m backstroke, breaststroke and butterfly; 200 m individual medley) over an eight-year period at an annual international schools swimming championship, run under FINA regulations, were collected. Quadratic functions for each event were determined using mixed linear models. Thresholds of peak performance were achieved between the ages of 18.5 ± 0.1 years (50 m freestyle and 200 m individual medley) and 19.8 ± 0.1 years (100 m butterfly). The slowest rate of improvement was observed in the 200 m individual medley (20.7%) and the highest in the 100 m butterfly (26.2%). Butterfly does, however, appear to be one of the last strokes in which males specialise. The models may be useful as talent identification tools, as they predict the age at which an average sub-elite swimmer could potentially peak. The expected rate of improvement could serve as a tool with which to monitor and evaluate benchmarks.
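
The peak-age estimates quoted here follow directly from the vertex of each fitted quadratic; a minimal sketch with made-up coefficients for a swim-time curve (not the study's fitted mixed-model estimates):

```python
def quadratic_peak(a, b, c):
    """Vertex of a performance-vs-age quadratic t(age) = a*age^2 + b*age + c.
    For a swim-time model (a > 0, times in seconds), the vertex gives the age
    of fastest, i.e. peak, performance and the predicted best time."""
    age = -b / (2.0 * a)
    return age, a * age ** 2 + b * age + c

# Hypothetical event curve bottoming out in the reported peak-age range
age, best = quadratic_peak(0.4, -14.8, 190.0)
```

Used as a talent identification tool, a swimmer whose times sit well below the curve for their age is ahead of the average sub-elite trajectory.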

  5. Performance Evaluation Based on EFQM Excellence Model in Sport Organizations

    Directory of Open Access Journals (Sweden)

    Rasoul Faraji

    2012-06-01

    Full Text Available The present study aims to evaluate the performance of the physical education (P.E.) general office of Tehran province through the model of the European Foundation for Quality Management (EFQM). A questionnaire approach was used in this study. The validity of the 50-item EFQM questionnaire was verified by experts, and its reliability was calculated in a pilot study (α=0.928). 95 questionnaires were distributed among subjects, of which 80 were returned and included in the statistical analysis. Of the nine EFQM criteria, the highest score was gained in key performance results (37.62%) and the lowest in people results (27.94%). In total, this organization achieved 337.11 points out of a possible 1000. Additionally, there was a strong relationship (r=0.827, p=0.001) between enablers and results (P<0.05). Based on the scores gained in the criteria, improvement measures are essential for this organization in all criteria, especially the people criterion among the enablers and the people results criterion in the results domain. Furthermore, it is believed that physical education is one of the best fields for application of the excellence model towards performance excellence and better results; hence, the model appears to have high potential for responding to problems commonly seen in the sport sector.
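
    The EFQM total is a weighted sum of criterion achievement levels over 1000 points. A sketch using the pre-2010 EFQM criterion weights and hypothetical achievement percentages; only the two percentages reported above are taken from the record, the rest are invented:

```python
# Pre-2010 EFQM criterion weights (1000 points total); the percentages are
# hypothetical achievement levels, except people/key results from the record.
weights = {
    "leadership": 100, "policy_strategy": 80, "people": 90,
    "partnerships_resources": 90, "processes": 140,
    "customer_results": 200, "people_results": 90,
    "society_results": 60, "key_performance_results": 150,
}
achieved_pct = {
    "leadership": 35.0, "policy_strategy": 33.0, "people": 30.0,
    "partnerships_resources": 34.0, "processes": 36.0,
    "customer_results": 32.0, "people_results": 27.94,
    "society_results": 31.0, "key_performance_results": 37.62,
}
total = sum(weights[c] * achieved_pct[c] / 100.0 for c in weights)
print("EFQM score: %.1f / 1000" % total)
```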

  6. Performance of chromatographic systems to model soil-water sorption.

    Science.gov (United States)

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

    A systematic approach for evaluating the goodness of chromatographic systems to model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions to variance are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in the emulation of soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecyl sulfate and sodium taurocholate provide the most precise correlation models. They have been shown to predict the soil-water sorption coefficients of several tested herbicides well. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients.

  7. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives regardless of their nature and variety, and even surpassing them. Improving performance is one of the major goals of any company. Achieving global performance means not only attaining economic performance; other functions must also be taken into account, such as quality, delivery, costs, and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system, given its very poor results. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency, and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function with linear programming yielded the precise quantities to be produced and delivered each month by the industrial system in order to minimize transport costs, satisfy customer orders, and control stock. The optimization led to a substantial improvement in all four performance indicators concerning deliveries.
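
    A delivery plan that minimizes transport cost subject to capacity and order constraints is a linear program. To stay dependency-free, the sketch below solves a tiny balanced transportation instance by exhaustive enumeration; in practice an LP solver such as scipy.optimize.linprog would be used. All quantities and costs are invented.

```python
from itertools import product

# Tiny balanced delivery problem (hypothetical data): two plants with
# monthly capacities, three customers with orders, unit transport costs.
supply = [30, 40]            # units available at plants A, B
demand = [20, 25, 25]        # units ordered by customers 1..3
cost = [[2, 4, 5],           # cost[plant][customer]
        [3, 1, 7]]

best = None
# Enumerate plant A's deliveries; plant B's follow from the demand balance.
for a1, a2 in product(range(demand[0] + 1), range(demand[1] + 1)):
    a3 = supply[0] - a1 - a2
    if not 0 <= a3 <= demand[2]:
        continue
    b = [demand[0] - a1, demand[1] - a2, demand[2] - a3]
    total = sum(cost[0][j] * [a1, a2, a3][j] + cost[1][j] * b[j]
                for j in range(3))
    if best is None or total < best[0]:
        best = (total, (a1, a2, a3), tuple(b))

print("minimum transport cost:", best[0], "plan:", best[1], best[2])
```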

  8. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approaches and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in the sub-threshold domain in order to develop efficient compact models. The proposed analytical approach gives physical insight into the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  9. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  10. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While much empirical work has centered on market orientation, the generalisability of its impact on the performance of Food and Beverages organisations in the Nigerian context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approach). Data was collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the Statistical Package for Social Sciences (SPSS version 10). Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation, and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of knowledge.

  11. Predicting optimum vortex tube performance using a simplified CFD model

    Energy Technology Data Exchange (ETDEWEB)

    Karimi-Esfahani, M; Fartaj, A.; Rankin, G.W. [Univ. of Windsor, Dept. of Mechanical, Automotive and Materials Engineering, Windsor, Ontario (Canada)]. E-mail: mki_60@hotmail.com

    2004-07-01

    The Ranque-Hilsch tube is a particular type of vortex tube device. The flow enters the device tangentially near one end and exits from the open ends of the tube. The inlet air is of a uniform temperature throughout while the outputs are of different temperatures. One outlet is hotter and the other is colder than the inlet air. This device has no moving parts and does not require any additional power for its operation other than that supplied to the device to compress the inlet air. It has, however, not been widely used, mainly because of its low efficiency. In this paper, a simplified 2-dimensional computational fluid dynamics model for the flow in the vortex tube is developed using FLUENT. This model makes use of the assumption of axial symmetry throughout the entire flow domain. Compared to a three-dimensional computational solution, the simplified model requires significantly less computational time. This is important because the model is to be used for an optimization study. A user-defined function is generated to implement a modified version of the k-epsilon model to account for turbulence. This model is validated by comparing a particular solution with available experimental data. The variation of cold temperature drop and efficiency of the device with orifice diameter, inlet pressure and cold mass flow ratio qualitatively agree with experimental results. Variation of these performance indices with tube length did not agree with the experiments for small values of tube length. However, it did agree qualitatively for large values. (author)

  12. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operating gain could be increased to €535 804 if the laboratory's strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal year.
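
    The scenario logic reduces to a profit-and-loss identity with percentage shifts. A sketch: the baseline revenue and expense figures are invented, chosen only so that the baseline profit matches the €470 723 reported above, and the scenario shifts are likewise illustrative.

```python
# Profit-and-loss scenario model in the spirit of the study: operating
# profit = total revenue - total expenses, with percentage shifts applied
# per SWOT scenario. All figures are illustrative, not the hospital's data.
baseline = {"revenue": 2_000_000.0, "expenses": 1_529_277.0}  # EUR

def operating_profit(revenue_shift=0.0, expense_shift=0.0):
    """Shifts are fractions, e.g. -0.10 means revenue falls by 10%."""
    revenue = baseline["revenue"] * (1 + revenue_shift)
    expenses = baseline["expenses"] * (1 + expense_shift)
    return revenue - expenses

base = operating_profit()
threats = operating_profit(revenue_shift=-0.15, expense_shift=0.10)
strengths = operating_profit(revenue_shift=0.05, expense_shift=-0.02)

print("baseline %.0f, threats %.0f, strengths %.0f" % (base, threats, strengths))
```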

  13. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operating gain could be increased to €535 804 if the laboratory's strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. Conclusion The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal year.

  14. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison.

  16. Performance analysis of a medical record exchanges model.

    Science.gov (United States)

    Huang, Ean-Wen; Liou, Der-Ming

    2007-03-01

    Electronic medical record exchange among hospitals can provide more information for physician diagnosis and reduce the costs of duplicate examinations. In this paper, we propose and implement a medical record exchange model. In our design, exchange interface servers (EISs) allow hospitals to manage information communication through the intra- and inter-hospital networks linked with a medical records database, while an index service center manages the EISs and publishes their addresses and public keys. The prototype system has been implemented to generate, parse, and transfer Health Level Seven (HL7) query messages. Moreover, the system can encrypt and decrypt a message using a public-key encryption algorithm. Queueing theory is applied to evaluate the performance of the proposed model. We estimated the service time for each queue of the CPU, database, and network, and measured the response time and possible bottlenecks of the model. The model is estimated to have the capacity to process the medical records of about 4000 patients/h in a 1-MB network backbone environment, which comprises about 4% of the total outpatients in Taiwan.
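
    Treating each stage a request visits as an M/M/1 queue, the mean response time is the sum of S/(1 − λS) over the queues, and the slowest stage caps throughput. A sketch with assumed service times, not the paper's measurements:

```python
# Open queueing sketch: a record-exchange request visits the CPU, the
# database, and the network in turn. Modelling each stage as an M/M/1
# queue, the mean response time is the sum of S_i / (1 - lambda * S_i).
service_s = {"cpu": 0.10, "database": 0.40, "network": 0.30}  # s/request

def response_time(arrival_rate):
    """Mean response time (s) at the given arrival rate (requests/s)."""
    total = 0.0
    for s in service_s.values():
        rho = arrival_rate * s
        if rho >= 1.0:
            return float("inf")  # this stage saturates: the bottleneck
        total += s / (1.0 - rho)
    return total

capacity = 1.0 / max(service_s.values())   # bottleneck stage limit
print("bottleneck capacity: %.2f req/s (%.0f req/h)"
      % (capacity, capacity * 3600))
print("response time at 80%% load: %.2f s" % response_time(0.8 * capacity))
```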

  17. Key performance indicators in hospital based on balanced scorecard model

    Directory of Open Access Journals (Sweden)

    Hamed Rahimi

    2017-01-01

    Full Text Available Introduction: Performance measurement is receiving increasing attention all over the world. Nowadays, in many organizations, irrespective of type or size, performance evaluation is the main concern and a key issue for top administrators. The purpose of this study is to organize suitable key performance indicators (KPIs) for hospitals’ performance evaluation based on the balanced scorecard (BSC). Method: This is a mixed-method study. In order to identify the hospital performance indicators (HPIs), related literature was first reviewed and then the experts’ panel and Delphi method were used. In this study, two rounds were needed to reach the desired level of consensus. The experts rated the importance of the indicators on a five-point Likert scale. In the consensus calculation, the consensus percentage was calculated by classifying the values 1-3 as not important (0) and 4-5 as important (1). The simple additive weighting technique was used to rank the indicators and select the hospital KPIs. The data were analyzed with Excel 2010 software. Results: About 218 indicators were obtained from a review of the selected literature. Through the internal expert panel, 77 indicators were selected. Finally, 22 were selected as KPIs for hospitals. Ten indicators were selected in the internal process perspective, and 5, 4, and 3 indicators in the finance, learning and growth, and customer perspectives, respectively. Conclusion: This model can be a useful tool for evaluating and comparing the performance of hospitals. However, the model is flexible and can be adjusted according to differences in the target hospitals. This study can be beneficial for hospital administrators, and it can help them to change their perspective about performance evaluation.
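
    Simple additive weighting reduces to normalizing each criterion and taking a weighted sum. A sketch with invented expert ratings and criterion weights:

```python
# Simple additive weighting (SAW) sketch for ranking candidate indicators.
# Expert ratings (1-5) and criterion weights are hypothetical.
indicators = {
    "bed occupancy rate":     {"importance": 4.6, "feasibility": 4.0},
    "average length of stay": {"importance": 4.8, "feasibility": 4.5},
    "patient satisfaction":   {"importance": 4.2, "feasibility": 3.5},
    "staff turnover":         {"importance": 3.9, "feasibility": 4.1},
}
weights = {"importance": 0.6, "feasibility": 0.4}

def saw_score(ratings):
    # Normalise each criterion to [0, 1] against the 5-point scale,
    # then take the weighted sum.
    return sum(weights[c] * ratings[c] / 5.0 for c in weights)

ranked = sorted(indicators, key=lambda k: saw_score(indicators[k]),
                reverse=True)
for name in ranked:
    print("%.3f  %s" % (saw_score(indicators[name]), name))
```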

  18. Performance Evaluation of Photovoltaic Models Based on a Solar Model Tester

    Directory of Open Access Journals (Sweden)

    Salih Mohammed Salih

    2012-07-01

    Full Text Available The performances of 130 W (Solara) and 100 W (Sunworth) PV solar modules are evaluated using a single-diode equivalent circuit. The equivalent circuit is able to simulate both the I–V and P–V characteristic curves, and is used to study the effect of the operating temperature, diode ideality factor, series resistance, and solar irradiance level on the model performance. The results of the PV characteristic curves are compared with the parameters from the manufacturers of each model. Afterwards, the Solara PV model is tested under different irradiance levels. The relationship between the model power and its current under different irradiance levels is plotted, so that if a solar power meter (pyrheliometer) is not available, the irradiance-current (G–I) curve can be used to measure solar radiation power without one. The measurement is achieved by moving the solar panel by a certain angle toward the solar radiation and then measuring the corresponding current.
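
    The single-diode model is I = Iph − I0(exp((V + I·Rs)/a) − 1); because I appears on both sides, it can be solved by fixed-point iteration. A sketch with illustrative parameters, not the Solara or Sunworth datasheet values:

```python
import math

# Single-diode PV model (illustrative parameters):
# I = Iph - I0 * (exp((V + I*Rs) / a) - 1), solved by fixed-point iteration.
Iph = 5.0               # photo-current at the given irradiance (A)
I0 = 1e-9               # diode saturation current (A)
a = 1.3 * 36 * 0.0257   # n * Ns * Vt: ideality factor, 36 cells, thermal voltage
Rs = 0.2                # series resistance (ohm)

def current(V, iters=200):
    """Module current (A) at terminal voltage V."""
    I = Iph
    for _ in range(iters):
        I = Iph - I0 * (math.exp((V + I * Rs) / a) - 1.0)
    return I

Voc = a * math.log(Iph / I0 + 1.0)   # open-circuit voltage estimate
Isc = current(0.0)                   # short-circuit current, close to Iph
print("Isc %.3f A, Voc %.2f V, P(17 V) %.1f W" % (Isc, Voc, 17.0 * current(17.0)))
```

Sweeping V from 0 to Voc with this function traces the I–V curve, and V·I(V) the P–V curve.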

  19. Beamforming in Ad Hoc Networks: MAC Design and Performance Modeling

    Directory of Open Access Journals (Sweden)

    Khalil Fakih

    2009-01-01

    Full Text Available We examine in this paper the benefits of beamforming techniques in ad hoc networks. We first devise a novel MAC paradigm for ad hoc networks that use these techniques in a multipath fading environment. In such networks, the use of conventional directional antennas does not necessarily improve the system performance. On the other hand, exploiting the potential benefits of smart antenna systems, and especially beamforming techniques, requires prior knowledge of the physical channel. Our proposition performs channel estimation and radio resource sharing jointly. We validate the effectiveness of the proposed MAC and evaluate the effects of channel estimation on network performance. We then present an accurate analytical model for the performance of the IEEE 802.11 MAC protocol. We extend the latter model, by introducing the fading probability, to derive the saturation throughput for our proposed MAC when the simplest beamforming strategy is used in real multipath fading ad hoc networks. Finally, numerical results validate our proposition.
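
    Saturation-throughput analyses of 802.11 DCF build on the classic fixed point between the per-station transmission probability τ and the conditional collision probability p. A sketch of that baseline fixed point, without the fading extension the paper introduces; the contention parameters are the usual textbook values:

```python
# Bianchi-style fixed point for saturated IEEE 802.11 DCF (sketch).
W, m, n = 32, 5, 10   # min contention window, backoff stages, stations

def tau_of(p):
    # Transmission probability of a station seeing collision probability p
    return (2 * (1 - 2 * p) /
            ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)))

def solve(n):
    """Bisection on g(tau) = tau - tau_of(p(tau))."""
    lo, hi = 0.0, 0.999999
    for _ in range(200):
        mid = (lo + hi) / 2
        p = 1 - (1 - mid) ** (n - 1)
        if mid - tau_of(p) < 0:
            lo = mid
        else:
            hi = mid
    return mid, 1 - (1 - mid) ** (n - 1)

tau, p = solve(n)
p_tr = 1 - (1 - tau) ** n                      # at least one station transmits
p_s = n * tau * (1 - tau) ** (n - 1) / p_tr    # ...and exactly one succeeds
print("tau=%.4f, p=%.4f, P(success | busy slot)=%.3f" % (tau, p, p_s))
```

Plugging τ and p into the slot-time accounting then yields the saturation throughput.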

  20. A logistical model for performance evaluations of hybrid generation systems

    Energy Technology Data Exchange (ETDEWEB)

    Bonanno, F.; Consoli, A.; Raciti, A. [Univ. of Catania (Italy). Dept. of Electrical, Electronic, and Systems Engineering; Lombardo, S. [Schneider Electric SpA, Torino (Italy)

    1998-11-01

    In order to evaluate the fuel and energy savings, and to focus on the problems related to the exploitation of combined renewable and conventional energies, a logistical model for hybrid generation systems (HGSs) has been prepared. A software package written in ACSL, allowing easy handling of the models and data of the HGS components, is presented. A special feature of the proposed model is that an auxiliary fictitious source is introduced in order to obtain the electric power balance at the busbars during the simulation and, also, in the case of ill-sized components. The observed imbalance powers are then used to update the system design. As a case study, the simulation program is applied to evaluate the energy performance of a power plant serving a small isolated community, an island in the Mediterranean Sea, in order to establish the potential improvement achievable via an optimal integration of renewable energy sources into conventional plants. Evaluations and comparisons among different-sized wind, photovoltaic, and diesel groups, as well as of different management strategies, have been performed using the simulation package, and are reported and discussed in order to present the track followed to select the final design.
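
    The auxiliary fictitious source is simply the residual of the hourly power balance at the busbars. A sketch with invented 24-hour profiles:

```python
# Hourly power balance with an auxiliary fictitious source, as in the
# logistical model: aux > 0 flags a deficit (undersized components),
# aux < 0 a surplus. The 24-hour profiles below are made up.
load = [40, 38, 37, 36, 36, 38, 45, 55, 60, 62, 63, 64,
        65, 64, 62, 60, 58, 60, 66, 70, 65, 55, 48, 42]   # kW
wind = [20, 22, 25, 24, 20, 18, 15, 12, 10, 10, 12, 14,
        15, 16, 18, 20, 22, 24, 25, 24, 22, 21, 20, 20]
pv   = [0, 0, 0, 0, 0, 2, 6, 12, 18, 24, 28, 30,
        30, 28, 24, 18, 12, 6, 2, 0, 0, 0, 0, 0]
diesel_cap = 25.0   # kW rating of the diesel group

aux = []
for h in range(24):
    residual = load[h] - wind[h] - pv[h]          # what renewables leave
    diesel = min(max(residual, 0.0), diesel_cap)  # diesel fills up to rating
    aux.append(residual - diesel)                 # imbalance at the busbars

deficit_hours = sum(1 for x in aux if x > 0)
print("deficit hours: %d, worst imbalance: %.0f kW" % (deficit_hours, max(aux)))
```

Non-zero entries in `aux` indicate the hours in which the component sizing should be revised.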

  1. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

    Full Text Available Building on prior research related to (1) the impact of information and communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) the performance of MSEs. To achieve this focus, the research investigated evaluation models for understanding the value of ICT ORM in MSEs. Multiple regression, repeated-measures analysis of variance (RM-ANOVA), and repeated-measures multivariate analysis of variance (RM-MANOVA) were performed. The findings revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the payback method (β = 0.410, p < .000). It may thus be inferred that the payback method is the prominent variable explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions of (1) the degree of variability explained and (2) the predictors, the results revealed that the variable contributed approximately 88.4% of the variation in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.

  2. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    Science.gov (United States)

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
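
    Sum of ranking differences compares each parameter's ranking of the models with a reference ranking, commonly derived from the row average. A sketch with invented merit scores; the third column is transformed so that larger is better and is made deliberately discordant:

```python
# Sum of ranking differences (SRD) sketch: each column (performance
# parameter) ranks the models; the reference ranking comes from the row
# average. The smaller the SRD, the closer the parameter is to consensus.
# Values are invented merit scores for 6 models x 3 parameters.
scores = {
    "R2":  [0.91, 0.85, 0.88, 0.79, 0.93, 0.82],
    "Q2":  [0.88, 0.80, 0.86, 0.75, 0.90, 0.78],
    "MAE": [0.84, 0.80, 0.86, 0.75, 0.70, 0.88],  # deliberately discordant
}

def ranks(values):
    """Rank positions (0 = smallest) of each entry."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

n = 6
reference = [sum(col[i] for col in scores.values()) / len(scores)
             for i in range(n)]
ref_rank = ranks(reference)
srd = {name: sum(abs(a - b) for a, b in zip(ranks(col), ref_rank))
       for name, col in scores.items()}
print(sorted(srd.items(), key=lambda kv: kv[1]))
```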

  3. Integrated healthcare networks' performance: a growth curve modeling approach.

    Science.gov (United States)

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  4. Duct thermal performance models for large commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Wray, Craig P.

    2003-10-01

    Despite the potential for significant energy savings by reducing duct leakage or other thermal losses from duct systems in large commercial buildings, California Title 24 has no provisions to credit energy-efficient duct systems in these buildings. A substantial reason is the lack of readily available simulation tools to demonstrate the energy-saving benefits associated with efficient duct systems in large commercial buildings. The overall goal of the Efficient Distribution Systems (EDS) project within the PIER High Performance Commercial Building Systems Program is to bridge the gaps in current duct thermal performance modeling capabilities, and to expand our understanding of duct thermal performance in California large commercial buildings. As steps toward this goal, our strategy in the EDS project involves two parts: (1) developing a whole-building energy simulation approach for analyzing duct thermal performance in large commercial buildings, and (2) using the tool to identify the energy impacts of duct leakage in California large commercial buildings, in support of future recommendations to address duct performance in the Title 24 Energy Efficiency Standards for Nonresidential Buildings. The specific technical objectives for the EDS project were to: (1) Identify a near-term whole-building energy simulation approach that can be used in the impacts analysis task of this project (see Objective 3), with little or no modification. A secondary objective is to recommend how to proceed with long-term development of an improved compliance tool for Title 24 that addresses duct thermal performance. (2) Develop an Alternative Calculation Method (ACM) change proposal to include a new metric for thermal distribution system efficiency in the reporting requirements for the 2005 Title 24 Standards. The metric will facilitate future comparisons of different system types using a common "yardstick". (3) Using the selected near-term simulation approach
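
    The energy stakes of duct performance can be illustrated with a back-of-envelope distribution-efficiency calculation, far simpler than the whole-building simulation the project develops; both loss fractions below are assumptions:

```python
# Back-of-envelope duct distribution efficiency: supply-side losses from
# leakage plus conduction (no regain credited). All fractions are assumed.
def distribution_efficiency(leak_fraction, conduction_loss_fraction):
    """Fraction of coil output that reaches the conditioned zones."""
    return (1.0 - leak_fraction) * (1.0 - conduction_loss_fraction)

for leak in (0.0, 0.1, 0.2):
    eff = distribution_efficiency(leak, 0.05)
    extra_energy = 1.0 / eff - 1.0   # extra coil/fan output to meet the load
    print("leak %3.0f%% -> efficiency %.2f, extra energy %4.1f%%"
          % (leak * 100, eff, extra_energy * 100))
```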

  5. Model for the Analysis of the Company Performance

    Directory of Open Access Journals (Sweden)

    Mădălina DUMBRAVĂ

    2010-08-01

    Full Text Available The analysis of the performance of a firm (company) has a determinant role in setting the strategy to follow, and this is all the more necessary during a period of economic and financial crisis. In the following, I have performed the analysis, based on balance sheet data, for SC DELTA SRL, using a system of relevant indicators whose interpretation allows certain conclusions to be drawn, on the basis of which future development can be forecast. I have tried to use a number of indicators, viewed as a system, which would ultimately define a model for company performance analysis. The research focused on applying this system of indicators to the balance sheet data of SC DELTA SRL.
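The indicator-system idea can be sketched with a few standard balance-sheet ratios; the field names and ratio selection below are illustrative, not taken from the SC DELTA SRL data:

```python
def balance_sheet_indicators(bs):
    """Compute a small system of standard ratios from balance-sheet figures."""
    return {
        # liquidity: ability to cover short-term obligations
        "current_ratio": bs["current_assets"] / bs["current_liabilities"],
        # leverage: share of assets financed by debt
        "debt_ratio": bs["total_liabilities"] / bs["total_assets"],
        # profitability: return on assets
        "roa": bs["net_income"] / bs["total_assets"],
    }
```

Interpreting the ratios together, rather than in isolation, is what turns them into a "system" in the sense of the paper.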

  6. Model helicopter performance degradation with simulated ice shapes

    Science.gov (United States)

    Tinetti, Ana F.; Korkan, Kenneth D.

    1987-01-01

    An experimental program using a commercially available model helicopter has been conducted in the Texas A&M University Subsonic Wind Tunnel to investigate main rotor performance degradation due to generic ice. The simulated ice, including both primary and secondary formations, was scaled by chord from previously documented artificial ice accretions. Base and iced performance data were gathered as functions of fuselage incidence, blade collective pitch, main rotor rotational velocity, and freestream velocity. It was observed that the presence of simulated ice tends to decrease the lift to equivalent drag ratio, as well as thrust coefficient for the range of velocity ratios tested. Also, increases in torque coefficient due to the generic ice formations were observed. Evaluation of the data has indicated that the addition of roughness due to secondary ice formations is crucial for proper evaluation of the degradation in main rotor performance.

  7. A performance measurement using balanced scorecard and structural equation modeling

    Directory of Open Access Journals (Sweden)

    Rosha Makvandi

    2014-02-01

    Full Text Available During the past few years, the balanced scorecard (BSC) has been widely used as a promising method for performance measurement. BSC studies organizations in terms of four perspectives: customer, internal processes, learning and growth, and financial figures. This paper presents a hybrid of BSC and structural equation modeling (SEM) to measure the performance of an Iranian university in the province of Alborz, Iran. The study applies this conceptual method by designing a questionnaire and distributing it among university students and professors. Using the SEM technique, the survey analyzes the data, and the results indicate that the university performed poorly in terms of all four perspectives. The survey then derives improvement targets by presenting the attributes necessary for performance improvement.

  8. The application of DEA model in enterprise environmental performance auditing

    Science.gov (United States)

    Li, F.; Zhu, L. Y.; Zhang, J. D.; Liu, C. Y.; Qu, Z. G.; Xiao, M. S.

    2017-01-01

    As a part of society, enterprises have an inescapable responsibility for environmental protection and governance. This article discusses the feasibility and necessity of enterprise environmental performance auditing and uses the DEA model to calculate the environmental performance of Haier as an example. Most of the reference data are selected and sorted from Haier's environmental reports published in 2008, 2009, 2011 and 2015, and some of the data come from published articles and fieldwork. All the results are calculated with the DEAP software and have high credibility. The analysis results of this article can give corporate managers an idea of how to use environmental performance auditing to adjust their corporate environmental investment capital quotas and change their companies' environmental strategies.
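In the general multi-input, multi-output case, DEA requires solving a small linear program per decision-making unit (DMU); for the single-input, single-output case the CCR efficiency reduces exactly to each DMU's output/input ratio normalized by the best observed ratio. A minimal sketch of that reduction (not the paper's actual Haier data):

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency scores for the single-input, single-output case.

    With one input and one output, the DEA linear program reduces to each
    DMU's output/input ratio divided by the best ratio in the sample, so
    efficient units score exactly 1.0.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For example, two DMUs producing the same output from inputs of 2 and 4 units score 1.0 and 0.5 respectively.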

  9. Advanced transport systems analysis, modeling, and evaluation of performances

    CERN Document Server

    Janić, Milan

    2014-01-01

    This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban area(s), electric and fuel cell passenger cars, high speed tilting trains, High Speed Rail (HSR), Trans Rapid Maglev (TRM), Evacuated Tube Transport system (ETT), advanced commercial subsonic and Supersonic Transport Aircraft (STA), conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans...

  11. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    The paper investigates the effect of climate conditions on the accuracy of PV system performance models (physical and interpolation methods), which are used within a monitoring system as a reference for the power produced by a PV system in order to detect inefficient or faulty operating conditions. The methodological approach is based on comparative tests of the analyzed models applied to two PV plants, installed in the north of Denmark (Aalborg) and in the south of Italy (Agrigento), respectively. The different ambient, operating and installation conditions make it possible to understand how these factors impact the precision and effectiveness of such approaches; among these factors it is worth mentioning the different share of the diffuse component in the yearly global solar radiation. The experimental results show the effectiveness of the proposed approach. In order to have the possibility to analyze and compare...
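The monitoring idea, using a performance model as a reference against which measured output is compared to flag inefficient or faulty operation, can be sketched as follows; the 10% tolerance is an illustrative threshold, not a value from the paper:

```python
def flag_underperformance(measured_kw, modelled_kw, tol=0.10):
    """Flag samples where measured power falls more than tol below the model reference."""
    flags = []
    for meas, ref in zip(measured_kw, modelled_kw):
        # only flag when the model predicts meaningful production
        flags.append(ref > 0 and meas < (1.0 - tol) * ref)
    return flags
```

In practice the tolerance must absorb the model's own error, which is exactly why the paper studies how climate conditions affect model accuracy.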

  12. BISON and MARMOT Development for Modeling Fast Reactor Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gamble, Kyle Allan Lawrence [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Novascone, Stephen Rhead [Idaho National Lab. (INL), Idaho Falls, ID (United States); Medvedev, Pavel G. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    BISON and MARMOT are two codes under development at the Idaho National Laboratory for engineering scale and lower length scale fuel performance modeling. It is desired to add capabilities for fast reactor applications to these codes. The fast reactor fuel types under consideration are metal (U-Pu-Zr) and oxide (MOX). The cladding types of interest include 316SS, D9, and HT9. The purpose of this report is to outline the proposed plans for code development and provide an overview of the models added to the BISON and MARMOT codes for fast reactor fuel behavior. A brief overview of preliminary discussions on the formation of a bilateral agreement between the Idaho National Laboratory and the National Nuclear Laboratory in the United Kingdom is presented.

  13. Performance of enlarged blood pump models with five different impellers.

    Science.gov (United States)

    Chua, L P; Yu, S C; Leo, H L

    2000-01-01

    In earlier studies, a 5:1 enlarged pump model of the Kyoto-NTN Magnetically Suspended Centrifugal Blood Pump had been constructed and the flow characteristics investigated. Although the results obtained were satisfactory, the medium used was air. A 5:1 enlarged pump model using water as the medium was therefore designed and constructed. Five different impeller blade profile designs were used in the present study. By varying (1) the blade profile design: forward, radial, and backward, (2) the number of blades used, and (3) the rotating speed, the flow characteristics of the pump were investigated. It was found that the impeller with the higher number of blades, in both the forward and straight blade profiles, has the best performance.

  14. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    Science.gov (United States)

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  15. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts:
    • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release.
    • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments.
    • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data.
    The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  16. Modeling impact of environmental factors on photovoltaic array performance

    Directory of Open Access Journals (Sweden)

    Jie Yang, Yize Sun, Yang Xu

    2013-01-01

    Full Text Available This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance and the wind speed, on the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8 m/s, and for irradiance values between 200 and 1000 W/m2. The root mean square error (RMSE) between modeled and measured operating temperatures is 1.19%, and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum-power outputs of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established, and a segmented-iteration method is adopted to solve the I-V curve expression. The model I-V and P-V curves are found to coincide well with the measured data points. The RMSE between numerically calculated and experimentally measured maximum-power outputs is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between calculated and measured maximum-power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
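The abstract does not reproduce the correlation itself; a widely used empirical form with the same three inputs is the Sandia module-temperature model. The coefficients below are the published Sandia values for a glass/cell/polymer-sheet module on an open rack, not the paper's fitted values, and the RMSE/MBE metrics quoted in the abstract are sketched alongside:

```python
import math

def module_temperature(irradiance, t_ambient, wind_speed, a=-3.56, b=-0.075):
    """Sandia empirical model: T_module = G * exp(a + b * v) + T_ambient."""
    return irradiance * math.exp(a + b * wind_speed) + t_ambient

def rmse_mbe(modelled, measured):
    """Root mean square error and mean bias error of a model against data."""
    errors = [p - m for p, m in zip(modelled, measured)]
    n = len(errors)
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    mbe = sum(errors) / n
    return rmse, mbe
```

MBE reveals systematic over- or under-prediction that RMSE alone hides, which is why the paper reports both.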

  17. Graphical User Interface for Simulink Integrated Performance Analysis Model

    Science.gov (United States)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for the Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that under all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, in order to save the maximum amount of time and money and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  18. A Hybrid Fuzzy Model for Lean Product Development Performance Measurement

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    In the effort by manufacturing companies to meet emerging consumer demands for mass-customized products, many are turning to the application of lean in their product development process, and this is gradually moving from being a competitive advantage to a necessity. However, due to a lack of clear understanding of lean performance measurements, many of these companies are unable to implement and fully integrate the lean principle into their product development process. The extensive literature shows that only a few studies have focused systematically on lean product development performance (LPDP) evaluation. In order to fill this gap, the study therefore proposes a novel hybrid model based on the Fuzzy Reasoning Approach (FRA) and extensions of the Fuzzy-AHP and Fuzzy-TOPSIS methods for the assessment of LPDP. Unlike the existing methods, the model considers the importance weight of each of the decision makers (experts), since the performance criteria/attributes are required to be rated and these experts have different levels of expertise. The rating is done using a new fuzzy Likert rating scale (membership scale), designed to address problems resulting from information loss/distortion due to closed-form scaling and the ordinal nature of the existing Likert scale.
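The TOPSIS core of such a hybrid can be sketched in crisp form; the fuzzy variant wraps each rating in a triangular fuzzy number and aggregates expert weights, but the underlying ideal-solution logic is the same. The matrix, weights and benefit/cost flags below are illustrative:

```python
import math

def topsis_scores(matrix, weights, benefit):
    """Crisp TOPSIS: closeness of each alternative to the ideal solution (0..1)."""
    n_crit = len(weights)
    # vector-normalize each criterion column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    V = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in V:
        d_plus = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(n_crit)))
        d_minus = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(n_crit)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores
```

An alternative dominated on every benefit criterion scores 0, and the dominating one scores 1.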

  19. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.
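A basic ingredient of cladding ballooning and burst models of this kind is the thin-shell hoop stress compared against a temperature-dependent burst stress. The sketch below is the generic engineering criterion, not BISON's specific correlation:

```python
def hoop_stress(p_inner, p_outer, mid_radius, thickness):
    """Thin-wall hoop stress in the cladding [Pa]: sigma = dP * r / t."""
    return (p_inner - p_outer) * mid_radius / thickness

def predicts_burst(p_inner, p_outer, mid_radius, thickness, burst_stress):
    """Burst is predicted when hoop stress reaches the burst stress,
    which in real codes is evaluated as a function of cladding temperature."""
    return hoop_stress(p_inner, p_outer, mid_radius, thickness) >= burst_stress
```

During ballooning the wall thins and the radius grows, so the hoop stress rises even at constant rod pressure, driving the rod toward the burst criterion.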

  20. Performance Evaluation of the Prototype Model NEXT Ion Thruster

    Science.gov (United States)

    Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The performance testing results of the first prototype model NEXT ion engine, PM1, are presented. The NEXT program has developed the next-generation ion propulsion system to enhance and enable Discovery, New Frontiers, and Flagship-type NASA missions. The PM1 thruster exhibits operational behavior consistent with its predecessors, the engineering model thrusters, with substantial mass savings, enhanced thermal margins, and design improvements for environmental testing compliance. The dry mass of PM1 is 12.7 kg. Modifications made in the thruster design have resulted in improved performance and operating margins, as anticipated. PM1 beginning-of-life performance satisfies all of the electric propulsion thruster mission-derived technical requirements. It demonstrates a wide range of throttleability by processing input power levels from 0.5 to 6.9 kW. At 6.9 kW, the PM1 thruster demonstrates a specific impulse of 4190 s, 237 mN of thrust, and a thrust efficiency of 0.71. The beam profile is flat, with flatness parameters varying from 0.66 at low power to 0.88 at full power, and the advanced ion optics reduce localized accelerator grid erosion and increase the margins for electron backstreaming, impingement-limited voltage, and screen grid ion transparency. The thruster throughput capability is predicted to exceed 750 kg of xenon, equivalent to 36,500 hr of continuous operation at the full-power operating condition.
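The quoted full-power operating point is internally consistent with the standard thrust-efficiency relation, eta = T * Isp * g0 / (2 * P), assuming g0 = 9.80665 m/s². A quick check:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_efficiency(thrust_n, isp_s, power_w):
    """Jet power over input electrical power: eta = T * Isp * g0 / (2 * P)."""
    return thrust_n * isp_s * G0 / (2.0 * power_w)

# PM1 full-power point: 237 mN, 4190 s, 6.9 kW -> approximately 0.71
```

Plugging in 0.237 N, 4190 s and 6900 W gives about 0.706, matching the reported 0.71 to two figures.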

  1. Modeling the performance of coated LPG tanks engulfed in fires

    Energy Technology Data Exchange (ETDEWEB)

    Landucci, Gabriele [CONPRICI - Dipartimento di Ingegneria Chimica, Chimica Industriale e Scienza dei Materiali, Universita di Pisa, via Diotisalvi n.2, 56126 Pisa (Italy); Molag, Menso [Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek TNO, Princetonlaan 6, 3584 CB Utrecht (Netherlands); Cozzani, Valerio, E-mail: valerio.cozzani@unibo.it [CONPRICI - Dipartimento di Ingegneria Chimica, Mineraria e delle Tecnologie Ambientali, Alma Mater Studiorum - Universita di Bologna, Via Terracini 28 - 40131 Bologna (Italy)

    2009-12-15

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fire engulfment scenarios. The model was validated by experimental results. A specific analysis of the performance of four different reference coating materials was then carried out, also defining specific key performance indicators (KPIs) to assess design safety margins in near-miss simulations. The results confirmed the wide influence of coating application on the expected vessel time to failure due to fire engulfment. A quite different performance of the alternative coating materials was evidenced. General correlations were developed among the vessel time to failure and the effective coating thickness in full engulfment scenarios, providing a preliminary assessment of the coating thickness required to prevent tank rupture for a given time lapse. The KPIs defined allowed the assessment of the available safety margins in the reference scenarios analyzed and of the robustness of thermal protection design.

  2. Modeling the performance of coated LPG tanks engulfed in fires.

    Science.gov (United States)

    Landucci, Gabriele; Molag, Menso; Cozzani, Valerio

    2009-12-15

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fire engulfment scenarios. The model was validated by experimental results. A specific analysis of the performance of four different reference coating materials was then carried out, also defining specific key performance indicators (KPIs) to assess design safety margins in near-miss simulations. The results confirmed the wide influence of coating application on the expected vessel time to failure due to fire engulfment. A quite different performance of the alternative coating materials was evidenced. General correlations were developed among the vessel time to failure and the effective coating thickness in full engulfment scenarios, providing a preliminary assessment of the coating thickness required to prevent tank rupture for a given time lapse. The KPIs defined allowed the assessment of the available safety margins in the reference scenarios analyzed and of the robustness of thermal protection design.

  3. Storage Capacity Modeling of Reservoir Systems Employing Performance Measures

    Directory of Open Access Journals (Sweden)

    Issa Saket Oskoui

    2014-12-01

    Full Text Available Developing a prediction relationship for the total (i.e. within-year plus over-year) storage capacity of reservoir systems is beneficial because it can be used as an alternative to detailed reservoir analysis during the design stage, giving the planner an opportunity to examine and compare different cases in a fraction of the time required for a complete analysis where detailed analysis is not necessary. Existing relationships are mostly capable of estimating over-year storage capacity only; total storage capacity can be obtained through relationships for adjusting the over-year capacity, and there is no independent relationship to estimate total storage capacity directly. Moreover, these relationships do not involve the vulnerability performance criterion and have not been verified for Malaysian rivers. In this study two different reservoirs in the southern part of Peninsular Malaysia, Melaka and Muar, are analyzed through a Monte Carlo simulation approach involving performance metrics. Subsequently the storage capacity results of the simulation are compared with those of the well-known existing equations. It is observed that the existing models may not predict total capacity appropriately for Malaysian reservoirs. Consequently, using the simulation results, two separate regression equations are developed to model the total storage capacity of the study reservoirs employing time-based reliability and vulnerability performance measures.
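The Monte Carlo approach rests on a standard behaviour simulation of the storage balance, from which time-based reliability and vulnerability can be read off. A minimal sketch, assuming a uniform demand and defining vulnerability as the largest single-period deficit (one common choice among several):

```python
def behaviour_analysis(inflows, demand, capacity, initial=None):
    """Simulate S_{t+1} = min(S_t + Q_t - D, K) and return
    (time-based reliability, vulnerability)."""
    storage = capacity if initial is None else initial
    failures = 0
    worst_deficit = 0.0
    for inflow in inflows:
        storage = min(storage + inflow - demand, capacity)  # spill at capacity
        if storage < 0.0:                                   # demand not fully met
            failures += 1
            worst_deficit = max(worst_deficit, -storage)
            storage = 0.0
    reliability = 1.0 - failures / len(inflows)
    return reliability, worst_deficit
```

Repeating this over many synthetic inflow sequences, and searching for the smallest capacity meeting target reliability and vulnerability, yields the Monte Carlo capacity estimates the regressions are fitted to.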

  4. From Performance Measurement to Strategic Management Model: Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Cihat Savsar

    2015-03-01

    Full Text Available Abstract: In today's competitive markets, one of the main conditions for the survival of enterprises is an effective performance management system. Decisions must be taken by management according to the performance of assets. In the transition from the industrial society to the information society, the structure of businesses has changed, and the value of non-financial assets has increased. As a result, systems have emerged that are based on intangible assets and measure them, instead of focusing only on tangible assets and their measurement. With economic and technological development, a single-dimensional evaluation of the business is no longer sufficient. Performance evaluation methods can be applied in a business with an integrated approach through their accordance with business strategy, their link to the reward system, and the cause-and-effect links established between performance measures. The balanced scorecard is one of the most commonly used measurement methods. While it was first used in 1992 as a performance measurement tool, today it is also used as a strategic management model alongside its conventional uses. The BSC contains a customer perspective, an internal perspective, and a learning and growth perspective in addition to the financial perspective. The learning and growth perspective is the determinant of the other perspectives: to achieve the objectives set out in the financial perspective, the targets in the other dimensions must first be accomplished. The paper describes how strategy maps establish causal links between performance measures and targets, and how they show the way to achieve the specified goals.

  5. CFD modelling of hydrogen stratification in enclosures: Model validation and application to PAR performance

    Energy Technology Data Exchange (ETDEWEB)

    Hoyes, J.R., E-mail: james.hoyes@hsl.gsi.gov.uk; Ivings, M.J.

    2016-12-15

    Highlights:
    • The ability of CFD to predict hydrogen stratification phenomena is investigated.
    • Contrary to expectation, simulations on tetrahedral meshes under-predict mixing.
    • Simulations on structured meshes give good agreement with experimental data.
    • The CFD model is used to investigate the effects of stratification on PAR performance.
    • Results show stratification can have a significant effect on PAR performance.
    Abstract: Computational Fluid Dynamics (CFD) models are maturing into useful tools for supporting safety analyses. This paper investigates the capabilities of CFD models for predicting hydrogen stratification in a containment vessel using data from the NEA/OECD SETH2 MISTRA experiments. Further simulations are then carried out to illustrate the qualitative effects of hydrogen stratification on the performance of Passive Autocatalytic Recombiner (PAR) units. The MISTRA experiments have well-defined initial and boundary conditions, which makes them well suited for use in a validation study. Results are presented for the sensitivity to mesh resolution and mesh type. Whilst the predictions are shown to be largely insensitive to the mesh resolution, they are surprisingly sensitive to the mesh type. In particular, tetrahedral meshes are found to induce small unphysical convection currents that result in molecular diffusion and turbulent mixing being under-predicted. This behaviour is not unique to the CFD model used here (ANSYS CFX) and furthermore, it may affect simulations run on other non-aligned meshes (meshes that are not aligned perpendicular to gravity), including non-aligned structured meshes. Following existing best practice guidelines can help to identify potential unphysical predictions, but as an additional precaution consideration should be given to using gravity-aligned meshes for modelling stratified flows. CFD simulations of hydrogen recombination in the Becker Technologies THAI facility are presented with high and low PAR positions

  6. SR 97 - Alternative models project. Discrete fracture network modelling for performance assessment of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Dershowitz, B.; Eiben, T. [Golder Associates Inc., Seattle (United States); Follin, S.; Andersson, Johan [Golder Grundteknik KB, Stockholm (Sweden)

    1999-08-01

    As part of studies into the siting of a deep repository for nuclear waste, the Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modeling approaches for geosphere performance assessment for a single hypothetical site. The hypothetical site, arbitrarily named Aberg, is based on parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The Aberg model domain, boundary conditions and canister locations are defined as a common reference case to facilitate comparisons between approaches. This report presents the results of a discrete fracture pathways analysis of the Aberg site, within the context of the SR 97 performance assessment exercise. The Aberg discrete fracture network (DFN) site model is based on consensus Aberg parameters related to the Aespoe HRL site. Discrete fracture pathways are identified from canister locations in a prototype repository design to the surface of the island or to the sea bottom. The discrete fracture pathways analysis presented in this report is used to provide the following parameters for SKB's performance assessment transport codes FARF31 and COMP23:
    * F-factor: flow-wetted surface normalized with regard to flow rate (an indication of the contact area available for diffusion and sorption processes) [T L^-1].
    * Travel time: advective transport time from a canister location to the environmental discharge [T].
    * Canister flux: Darcy flux (flow rate per unit area) past a representative canister location [L T^-1].
    In addition to the above, the discrete fracture pathways analysis in this report also provides information about: additional pathway parameters such as pathway length, pathway width, transport aperture, reactive surface area and transmissivity; the percentage of canister locations with pathways to the surface discharge; the spatial pattern of pathways and pathway discharges; visualization of pathways; and
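For a single pathway made up of discrete fracture segments, the accumulation of the F-factor and advective travel time handed to a transport code can be sketched as follows. The segment representation (length, width, transport aperture, flow rate) and the convention of counting both fracture faces as wetted surface are illustrative assumptions, not the report's exact formulation:

```python
def pathway_parameters(segments):
    """Accumulate F-factor [T/L] and advective travel time [T] along a pathway.

    Each segment is (length, width, transport_aperture, flow_rate);
    both fracture faces contribute to the flow-wetted surface.
    """
    f_factor = 0.0
    travel_time = 0.0
    for length, width, aperture, flow in segments:
        wetted_area = 2.0 * length * width      # two faces of the fracture
        f_factor += wetted_area / flow           # contact area per unit flow
        travel_time += length * width * aperture / flow  # segment volume / flow
    return f_factor, travel_time
```

High-F pathways offer more surface for diffusion and sorption and therefore retard radionuclides more strongly, which is why F is reported alongside the travel time.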

  7. Waterflooding performance using Dykstra-Parsons as compared with numerical model performance

    Energy Technology Data Exchange (ETDEWEB)

    Mobarak, S.

    1975-01-01

    Multilayered models have been used by a number of investigators to represent heterogeneous reservoirs. The purpose of this note is to compare waterflood performance for multilayered systems predicted by the standard Dykstra-Parsons method, by the modified form using the equations given, and by a numerical model. The predicted oil recovery, using Johnson charts or the standard Dykstra-Parsons recovery-modulus chart, is always conservative, if not overly pessimistic. The modified Dykstra-Parsons method, as explained in the text, shows good agreement with the numerical model.
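    The heterogeneity measure underlying both Dykstra-Parsons variants is the coefficient of permeability variation, V = (k50 - k84.1)/k50. A minimal sketch, assuming log-normally distributed layer permeabilities (the layer data are invented for illustration):

```python
import math
import statistics

def dykstra_parsons_v(perms_md):
    """Coefficient of permeability variation from layer permeabilities [md]."""
    sigma = statistics.pstdev(math.log(k) for k in perms_md)  # std dev of ln(k)
    # For a log-normal distribution, k84.1 = k50 * exp(-sigma), hence:
    return 1.0 - math.exp(-sigma)

layers = [10.0, 25.0, 50.0, 100.0, 250.0]   # hypothetical layer permeabilities, md
V = dykstra_parsons_v(layers)               # 0 = homogeneous; approaches 1 when very heterogeneous
```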

  8. Petri Net Modeling and Performance Analyzing for MGC

    Institute of Scientific and Technical Information of China (English)

    HUANG Yongfeng; LI Xing; ZHANG Ke

    2004-01-01

    This article analyses the advantages of a functionally decomposed gateway and introduces the processing of signaling and media streams. A function-separated gateway, consisting of a media gateway controller and a media gateway, is designed and implemented. We then propose a Stochastic Petri-net model of the media gateway controller and derive the mathematical relationship between the load of the gateway controller and factors such as the call initiation rate, the call setup delay, and the call release delay. Finally, we summarize the main factors affecting gateway performance and conclude that decreasing the call setup delay will improve media gateway controller efficiency.

  9. Voxel model in BNCT treatment planning: performance analysis and improvements

    Science.gov (United States)

    González, Sara J.; Carando, Daniel G.; Santa Cruz, Gustavo A.; Zamenhof, Robert G.

    2005-02-01

    In recent years, many efforts have been made to study the performance of treatment planning systems in deriving an accurate dosimetry of the complex radiation fields involved in boron neutron capture therapy (BNCT). The computational model of the patient's anatomy is one of the main factors involved in this subject. This work presents a detailed analysis of the performance of the 1 cm based voxel reconstruction approach. First, a new and improved material assignment algorithm implemented in the NCTPlan treatment planning system for BNCT is described. Based on previous works, the performance of the 1 cm based voxel methods used in the MacNCTPlan and NCTPlan treatment planning systems is compared by standard simulation tests. In addition, the NCTPlan voxel model is benchmarked against in-phantom physical dosimetry at the RA-6 reactor of Argentina. This investigation shows the 1 cm resolution to be accurate enough for all reported tests, even in extreme cases such as a parallelepiped phantom irradiated through one of its sharp edges. This accuracy can be degraded at very shallow depths, in which, to improve the estimates, the anatomy images need to be positioned in a suitable way. Rules for this positioning are presented. The skin is considered one of the organs at risk in all BNCT treatments and, in the particular case of cutaneous melanoma of extremities, limits the dose delivered to the patient. Therefore, the performance of the voxel technique in these shallow regions is analysed in depth. A theoretical analysis is carried out to assess the distortion caused by the homogenization and material percentage rounding processes. Then, a new strategy for the treatment of surface voxels is proposed and tested using two different irradiation problems. For a parallelepiped phantom perpendicularly irradiated with a 5 keV neutron source, the large thermal neutron fluence deviation present at shallow depths (from 54% at 0 mm depth to 5% at 4 mm depth) is reduced to 2% on average

  10. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for ...

  11. Fuzzy regression modeling for tool performance prediction and degradation detection.

    Science.gov (United States)

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using a Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance prediction and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self-Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem into a simplified linear format in order to further increase the accuracy of prediction and the rate of convergence. The efficacy of the proposed FRM is tested through a case study, namely predicting the remaining useful life of a ball nose milling cutter during dry machining of hardened tool steel with a hardness of 52-54 HRC. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior to conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
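    The cluster-then-regress structure described above (SOM partitioning feeding local regression models) can be caricatured in a few lines. This sketch substitutes nearest-centroid partitioning for the SOM and one-variable least squares for the MRM; the data and centroids are invented:

```python
def fit_line(pts):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def piecewise_model(data, centroids):
    """Partition points by nearest centroid, fit one linear model per cell."""
    groups = {c: [] for c in centroids}
    for x, y in data:
        groups[min(centroids, key=lambda c: abs(x - c))].append((x, y))
    return {c: fit_line(pts) for c, pts in groups.items()}

def predict(model, x):
    a, b = model[min(model, key=lambda c: abs(x - c))]
    return a + b * x

# A nonlinear target (y = |x|) handled by two local linear models
data = [(-3.0, 3.0), (-2.0, 2.0), (-1.0, 1.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
model = piecewise_model(data, centroids=[-2.0, 2.0])
```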

  12. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.
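    One common ingredient of coated-particle failure-probability calculations is a Weibull strength model for the brittle coating layers, where the probability of failure under a tensile stress sigma is Pf = 1 - exp(-(sigma/sigma_0)^m). The sketch below is a generic illustration only; the modulus m and characteristic strength sigma_0 are invented values, not PARFUME parameters:

```python
import math

def weibull_failure_probability(sigma_mpa, sigma_0_mpa, m):
    """Two-parameter Weibull failure probability for tensile stress [MPa]."""
    if sigma_mpa <= 0.0:
        return 0.0   # compressive or zero stress: no tensile failure assumed
    return 1.0 - math.exp(-((sigma_mpa / sigma_0_mpa) ** m))

# At the characteristic strength, the failure probability is 1 - 1/e
p_char = weibull_failure_probability(400.0, 400.0, 6.0)
```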

  13. A reformer performance model for fuel cell applications

    Science.gov (United States)

    Sandhu, S. S.; Saif, Y. A.; Fellner, J. P.

    A performance model for a reformer, consisting of the catalytic partial oxidation (CPO), high- and low-temperature water-gas shift (HTWGS and LTWGS), and preferential oxidation (PROX) reactors, has been formulated. The model predicts the composition and temperature of the hydrogen-rich reformed fuel-gas mixture needed for the fuel cell applications. The mathematical model equations, based on the principles of classical thermodynamics and chemical kinetics, were implemented into a computer program. The resulting software was employed to calculate the chemical species molar flow rates and the gas mixture stream temperature for the steady-state operation of the reformer. Typical computed results, such as the gas mixture temperature at the CPO reactor exit and the profiles of the fractional conversion of carbon monoxide, temperature, and mole fractions of the chemical species as a function of the catalyst weight in the HTWGS, LTWGS, and PROX reactors, are here presented at the carbon-to-oxygen atom ratio (C/O) of 1 for the feed mixture of n-decane (fuel) and dry air (oxidant).
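    The shift reactors in such a model are governed by the water-gas shift equilibrium, CO + H2O <-> CO2 + H2. As a hedged illustration (not the authors' kinetic model), the sketch below solves for the equilibrium CO conversion of a feed gas, using Moe's correlation for the equilibrium constant; the feed composition is invented:

```python
import math

def k_wgs(T):
    """Water-gas shift equilibrium constant via Moe's correlation (T in kelvin)."""
    return math.exp(4577.8 / T - 4.33)

def equilibrium_co_conversion(yco, yh2o, yco2, yh2, T):
    """Solve K = (yco2+x)(yh2+x) / ((yco-x)(yh2o-x)) for the extent x
    (moles shifted per mole of feed); valid for K != 1."""
    K = k_wgs(T)
    a = K - 1.0
    b = -(K * (yco + yh2o) + yco2 + yh2)
    c = K * yco * yh2o - yco2 * yh2
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # physical root
    return x / yco   # fractional CO conversion

# Hypothetical reformate entering a high-temperature shift stage at 673 K
conv = equilibrium_co_conversion(0.10, 0.30, 0.05, 0.40, 673.0)
```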

  14. Modelling of Performance of Caisson Type Breakwaters under Extreme Waves

    Science.gov (United States)

    Güney Doğan, Gözde; Özyurt Tarakcıoğlu, Gülizar; Baykal, Cüneyt

    2016-04-01

    Many coastal structures are designed without considering the loads of tsunami-like waves or long waves, although they are constructed in areas prone to encounter these waves. The performance of caisson type breakwaters under extreme swells is tested in the Middle East Technical University (METU) Coastal and Ocean Engineering Laboratory. This paper presents the comparison of pressure measurements taken along the surface of caisson type breakwaters with those obtained from numerical modelling using IH2VOF, as well as the damage behavior of the breakwater under the same extreme swells tested in a wave flume at METU. Experiments are conducted in the 1.5 m wide wave flume, which is divided into two parallel sections (0.74 m wide each). A piston type wave maker located at one end of the flume is used to generate the long wave conditions. Water depth is set to 0.4 m and kept constant during the experiments. A caisson type breakwater is constructed in one side of the divided flume. The model scale, based on the Froude similitude law, is chosen as 1:50. 7 different wave conditions are applied in the tests, with wave periods ranging from 14.6 s to 34.7 s, wave heights from 3.5 m to 7.5 m, and steepnesses from 0.002 to 0.015 in prototype scale. The design wave parameters for the breakwater were a 5 m wave height and a 9.5 s wave period in prototype scale. To determine the damage to the breakwater, which was designed for this wave but tested under swell waves, video and photo analysis as well as breakwater profile measurements before and after each test are performed. Further investigations are carried out on the wave forces acting on the concrete blocks of the caisson structures via pressure measurements on the surfaces of these structures, which are fixed to the channel bottom.
Finally, these pressure measurements will be compared with the results obtained from the numerical study using IH2VOF which is one of the RANS models that can be applied to simulate
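    For readers converting between the prototype and model values quoted above: under Froude similitude at scale 1:50, lengths scale by 1/50 and, since Fr = U/sqrt(gL), times scale by 1/sqrt(50). A minimal sketch applying this to the quoted design wave:

```python
import math

def froude_to_model(length_m=None, time_s=None, scale=50.0):
    """Convert prototype-scale lengths/times to model scale via the Froude law."""
    out = {}
    if length_m is not None:
        out["length_m"] = length_m / scale           # geometric scaling
    if time_s is not None:
        out["time_s"] = time_s / math.sqrt(scale)    # time scales with sqrt(length)
    return out

design = froude_to_model(length_m=5.0, time_s=9.5)   # 5 m, 9.5 s design wave
```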

  15. Cross-Industry Performance Modeling: Toward Cooperative Analysis

    Energy Technology Data Exchange (ETDEWEB)

    H. S. Blackman; W. J. Reece

    1998-10-01

    One of the current unsolved problems in human factors is the difficulty in acquiring information from lessons learned and data collected among human performance analysts in different domains. There are several common concerns and generally accepted issues of importance for human factors, psychology and industry analysts of performance and safety. Among these are the need to incorporate lessons learned in design, to carefully consider implementation of new designs and automation, and the need to reduce human performance-based contributions to risk. In spite of shared concerns, there are several roadblocks to widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible databases exist (Gertman & Blackman, 1994; Kirwan, 1997). There is a need to draw together analysts and analytic methodologies to comprise a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.

  17. Modelling of green roof hydrological performance for urban drainage applications

    Science.gov (United States)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen; Arnbjerg-Nielsen, Karsten; Bergen Jensen, Marina; Binning, Philip John

    2014-11-01

    Green roofs are being widely implemented for stormwater management and their impact on the urban hydrological cycle can be evaluated by incorporating them into urban drainage models. This paper presents a model of green roof long-term and single-event hydrological performance. The model includes surface and subsurface storage components representing the overall retention capacity of the green roof, which is continuously re-established by evapotranspiration. The runoff from the model is described through a non-linear reservoir approach. The model was calibrated and validated using measurement data from 3 different extensive sedum roofs in Denmark. These data consist of high-resolution measurements of runoff, precipitation and atmospheric variables in the period 2010-2012. The hydrological response of green roofs was quantified based on statistical analysis of the results of a 22-year (1989-2010) continuous simulation with Danish climate data. The results show that during single events, the 10 min runoff intensities were reduced by 10-36% for 5-10 year return periods and 40-78% for 0.1-1 year return periods; the runoff volumes were reduced by 2-5% for 5-10 year return periods and 18-28% for 0.1-1 year return periods. Annual runoff volumes were estimated to be 43-68% of the total precipitation. The peak time delay was found to vary greatly from 0 to more than 40 min depending on the type of event, and a general decrease in the time delay was observed for increasing rainfall intensities. Furthermore, the model was used to evaluate the variation of the average annual runoff from green roofs as a function of the total available storage and vegetation type. The results show that even a few millimeters of storage can reduce the mean annual runoff by up to 20% when compared to a traditional roof and that the mean annual runoff is not linearly related to the storage. Green roofs therefore have the potential to be important parts of future urban stormwater management plans.
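    The storage-plus-non-linear-reservoir structure described above can be sketched in a few lines. This is a toy illustration of the concept, not the calibrated Danish model: the parameter values, the explicit time stepping and the treatment of overflow are all assumptions.

```python
def simulate_runoff(rain, et, s_max=20.0, k=0.05, n=1.5):
    """rain, et: mm per time step; returns runoff in mm per time step."""
    s, runoff = 0.0, []
    for p, e in zip(rain, et):
        s = max(0.0, s + p - e)        # evapotranspiration restores capacity
        spill = max(0.0, s - s_max)    # overflow above the retention capacity
        s -= spill
        q = min(s, k * s ** n)         # non-linear reservoir outflow, Q = k*S^n
        s -= q
        runoff.append(q + spill)
    return runoff

rain = [0.0, 10.0, 15.0, 5.0, 0.0, 0.0, 0.0, 0.0]   # hypothetical event, mm
et = [0.2] * len(rain)                               # constant ET for illustration
q = simulate_runoff(rain, et)
```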

  18. LCP- LIFETIME COST AND PERFORMANCE MODEL FOR DISTRIBUTED PHOTOVOLTAIC SYSTEMS

    Science.gov (United States)

    Borden, C. S.

    1994-01-01

    The Lifetime Cost and Performance (LCP) Model was developed to assist in the assessment of Photovoltaic (PV) system design options. LCP is a simulation of the performance, cost, and revenue streams associated with distributed PV power systems. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the PV application. User-specified input parameters are available to describe PV system characteristics, site climatic conditions, utility purchase and sellback rate structures, discount and escalation rates, construction timing, and lifetime of the system. Such details as PV array orientation and tilt angle, PV module and balance-of-system performance attributes, and the mode of utility interconnection are user-specified. LCP assumes that the distributed PV system is utility grid interactive without dedicated electrical storage. In combination with a suitable economic model, LCP can provide an estimate of the expected net present worth of a PV system to the owner, as compared to electricity purchased from a utility grid. Similarly, LCP might be used to perform sensitivity analyses to identify those PV system parameters having significant impact on net worth. The user describes the PV system configuration to LCP via the basic electrical components. The module is the smallest entity in the PV system which is modeled. A PV module is defined in the simulation by its short circuit current, which varies over the system lifetime due to degradation and failure. Modules are wired in series to form a branch circuit. Bypass diodes are allowed between modules in the branch circuits. Branch circuits are then connected in parallel to form a bus. A collection of buses is connected in parallel to form an increment to capacity of the system. By choosing the appropriate series-parallel wiring design, the user can specify the current, voltage, and reliability characteristics of the system. LCP simulation of system performance is site
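    The series-parallel aggregation implied by LCP's wiring description can be sketched simply: modules wired in series limit a branch circuit to roughly the weakest module's current when no bypass diode conducts, and parallel branches sum at the bus. This is a deliberately simplified sketch with invented currents, not LCP's electrical model:

```python
def branch_current(module_isc):
    """Series branch circuit: current limited by the weakest module (no bypass)."""
    return min(module_isc)

def bus_current(branches):
    """Parallel branch circuits: currents add at the bus."""
    return sum(branch_current(b) for b in branches)

# Two branch circuits; the second contains a degraded module (amps)
buses = [[5.0, 4.8, 4.9], [5.1, 5.0, 2.0]]
total = bus_current(buses)
```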

  19. Danish and Brazilian Modeling of Whole-Building Hygrothermal Performance

    DEFF Research Database (Denmark)

    Rode, Carsten; Mendes, Nathan; Grau, Karl

    2006-01-01

    The humidity of rooms and the moisture conditions of materials in the enclosure of buildings depend strongly on each other because of the moisture exchange that takes place over the interior surfaces. These moisture influences also depend strongly on the thermal conditions of indoor spaces and the enclosure ... keep the humidity low and thus reduce the risk of moisture damage in the building enclosure. In either case, the indoor humidity has a direct or indirect impact on the energy performance of the HVAC system of a building. To analyze this situation, one could benefit from some recent developments in integrated computational analysis of the hygrothermal performance of whole buildings. Such developments have led to new hygrothermal models for whole buildings. The paper gives examples of two such recent developments and illustrates some calculation results that can be obtained. Finally, the paper mentions some ...

  20. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

    Current performance prediction analytical models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors such as memory hierarchies or network latency. A natural approach is to associate a different proportionality constant with each basic block, and analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, using this approach implies that the evaluation of parameters must be done for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms. Software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.

  1. Performance assessment modeling of the proposed Genting Island repository facility

    Energy Technology Data Exchange (ETDEWEB)

    Imardjoko, Y.U. [Gadjah Mada Univ., Yogyakarta (Indonesia); Bullen, D.B. [Iowa State Univ., Ames, IA (United States); Yatim, S. [National Atomic Energy Agency, Tangerang (Indonesia)

    1996-12-01

    Indonesia is about to enter the nuclear era with the construction of several nuclear power plants in the near future. Numerous issues, including disposal of high-level radioactive wastes, must be addressed to evaluate the impact of these plants on the environment. This paper reviews the Genting ocean island repository site development plan with respect to three main areas: the inventory of HLRW, the barrier systems (natural and engineered), and the physical condition of the site. The radionuclide inventory and waste form require analyses of the waste package that include selection of container materials, the type of engineered barrier and its predicted performance, and radionuclide release models. Parameters pertinent to the repository site include information pertaining to the geology, hydrology, climatology, and water chemistry of the site. These data are important to aid in the prediction of the long-term performance of the site.

  2. MODEL AND PERFORMANCE ANALYSIS OF FMT SYSTEM AND ORTHOGONAL FILTERBANKS

    Institute of Scientific and Technical Information of China (English)

    Xu Xiaorong; Zheng Baoyu; Wang Lei

    2009-01-01

    An orthogonal multi-carrier modulation scheme, Filtered Multi-Tone (FMT) modulation, is evaluated in this paper. The objective is to model the FMT system with a polyphase filterbank network, design its synthesis/analysis orthogonal filterbanks and analyze their performance. Oversampled and critically sampled cases of the FMT system are discussed in detail. Perfect Reconstruction (PR) properties of the parallel orthogonal subchannels in the critically sampled case are derived from the filterbank polyphase decomposition. Diverse types of prototype filters, including Infinite Impulse Response (IIR) and Finite Impulse Response (FIR) designs, are designed and analyzed. Performance analyses of the orthogonal filterbanks implemented with these prototype filters are presented and compared. Simulation results for the FMT orthogonal filterbanks are presented. Finally, the application prospects of FMT are discussed.

  3. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate each would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
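    The matrix analysis described (47 PSFs scored against 22 tasks on four impact levels) reduces to ranking PSFs by aggregate impact. A toy sketch of that data-analysis step only, with invented names and scores:

```python
def rank_psfs(matrix):
    """matrix: {psf_name: [impact score per task, 0-3]} -> names, worst first."""
    mean_impact = {p: sum(v) / len(v) for p, v in matrix.items()}
    return sorted(mean_impact, key=mean_impact.get, reverse=True)

scores = {   # invented scores on a 0-3 negative-impact scale, four tasks
    "cognitive fatigue": [3, 2, 3, 2],
    "thirst": [1, 1, 2, 1],
    "inadequate training": [2, 3, 3, 3],
}
order = rank_psfs(scores)   # most influential PSF first
```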

  4. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
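    Communication models of this kind typically build on a latency-bandwidth ("alpha-beta") cost per message, T = alpha * messages + bytes / bandwidth, with topology and multicore penalties layered on top. The sketch below is a generic illustration with invented parameter values, not the paper's validated model:

```python
def comm_time(n_msgs, total_bytes, alpha=1e-6, bandwidth=1e10):
    """Latency-bandwidth cost [s]: per-message latency plus transfer time."""
    return n_msgs * alpha + total_bytes / bandwidth

def neighbor_exchange_time(expansion_bytes, neighbors=26):
    """Toy per-rank cost of exchanging multipole expansions with the
    neighboring ranks of a uniform octree partition."""
    return comm_time(neighbors, neighbors * expansion_bytes)
```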

  5. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G. Saulnier and W. Statham

    2006-04-16

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Performance Assessment Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following analogous characteristics as compared to the Yucca Mountain repository site: (1) Analogous source--UO{sub 2} uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geology--Fractured, welded, and altered rhyolitic ash-flow tuffs; (3) Analogous climate--Semiarid to arid; (4) Analogous setting--Volcanic tuffs overlie carbonate rocks; (5) Analogous geochemistry--Oxidizing conditions; and (6) Analogous hydrogeology--The ore deposit lies in the unsaturated zone above the water table.

  6. Modelling of green roofs' hydrologic performance using EPA's SWMM.

    Science.gov (United States)

    Burszta-Adamiak, E; Mrowiec, M

    2013-01-01

    Green roofs significantly increase water retention and thus improve the management of rain water in urban areas. In Poland, as in many other European countries, excess rainwater resulting from snowmelt and heavy rainfall contributes to local flooding in urban areas. Opportunities to reduce surface runoff and flood risk are among the reasons why green roofs are increasingly likely to be used in this country as well. However, there are relatively few data on their in situ performance. In this study the storm water performance of experimental green roof plots was simulated using the Storm Water Management Model (SWMM) with the Low Impact Development (LID) Controls module (version 5.0.022). The model includes many parameters for each green roof layer, but the simulation results were unsatisfactory with respect to the hydrologic response of the green roofs. For the majority of the tested rain events, the Nash coefficient had negative values, indicating a weak fit between observed and simulated flow rates. The complexity of the LID module therefore does not translate into greater accuracy. Further research at a technical scale is needed to determine the role of the green roof slope, vegetation cover and the drying process during inter-event periods.
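    The "Nash coefficient" referred to above is the Nash-Sutcliffe model efficiency, NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2): a value of 1 is a perfect fit, and negative values mean the model predicts worse than simply using the mean of the observations. A minimal sketch with invented flow series:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [0.0, 1.0, 3.0, 2.0, 0.5]
good = nse(obs, [0.1, 0.9, 2.8, 2.1, 0.6])   # close to the observations
bad = nse(obs, [2.0, 2.0, 0.0, 0.0, 2.0])    # poor model: negative NSE
```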

  7. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance.
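    The function linking evapotranspiration to substrate moisture can be illustrated with the simplest common choice, a linear scaling of potential ET by relative moisture content. This linear form and its parameters are an illustrative assumption, not the paper's calibrated function:

```python
def actual_et(pet, moisture, field_capacity):
    """Scale potential ET linearly with relative substrate moisture [mm/day]."""
    return pet * min(1.0, max(0.0, moisture / field_capacity))

# A half-saturated substrate evaporates at half the potential rate
et_half = actual_et(2.0, 10.0, 20.0)
```

    Between storms, repeatedly subtracting this flux from the moisture store is what restores the roof's retention capacity in a continuous simulation.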

  8. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
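    Two of the fitting techniques surveyed, method of moments and maximum likelihood estimation, are one-liners for simple parametric models. A sketch with invented data; for the exponential distribution the two estimators happen to coincide:

```python
import math

def fit_exponential_mle(data):
    """Maximum-likelihood estimate of the exponential rate: 1 / sample mean."""
    return len(data) / sum(data)

def fit_normal_moments(data):
    """Method of moments for a normal: match sample mean and variance."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return mu, math.sqrt(var)

rate = fit_exponential_mle([1.0, 2.0, 3.0])
mu, sd = fit_normal_moments([1.0, 2.0, 3.0])
```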

  9. Simulation Modeling and Performance Evaluation of Space Networks

    Science.gov (United States)

    Jennings, Esther H.; Segui, John

    2006-01-01

In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol

  10. Reduced-complexity modeling of braided rivers: Assessing model performance by sensitivity analysis, calibration, and validation

    Science.gov (United States)

    Ziliani, L.; Surian, N.; Coulthard, T. J.; Tarantola, S.

    2013-12-01

This paper addresses an important question of modeling stream dynamics: How may numerical models of braided stream morphodynamics be rigorously and objectively evaluated against a real case study? Using simulations from the Cellular Automaton Evolutionary Slope and River (CAESAR) reduced-complexity model (RCM) of a 33 km reach of a large gravel bed river (the Tagliamento River, Italy), this paper aims to (i) identify a sound strategy for calibration and validation of RCMs, (ii) investigate the effectiveness of multiperformance model assessments, and (iii) assess the potential of using CAESAR at mesospatial and mesotemporal scales. The approach used has three main steps: first sensitivity analysis (using a screening method and a variance-based method), then calibration, and finally validation. This approach allowed us to analyze 12 input factors initially and then to focus calibration only on the factors identified as most important. Sensitivity analysis and calibration were performed on a 7.5 km subreach using a hydrological time series of 20 months, while validation was carried out on the whole 33 km study reach over a period of 8 years (2001-2009). CAESAR was able to reproduce the macromorphological changes of the study reach and gave good results for annual bed load sediment estimates, which turned out to be consistent with measurements in other large gravel bed rivers, but it showed poorer performance in reproducing the characteristics of the braided channel (e.g., braiding intensity). The approach developed in this study can be effectively applied in other similar RCM contexts, allowing the use of RCMs not only in an explorative manner but also for obtaining quantitative results and scenarios.
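A one-at-a-time screening step of the kind used before calibration can be sketched as follows; the toy model and factor names are placeholders, not actual CAESAR inputs.

```python
def elementary_effects(model, base, delta):
    """Morris-style screening: perturb each input factor by delta in
    turn and record the magnitude of the resulting output change."""
    y0 = model(base)
    effects = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] += delta
        effects[name] = abs(model(perturbed) - y0) / delta
    return effects

# Toy stand-in for a morphodynamic run: output dominated by factor 'a'.
def toy_model(p):
    return 10.0 * p["a"] + 0.1 * p["b"] + 0.5 * p["c"]

effects = elementary_effects(toy_model, {"a": 1.0, "b": 1.0, "c": 1.0}, 0.01)
ranked = sorted(effects, key=effects.get, reverse=True)
```

Ranking the effects identifies the most important factors, on which subsequent calibration effort can then be focused.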

  11. Performance assessment of models to forecast induced seismicity

    Science.gov (United States)

    Wiemer, Stefan; Karvounis, Dimitrios; Zechar, Jeremy; Király, Eszter; Kraft, Toni; Pio Rinaldi, Antonio; Catalli, Flaminia; Mignan, Arnaud

    2015-04-01

Managing and mitigating induced seismicity during reservoir stimulation and operation is a critical prerequisite for many GeoEnergy applications. We are currently developing and validating so-called 'Adaptive Traffic Light Systems' (ATLS), fully probabilistic forecast models that integrate all relevant data on the fly into a time-dependent hazard and risk model. The combined model intrinsically considers both aleatory and model uncertainties; the robustness of the forecast is maximized by using dynamically updated ensemble weighting. At the heart of the ATLS approach are a variety of forecast models that range from purely statistical models, such as flow-controlled Epidemic Type Aftershock Sequence (ETAS) models, to models that consider various physical interaction mechanisms (e.g., pore pressure changes, dynamic and static stress transfer, volumetric strain changes). The automated re-calibration of these models on the fly given data imperfection, degrees of freedom, and time constraints is a sizable challenge, as is the validation of the models for applications outside of their calibrated range (different settings, larger magnitudes, changes in physical processes, etc.). Here we present an overview of the status of model development, calibration and validation. We also demonstrate how such systems can contribute to a quantitative risk assessment and mitigation of induced seismicity in a wide range of applications and time scales.
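One simple way to realize dynamically updated ensemble weighting is to make each model's weight proportional to the likelihood of its recent forecasts; this is a sketch under that assumption, not the actual ATLS scheme, and the scores and rates are invented.

```python
import math

def ensemble_weights(log_likelihoods):
    """Weights proportional to exp(log-likelihood), normalized.
    Subtracting the maximum stabilizes the exponentials."""
    m = max(log_likelihoods)
    w = [math.exp(ll - m) for ll in log_likelihoods]
    z = sum(w)
    return [x / z for x in w]

# Hypothetical recent scores: a statistical (ETAS-type) model vs. a
# physics-based (pore-pressure) model, each with its own rate forecast.
weights = ensemble_weights([-10.2, -12.7])
forecast = weights[0] * 3.1 + weights[1] * 4.5  # combined event-rate forecast
```

The better-scoring model dominates the combined forecast, but the weaker model is never discarded, which keeps the ensemble robust as new data arrive.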

  12. Test of the classic model for predicting endurance running performance.

    Science.gov (United States)

    McLaughlin, James E; Howley, Edward T; Bassett, David R; Thompson, Dixie L; Fitzhugh, Eugene C

    2010-05-01

    To compare the classic physiological variables linked to endurance performance (VO2max, %VO2max at lactate threshold (LT), and running economy (RE)) with peak treadmill velocity (PTV) as predictors of performance in a 16-km time trial. Seventeen healthy, well-trained distance runners (10 males and 7 females) underwent laboratory testing to determine maximal oxygen uptake (VO2max), RE, percentage of maximal oxygen uptake at the LT (%VO2max at LT), running velocity at LT, and PTV. Velocity at VO2max (vVO2max) was calculated from RE and VO2max. Three stepwise regression models were used to determine the best predictors (classic vs treadmill performance protocols) for the 16-km running time trial. Simple Pearson correlations of the variables with 16-km performance showed vVO2max to have the highest correlation (r = -0.972) and %VO2max at the LT the lowest (r = 0.136). The correlation coefficients for LT, VO2max, and PTV were very similar in magnitude (r = -0.903 to r = -0.892). When VO2max, %VO2max at LT, RE, and PTV were entered into SPSS stepwise analysis, VO2max explained 81.3% of the total variance, and RE accounted for an additional 10.7%. vVO2max was shown to be the best predictor of the 16-km performance, accounting for 94.4% of the total variance. The measured velocity at VO2max (PTV) was highly correlated with the estimated velocity at vVO2max (r = 0.8867). Among well-trained subjects heterogeneous in VO2max and running performance, vVO2max is the best predictor of running performance because it integrates both maximal aerobic power and the economy of running. The PTV is linked to the same physiological variables that determine vVO2max.
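The derived quantity vVO2max can be computed directly from VO2max and running economy; the sketch below assumes RE is expressed as an oxygen cost per kilometre, with invented example values rather than the study's data.

```python
def v_vo2max_kmh(vo2max_ml_kg_min, re_ml_kg_km):
    """Velocity at VO2max: maximal aerobic power divided by the oxygen
    cost of running. Units: (ml/kg/min) / (ml/kg/km) -> km/min -> km/h."""
    return vo2max_ml_kg_min / re_ml_kg_km * 60.0

# Hypothetical runner: VO2max 70 ml/kg/min, economy 200 ml/kg/km.
v = v_vo2max_kmh(70.0, 200.0)
```

Because vVO2max divides maximal aerobic power by the economy of running, it integrates both determinants of endurance performance in a single velocity, which is why it predicts time-trial results so well.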

  13. Performance modeling of a wearable brain PET (BET) camera

    Science.gov (United States)

    Schmidtlein, C. R.; Turner, J. N.; Thompson, M. O.; Mandal, K. C.; Häggström, I.; Zhang, J.; Humm, J. L.; Feiglin, D. H.; Krol, A.

    2016-03-01

    Purpose: To explore, by means of analytical and Monte Carlo modeling, performance of a novel lightweight and low-cost wearable helmet-shaped Brain PET (BET) camera based on thin-film digital Geiger Avalanche Photo Diode (dGAPD) with LSO and LaBr3 scintillators for imaging in vivo human brain processes for freely moving and acting subjects responding to various stimuli in any environment. Methods: We performed analytical and Monte Carlo modeling PET performance of a spherical cap BET device and cylindrical brain PET (CYL) device, both with 25 cm diameter and the same total mass of LSO scintillator. Total mass of LSO in both the BET and CYL systems is about 32 kg for a 25 mm thick scintillator, and 13 kg for 10 mm thick scintillator (assuming an LSO density of 7.3 g/ml). We also investigated a similar system using an LaBr3 scintillator corresponding to 22 kg and 9 kg for the 25 mm and 10 mm thick systems (assuming an LaBr3 density of 5.08 g/ml). In addition, we considered a clinical whole body (WB) LSO PET/CT scanner with 82 cm ring diameter and 15.8 cm axial length to represent a reference system. BET consisted of distributed Autonomous Detector Arrays (ADAs) integrated into Intelligent Autonomous Detector Blocks (IADBs). The ADA comprised of an array of small LYSO scintillator volumes (voxels with base a×a: 1.0 50% better noise equivalent count (NEC) performance relative to the CYL geometry, and >1100% better performance than a WB geometry for 25 mm thick LSO and LaBr3. For 10 mm thick LaBr3 equivalent mass systems LSO (7 mm thick) performed ~40% higher NEC than LaBr3. Analytic and Monte Carlo simulations also showed that 1×1×3 mm scintillator crystals can achieve ~1.2 mm FWHM spatial resolution. Conclusions: This study shows that a spherical cap brain PET system can provide improved NEC while preserving spatial resolution when compared to an equivalent dedicated cylindrical PET brain camera and shows greatly improved PET performance relative to a conventional

  14. Instructor’s Performance: A Proposed Model for Online Evaluation

    Directory of Open Access Journals (Sweden)

    Salah Alkhafaji

    2013-10-01

Full Text Available Currently, due to heightened awareness and quality audits, higher education institutions have to keep track of various aspects of institutional performance. One of the most important activities to be analyzed and evaluated is the instructor's classroom performance. As the students are the main stakeholders of the educational process, their views on the instructor, teaching pedagogies and methodologies, and assessment techniques need to be collected and analyzed for achieving the institution's goals and objectives. Students give their opinions on the various performance indicators of the instructor. In general, higher education institutions use various techniques to gather students' evaluations of an instructor's classroom performance. The latest technological developments help in data collection using web technologies. An online system with the required questionnaire and attributes helps higher education institutions collect data easily. Moreover, students can give their opinions without fear, from any place and at any time. In this paper, we identify the major factors and users of an instructor online evaluation system. We also propose a model for such a system with a subsystem interface, an entity relationship diagram and a context diagram.

  15. A CHAID Based Performance Prediction Model in Educational Data Mining

    CERN Document Server

    Ramaswami, M

    2010-01-01

The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey cum experimental methodology was adopted to generate a database, constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values from one form into another and relevant attribute/variable selection. As a result, we had 772 student r...

  16. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    Science.gov (United States)

    Cao, G.

    2015-12-01

All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem given the complex dependence structures and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend the conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models basically assume that a geo-referenced variable primarily depends on its neighborhood (Markov property), and the spatial dependence structure is described via a precision matrix. Recent studies have shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., Kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accounting for heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accounted for by appropriate link functions. A fast alternative to the MCMC framework, the so-called Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the
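The Markov property that the abstract relies on can be illustrated with a minimal conditional-autoregressive sketch: the full conditional mean of a node depends only on its neighbors. The 1-D chain, values, and dependence parameter below are hypothetical, not the Ogallala data.

```python
def gmrf_conditional_mean(values, neighbors, i, rho):
    """Full conditional mean of node i in a CAR-type GMRF with
    dependence parameter rho: an average over its neighbors only."""
    nbrs = neighbors[i]
    return rho * sum(values[j] for j in nbrs) / len(nbrs)

# A 1-D chain 0-1-2: node 1 is conditionally independent of everything
# except its two neighbors (the Markov property).
values = [1.0, 0.0, 3.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
mu1 = gmrf_conditional_mean(values, neighbors, 1, rho=0.9)
```

This neighborhood structure is what makes the precision matrix sparse and GMRF computation fast relative to a dense Gaussian field.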

  17. A dynamic model for performance calculations of grid-connected horizontal axis wind turbines. Pt. 1; Description of the model

    Energy Technology Data Exchange (ETDEWEB)

    Sheinman, Y.; Rosen, A. (Technion-Israel Inst. of Tech., Haifa (Israel). Faculty of Aerospace Engineering)

    1991-01-01

    A new model for performance calculations of grid-connected horizontal axis wind turbines is presented. This model takes into account the important dynamic characteristics of the various components comprising the turbine system, including rotor, gear-box, generator, shafts, couplings and brakes, and the grid. There is a special effort to obtain an appropriate balance between efficiency and accuracy. The model is modular and thus offers an easy implementation of new sub-models for new components, or changing of existing sub-models. The complete model of the wind turbine system is nonlinear and thus complicated. Linearization of this model leads to an eigenvalue problem that helps in understanding the dynamic characteristics of the turbine. A special reduction technique helps in reducing the size of the model and as a result increasing the model efficiency without practically decreasing its accuracy for performance calculations. (author).
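The eigenvalue analysis mentioned above can be illustrated for the simplest case, a two-inertia rotor-shaft-generator model, whose first torsional natural frequency has a closed form; the inertias and stiffness below are invented round numbers, not values from the paper.

```python
import math

def torsional_frequency_hz(j_rotor, j_gen, k_shaft):
    """First torsional natural frequency of a two-inertia drivetrain:
    omega^2 = k * (1/J1 + 1/J2), returned in Hz."""
    omega = math.sqrt(k_shaft * (1.0 / j_rotor + 1.0 / j_gen))
    return omega / (2.0 * math.pi)

# Hypothetical values: rotor 1e5 kg m^2, generator 50 kg m^2 (referred
# through the gearbox), shaft stiffness 1e6 N m/rad.
f = torsional_frequency_hz(1e5, 50.0, 1e6)
```

In a full model with many components, the same information comes from the eigenvalues of the linearized system, which is why linearization helps in understanding the turbine's dynamic characteristics.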

  18. A process-based model for cattle manure compost windrows: Model performance and application

    Science.gov (United States)

    A model was developed and incorporated in the Integrated Farm System Model (IFSM, v.4.3) that simulates important processes occurring during windrow composting of manure. The model, documented in an accompanying paper, predicts changes in windrow properties and conditions and the resulting emissions...

  19. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Repasi, Endre; Bijl, Piet; Labarre, Luc; Wittenstein, Wolfgang; Bürsing, Helge

    2017-05-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short outlook on validation tests and the future potential of simulation for sensor assessment.

  20. Anxiety and Performance: An Endogenous Learning-by-doing Model

    OpenAIRE

    Michael T. Rauh; Giulio Seccia

    2005-01-01

    In this article, we show that a standard economic model, the endogenous learning-by-doing model, captures several major themes from the anxiety literature in psychology. In our model, anxiety is a fully endogenous construct that can be separated naturally into its cognitive and physiological components. As such, our results are directly comparable with hypotheses and evidence from psychology. We show that anxiety can serve a motivating function, which suggests potential applications in the pr...

  1. Role of breathing in cardiac performance: experimental and mathematical models

    Science.gov (United States)

    Tran, Binh Q.; Hoffman, Eric A.

    1999-05-01

    Due to the close proximity of the heart and lungs within a closed chest environment, we expect breathing to affect various cardiac performance parameters and hence cardiac output. We present an integrative approach to study heart-lung interactions, combining a mathematical formulation of the circulation system with imaging techniques using echo-planar magnetic resonance imaging (EPI) and dynamic x-ray CT (EBCT). We hypothesize that appropriate synchronization of mechanical ventilation to cardiac-cycle specific events can improve cardiac function, i.e. stroke volume (SV) and cardiac output (CO). Computational and experimental results support the notion that heart-lung interaction, leading to altered cardiac output associated with inspiration/expiration, is not directly associated with lung inflation/deflation and thus is felt to be more influenced by pleural pressure changes. The mathematical model of the circulation demonstrates the importance of cardiac-cycle specific timing of ventilation on cardiac function and matches with experimentally observed relationships found in animal models studied via EBCT and human studies using EPI. Results show that positive pressure mechanical ventilation timed to systolic events may increase SV and CO by up to 30%, mainly by increased filling of the ventricles during diastole. Similarly, negative pressure (spontaneous) respiration has its greatest effect on ventricular diastolic filling. Cardiac-gated mechanical ventilation may provide sufficient cardiac augmentation to warrant further investigation as a minimally-invasive technique for temporary cardiac assist. Through computational modeling and advanced imaging protocols, we were able to uniquely study heart-lung interactions within the intact milieu of the never-invaded thorax.

  2. PERFORMANCE MODELING AND ANALYSIS OF BLOOD FLOW IN ELASTIC ARTERIES

    Institute of Scientific and Technical Information of China (English)

    Anil Kumar; C.L. Varshney; G.C. Sharma

    2005-01-01

Two different non-Newtonian models for blood flow are considered: first, a simple power-law model displaying shear-thinning viscosity, and second, a generalized Maxwell model displaying both shear-thinning viscosity and oscillating-flow viscoelasticity. These models are used along with a Newtonian model to study sinusoidal flow of blood in rigid and elastic straight arteries in the presence of a magnetic field. The elasticity of blood does not appear to influence its flow behavior under physiological conditions in the large arteries, so a purely viscous shear-thinning model should be quite realistic for simulating blood flow under these conditions. On using the power-law model with high shear rate for sinusoidal flow simulation in elastic arteries, the mean and amplitude of the flow rate were found to be lower for a power-law fluid compared to a Newtonian fluid for the same pressure gradient. The governing equations have been solved by the Crank-Nicolson scheme. The results are interpreted in the context of blood in the elastic arteries, keeping the magnetic effects in view. For physiological flow simulation in the aorta, an increase in mean wall shear stress, but a reduction in peak wall shear stress, were observed for the power-law model compared to a Newtonian fluid model for a matched flow rate waveform. Blood flow in the presence of a transverse magnetic field in an elastic artery is investigated and the influence of factors such as morphology and surface irregularity is evaluated.
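The shear-thinning behavior of the power-law model follows directly from its constitutive relation; the consistency index and exponent below are hedged blood-like values for illustration, not the paper's parameters.

```python
def power_law_viscosity(k, n, shear_rate):
    """Apparent viscosity of a power-law fluid: mu = k * gamma^(n-1).
    For n < 1 the fluid is shear thinning, as blood approximately is."""
    return k * shear_rate ** (n - 1.0)

# Hedged blood-like parameters: consistency k = 0.017 Pa s^n, index n = 0.7.
mu_low = power_law_viscosity(0.017, 0.7, 10.0)    # low shear rate (1/s)
mu_high = power_law_viscosity(0.017, 0.7, 100.0)  # high shear rate (1/s)
```

Viscosity falling with shear rate is what makes the power-law fluid deliver a lower flow rate than a Newtonian fluid of matched low-shear viscosity for the same pressure gradient.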

  3. Forecasting project schedule performance using probabilistic and deterministic models

    Directory of Open Access Journals (Sweden)

    S.A. Abdel Azeem

    2014-04-01

Full Text Available Earned value management (EVM) was originally developed for cost management and has not been widely used for forecasting project duration. In addition, EVM-based formulas for cost or schedule forecasting are still deterministic and do not provide any information about the range of possible outcomes and the probability of meeting the project objectives. The objective of this paper is to develop three models to forecast the estimated duration at completion. Two of these models are deterministic: the earned value (EV) and earned schedule (ES) models. The third model is probabilistic and is developed based on the Kalman filter algorithm and earned schedule management. The accuracies of the EV, ES and Kalman Filter Forecasting Model (KFFM) through the different project periods are assessed and compared with other forecasting methods such as the Critical Path Method (CPM), which makes the time forecast at activity level by revising the actual reporting data for each activity at a certain data date. A case study project is used to validate the results of the three models, and the best model is selected based on the lowest average percentage of error. The results showed that the KFFM developed in this study provides probabilistic prediction bounds of project duration at completion and can be applied through the different project periods with smaller errors than those observed in the EV and ES forecasting models.
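The deterministic earned-schedule forecast described above can be sketched as follows; the planned-value series, status date, and planned duration are invented for illustration.

```python
def earned_schedule(pv_by_period, ev):
    """Earned schedule: the time at which the planned value equals the
    earned value to date, with linear interpolation between periods."""
    for t in range(1, len(pv_by_period)):
        if pv_by_period[t] >= ev:
            prev = pv_by_period[t - 1]
            return (t - 1) + (ev - prev) / (pv_by_period[t] - prev)
    return float(len(pv_by_period) - 1)

# Hypothetical plan: PV at months 0..4; actual EV = 250 at month 3 (AT).
pv = [0.0, 100.0, 200.0, 300.0, 400.0]
es = earned_schedule(pv, 250.0)        # 2.5 months of schedule earned
spi_t = es / 3.0                       # time-based schedule performance index
eac_t = 4.0 / spi_t                    # forecast duration for a 4-month plan
```

Here the project has earned only 2.5 months of schedule in 3 months of elapsed time, so the deterministic ES forecast stretches the 4-month plan to 4.8 months; the KFFM of the paper would add probabilistic bounds around such a point estimate.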

  4. A ’Millipede’ scanner model - Energy consumption and performance

    OpenAIRE

    Engelen, Johan B.C.; Khatib, Mohammed G.

    2008-01-01

This short report (1) describes an energy model for the seek and read/write operations in a mass-balanced Y-scanner for parallel-probe storage by IBM [1] and (2) updates the settings of the MEMS model in DiskSim with recently published figures from this XY-scanner. To speed up system simulations, a straightforward second-order model is used without a control loop. Read/write operation is modeled by quasi-static calculations. To approximate seek behavior, ’bang-bang’ control is assumed; the result...

  5. Methodology for Modeling Building Energy Performance across the Commercial Sector

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, B.; Long, N.; Torcellini, P.; Judkoff, R.; Crawley, D.; Ryan, J.

    2008-03-01

    This report uses EnergyPlus simulations of each building in the 2003 Commercial Buildings Energy Consumption Survey (CBECS) to document and demonstrate bottom-up methods of modeling the entire U.S. commercial buildings sector (EIA 2006). The ability to use a whole-building simulation tool to model the entire sector is of interest because the energy models enable us to answer subsequent 'what-if' questions that involve technologies and practices related to energy. This report documents how the whole-building models were generated from the building characteristics in 2003 CBECS and compares the simulation results to the survey data for energy use.

  6. Externally Fired micro-Gas Turbine: Modelling and experimental performance

    Energy Technology Data Exchange (ETDEWEB)

    Traverso, Alberto; Massardo, Aristide F. [Thermochemical Power Group, Dipartimento di Macchine, Sistemi Energetici e Trasporti, Universita di Genova, Genova (Italy); Scarpellini, Riccardo [Ansaldo Ricerche s.r.l., Genova (Italy)

    2006-11-15

This work presents the steady-state and transient performance obtained by an Externally Fired micro-Gas Turbine (EFmGT) demonstration plant. The plant was designed by Ansaldo Ricerche (ARI) s.r.l. and the Thermochemical Power Group (TPG) of the Universita di Genova, using the in-house TPG codes TEMP (Thermoeconomic Modular Program) and TRANSEO. The plant was based on a recuperated 80kW micro-gas turbine (Elliott TA-80R), which was integrated with the externally fired cycle at the ARI laboratory. The first goal of the plant construction was the demonstration of the EFmGT control system. The performance obtained in the field can be improved in the near future using high-temperature heat exchangers and apt external combustors, which should allow the system to operate at the actual micro-gas turbine inlet temperature (900-950°C). This paper presents the plant layout and the control system employed for regulating the microturbine power and rotational speed. The experimental results obtained by the pilot plant in early 2004 are shown: the feasibility of such a plant configuration has been demonstrated, and the control system has successfully regulated the shaft speed in all the tests performed. Finally, the plant model in TRANSEO, which was formerly used to design the control system, is shown to accurately simulate the plant behaviour both at steady-state and transient conditions. (author)

  7. Computational fluid dynamics model of WTP clearwell: Evaluation of critical parameters influencing model performance

    Energy Technology Data Exchange (ETDEWEB)

    Ducoste, J.; Brauer, R.

    1999-07-01

An analysis of a computational fluid dynamics (CFD) model for a water treatment plant clearwell was performed. Model parameters were analyzed to determine their influence on the effluent residence time distribution (RTD) function. The study revealed that several model parameters could have a significant impact on the shape of the RTD function and consequently raise the level of uncertainty in accurate predictions of clearwell hydraulics. The study also revealed that although the modeler could select a distribution of values for some of the model parameters, most of these values can be ruled out by requiring the difference between the calculated and theoretical hydraulic retention time to be within 5% of the theoretical value.
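The 5% screening rule on hydraulic retention time can be sketched by comparing the first moment of a tracer curve against the theoretical V/Q value; the tracer data and tolerance application below are hypothetical.

```python
def mean_residence_time(times, conc):
    """First moment of a tracer response on a uniform time grid:
    mean residence time = sum(t * C) / sum(C)."""
    num = sum(t * c for t, c in zip(times, conc))
    den = sum(conc)
    return num / den

def within_tolerance(mrt, theoretical_hrt, tol=0.05):
    """Screening rule: keep parameter sets whose calculated retention
    time is within 5% of the theoretical value (V/Q)."""
    return abs(mrt - theoretical_hrt) / theoretical_hrt <= tol

# Hypothetical tracer data around a theoretical HRT of 60 min.
times = [30.0, 45.0, 60.0, 75.0, 90.0]
conc = [1.0, 3.0, 4.0, 3.0, 1.0]
mrt = mean_residence_time(times, conc)
ok = within_tolerance(mrt, 60.0)
```

Parameter sets failing this check would be ruled out before further comparison of the simulated and measured RTD shapes.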

  8. Human Performance Modeling and Simulation for Launch Team Applications

    Science.gov (United States)

    Peaden, Cary J.; Payne, Stephen J.; Hoblitzell, Richard M., Jr.; Chandler, Faith T.; LaVine, Nils D.; Bagnall, Timothy M.

    2006-01-01

    This paper describes ongoing research into modeling and simulation of humans for launch team analysis, training, and evaluation. The initial research is sponsored by the National Aeronautics and Space Administration's (NASA)'s Office of Safety and Mission Assurance (OSMA) and NASA's Exploration Program and is focused on current and future launch team operations at Kennedy Space Center (KSC). The paper begins with a description of existing KSC launch team environments and procedures. It then describes the goals of new Simulation and Analysis of Launch Teams (SALT) research. The majority of this paper describes products from the SALT team's initial proof-of-concept effort. These products include a nominal case task analysis and a discrete event model and simulation of launch team performance during the final phase of a shuttle countdown; and a first proof-of-concept training demonstration of launch team communications in which the computer plays most roles, and the trainee plays a role of the trainee's choice. This paper then describes possible next steps for the research team and provides conclusions. This research is expected to have significant value to NASA's Exploration Program.

  9. Consideration of climate changes in biosphere modelling for performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Semioshkina, Nathascha; Staudt, Christian; Kaiser, Christian [Helmholtz Zentrum Muenchen Deutsches Forschungszentrum fuer Gesundheit und Umwelt GmbH, Neuherberg (Germany); Proehl, Gerhard; Fahrenholz, Christine; Noseck, Ulrich [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2012-12-15

The assessment of the long-term safety of a repository for radioactive or hazardous waste, and with it the development of a safety case, requires a comprehensive system understanding, continuous development of the methods of a safety case, and capable, qualified numerical tools. The objective of the project "Scientific basis for the assessment of the long-term safety of repositories", identification number 02 E 10548, was to follow national and international developments in this area, to evaluate research projects which contribute to knowledge, model approaches and data, and to perform specific investigations to improve the methodologies of the safety case and the long-term safety assessment.

  10. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
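A high-level simulator of this kind starts from a road-load power balance evaluated over a drive cycle; the minimal sketch below uses invented vehicle parameters and is not FASTSim's actual calculation framework.

```python
def road_load_power_kw(v, a, mass, cd_area, crr, rho=1.2, g=9.81):
    """Tractive power at the wheels: (inertia + aerodynamic drag +
    rolling resistance) * speed, returned in kW."""
    force = mass * a + 0.5 * rho * cd_area * v ** 2 + crr * mass * g
    return force * v / 1000.0

# Hypothetical mid-size car cruising at 25 m/s (a = 0): mass 1500 kg,
# drag area Cd*A = 0.65 m^2, rolling resistance coefficient 0.009.
p = road_load_power_kw(v=25.0, a=0.0, mass=1500.0, cd_area=0.65, crr=0.009)
```

Summing this power demand second by second over a drive cycle, and passing it through powertrain efficiency maps, is what lets such tools compare powertrains over thousands of driven miles quickly.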

  11. Hair loss and regeneration performed on animal models.

    Science.gov (United States)

    Orasan, Meda Sandra; Roman, Iulia Ioana; Coneac, Andrei; Muresan, Adriana; Orasan, Remus Ioan

    2016-01-01

Research in the field of reversing hair loss remains a challenging subject. As Minoxidil 2% or 5% and Finasteride are so far the only FDA-approved topical treatments for inducing hair regrowth, research is necessary to improve the therapeutic approach to alopecia. In vitro studies have focused on cultures of a single cell type (dermal papilla) or organ culture of isolated hair follicles. In vivo research on this topic has been performed on mice, rats, hamsters, rabbits, sheep and monkeys, taking into consideration the advantages and disadvantages of each animal model and the depilation options. Further studies are required not only to compare the efficiency of different therapies but, more importantly, to establish their long-term safety.

  12. Physiological fidelity or model parsimony? The relative performance of reverse-toxicokinetic modeling approaches.

    Science.gov (United States)

    Rowland, Michael A; Perkins, Edward J; Mayo, Michael L

    2017-03-11

    Physiologically-based toxicokinetic (PBTK) models are often developed to facilitate in vitro to in vivo extrapolation (IVIVE) using a top-down, compartmental approach, favoring architectural simplicity over physiological fidelity despite the lack of general guidelines relating model design to dynamical predictions. Here we explore the impact of design choice (high vs. low fidelity) on chemical distribution throughout an animal's organ system. We contrast the transient dynamics and steady states of three previously proposed PBTK models of varying complexity in response to chemical exposure. The steady states for each model were determined analytically to predict exposure conditions from tissue measurements. Steady-state whole-body concentrations differ between models, despite identical environmental conditions, a difference that originates from the varying levels of physiological fidelity captured by the models. These differences affect the relative predictive accuracy of the inverted models used in exposure reconstruction to link effects-based exposure data with whole-organism response thresholds obtained from in vitro assay measurements. Our results demonstrate how disregarding physiological fidelity in favor of simpler models affects the internal dynamics and steady-state estimates for chemical accumulation within tissues, which, in turn, poses significant challenges for the exposure reconstruction efforts that underlie many IVIVE methods. Developing standardized systems-level models for ecological organisms would not only ensure predictive consistency among future modeling studies, but also enable pragmatic extrapolation of in vivo effects from in vitro data or modeling of exposure-response relationships.

  13. Blocking performance of the hose model and the pipe model for VPN service provisioning over WDM optical networks

    Science.gov (United States)

    Wang, Haibo; Swee Poo, Gee

    2004-08-01

    We study the provisioning of virtual private network (VPN) service over WDM optical networks. For this purpose, we investigate the blocking performance of the hose model versus the pipe model for the provisioning. Two techniques are presented: an analytical queuing model and a discrete event simulation. The queuing model is developed from the multirate reduced-load approximation technique; the simulation is done with the OPNET simulator. Several experimental situations were examined. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance of both the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance compared with the pipe model. By and large, the blocking probability of the hose model is better than that of the pipe model by a few orders of magnitude, particularly in low-load regions. The flexibility of the hose model, which allows the resources on a link to be shared among all connections, accounts for its superior performance.
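    The sharing gain that favours the hose model can be illustrated with the classical Erlang B formula, which underlies multirate blocking approximations. The sketch below is a simplification of the authors' reduced-load model: it compares one shared channel pool against dedicated per-pair partitions of the same total capacity, with made-up channel counts and loads.

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability via the stable recursion
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Shared pool (hose-like) vs. dedicated partitions (pipe-like):
# 3 traffic streams of 2 Erlangs each on 12 shared vs. 3 x 4 channels.
shared = erlang_b(12, 6.0)
dedicated = erlang_b(4, 2.0)
print(f"shared pool: {shared:.4f}, dedicated: {dedicated:.4f}")
```

Even in this toy setting the shared pool blocks roughly an order of magnitude less often than the partitioned one, consistent with the qualitative advantage reported for the hose model.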

  14. Computational Human Performance Modeling For Alarm System Design

    Energy Technology Data Exchange (ETDEWEB)

    Jacques Hugo

    2012-07-01

    The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations, and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators’ alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and the human workload predicted by the system.
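    A minimal stand-in for such a discrete event simulation is a single-operator alarm queue iterated with the Lindley recursion, sketched below. The arrival and handling rates are invented for illustration and are not taken from the INL model.

```python
import random

def simulate_alarm_queue(n_alarms=10000, arrival_rate=0.5,
                         mean_handling=1.5, seed=1):
    """Single-operator alarm queue via the Lindley recursion:
    wait[k+1] = max(0, wait[k] + service[k] - interarrival[k+1]).
    Returns the mean operator response delay (waiting + handling).
    Rates and times are illustrative, not from the INL study."""
    rng = random.Random(seed)
    wait = 0.0
    total_delay = 0.0
    for _ in range(n_alarms):
        service = rng.expovariate(1.0 / mean_handling)
        total_delay += wait + service
        interarrival = rng.expovariate(arrival_rate)
        wait = max(0.0, wait + service - interarrival)
    return total_delay / n_alarms

# Workload rises sharply as utilisation (rate * handling time) -> 1
for rate in (0.2, 0.4, 0.6):
    print(f"rate={rate}: mean delay {simulate_alarm_queue(arrival_rate=rate):.2f}")
```

Real task-network simulators add branching task logic and operator resource models, but this queueing core already reproduces the key effect: response delays blow up as alarm rate approaches operator capacity.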

  15. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    Directory of Open Access Journals (Sweden)

    E. Suresh Kumar

    2012-01-01

    Full Text Available Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments all of the past outcomes could influence the predictions for the next experiment. In a Markov chain, the outcome of a given experiment can affect the outcome of the next experiment. The system state changes with time, and the state X and time t are two random variables, each of which can be either continuous or discrete. The various degradations of a photovoltaic (PV) system can be viewed as different Markov states, and further degradation can be treated as the outcome of the present state. The PV system is treated as a discrete-state, continuous-time system with four possible states, namely, s1: good condition; s2: partial degradation failures but fully operational; s3: major faults and partially working, hence partial output power; s4: complete failure. The calculation of the reliability of the photovoltaic system is complicated, since the system has elements or subsystems exhibiting dependent failures and involving repair and standby operations. The Markov model is a technique that has much appeal and works well when failure hazards and repair hazards are constant. The usual reliability analysis techniques include FMEA (failure mode and effects analysis), parts count analysis, RBD (reliability block diagram) and FTA (fault tree analysis). These are logical, Boolean and block diagram approaches that never account for the effect of environmental degradation on the performance of the system. This is especially relevant for PV systems, which are operated under harsh environmental conditions. This paper is an insight into the degradation of performance of PV systems, presenting a Markov model of the system by means of the different states and the transitions between these states.
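    The four-state scheme described above can be written down as a continuous-time Markov chain. The sketch below assembles an illustrative generator matrix (the transition and repair rates are invented, not estimated from field data) and solves for the long-run state probabilities.

```python
import numpy as np

# Illustrative transition rates (per year) between the four states
# s1 good, s2 partial degradation, s3 major faults, s4 failed.
# Off-diagonal Q[i, j] is the rate i -> j; repairs return to s1.
Q = np.array([
    [0.0, 0.8, 0.1, 0.02],   # from s1
    [0.5, 0.0, 0.4, 0.05],   # from s2 (0.5 = repair to s1)
    [0.3, 0.0, 0.0, 0.3],    # from s3
    [1.0, 0.0, 0.0, 0.0],    # from s4 (full repair)
], dtype=float)
np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of a generator sum to 0

# Long-run state probabilities: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state probabilities:", pi.round(3))
print("availability (s1+s2+s3):", round(pi[:3].sum(), 3))
```

With these made-up rates the chain spends most of its time in the good and partially degraded states; plugging in rates fitted to monitored PV data would turn the same computation into an availability estimate.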

  16. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G.J. Saulnier Jr; W. Statham

    2006-03-10

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics as compared to the Yucca Mountain repository site. (1) Analogous source: UO₂ uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geologic setting: fractured, welded, and altered rhyolitic ash flow tuffs overlying carbonate rocks; (3) Analogous climate: semiarid to arid; (4) Analogous geochemistry: oxidizing conditions; and (5) Analogous hydrogeology: the ore deposit lies in the unsaturated zone above the water table. The Nopal I deposit is approximately 8 ± 0.5 million years old and has been exposed to oxidizing conditions during the last 3.2 to 3.4 million years. The Pena Blanca Natural Analogue Model considers that the uranium oxide and uranium silicates in the ore deposit were originally analogous to uranium-oxide spent nuclear fuel. The Pena Blanca site has been characterized using field and laboratory investigations of its fault and fracture distribution, mineralogy, fracture fillings, seepage into the mine adits, regional hydrology, and mineralization that shows the extent of radionuclide migration. Three boreholes were drilled at the Nopal I mine site in 2003 and these boreholes have provided samples for lithologic characterization, water-level measurements, and water samples for laboratory

  17. LTSmin: high-performance language-independent model checking

    NARCIS (Netherlands)

    Kant, Gijs; Laarman, Alfons; Meijer, Jeroen; Pol, van de Jaco; Blom, Stefan; Dijk, van Tom; Baier, Christel; Tinelli, Cesare

    2015-01-01

    In recent years, the LTSmin model checker has been extended with support for several new modelling languages, including probabilistic (Mapa) and timed systems (Uppaal). Also, connecting additional language front-ends or ad-hoc state-space generators to LTSmin was simplified using custom C-code. From

  18. Performance Aspects of Orbit Propagation using the Unified State Model

    NARCIS (Netherlands)

    Vittaldev, V.; Mooij, E.; Naeije, M.C.

    2010-01-01

    The Unified State Model is a method for expressing orbits using a set of seven elements. The elements consist of a quaternion and three parameters based on the velocity hodograph. The equations of this model and the background theory necessary to understand them have been shown here. Numerical simul

  19. Spatial flood extent modelling. A performance based comparison

    NARCIS (Netherlands)

    Werner, M.G.F.

    2004-01-01

    The rapid development of Geographical Information Systems (GIS) has together with the inherent spatial nature of hydrological modelling led to an equally rapid development in the integration between GIS and hydrological models. The advantages of integration are particularly apparent in flood extent

  20. Modelling of human transplacental transport as performed in Copenhagen, Denmark

    DEFF Research Database (Denmark)

    Mathiesen, Line; Mørck, Thit Aarøe; Zuri, Giuseppina

    2014-01-01

    Placenta perfusion models are very effective when studying the placental mechanisms in order to extrapolate to real-life situations. The models are most often used to investigate the transport of substances between mother and foetus, including the potential metabolism of these. We have studied th...

  1. A Structural Equation Model for Predicting Business Student Performance

    Science.gov (United States)

    Pomykalski, James J.; Dion, Paul; Brock, James L.

    2008-01-01

    In this study, the authors developed a structural equation model that accounted for 79% of the variability of a student's final grade point average by using a sample size of 147 students. The model is based on student grades in 4 foundational business courses: introduction to business, macroeconomics, statistics, and using databases. Educators and…

  2. A ’Millipede’ scanner model - Energy consumption and performance

    NARCIS (Netherlands)

    Engelen, Johan B.C.; Khatib, Mohammed G.

    2008-01-01

    This short report (1) describes an energy model for the seek and read/write operations in a mass-balanced Y-scanner for parallel-probe storage by IBM [1] and (2) updates the settings of the MEMS model in DiskSim with recently published figures from this XY-scanner. To speed up system simulations, a str

  3. Predicting Transfer Performance: A Comparison of Competing Function Learning Models

    Science.gov (United States)

    McDaniel, Mark A.; Dimperio, Eric; Griego, Jacqueline A.; Busemeyer, Jerome R.

    2009-01-01

    The population of linear experts (POLE) model suggests that function learning and transfer are mediated by activation of a set of prestored linear functions that together approximate the given function (Kalish, Lewandowsky, & Kruschke, 2004). In the extrapolation-association (EXAM) model, an exemplar-based architecture associates trained input…

  4. WRF model performance under flash-flood associated rainfall

    Science.gov (United States)

    Mejia-Estrada, Iskra; Bates, Paul; Ángel Rico-Ramírez, Miguel

    2017-04-01

    Understanding the natural processes that precede the occurrence of flash floods is crucial to improving future flood projections in a changing climate. Numerical weather prediction tools make it possible to determine one of the triggering conditions for these particularly dangerous events, which are difficult to forecast due to their short lead times. However, simulating the spatial and temporal evolution of the rainfall that leads to a rapid rise in river levels requires determining the best model configuration without compromising computational efficiency. The current research presents the results of the first part of a cascade modeling approach, in which the Weather Research and Forecasting (WRF) model is used to simulate the heavy rainfall in the east of the UK in June 2012, when stationary thunderstorms caused 2-hour accumulated values over the city of Newcastle to match those expected in the whole month of June. The optimum model set-up was obtained after extensive testing of physics parameterizations, spin-up times, datasets used as initial conditions, and model resolution and nesting, hence determining the model's sensitivity in reproducing localised events of short duration. The outputs were qualitatively assessed against information from the national weather radar network and quantitatively assessed against interpolated rainfall values from gauges. Statistical and skill score values show that the model is able to produce reliable accumulated precipitation values while explicitly solving the atmospheric equations in high resolution domains, as long as several hydrometeors are considered with a spin-up time that allows the model to assimilate the initial conditions without going too far back in time from the event of interest.
The results from the WRF model will serve as input to run a semi-distributed hydrological model to determine the rainfall-runoff relationship within an uncertainty assessment framework that will allow evaluating the implications of assumptions at

  5. Raindrop size distribution: Fitting performance of common theoretical models

    Science.gov (United States)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling the raindrop size distribution (DSD) is a fundamental issue in connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask are how often and how well such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fits of three common DSD models against 2D Video Disdrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. Using the maximum likelihood method, we fit models based on the lognormal, gamma and Weibull distributions to more than 42,000 one-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov (KS) test. Then, we apply a model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking, for each minute, the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, are adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs can be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
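    The fit-and-test procedure can be sketched as follows, here on synthetic drop diameters rather than 2DVD data (drawn from a gamma law, so gamma is the true family by construction). Note that the plug-in KS p-values computed after fitting the parameters are only indicative, not exact.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic 1-minute drop-diameter sample (mm); real data would come
# from a 2DVD. Drawn here from a gamma law for illustration.
diameters = rng.gamma(shape=3.0, scale=0.5, size=500)

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(diameters, floc=0)        # MLE with origin fixed at 0
    ks_stat, p_value = stats.kstest(diameters, dist.cdf, args=params)
    print(f"{name:9s} KS={ks_stat:.3f} p={p_value:.3f}")
```

Repeating this per one-minute sample and tallying rejections reproduces the structure of the rejection-rate comparison described above, though a rigorous study would correct the KS test for estimated parameters.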

  6. Modeling silica aerogel optical performance by determining its radiative properties

    Directory of Open Access Journals (Sweden)

    Lin Zhao

    2016-02-01

    Full Text Available Silica aerogel has been known as a promising candidate for high performance transparent insulation material (TIM. Optical transparency is a crucial metric for silica aerogels in many solar related applications. Both scattering and absorption can reduce the amount of light transmitted through an aerogel slab. Due to multiple scattering, the transmittance deviates from the Beer-Lambert law (exponential attenuation. To better understand its optical performance, we decoupled and quantified the extinction contributions of absorption and scattering separately by identifying two sets of radiative properties. The radiative properties are deduced from the measured total transmittance and reflectance spectra (from 250 nm to 2500 nm of synthesized aerogel samples by solving the inverse problem of the 1-D Radiative Transfer Equation (RTE. The obtained radiative properties are found to be independent of the sample geometry and can be considered intrinsic material properties, which originate from the aerogel’s microstructure. This finding allows for these properties to be directly compared between different samples. We also demonstrate that by using the obtained radiative properties, we can model the photon transport in aerogels of arbitrary shapes, where an analytical solution is difficult to obtain.
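    A heavily simplified analogue of this absorption/scattering decoupling is the two-flux (Kubelka-Munk) approximation, which can likewise be inverted numerically from a measured (T, R) pair. This is only a sketch of the inverse-problem idea; the paper solves the full 1-D RTE, and the property values below are invented.

```python
import numpy as np
from scipy.optimize import fsolve

def km_forward(K, S, d):
    """Two-flux (Kubelka-Munk) slab transmittance and reflectance for
    absorption K and scattering S (per unit length), thickness d."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    denom = a * np.sinh(b * S * d) + b * np.cosh(b * S * d)
    return b / denom, np.sinh(b * S * d) / denom        # (T, R)

def km_invert(T_meas, R_meas, d):
    """Recover (K, S) from one (T, R) pair by solving the two-flux
    inverse problem; log-parametrised to keep K and S positive."""
    def residual(logp):
        T, R = km_forward(np.exp(logp[0]), np.exp(logp[1]), d)
        return [T - T_meas, R - R_meas]
    logK, logS = fsolve(residual, [-2.0, 0.0])          # rough initial guess
    return float(np.exp(logK)), float(np.exp(logS))

# Round trip with invented aerogel-like properties (units: 1/cm, cm)
T, R = km_forward(K=0.05, S=2.0, d=1.0)
K_fit, S_fit = km_invert(T, R, d=1.0)
print(f"T={T:.3f} R={R:.3f} -> K={K_fit:.3f} S={S_fit:.3f}")
```

Because the recovered K and S are per-unit-length quantities, they are independent of the slab thickness used in the measurement, mirroring the geometry-independence of the radiative properties reported above.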

  7. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  8. Correlation between human observer performance and model observer performance in differential phase contrast CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Garrett, John [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2013-11-15

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers.
The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
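    Of the observers listed above, the non-prewhitening (NPW) observer is simple enough to sketch directly: it applies the known signal as a matched template and scores detectability from the spread of the template responses over noise samples. The disk sizes, contrast and white-noise background below are illustrative, not the DPC-CT noise of the study.

```python
import numpy as np

def npw_dprime(signal, noise_samples):
    """Non-prewhitening observer detectability for an SKE task.
    The NPW observer uses the signal itself as the template, so
    d' = (s . s) / std(template responses to noise)."""
    s = signal.ravel()
    responses = noise_samples.reshape(len(noise_samples), -1) @ s
    return float(s @ s / np.sqrt(responses.var(ddof=1)))

rng = np.random.default_rng(0)
x = np.arange(32) - 16.0
xx, yy = np.meshgrid(x, x)
noise = rng.normal(0.0, 1.0, size=(2000, 32, 32))    # white-noise fields

# At fixed contrast, d' grows with disk area (contrast-detail trend)
for radius in (2.0, 4.0, 8.0):
    disk = 0.1 * ((xx**2 + yy**2) <= radius**2)      # contrast-0.1 disk
    print(f"radius {radius}: d' = {npw_dprime(disk, noise):.2f}")
```

Replacing the white-noise fields with samples of correlated DPC-CT noise is what makes the comparison in the study non-trivial, since the NPW observer ignores the noise correlation structure.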

  9. Dst Index in the 2008 GEM Modeling Challenge - Model Performance for Moderate and Strong Magnetic Storms

    Science.gov (United States)

    Rastaetter, Lutz; Kuznetsova, Maria; Hesse, Michael; Chulaki, Anna; Pulkkinen, Antti; Ridley, Aaron J.; Gombosi, Tamas; Vapirev, Alexander; Raeder, Joachim; Wiltberger, Michael James; Mays, M. L.; Fok, Mei-Ching H.; Weigel, Robert S.; Welling, Daniel T.

    2010-01-01

    The GEM 2008 modeling challenge efforts are expanding beyond comparing in-situ measurements in the magnetosphere and ionosphere to include the computation of indices to be compared. The Dst index measures the largest deviations of the horizontal magnetic field at 4 equatorial magnetometers from the quiet-time background field and is commonly used to track the strength of the magnetic disturbance of the magnetosphere during storms. Models can calculate a proxy Dst index in various ways, including using the Dessler-Parker-Sckopke relation and the energy of the ring current, and Biot-Savart integration of electric currents in the magnetosphere. The GEM modeling challenge investigates 4 space weather events, and we compare models available at CCMC against each other and against the observed values of Dst. Models used include SWMF/BATSRUS, OpenGGCM, LFM, GUMICS (3D magnetosphere MHD models), Fok-RC, CRCM, RAM-SCB (kinetic drift models of the ring current), WINDMI (magnetosphere-ionosphere electric circuit model), and predictions based on an impulse response function (IRF) model and analytic coupling functions with inputs of solar wind data. In addition to the analysis of model-observation comparisons, we look at the way Dst is computed in global magnetosphere models. The default value of Dst computed by the SWMF model is for Bz at the Earth's center. In addition to this, we present results obtained at different locations on the Earth's surface. We choose equatorial locations at local noon, dusk (18:00 hours), midnight and dawn (6:00 hours). The different virtual observatory locations reveal the variation around the Earth-centered Dst value resulting from the distribution of electric currents in the magnetosphere during different phases of a storm.

  10. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Full Text Available Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM simulation. In this study a Regional Atmospheric Climate Model (RACMO2 run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias on which two bias correction methods are applied. The first method corrects for the wet day fraction and wet day average (WD bias correction and the second method corrects for the mean and coefficient of variance (MV bias correction. The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
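    A schematic version of the wet-day (WD) correction idea can be sketched on synthetic daily precipitation series: truncate the smallest simulated amounts to zero until the wet-day fraction matches observations, then rescale the remaining wet days to the observed wet-day mean. The distributions and thresholds below are invented, and this simplification omits details of the paper's exact procedure.

```python
import numpy as np

def wd_correction(sim, obs, wet_threshold=0.1):
    """Wet-day (WD) bias correction sketch: zero out the smallest
    simulated amounts until the simulated wet-day fraction matches
    the observed one, then scale the remaining wet days so their
    mean matches the observed wet-day mean."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    obs_wet = obs >= wet_threshold
    n_wet = int(obs_wet.mean() * len(sim))
    cutoff = np.sort(sim)[len(sim) - n_wet] if n_wet < len(sim) else 0.0
    corrected = np.where(sim >= cutoff, sim, 0.0)
    wet = corrected > 0
    if wet.any():
        corrected[wet] *= obs[obs_wet].mean() / corrected[wet].mean()
    return corrected

rng = np.random.default_rng(7)
# Synthetic daily series (mm/day); the RCM-like series drizzles too often
obs = np.where(rng.random(3650) < 0.4, rng.gamma(0.9, 6.0, 3650), 0.0)
sim = np.where(rng.random(3650) < 0.6, rng.gamma(0.8, 4.0, 3650), 0.0)
corr = wd_correction(sim, obs)
print("wet fractions:", (obs >= 0.1).mean(), (sim > 0).mean(), (corr > 0).mean())
print("means:", obs.mean().round(2), sim.mean().round(2), corr.mean().round(2))
```

As the abstract notes, matching the wet-day statistics this way can still distort the lengths of wet spells, which is why the mean-and-variance (MV) alternative performed differently on temporal patterns.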

  11. Predictive models for population performance on real biological fitness landscapes.

    Science.gov (United States)

    Rowe, William; Wedge, David C; Platt, Mark; Kell, Douglas B; Knowles, Joshua

    2010-09-01

    Directed evolution, in addition to its principal application of obtaining novel biomolecules, offers significant potential as a vehicle for obtaining useful information about the topologies of biomolecular fitness landscapes. In this article, we make use of a special type of model of fitness landscapes, based on finite state machines, which can be inferred from directed evolution experiments. Importantly, the model is constructed only from the fitness data and phylogeny, not sequence or structural information, which is often absent. The model, called a landscape state machine (LSM), has already been used successfully in the evolutionary computation literature to model the landscapes of artificial optimization problems. Here, we use the method for the first time to simulate a biological fitness landscape based on experimental evaluation. We demonstrate in this study that LSMs are capable not only of representing the structure of model fitness landscapes such as NK-landscapes, but also the fitness landscape of real DNA oligomers binding to a protein (allophycocyanin), data we derived from experimental evaluations on microarrays. The LSMs prove adept at modelling the progress of evolution as a function of various controlling parameters, as validated by evaluations on the real landscapes. Specifically, the ability of the model to 'predict' optimal mutation rates and other parameters of the evolution is demonstrated. A modification to the standard LSM also proves accurate at predicting the effects of recombination on the evolution.

  12. Performance Optimization of NEMO Oceanic Model at High Resolution

    Science.gov (United States)

    Epicoco, Italo; Mocavero, Silvia; Aloisio, Giovanni

    2014-05-01

    The NEMO oceanic model is based on the Navier-Stokes equations along with a nonlinear equation of state, which couples the two active tracers (temperature and salinity) to the fluid velocity. The code is written in Fortran 90 and parallelized using MPI. The resolution of the global ocean models used today for climate change studies limits the prediction accuracy. To overcome this limit, a new high-resolution global model, based on NEMO, simulating at 1/16° and 100 vertical levels has been developed at CMCC. The model is computationally and memory intensive, so it requires many resources to be run, and an optimization activity is needed. The strategy requires a preliminary analysis to highlight scalability bottlenecks; this analysis has been performed on a SandyBridge architecture at CMCC. An efficiency of 48% on 7K cores (the maximum available) has been achieved. The analysis has also been carried out at the routine level, so that the improvement actions could be designed for the entire code or for a single kernel. The analysis highlighted, for example, a loss of performance due to the routine used to implement the north fold algorithm (i.e., handling the points at the north pole of the tri-polar grid): an optimization of the routine's implementation is needed. The folding is achieved by considering only the last 4 rows at the top of the global domain and applying a rotation pivoting on the middle point. During the folding, the point at the top left is updated with the value of the point at the bottom right, and so on. The current version of the parallel algorithm is based on domain decomposition: each MPI process takes care of a block of points, and each process can update its points using values belonging to the symmetric process. In the current implementation, each received message is placed in a buffer with a number of elements equal to the total dimension of the global domain.
Each process sweeps the entire buffer, but only a part of that computation is really useful for the

  13. An Integrated Performance Evaluation Model for the Photovoltaics Industry

    Directory of Open Access Journals (Sweden)

    He-Yau Kang

    2012-04-01

    Full Text Available Global warming is causing damaging changes to the climate around the world. Given environmental protection concerns and natural resource scarcity, alternative forms of energy, such as wind energy, fire energy, hydropower, geothermal energy, solar energy, biomass energy, ocean power and natural gas, are gaining attention as means of meeting global energy demands. Since Japan’s nuclear plant disaster in March 2011, people have been demanding good alternative energy resources, which not only produce little or no air pollution and greenhouse gases, but also offer a high level of safety. Solar energy, which depends on an infinite resource, the sun, is one of the most promising renewable energy sources from the perspective of environmental sustainability. Currently, the manufacturing cost of solar cells is still very high, and the power conversion efficiency is low. Therefore, photovoltaics (PV) firms must continue to invest in research and development, commit to product differentiation, achieve economies of scale, and consider the possibility of vertical integration, in order to strengthen their competitiveness and to acquire the maximum benefit from the PV market. This research proposes a performance evaluation model integrating the analytic hierarchy process (AHP) and data envelopment analysis (DEA) to assess the current business performance of PV firms. AHP is applied to obtain experts’ opinions on the importance of the factors, and DEA is used to determine which firms are efficient. A case study is performed on the crystalline silicon PV firms in Taiwan. The findings shall help the firms determine their strengths and weaknesses and provide directions for future improvements in business operations.
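    The AHP half of such a model reduces to an eigenvector computation. The sketch below derives criteria weights and Saaty's consistency ratio from a single pairwise-comparison matrix; the criteria and judgments are invented for illustration, and the DEA stage is omitted.

```python
import numpy as np

# Illustrative pairwise-comparison matrix for three criteria a PV-firm
# evaluation might use (e.g. R&D intensity, scale, vertical integration).
# A[i, j] says how much more important criterion i is than j (Saaty 1-9).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # principal eigenvector -> weights

# Saaty consistency ratio: CI = (lambda_max - n) / (n - 1), RI(3) = 0.58
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "CR:", round(cr, 3))
```

A CR below about 0.1 is conventionally taken to mean the expert judgments are consistent enough to use; the resulting weights would then feed the efficiency comparison performed with DEA.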

  14. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

As part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  15. The Performance of Discrete Models of Low Reynolds Number Swimmers

    CERN Document Server

    Wang, Qixuan

    2015-01-01

Swimming by shape changes at low Reynolds number is widely used in biology, and understanding how the efficiency of movement depends on the geometric pattern of shape changes is important both for understanding the swimming of microorganisms and for designing low-Reynolds-number swimming models. The simplest models of shape change comprise a series of linked spheres that can change their separation and/or their size. Herein we compare the efficiency of three models in which these modes are used in different ways.

  16. Procedure for assessing the performance of a rockfall fragmentation model

    Science.gov (United States)

    Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert

    2017-04-01

A rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. The detached rock mass frequently becomes fragmented when it impacts the slope surface. Considering the fragmentation of the rockfall mass is critical for calculating block trajectories and impact energies, and thus for assessing their potential to cause damage and designing adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls, based on a lumped mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before impact with the ground. An energy threshold determines whether the impacting blocks break or not. The distribution of the initial mass among a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: position of the centre of gravity of the whole block distribution, histogram of block runout, extent and boundaries of the young debris cover over the slope surface, lateral dispersion of trajectories, total number of blocks generated after fragmentation, volume distribution of the generated fragments, the number of blocks and volume passing a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the
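The stochastic mass redistribution described above can be sketched by drawing fragment volumes from a truncated power law and rescaling them so the fragments exactly conserve the detached volume. The exponent, lower cutoff and volumes below are illustrative, not RockGIS parameters:

```python
import random

def fragment(total_volume, n_fragments, exponent=2.0, vmin=0.01, seed=1):
    """Draw fragment volumes from a truncated power law p(v) ~ v^-exponent,
    then rescale so the fragments exactly conserve the detached mass."""
    rng = random.Random(seed)
    # inverse-CDF sampling of a Pareto-type law with lower cutoff vmin
    raw = [vmin * (1.0 - rng.random()) ** (-1.0 / (exponent - 1.0))
           for _ in range(n_fragments)]
    scale = total_volume / sum(raw)
    return [v * scale for v in raw]

# a hypothetical 10 m^3 detached mass broken into 50 fragments
vols = fragment(10.0, 50)
```

In the actual model the number of fragments and the breakage decision would follow from the energy threshold; here both are fixed for illustration.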

  17. AGING PERFORMANCE OF MODEL 9975 PACKAGE FLUOROELASTOMER O-RINGS

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, E.; Daugherty, W.; Skidmore, E.; Dunn, K.; Fisher, D.

    2011-05-31

The influence of temperature and radiation on Viton® GLT and GLT-S fluoroelastomer O-rings is an ongoing research focus at the Savannah River National Laboratory. The O-rings are credited for leaktight containment in the Model 9975 shipping package used for transportation of plutonium-bearing materials. At the Savannah River Site, the Model 9975 packages are being used for interim storage. Primary research efforts have focused on surveillance of O-rings from actual packages, leak testing of seals at bounding aging conditions and the effect of aging temperature on compression stress relaxation behavior, with the goal of service life prediction for long-term storage conditions. Recently, an additional effort to evaluate the effect of aging temperature on the oxidation of the materials has begun. Degradation in the mechanical properties of elastomers is directly related to the oxidation of the polymer. Sensitive measurements of the oxidation rate can be performed in a more timely manner than waiting for a measurable change in mechanical properties, especially at service temperatures. Measuring the oxidation rate therefore provides a means to validate the assumption that the degradation mechanism(s) do not change between the elevated temperatures used for accelerated aging and the lower service temperatures. Monitoring the amount of oxygen uptake by the material over time at various temperatures can provide increased confidence in lifetime predictions. Preliminary oxygen consumption analysis of a Viton GLT-based fluoroelastomer compound (Parker V0835-75) was performed using an Oxzilla II differential oxygen analyzer in the temperature range of 40-120 °C. Early data suggest oxygen consumption rates may level off within the first 100,000 hours (10-12 years) at 40 °C and that sharp changes in the degradation mechanism (stress-relaxation) are not expected over the temperature range examined. This is consistent with the known long-term heat aging resistance of
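Extrapolating a rate measured at an accelerated-aging temperature down to a service temperature is commonly done with an Arrhenius model. A hedged sketch of that extrapolation; the activation energy value is illustrative, not a measured property from this study:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(rate_ref, T_ref_C, T_C, Ea=90e3):
    """Scale a rate from a reference temperature to another temperature,
    assuming a single Arrhenius activation energy Ea (J/mol, illustrative)."""
    T_ref = T_ref_C + 273.15
    T = T_C + 273.15
    return rate_ref * math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

# relative oxidation rate at 40 C storage, given a unit rate measured at 120 C
r40 = arrhenius_rate(1.0, 120.0, 40.0)
```

The validity of such extrapolation rests exactly on the assumption the abstract sets out to check: that the degradation mechanism does not change between the two temperatures.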

  18. Sustainable innovation, business models and economic performance: an overview

    NARCIS (Netherlands)

    Montalvo Corral, C.

    2013-01-01

    Sustainable development requires radical and systemic innovations. Such innovations can be more effectively created and studied when building on the concept of business models. This concept provides firms with a holistic framework to envision and implement sustainable innovations. For researchers,

  19. Comparison of the performance of net radiation calculation models

    DEFF Research Database (Denmark)

    Kjærsgaard, Jeppe Hvelplund; Cuenca, R H; Martinez-Cob, A

    2009-01-01

Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. Daily values of net radiation were calculated using three net outgoing long-wave radiation models and compared to measured values. Four meteorological datasets representing two climate regimes, a sub-humid, high-latitude environment and a semi-arid mid-latitude environment, were used to test the models when meteorological input data is limited. Model predictions were found to have a higher bias and scatter when using summed calculated hourly time steps compared to using daily input data.

  20. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    Science.gov (United States)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning-based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
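The LHS design with a quadratic fit mentioned in the SLM category can be sketched as follows. This is basic stratified LHS without the maximin refinement or Kriging, and a simple analytic function stands in for the compositional simulator:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Basic LHS on [0,1]^d: exactly one sample per equal-probability bin
    in each dimension, with independently shuffled column pairings."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(0)
X = latin_hypercube(20, 2, rng)

def simulator(X):
    # hypothetical stand-in for the full-physics simulator
    return 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1] ** 2

def quad_features(X):
    # quadratic response-surface basis: [1, x1, x2, x1^2, x2^2, x1*x2]
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(quad_features(X), simulator(X), rcond=None)
```

Because the stand-in function lies in the quadratic basis, the fit recovers its coefficients exactly; a real proxy model would carry residual error.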

  1. Model-based analysis of control performance in sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Høyer; Mauricio Iglesias, Miguel; Johansen, N.B.;

    2012-01-01

Design and assessment of control in wastewater systems has to be tackled at all levels, including the supervisory and regulatory levels. We present here an integrated approach to the assessment of control in sewer systems, based on modelling and the use of process control tools to assess the controllability of the process. A case study of a subcatchment area in Copenhagen (Denmark) is used to illustrate the combined approach in modelling of the system and control assessment.

  2. Limits of performance for the model reduction problem of hidden Markov models

    KAUST Repository

    Kotsalis, Georgios

    2015-12-15

    We introduce system theoretic notions of a Hankel operator, and Hankel norm for hidden Markov models. We show how the related Hankel singular values provide lower bounds on the norm of the difference between a hidden Markov model of order n and any lower order approximant of order n̂ < n.
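For intuition, the analogous linear-systems construction can be sketched: the Hankel singular values are the singular values of the Hankel matrix built from the impulse response, and the (n+1)-th value lower-bounds the approximation error of any order-n reduced model. The sketch below uses a scalar LTI system, not a hidden Markov model, with illustrative values:

```python
import numpy as np

# Impulse response (Markov parameters) of a hypothetical stable first-order
# system: h_k = b * a^k. Its Hankel matrix has rank 1.
a, b = 0.8, 0.5
h = np.array([b * a**k for k in range(1, 40)])

N = 20
H = np.array([[h[i + j] for j in range(N)] for i in range(N)])  # Hankel matrix
sigma = np.linalg.svd(H, compute_uv=False)
# sigma[n] lower-bounds the Hankel-norm error of any order-n approximant;
# here sigma[1] ~ 0, reflecting that an order-1 model is exact.
```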

  3. The peer model advantage in infants’ imitation of familiar gestures performed by differently aged models

    Directory of Open Access Journals (Sweden)

Norbert Zmyj

    2012-07-01

Full Text Available Research on infants' imitation of differently aged models, which has predominantly studied object-related actions, has so far led to mixed evidence. Whereas some studies reported an increased likelihood of imitating peer models in contrast to adult models, other studies reported the opposite pattern of results. In the present study, 14-month-old infants were presented with four familiar gestures (e.g., clapping) that were demonstrated by differently aged televised models (peer, older child, adult). Results revealed that infants were more likely to imitate the peer model than the older child or the adult. This result is discussed with respect to a social function of imitation and the cognitive mechanism of imitating familiar behavior.

  4. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
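The cross-stencil update that DiamondTorre reorganizes for GPU memory locality can be sketched in its plain, stepwise-synchronized 1D second-order form (grid and time step values are illustrative):

```python
import numpy as np

def step_wave(u_prev, u_curr, c, dt, dx):
    """One explicit leapfrog step of the 1D wave equation u_tt = c^2 u_xx
    using the classic second-order cross stencil (stable for c*dt/dx <= 1)."""
    r2 = (c * dt / dx) ** 2
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0   # fixed-end boundary conditions
    return u_next

x = np.linspace(0.0, 1.0, 201)
u0 = np.sin(np.pi * x)             # initial displacement
u1 = u0.copy()                     # zero initial velocity (first-order start)
u2 = step_wave(u0, u1, c=1.0, dt=0.004, dx=x[1] - x[0])
```

A traditional implementation sweeps the whole grid once per time step; DiamondTorre instead processes skewed space-time tiles so that several time layers are advanced per pass over memory.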

  5. Using Machine Learning to Create Turbine Performance Models (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub height wind speed, followed by hub height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No new data is required in comparison to the data that are usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce bias in power predictions that arise because of different turbulence and shear at the new site, compared to the test site.
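A regression tree is grown by recursively applying a single variance-reducing split; that elementary operation can be sketched on hypothetical turbine data (all speeds and power values below are invented for illustration):

```python
def best_split(xs, ys):
    """Greedy single split of a 1D predictor minimizing total squared error --
    the elementary operation a regression tree repeats recursively."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = (float("inf"), None, None, None)
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if sse < best[0]:
            best = (sse, (xs[k - 1] + xs[k]) / 2.0, ml, mr)
    return best[1:]   # (threshold, left mean, right mean)

# hypothetical turbine data: power rises sharply past ~5 m/s
speeds = [3.0, 4.0, 4.5, 5.5, 6.0, 7.0]
power = [0.1, 0.1, 0.12, 0.9, 1.0, 1.05]
threshold, p_low, p_high = best_split(speeds, power)
```

A full tree would recurse on each side and split on turbulence intensity and shear as well, which is what lets it outperform a speed-only power curve.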

  6. Plans for performance and model improvements in the LISE++ software

    Science.gov (United States)

    Kuchera, M. P.; Tarasov, O. B.; Bazin, D.; Sherrill, B. M.; Tarasova, K. V.

    2016-06-01

The LISE++ software for fragment separator simulations is undergoing a major update. LISE++ is the standard software used at in-flight separator facilities for predicting beam intensity and purity. The code simulates nuclear physics experiments where fragments are produced and then selected with a fragment separator. A set of modifications to improve the functionality of the code is discussed in this work. These modifications include migration to a modern graphics framework and updated compilers to aid the performance and sustainability of the code. To accommodate the diversity of our users' computer platform preferences, we are extending the software from Windows to a cross-platform application. The calculations of beam transport and isotope production are becoming more computationally intense at the new large-scale facilities. Planned new features include new types of optimization (for example, optimization of ion optics), improvements in reaction models, and new event-generator options. In addition, LISE++ interfaces with control systems are planned. Computational improvements, as well as the schedule for updating this large package, will be discussed.

  7. Computational fluid dynamics analysis of cyclist aerodynamics: performance of different turbulence-modelling and boundary-layer modelling approaches.

    Science.gov (United States)

    Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan

    2010-08-26

    This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, also high-resolution surface pressure measurements on the scale model's surface, i.e. at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports.

  8. A High Resolution Nonhydrostatic Tropical Atmospheric Model and Its Performance

    Institute of Scientific and Technical Information of China (English)

    SHEN Xueshun; Akimasa SUMI

    2005-01-01

A high resolution nonhydrostatic tropical atmospheric model is developed by using a ready-made regional atmospheric modeling system. The motivation is to investigate the convective activities associated with the tropical intraseasonal oscillation (ISO) through a cloud resolving calculation. Due to limitations in computing resources, a 2000 km×2000 km region covering the forefront of an ISO-related westerly is selected as the model domain, in which a cloud-resolving integration with a 5-km horizontal resolution is conducted. The results indicate the importance of stratus-cumulus interactions in the organization of the cloud clusters embedded in the ISO. In addition, comparative integrations with 2-km and 5-km grid sizes are conducted, which suggest no distinctive differences between the two cases, although some finer structures of convection are discernible in the 2-km case. The significance of this study resides in supplying a powerful tool for investigating tropical cloud activities without the controversy of cloud parameterizations. The parallel computing method applied in this model allows sufficient usage of computer memory, which is different from the usual method used when parallelizing regional models. A further simulation for the global tropics with a resolution of around 5 km is being prepared.

  9. Empirical slip and viscosity model performance for microscale gas flows.

    Energy Technology Data Exchange (ETDEWEB)

    Gallis, Michail A.; Boyd, Iain D. (University of Michigan, Ann Arbor, MI); McNenly, Matthew J. (University of Michigan, Ann Arbor, MI)

    2004-07-01

For the simple geometries of Couette and Poiseuille flows, the velocity profile maintains a similar shape from continuum to free molecular flow. Therefore, modifications to the fluid viscosity and slip boundary conditions can improve the continuum based Navier-Stokes solution in the non-continuum non-equilibrium regime. In this investigation, the optimal modifications are found by a linear least-squares fit of the Navier-Stokes solution to the non-equilibrium solution obtained using the direct simulation Monte Carlo (DSMC) method. Models are then constructed for the Knudsen number dependence of the viscosity correction and the slip model from a database of DSMC solutions for Couette and Poiseuille flows of argon and nitrogen gas, with Knudsen numbers ranging from 0.01 to 10. Finally, the accuracy of the models is measured for non-equilibrium cases both in and outside the DSMC database. Flows outside the database include: combined Couette and Poiseuille flow, partial wall accommodation, helium gas, and non-zero convective acceleration. The models reproduce the velocity profiles in the DSMC database within an L2 error norm of 3% for Couette flows and 7% for Poiseuille flows. However, the errors in the model predictions outside the database are up to five times larger.
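The linear least-squares fitting step can be sketched for Couette flow, where the fitted intercept of a linear Navier-Stokes profile plays the role of the wall slip velocity. The profile values below are illustrative stand-ins, not actual DSMC data:

```python
import numpy as np

# Hypothetical DSMC-like Couette velocity profile across a normalized gap:
# linear in the core, with a slip of 0.1 at each wall (walls at u = 0 and 1)
y = np.linspace(0.0, 1.0, 21)
u_dsmc = 0.1 + 0.8 * y

# Linear least-squares fit of a Navier-Stokes profile u = a*y + b.
# The intercept b is the fitted slip velocity at the lower wall.
A = np.column_stack([y, np.ones_like(y)])
(a, b), *_ = np.linalg.lstsq(A, u_dsmc, rcond=None)
```

Repeating this fit over a range of Knudsen numbers yields the data from which a slip model such as those in the paper can be correlated.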

  10. Experimental Investigation on Performance of Pulse Detonation Rocket Engine Model

    Institute of Scientific and Technical Information of China (English)

    LI Qiang; FAN Wei; YAN Chuan-jun; HU Cheng-qi; YE Bin

    2007-01-01

The PDRE test model used in these experiments utilized kerosene as the fuel, oxygen as oxidizer, and nitrogen as purge gas. Solenoid valves were employed to control the intermittent supplies of kerosene, oxygen and purge gas. The PDRE test model was 50 mm in inner diameter and 1.2 m long. A Shchelkin spiral was used in the test model as the DDT (deflagration to detonation transition) enhancement device. The effects of detonation frequency on time-averaged thrust and specific impulse were experimentally investigated. The results show that the time-averaged thrust of the PDRE test model was approximately proportional to the detonation frequency. At a detonation frequency of 20 Hz, the time-averaged thrust was around 107 N and the specific impulse was around 125 s. Nozzle experiments were conducted using the PDRE test model with three traditional nozzles. The experimental results demonstrated that all of the nozzles could augment the thrust and specific impulse. Among the three nozzles, the convergent nozzle provided the largest augmentation, approximately 18%, under the specific conditions of the experiment.

  11. A review of TRISO-coated particle nuclear fuel performance models

    Institute of Scientific and Technical Information of China (English)

    LIU Bing; LIANG Tongxiang; TANG Chunhe

    2006-01-01

The success of the high temperature gas-cooled reactor depends upon the safety and quality of the coated particle fuel. The understanding and evaluation of this fuel requires the development of an integrated mechanistic fuel performance model that fully describes the mechanical and physicochemical behavior of the fuel particle under irradiation. In this paper, a review of the analytical capability of some of the existing computer codes for coated particle fuel was performed. These existing models and codes include the FZJ model, JAERI model, Stress3 model, ATLAS model, PARFUME model and TIMCOAT model. The theoretical models, methodologies, calculation parameters and benchmarks of these codes were classified. Based on the failure mechanisms of coated particles, the advantages and limits of the models were compared and discussed. The calculated results for the coated particles of China's HTR-10, obtained using some of the existing codes, are shown. Finally, problems and challenges in fuel performance modeling are listed.

  12. Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

Brittle failure is often influenced by difficult to measure and variable microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection, have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement in comparison with the experimentally measured results. Microstructure scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.

  13. Model of single-electron performance of micropixel avalanche photodiodes

    CERN Document Server

    Sadygov, Z; Akhmedov, G; Akhmedov, F; Khorev, S; Mukhtarov, R; Sadigov, A; Sidelev, A; Titov, A; Zerrouk, F; Zhezher, V

    2014-01-01

    An approximate iterative model of avalanche process in a pixel of micropixel avalanche photodiode initiated by a single photoelectron is presented. The model describes development of the avalanche process in time, taking into account change of electric field within the depleted region caused by internal discharge and external recharge currents. Conclusions obtained as a result of modelling are compared with experimental data. Simulations show that typical durations of the front and rear edges of the discharge current have the same magnitude of less than 50 ps. The front of the external recharge current has the same duration, however duration of the rear edge depends on value of the quenching micro-resistor. It was found that effective capacitance of the pixel calculated as the slope of linear dependence of the pulse charge on bias voltage exceeds its real capacitance by a factor of two.

  14. A Performance Evaluation Model for Global Macro Funds

    Directory of Open Access Journals (Sweden)

    Adam Zaremba

    2014-02-01

    Full Text Available The paper concentrates on value and size effects in country portfolios. It contributes to academic literature threefold. First, I provide fresh evidence that the value and size effects may be useful in explaining the cross-sectional variation in country returns. The computations are based on a broad sample of 66 countries in years 2000-2013. Second, I document that the country-level value and size effects are indifferent to currency conversions. Finally, I introduce an alternative macro-level Fama-French model, which, contrary to its prototype, employs country-based factors. I show that applying this modification makes the model more successful in evaluation of funds with global investment mandate than the standard CAPM and FF models.
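The macro-level factor model described above evaluates a fund by regressing its excess returns on country-based factors; the loadings and alpha come from an ordinary least-squares fit. A minimal sketch with synthetic return series (all numbers are invented for illustration, not from the study's sample):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 120  # months

# Hypothetical country-level factor returns: market, value (HML-like), size (SMB-like)
mkt = rng.normal(0.005, 0.04, T)
val = rng.normal(0.002, 0.02, T)
size = rng.normal(0.001, 0.02, T)

# Synthetic fund with known exposures and a 10 bp monthly alpha
fund = 0.001 + 1.0 * mkt + 0.5 * val + 0.3 * size + rng.normal(0.0, 0.001, T)

# Three-factor regression: fund = alpha + b1*mkt + b2*val + b3*size + eps
X = np.column_stack([np.ones(T), mkt, val, size])
alpha, b1, b2, b3 = np.linalg.lstsq(X, fund, rcond=None)[0]
```

With country-based factors substituted for firm-based ones, a statistically significant alpha is then read as skill beyond country value/size tilts.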

  15. Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Luca Chiaraviglio

    2016-06-01

    Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs. In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
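The two competing effects described above, a lower failure rate at reduced power versus an extra penalty for each power-state change, can be sketched in a toy rate model. All parameter values are illustrative, not taken from the paper's case study:

```python
def effective_failure_rate(base_rate, low_power_factor, duty_low,
                           transitions_per_day, penalty_per_transition):
    """Toy lifetime model: the steady-state failure rate is reduced while in
    the low-power state, but each power-state change adds a fixed penalty."""
    steady = base_rate * ((1.0 - duty_low) + duty_low * low_power_factor)
    return steady + transitions_per_day * penalty_per_transition

# always-on server vs. an aggressive energy-management policy (values invented)
always_on = effective_failure_rate(1e-4, 0.5, 0.0, 0, 2e-6)
aggressive = effective_failure_rate(1e-4, 0.5, 0.6, 40, 2e-6)
```

With these numbers the transition penalty outweighs the low-power benefit, which is the kind of trade-off a lifetime-aware policy would weigh before switching a server's power state.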

  16. As-Built Modeling of Objects for Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kokko, E J; Martz, H E; Chinn, D J; Childs, H R; Jackson, J A; Chambers, D H; Schneberk, D J; Clark, G A

    2005-09-12

The goal of "as-built" computational modeling is to incorporate the most representative geometry and material information for a (fabricated or legacy) object into simulations. While most engineering finite element simulations are based on an object's idealized "as-designed" configuration, with information obtained from technical drawings or computer-aided design models, "as-built" modeling uses nondestructive characterization and metrology techniques to provide the feature information. By incorporating more representative geometry and material features as initial conditions, the uncertainty in the simulation results can be reduced, providing a more realistic understanding of the event and object being modeled. In this paper, the key steps and technology areas in the as-built modeling framework are: (1) inspection using non-destructive characterization (NDC) and metrology techniques; (2) data reduction (signal and image processing, including artifact removal, data sensor fusion, and geometric feature extraction); and (3) engineering and physics analysis using finite element codes. We illustrate the process with a cylindrical phantom and include a discussion of the key concepts and areas that need improvement. Our results show that reasonable as-built initial conditions based on a volume overlap criterion can be achieved and that notable differences between simulations of the as-built and as-designed configurations can be observed for a given load case: specifically, a volume-averaged difference in accumulated plastic strain of 3%, with local spatially varying differences of up to 10%. The example presented provides motivation and justification to engineering teams for the additional effort required in the as-built modeling of high value parts. Further validation of the approach has been proposed as future work.

  17. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2013-10-01

are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in reference... Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap.

  18. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2013-09-09

A quarter-symmetric model is used in AutoDyn to simulate DoP experiments on aluminum targets and ceramic-faced aluminum targets with a .30cal AP M2 projectile using SPH elements. Model validation runs were conducted based on the DoP experiments described in reference... The effect of material properties on DoP is examined. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments.

  19. Multi-Site Validation of the SWAT Model on the Bani Catchment: Model Performance and Predictive Uncertainty

    Directory of Open Access Journals (Sweden)

    Jamilatou Chaibou Begou

    2016-04-01

    Full Text Available The objective of this study was to assess the performance and predictive uncertainty of the Soil and Water Assessment Tool (SWAT model on the Bani River Basin, at catchment and subcatchment levels. The SWAT model was calibrated using the Generalized Likelihood Uncertainty Estimation (GLUE approach. Potential Evapotranspiration (PET and biomass were considered in the verification of model outputs accuracy. Global Sensitivity Analysis (GSA was used for identifying important model parameters. Results indicated a good performance of the global model at daily as well as monthly time steps with adequate predictive uncertainty. PET was found to be overestimated but biomass was better predicted in agricultural land and forest. Surface runoff represents the dominant process on streamflow generation in that region. Individual calibration at subcatchment scale yielded better performance than when the global parameter sets were applied. These results are very useful and provide a support to further studies on regionalization to make prediction in ungauged basins.
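The GLUE calibration referenced above amounts to Monte Carlo parameter sampling with a likelihood threshold separating "behavioural" parameter sets from the rest. A minimal sketch with a hypothetical one-parameter runoff model and a Nash-Sutcliffe likelihood measure (all data and thresholds are illustrative):

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def runoff_model(rain, k):
    # hypothetical one-parameter rainfall-runoff model
    return [k * r for r in rain]

rain = [0.0, 2.0, 5.0, 3.0, 1.0]
obs = runoff_model(rain, 0.6)   # synthetic "observations" with true k = 0.6

rng = random.Random(42)
behavioural = []
for _ in range(500):            # GLUE: sample parameters, keep behavioural sets
    k = rng.uniform(0.0, 1.0)
    score = nse(obs, runoff_model(rain, k))
    if score > 0.8:             # behavioural threshold
        behavioural.append((k, score))
```

The retained behavioural sets, weighted by their likelihoods, then yield the predictive uncertainty bounds reported in studies like this one.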

  20. Stochastic modeling and performance monitoring of wind farm power production

    CERN Document Server

    Milan, Patrick; Peinke, Joachim

    2015-01-01

    We present a new stochastic approach to describe and remodel the conversion process of a wind farm at a sampling frequency of 1Hz. When conditioning on various wind direction sectors, the dynamics of the conversion process appear as a fluctuating trajectory around an average IEC-like power curve, see section II. Our approach is to consider the wind farm as a dynamical system that can be described as a stochastic drift/diffusion model, where a drift coefficient describes the attraction towards the power curve and a diffusion coefficient quantifies additional turbulent fluctuations. These stochastic coefficients are inserted into a Langevin equation that, once properly adapted to our particular system, models a synthetic signal of power output for any given wind speed/direction signals, see section III. When combined with a pre-model for turbulent wind fluctuations, the stochastic approach models the power output of the wind farm at a sampling frequency of 1Hz using only ten-minute average values of wind speed ...
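    A drift/diffusion model of this type can be integrated with the Euler-Maruyama scheme: the drift pulls the power toward the (direction-conditioned) power curve while the diffusion term injects turbulent fluctuations. This is a minimal sketch of the general Langevin approach, not the authors' fitted coefficients; the linear drift, constant diffusion, and toy cubic power curve are assumptions.

    ```python
    import numpy as np

    def simulate_power(u, power_curve, drift_rate=0.5, diffusion=0.01, dt=1.0, seed=0):
        """Euler-Maruyama integration of dP = -k (P - P_curve(u)) dt + sqrt(2 D) dW.
        `u` is the wind-speed series sampled at 1/dt Hz; output is a synthetic
        power signal fluctuating around the power curve."""
        rng = np.random.default_rng(seed)
        P = np.empty(len(u))
        P[0] = power_curve(u[0])
        for i in range(1, len(u)):
            drift = -drift_rate * (P[i - 1] - power_curve(u[i]))
            P[i] = P[i - 1] + drift * dt + np.sqrt(2 * diffusion * dt) * rng.normal()
        return P
    ```

    For constant wind the simulated power is an Ornstein-Uhlenbeck process whose mean sits on the power curve and whose spread is set by the diffusion-to-drift ratio.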

  1. Traversable Terrain Modeling and Performance Measurement of Mobile Robots

    Science.gov (United States)

    2004-08-01

    In this paper, we have described a technique for terrain traversability assessment modeling of mobile robots operating in natural terrain and...presented a fast near-optimum algorithm for autonomous navigational path planning of mobile robots in rough terrain environments. The proposed method is

  2. Modeling and simulation performance of sucker rod beam pump

    Science.gov (United States)

    Aditsania, Annisa; Rahmawati, Silvy Dewi; Sukarno, Pudjo; Soewono, Edy

    2015-09-01

    Artificial lift is a mechanism to lift hydrocarbons, generally petroleum, from a well to the surface; it is used when the natural reservoir pressure has significantly decreased. Sucker rod beam pumping is one method of artificial lift. In this research, the sucker rod beam pump is modeled as a function of the geometry of the surface part, the size of the sucker rod string, and the fluid properties. Besides its length, a sucker rod string is also classified as tapered or un-tapered. At the beginning of this research, for ease of modeling, the sucker rod string was assumed to be un-tapered. That assumption proved unrealistic, so a tapered sucker rod string model was developed. The numerical solution of this sucker rod beam pump model is computed using the finite difference method. The numerical results show that the peak polished rod load for sucker rod beam pump unit C-456-D-256-120 is 38504.2 lb for a non-tapered sucker rod string and 25723.3 lb for a tapered rod string. Therefore, to prevent the sucker rod string from breaking due to overload, the use of a tapered sucker rod string is suggested in this research.
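    The abstract does not state the governing equations. Sucker rod string dynamics are classically described by a damped 1-D wave equation (Gibbs' formulation), u_tt = a² u_xx − c u_t, which the finite difference method can solve explicitly. The sketch below is that generic scheme with illustrative values (rod length, sound speed, damping), not the paper's tapered-string model.

    ```python
    import numpy as np

    def rod_displacement(surface_motion, L=1000.0, a=4900.0, c=0.5, nx=50, dt=0.002):
        """Explicit finite-difference solution of u_tt = a^2 u_xx - c u_t for rod
        displacement u(x, t).  The polished-rod end follows the prescribed surface
        motion; the downhole end is treated as free (u_x = 0)."""
        dx = L / (nx - 1)
        r = (a * dt / dx) ** 2
        assert r <= 1.0, "CFL stability condition violated"
        u_prev = np.zeros(nx)
        u = np.zeros(nx)
        history = []
        for s in surface_motion:
            u_next = np.empty(nx)
            u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                            + r * (u[2:] - 2 * u[1:-1] + u[:-2])
                            - c * dt * (u[1:-1] - u_prev[1:-1]))
            u_next[0] = s               # driven surface boundary
            u_next[-1] = u_next[-2]     # free downhole end (zero slope)
            u_prev, u = u, u_next
            history.append(u.copy())
        return np.array(history)
    ```

    From the computed displacement field, the polished rod load follows from the stress at the surface node, which is how peak loads like those quoted above are extracted in practice.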

  3. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage ti

  4. Sustainable innovation, business models and economic performance: an overview

    NARCIS (Netherlands)

    Montalvo Corral, C.

    2013-01-01

    Sustainable development requires radical and systemic innovations. Such innovations can be more effectively created and studied when building on the concept of business models. This concept provides firms with a holistic framework to envision and implement sustainable innovations. For researchers, t

  5. Modeling and simulation performance of sucker rod beam pump

    Energy Technology Data Exchange (ETDEWEB)

    Aditsania, Annisa, E-mail: annisaaditsania@gmail.com [Department of Computational Sciences, Institut Teknologi Bandung (Indonesia); Rahmawati, Silvy Dewi, E-mail: silvyarahmawati@gmail.com; Sukarno, Pudjo, E-mail: psukarno@gmail.com [Department of Petroleum Engineering, Institut Teknologi Bandung (Indonesia); Soewono, Edy, E-mail: esoewono@math.itb.ac.id [Department of Mathematics, Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Artificial lift is a mechanism to lift hydrocarbons, generally petroleum, from a well to the surface; it is used when the natural reservoir pressure has significantly decreased. Sucker rod beam pumping is one method of artificial lift. In this research, the sucker rod beam pump is modeled as a function of the geometry of the surface part, the size of the sucker rod string, and the fluid properties. Besides its length, a sucker rod string is also classified as tapered or un-tapered. At the beginning of this research, for ease of modeling, the sucker rod string was assumed to be un-tapered. That assumption proved unrealistic, so a tapered sucker rod string model was developed. The numerical solution of this sucker rod beam pump model is computed using the finite difference method. The numerical results show that the peak polished rod load for sucker rod beam pump unit C-456-D-256-120 is 38504.2 lb for a non-tapered sucker rod string and 25723.3 lb for a tapered rod string. Therefore, to prevent the sucker rod string from breaking due to overload, the use of a tapered sucker rod string is suggested in this research.

  6. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  7. Influence of horizontal resolution and ensemble size on model performance

    CSIR Research Space (South Africa)

    Dalton, A

    2014-10-01

    Full Text Available southern Africa. Furthermore, a comparison is made between the forecast skill of the 850 hPa geopotential heights and that of raw model rainfall outputs. Skill was determined by way of empirical post-processing procedures in order to project the ensemble mean...

  8. Stochastic Modeling and Performance Analysis of Multimedia SoCs

    DEFF Research Database (Denmark)

    Raman, Balaji; Nouri, Ayoub; Gangadharan, Deepak

    2013-01-01

    decoder. The results show that, for our stochastic design metric, the analytical framework provides upper bounds (and is relatively accurate) compared to the statistical model checking technique. Also, we observed a significant reduction in resource usage (such as output buffer size) with tolerable loss in output...

  9. Modeling the performance of coated LPG tanks engulfed in fires

    NARCIS (Netherlands)

    Cozzani, V.; Landucci, G.; Molag, M. (Menso)

    2009-01-01

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fi

  10. Modeling the performance of coated LPG tanks engulfed in fires

    NARCIS (Netherlands)

    Landucci, G.; Molag, M.; Cozzani, V.

    2009-01-01

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fi

  11. Comparing the performance of species distribution models of

    NARCIS (Netherlands)

    Valle , M.; van Katwijk, M.M.; de Jong, D.J.; Bouma, T.; Schipper, A.M.; Chust, G.; Benito, B.M.; Garmendia, J.M.; Borja, A.

    2013-01-01

    Intertidal seagrasses show high variability in their extent and location, with local extinctions and (re-)colonizations being inherent in their population dynamics. Suitable habitats are identified usually using Species Distribution Models (SDM), based upon the overall distribution of the species;

  12. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. 
Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in
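    The infinite-versus-finite distinction drawn above reduces to comparing the characteristic matrix-diffusion time, t_c = d²/D, against the transport time scale of interest. A minimal sketch of that check, with illustrative values (a 5 cm penetration depth and D = 1e-13 m²/s are assumptions, not the paper's site data):

    ```python
    def diffusion_regime(depth_m, diffusivity_m2_s, timescale_s):
        """Compare the characteristic matrix-diffusion time t_c = d**2 / D with a
        transport time scale.  While t_c greatly exceeds the time scale, the finite
        matrix has not yet 'filled up' and acts as an infinite medium; once the
        time scale exceeds t_c, the finite capacity saturates and retention is
        weaker than an infinite-medium model would predict."""
        t_c = depth_m ** 2 / diffusivity_m2_s
        return "infinite-acting" if t_c > timescale_s else "finite-capacity"
    ```

    With the assumed values, t_c ≈ 2.5e10 s (roughly 800 years): infinite-acting over an hours-long tracer test, but finite-capacity at the 100,000-year PA scale, which is the mechanism behind the order-of-magnitude discrepancies described above.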

  13. Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D

    Institute of Scientific and Technical Information of China (English)

    KONG Li-feng; ZHAO Hai-ying; XU Guang-mei

    2014-01-01

    Mobile platforms are now widely seen as a promising multimedia service with a favorable user group and market prospect. To study the influence of mobile terminal models on the quality of scene roaming, a parameter setting platform for mobile terminal models is established in this paper to select the parameters and performance indices on different mobile platforms. The test platform is built on the model optimality principle: it analyzes the performance curves of mobile terminals in different scene models and then deduces the external parameters for model establishment. Simulation results prove that the established test platform is able to analyze the parameter and performance matching list of a mobile terminal model.

  14. Age-aware solder performance models : level 2 milestone completion.

    Energy Technology Data Exchange (ETDEWEB)

    Neilsen, Michael K.; Vianco, Paul Thomas; Neidigk, Matthew Aaron; Holm, Elizabeth Ann

    2010-09-01

    Legislated requirements and industry standards are replacing eutectic lead-tin (Pb-Sn) solders with lead-free (Pb-free) solders in future component designs and in replacements and retrofits. Since Pb-free solders have not yet seen service for long periods, their long-term behavior is poorly characterized. Because understanding the reliability of Pb-free solders is critical to supporting the next generation of circuit board designs, it is imperative that we develop, validate and exercise a solder lifetime model that can capture the thermomechanical response of Pb-free solder joints in stockpile components. To this end, an ASC Level 2 milestone was identified for fiscal year 2010: Milestone 3605: Utilize experimentally validated constitutive model for lead-free solder to simulate aging and reliability of solder joints in stockpile components. This report documents the completion of this milestone, including evidence that the milestone completion criteria were met and a summary of the milestone Program Review.

  15. Critical Appraisal of Translational Research Models for Suitability in Performance Assessment of Cancer Centers

    OpenAIRE

    Rajan, Abinaya; Sullivan, Richard; Bakker, Suzanne; van Harten, Wim H.

    2012-01-01

    This study aimed to critically appraise translational research models for suitability in performance assessment of cancer centers. Process models, such as the Process Marker Model and Lean and Six Sigma applications, seem to be suitable for performance assessment of cancer centers. However, they must be thoroughly tested in practice.

  16. An Alumni Oriented Approach to Sport Management Curriculum Design Using Performance Ratings and a Regression Model.

    Science.gov (United States)

    Ulrich, David; Parkhouse, Bonnie L.

    1982-01-01

    An alumni-based model is proposed as an alternative to sports management curriculum design procedures. The model relies on the assessment of curriculum by sport management alumni and uses performance ratings of employers and measures of satisfaction by alumni in a regression model to identify curriculum leading to increased work performance and…

  17. Modelling and performance analysis of a neurostimulation system

    OpenAIRE

    2011-01-01

    2009 - 2010 The activity of this thesis is devoted to modelling the electromagnetic behaviour of complex biological structures, in particular of nerve cells, and to the study of nanotechnology applications to these structures for therapeutic, diagnostic and investigative purposes. The 'design' of artificial nanoscale devices offers the chance to accurately understand and manipulate the phenomena that occur inside biological structures. Until a few years ago, the most widely accep...

  18. Modeling Reduced Human Performance as a Complex Adaptive System

    Science.gov (United States)

    2003-09-01

    Fittingly, the latest research paper describes these types of components as LEGOs (listener event graph objects). "The name is also a metaphor for how..." Buss, A. H. and P. J. Sanchez (2002). Building Complex Models With LEGOs (Listener Event Graph Objects). Winter Simulation Conference.

  19. Hadoop performance modeling for job estimation and resource provisioning

    OpenAIRE

    2015-01-01

    MapReduce has become a major computing model for data intensive applications. Hadoop, an open source implementation of MapReduce, has been adopted by an increasingly growing user community. Cloud computing service providers such as Amazon EC2 Cloud offer the opportunities for Hadoop users to lease a certain amount of resources and pay for their use. However, a key challenge is that cloud service providers do not have a resource provisioning mechanism to satisfy user jobs with deadline require...

  20. HEISHI: A fuel performance model for space nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Young, M.F.

    1994-08-01

    HEISHI is a Fortran computer model designed to aid in analysis, prediction, and optimization of fuel characteristics for use in Space Nuclear Thermal Propulsion (SNTP). Calculational results include fission product release rate, fuel failure fraction, mode of fuel failure, stress-strain state, and fuel material morphology. HEISHI contains models for decay chain calculations of retained and released fission products, based on an input power history and release coefficients. Decay chain parameters such as direct fission yield, decay rates, and branching fractions are obtained from a database. HEISHI also contains models for stress-strain behavior of multilayered fuel particles with creep and differential thermal expansion effects, transient particle temperature profile, grain growth, and fuel particle failure fraction. Grain growth is treated as a function of temperature; the failure fraction depends on the coating tensile strength, which in turn is a function of grain size. The HEISHI code is intended for use in analysis of coated fuel particles for use in particle bed reactors; however, much of the code is geometry-independent and applicable to fuel geometries other than spherical.
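    The decay-chain calculations mentioned above have a classical closed form for a linear, unbranched chain: the Bateman equations. The sketch below implements that textbook solution (first chain member present initially, distinct decay constants); it is a simplified stand-in for HEISHI's database-driven, branched chains, not its actual algorithm.

    ```python
    import math

    def bateman(n0, lambdas, t):
        """Analytic Bateman solution for a linear decay chain N1 -> N2 -> ... with
        only N1 present initially (N1(0) = n0).  `lambdas` are the decay constants
        (must be pairwise distinct; a stable end member may use 0.0).  Returns the
        inventory of each chain member at time t."""
        out = []
        for n in range(1, len(lambdas) + 1):
            prod_rates = math.prod(lambdas[:n - 1])   # production down the chain
            total = 0.0
            for i in range(n):
                denom = math.prod(lambdas[j] - lambdas[i] for j in range(n) if j != i)
                total += math.exp(-lambdas[i] * t) / denom
            out.append(n0 * prod_rates * total)
        return out
    ```

    For a parent decaying into a stable daughter, the formula reduces to N2(t) = n0 (1 − e^(−λ1 t)), so the total inventory is conserved, which makes a convenient sanity check.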

  1. Performance Analysis and Modeling of Thermally Sprayed Resistive Heaters

    Science.gov (United States)

    Lamarre, Jean-Michel; Marcoux, Pierre; Perrault, Michel; Abbott, Richard C.; Legoux, Jean-Gabriel

    2013-08-01

    Many processes and systems require hot surfaces. These are usually heated using electrical elements located in their vicinity. However, this solution is subject to intrinsic limitations associated with heating element geometry and physical location. Thermally spraying electrical elements directly on surfaces can overcome these limitations by tailoring the geometry of the heating element to the application. Moreover, the element heat transfer is maximized by minimizing the distance between the heater and the surface to be heated. This article is aimed at modeling and characterizing resistive heaters sprayed on metallic substrates. Heaters were fabricated by using a plasma-sprayed alumina dielectric insulator and a wire flame-sprayed iron-based alloy resistive element. Samples were energized and kept at a constant temperature of 425 °C for up to 4 months. SEM cross-sectional observations revealed the formation of cracks at very specific locations in the alumina layer after thermal use. Finite-element modeling shows that these cracks originate from high local thermal stresses and can be predicted according to the considered geometry. The simulation model was refined using experimental parameters obtained by several techniques such as emissivity and time-dependent temperature profile (infra-red camera), resistivity (four-probe technique), thermal diffusivity (laser flash method), and mechanical properties (micro and nanoindentation). The influence of the alumina thickness and the substrate material on crack formation was evaluated.

  2. Link performance model for filter bank based multicarrier systems

    Science.gov (United States)

    Petrov, Dmitry; Oborina, Alexandra; Giupponi, Lorenza; Stitz, Tobias Hidalgo

    2014-12-01

    This paper presents a complete link level abstraction model for link quality estimation on the system level of filter bank multicarrier (FBMC)-based networks. The application of mean mutual information per coded bit (MMIB) approach is validated for the FBMC systems. The considered quality measure of the resource element for the FBMC transmission is the received signal-to-noise-plus-distortion ratio (SNDR). Simulation results of the proposed link abstraction model show that the proposed approach is capable of estimating the block error rate (BLER) accurately, even when the signal is propagated through the channels with deep and frequent fades, as it is the case for the 3GPP Hilly Terrain (3GPP-HT) and Enhanced Typical Urban (ETU) models. The FBMC-related results of link level simulations are compared with cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) analogs. Simulation results are also validated through the comparison to reference publicly available results. Finally, the steps of link level abstraction algorithm for FBMC are formulated and its application for system level simulation of a professional mobile radio (PMR) network is discussed.

  3. Atmospheric Climate Model Experiments Performed at Multiple Horizontal Resolutions

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T; Bala, G; Gleckler, P; Lobell, D; Mirin, A; Maxwell, R; Rotman, D

    2007-12-21

    This report documents salient features of version 3.3 of the Community Atmosphere Model (CAM3.3) and of three climate simulations in which the resolution of its latitude-longitude grid was systematically increased. For all these simulations of global atmospheric climate during the period 1980-1999, observed monthly ocean surface temperatures and sea ice extents were prescribed according to standard Atmospheric Model Intercomparison Project (AMIP) values. These CAM3.3 resolution experiments served as control runs for subsequent simulations of the climatic effects of agricultural irrigation, the focus of a Laboratory Directed Research and Development (LDRD) project. The CAM3.3 model was able to replicate basic features of the historical climate, although biases in a number of atmospheric variables were evident. Increasing horizontal resolution also generally failed to ameliorate the large-scale errors in most of the climate variables that could be compared with observations. A notable exception was the simulation of precipitation, which incrementally improved with increasing resolution, especially in regions where orography plays a central role in determining the local hydroclimate.

  4. Performance Estimation of Networked Business Models: Case Study on a Finnish eHealth Service Project

    Directory of Open Access Journals (Sweden)

    Marikka Heikkilä

    2014-08-01

    Full Text Available Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in the Health & Wellbeing sector aiming to jointly provide a new service for business and private customers. The duration of the research study is 3 years. Findings: We propose that a balanced set of performance indicators can be defined by paying attention to all main components of the business model, enriched with measures of network collaboration. The results highlight the importance of measuring all main components of the business model and also the business network partners’ view on trust, contracts and fairness. Research implications: This article contributes to the business model literature by combining business modelling with performance evaluation. The article points out that it is essential to create metrics that can be applied to evaluate and improve the business model blueprints, but it is also important to measure business collaboration aspects. Practical implications: Companies have already adopted the Business Model Canvas or similar business model tools to innovate new business models. We suggest that companies continue their business model innovation work by agreeing on a set of performance metrics, building on the business model components enriched with measures of network collaboration. Originality/value: This article contributes to the business model literature and praxis by combining business modelling with performance evaluation.

  5. A Binomial Mixture Model for Classification Performance: A Commentary on Waxman, Chambers, Yntema, and Gelman (1989).

    Science.gov (United States)

    Thomas, Hoben

    1989-01-01

    Individual differences in children's performance on a classification task are modeled by a two component binomial mixture distribution. The model accounts for data well, with variance accounted for ranging from 87 to 95 percent. (RJC)
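    A two-component binomial mixture of the kind described can be fitted with a short EM loop: each child's number of correct answers is attributed partly to a low ("guessing") component and partly to a high ("mastery") component. This is a generic sketch of the model class, not Thomas's estimation procedure; the starting values and component labels are assumptions.

    ```python
    import math

    def binom_pmf(k, n, p):
        """Binomial probability of k successes in n trials."""
        return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

    def fit_two_binomials(counts, n, iters=200):
        """EM for a two-component binomial mixture: each child answers n items,
        scoring k correct at either a low or a high success rate.  Returns
        (pi, p_low, p_high): the low component's mixing weight and the two rates."""
        pi, p_lo, p_hi = 0.5, 0.3, 0.8          # illustrative starting values
        for _ in range(iters):
            # E-step: responsibility of the low component for each child
            r = []
            for k in counts:
                a = pi * binom_pmf(k, n, p_lo)
                b = (1 - pi) * binom_pmf(k, n, p_hi)
                r.append(a / (a + b))
            # M-step: responsibility-weighted re-estimates
            pi = sum(r) / len(r)
            p_lo = sum(ri * k for ri, k in zip(r, counts)) / (n * sum(r))
            p_hi = sum((1 - ri) * k for ri, k in zip(r, counts)) / (n * sum(1 - ri for ri in r))
        return pi, p_lo, p_hi
    ```

    With well-separated components the fitted mixture reproduces most of the score variance, consistent with the 87-95% figures reported in the commentary.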

  6. Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control

    DEFF Research Database (Denmark)

    Huang, Qian; Huang, Yue-Cai; Ko, King-Tim;

    2011-01-01

    dimensioning and planning. This paper investigates the computationally efficient loss performance modeling for multiservice in hierarchical heterogeneous wireless networks. A speed-sensitive call admission control (CAC) scheme is considered in our model to assign overflowed calls to appropriate tiers...

  7. Developing Performance Management in State Government: An Exploratory Model for Danish State Institutions

    DEFF Research Database (Denmark)

    Nielsen, Steen; Rikhardsson, Pall M.

    management model. The findings are built on a questionnaire study of 45 high-level accounting officers in central governmental institutions. Our statistical model consists of five explored constructs: improvements; initiatives and reforms; incentives and contracts; the use of management accounting practices...; and cost allocations and their relations to performance management. Findings based on structural equation modelling and partial least squares regression (PLS) indicate a positive effect on the latent dependent variable, called performance management results. The models/theories explain a significant...

  8. Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters

    Science.gov (United States)

    2015-07-01

    Visual Basic for Applications (VBA). The objective function was the root mean square (RMS) error between modeled and measured heating load and the modeled... ERDC/CERL TR-15-13, Operational Energy Capabilities Improvement: Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters... and Ashok Kumar, July 2015. Approved for public release; distribution is unlimited. HDT AirBeam Model 2032, Utilis Model TM60, ERDC-CERL

  9. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available model, perplexity is an appropriate measure. It provides an indication of the model's ability to generalise by measuring the exponent of the mean log-likelihood of words in a held-out test set of the corpus. The exploratory abilities of the latent... The phrases are clearly more intelligible than single-word phrases in many cases, thus demonstrating the qualitative advantage of the proposed method. For the CRAN corpus, each subset of chunks includes the top 1000 chunks with the highest...

  10. Performance modeling and optimization solutions for networking systems

    OpenAIRE

    Zhao, Jian; 趙建

    2014-01-01

    This thesis targets at modeling and resolving practical problems using mathematical tools in two representative networking systems nowadays, i.e., peer-to-peer (P2P) video streaming system and cloud computing system. In the first part, we study how to mitigate the following tussle between content service providers and ISPs in P2P video streaming systems: network-agnostic P2P protocol designs bring lots of inter-ISP traffic and increase traffic relay cost of ISPs; in turn, ISPs start to thrott...

  11. An Enlisted Performance Prediction Model for Aviation Structural Mechanics.

    Science.gov (United States)

    1983-09-01

    TABLE 27: Discriminant analysis output and validation for Model 1 (prior probabilities .62 / .38). Classification matrix, group 1: 1508 correctly classified, 147 misclassified, 1655 total (91.12%).

  12. Analytical performance models for geologic repositories. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Chambre, P.L.; Pigford, T.H.; Fujita, A.; Kanki, T.; Kobayashi, A.; Lung, H.; Ting, D.; Sato, Y.; Zavoshy, S.J.

    1982-10-01

    This report presents analytical solutions of the dissolution and hydrogeologic transport of radionuclides in geologic repositories. Numerical examples are presented to demonstrate the equations resulting from these analyses. The subjects treated in this report are: solubility-limited transport with transverse dispersion (chapter 2); transport of a radionuclide chain with nonequilibrium chemical reactions (chapter 3); advective transport in a two-dimensional flow field (chapter 4); radionuclide transport in fractured media (chapter 5); a mathematical model for EPA's analysis of generic repositories (chapter 6); and dissolution of radionuclides from solid waste (chapter 7). Volume 2 contains chapters 5, 6, and 7.

  13. Modeling Robot Dynamic Performance for Endpoint Force Control

    Science.gov (United States)

    1988-08-01

    Task Dynamics; 2.5.1 The Dynamic Workpiece Model; 2.5.2 Adding Robot Dynamics; 2.5.3 Adding Actuator Dynamics; 2.6 Grip... motion control system. Robot dynamics couple with the task dynamics in a very complex way. When the robot makes contact with the environment, the impact... robot flexibility or actuator dynamics. 2.5.2 Adding Robot Dynamics: Figure 2.29 shows the robot now represented by two lumped masses, as in the robot

  14. Mathematically modelling the effects of pacing, finger strategies and urgency on numerical typing performance with queuing network model human processor.

    Science.gov (United States)

    Lin, Cheng-Jhe; Wu, Changxu

    2012-01-01

    Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategies and the urgency of the situation. Queuing network-model human processor (QN-MHP), a computational architecture, allows the performance of perceptual-motor tasks to be modelled mathematically. The current study enhanced QN-MHP with a top-down control mechanism, a closed-loop movement control and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. The model predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results. The resultant root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The model can be applied to provide optimal speech rates for voice synthesis and keyboard designs in different numerical typing situations. An enhanced QN-MHP model was thus proposed to mathematically account for the effects of pacing, finger strategies and internalised urgency on numerical typing performance; it can be used to suggest optimal pacing for voice synthesis systems and optimal numerical keyboard designs under urgency.
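    The validation statistics quoted (RMS error as a percentage, correlation) are standard and easy to compute; a minimal sketch is below. Normalizing the RMS error by the mean observed value is an assumption here, since the abstract does not state the paper's exact normalization.

    ```python
    import math

    def rmse_pct(pred, obs):
        """Root-mean-squared error expressed as a percentage of the mean observation."""
        mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
        return 100.0 * math.sqrt(mse) / (sum(obs) / len(obs))

    def pearson_pct(pred, obs):
        """Pearson correlation between predictions and observations, in percent."""
        n = len(obs)
        mp, mo = sum(pred) / n, sum(obs) / n
        cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
        sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
        so = math.sqrt(sum((o - mo) ** 2 for o in obs))
        return 100.0 * cov / (sp * so)
    ```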

  15. Study on Performances of Car-following Models Induced by Motions of a Leading Car

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper investigated the performance of a well-known car-following model, using numerical simulations, in describing the deceleration process induced by the motion of a leading car. A leading car with a pre-specified speed profile was used to test the model. The results show that this model is to some extent deficient in reproducing the process mentioned above. Modifications of the model to overcome these deficiencies were demonstrated, and a modified car-following model was proposed accordingly. Furthermore, the delay time of car motion in the new model was studied.

  16. HEADING RECOVERY FROM OPTIC FLOW: COMPARING PERFORMANCE OF HUMANS AND COMPUTATIONAL MODELS

    Directory of Open Access Journals (Sweden)

    Andrew John Foulkes

    2013-06-01

    Full Text Available Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study we set out to assess which of four candidate models best captured human performance; the four models we selected reflected key differences in approach and methods to modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame) and quality (amount of 2D directional noise) of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in terms of both their performance and similarity to human data. To formally assess the match between the models and human performance we regressed the output of each of the four models against the human performance data. We were able to rule out two models that produced very different performance profiles to human observers. The remaining two shared some similarities with human performance profiles in terms of the magnitude and pattern of thresholds. However, none of the models tested could capture all aspects of the human data.

  17. Predictive Model of Graphene Based Polymer Nanocomposites: Electrical Performance

    Science.gov (United States)

    Manta, Asimina; Gresil, Matthieu; Soutis, Constantinos

    2017-04-01

    In this computational work, a new simulation tool for the electrical response of graphene/polymer nanocomposites is developed based on the finite element method (FEM). The approach is built in a multi-scale, multi-physics format, consisting of a unit cell and a representative volume element (RVE). The FE methodology proves to be a reliable and flexible tool for simulating the electrical response without the complexity of raw programming code, and it is able to model any geometry, and thus the response of any component. This is supported by its ability, at a preliminary stage, to accurately predict the percolation threshold of experimental material structures, and by its sensitivity to the effect of different manufacturing methods. In particular, the percolation thresholds of two material structures with the same constituents (PVDF/graphene) prepared by different methods were predicted, highlighting the effect of material preparation on filler distribution, percolation probability and percolation threshold. The assumption of a random filler distribution proved efficient for modelling material structures obtained by solution methods, while a through-the-thickness normal particle distribution was more appropriate for nanocomposites produced by film hot-pressing. Moreover, a parametric analysis examines the effect of each parameter on the variables of the percolation law. The resulting graphs could be used as a preliminary design tool for more effective material system manufacturing.
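    The percolation law referred to above is commonly written as sigma = sigma0 * (phi - phi_c)^t, where phi is the filler volume fraction and phi_c the percolation threshold. As an illustration only (not the authors' FEM tool; threshold, exponent and conductivities below are hypothetical synthetic values), the parameters sigma0 and t can be extracted from conductivity data by a log-linear least-squares fit:

```python
import math

def fit_percolation_law(phis, sigmas, phi_c):
    """Log-linear least-squares fit of the percolation law
    sigma = sigma0 * (phi - phi_c)**t  for filler fractions phi > phi_c."""
    xs = [math.log(phi - phi_c) for phi in phis]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    t = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    sigma0 = math.exp(my - t * mx)
    return sigma0, t

# Synthetic conductivities generated from sigma0 = 50 S/m, t = 2.0,
# phi_c = 0.02 (all values hypothetical, for illustration only)
phi_c = 0.02
phis = [0.03, 0.05, 0.08, 0.12]
sigmas = [50.0 * (phi - phi_c) ** 2.0 for phi in phis]
sigma0, t = fit_percolation_law(phis, sigmas, phi_c)
```

    With measured rather than synthetic data, the fit residuals also indicate how well the power law describes the composite near the threshold.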

  18. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    Energy Technology Data Exchange (ETDEWEB)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-09

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It is highly complex, reflecting the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-link libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed under a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  19. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    Science.gov (United States)

    2015-01-05

    Keywords: task interruption; sequence errors; cognitive modeling; goodness-of-fit testing. Abstract: We examined effects of adding brief (1 second) lags...should decay faster than pred2, such that after a lag there is increased probability of an intrusion by pred2 and thus an error at offset 1. The second ...For example, language production requires that words be produced in the correct order, and research in this domain has examined sequence errors at the

  20. Effects of modeling decisions on cold region hydrological model performance: snow, soil and streamflow

    Science.gov (United States)

    Musselman, Keith; Clark, Martyn; Endalamaw, Abraham; Bolton, W. Robert; Nijssen, Bart; Arnold, Jeffrey

    2017-04-01

    Cold regions are characterized by intense spatial gradients in climate, vegetation and soil properties that determine the complex spatiotemporal patterns of snowpack evolution, frozen soil dynamics, catchment connectivity, and streamflow. These spatial gradients pose unique challenges for hydrological models, including: 1) how the spatial variability of the physical processes is best represented across a hierarchy of scales, and 2) what algorithms and parameter sets best describe the biophysical and hydrological processes at the spatial scale of interest. To address these topics, we apply the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to simulate hydrological processes at the Caribou-Poker Creeks Research Watershed in the Alaskan sub-arctic boreal forest. The site is characterized by numerous gauged headwater catchments ranging in size from 5 sq. km to 106 sq. km with varying extents (3% to 53%) of discontinuous permafrost, permitting a multi-scale paired-watershed analysis of the hydrological impacts of frozen soils. We evaluate the effects of model decisions on the skill of SUMMA in simulating observed snow and soil dynamics, and the spatial integration of these processes as catchment streamflow. Decisions such as the number of soil layers, total soil column depth, and vertical soil discretization are shown to have profound impacts on the simulation of seasonal active-layer dynamics. Decisions on the spatial organization (lateral connectivity, representation of riparian response units, and the spatial discretization of the hydrological landscape) are shown to be as important as accurate snowpack and soil process representation in the simulation of streamflow. The work serves to better inform hydrological model decisions for cold-region hydrologic evaluation and to improve predictive capacity for water resource planning.

  1. The problem with total error models in establishing performance specifications and a simple remedy.

    Science.gov (United States)

    Krouwer, Jan S

    2016-08-01

    A recent issue of this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model, the Westgard model, is deficient, yet even more complete models fail to capture all errors that comprise total error. Moreover, total error specifications are often set at 95% of results, which leaves 5% of results unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful to assess assay performance but not to set performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results.
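    The Westgard model discussed above combines bias and imprecision in its common form TE = |bias| + z * SD, where z = 1.96 corresponds to covering about 95% of results. A minimal sketch (the glucose-meter numbers are hypothetical, chosen only to show why a 95% specification leaves 5% of results unaddressed):

```python
def westgard_total_error(bias, sd, z=1.96):
    """Westgard total error model: TE = |bias| + z * SD.
    z = 1.96 covers ~95% of results; a larger z widens the coverage."""
    return abs(bias) + z * sd

# Hypothetical glucose meter: bias 2 mg/dL, SD 5 mg/dL
te_95 = westgard_total_error(2.0, 5.0)            # 95% specification
te_wide = westgard_total_error(2.0, 5.0, z=3.29)  # much wider coverage
```

    The gap between `te_95` and `te_wide` illustrates the paper's point: a specification anchored at 95% says nothing about the size of the remaining 5% of errors.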

  2. Developing a Performance Evaluation Model of Trustees Boards in Iranian Universities of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Haniye Sadat Sajadi

    2015-05-01

    Full Text Available The critical role of boards of trustees in the governance of universities makes it necessary to evaluate their performance. Despite the importance of such evaluation, evidence demonstrates that few studies have been done on models of board performance evaluation, especially in Iran. Aim: This study aimed to develop a model to evaluate board performance in Iranian Universities of Medical Sciences. Methodology: The present study was a mixed qualitative-quantitative study. The participants were all stakeholders of board performance evaluation. The study first reviewed international experience with models of board performance evaluation in universities. It then investigated the current and proposed models of board performance evaluation in the Iranian Universities of Medical Sciences. Data were collected through interviews, observation and analysis of relevant documents, and analyzed using a framework approach. The clustering and rating of the proposed dimensions and indicators of board performance evaluation were then carried out using the concept mapping method. Finally, the study sought expert consensus on the initial proposed model. A model comprising eight dimensions and sixty-four indicators was proposed to evaluate board performance in Iranian Universities of Medical Sciences. This study helped to develop a valid model for evaluating board performance in a particular kind of university; such a model can be used to produce a useful tool for evaluating board performance.

  3. Observed versus modelled u-, g-, r-, i-, z-band photometry of local galaxies - evaluation of model performance

    Science.gov (United States)

    Hansson, K. S. Alexander; Lisker, Thorsten; Grebel, Eva K.

    2012-12-01

    We test how well available stellar population models can reproduce observed u-, g-, r-, i-, z-band photometry of the local galaxy population (0.02 ≤ z ≤ 0.03) as probed by the Sloan Digital Sky Survey (SDSS). Our study is conducted from the perspective of a user of the models, who has observational data in hand and seeks to convert them into physical quantities. Stellar population models for galaxies are created by synthesizing star formation histories and chemical enrichments using single stellar populations from several groups (STARBURST99, GALAXEV, the Maraston models, GALEV). The role of dust is addressed through a simplistic, but observationally motivated, dust model that couples the amplitude of the extinction to the star formation history, metallicity and the viewing angle. Moreover, the influence of emission lines is considered (for the subset of models for which this component is included). The performance of the models is investigated by (1) comparing their prediction with the observed galaxy population in the SDSS using the (u - g)-(r - i) and (g - r)-(i - z) colour planes, (2) comparing predicted stellar mass and luminosity weighted ages and metallicities, specific star formation rates, mass-to-light ratios and total extinctions with literature values from studies based on spectroscopy. Strong differences between the various models are seen with several models occupying regions in the colour-colour diagrams where no galaxies are observed. We would therefore like to emphasize the importance of the choice of model. Using our preferred model we find that the star formation history, metallicity and also dust content can be constrained over a large part of the parameter space through the use of u-, g-, r-, i-, z-band photometry. However, strong local degeneracies are present due to overlap of models with high and low extinction in certain parts of the colour space.

  4. Modeling and performance study of a beam microgyroscope

    Science.gov (United States)

    Ghommem, M.; Nayfeh, A. H.; Choura, S.; Najar, F.; Abdel-Rahman, E. M.

    2010-11-01

    We develop a mathematical model of a microgyroscope whose principal component is a rotating cantilever beam equipped with a proof mass at its end. The microgyroscope undergoes two flexural vibrations that are coupled via base rotation about the microbeam's longitudinal axis. The primary vibratory motion is produced in one direction (the drive direction) of the microbeam by a pair of DC and AC voltages actuating the proof mass. The microbeam's angular rotation induces a secondary vibration in the orthogonal (sense) direction, actuated by a second DC voltage. Closed-form solutions are developed for the linearized problem to study the relationship between the base rotation and the gyroscopic coupling. The response of the microgyroscope to variations in the DC voltages across the drive and sense electrodes and in the excitation frequency is examined, and a calibration curve of the gyroscope is obtained analytically.

  5. High-Performance data flows using analytical models and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL; Towlsey, D. [University of Massachusetts; Vardoyan, G. [University of Massachusetts; Kettimuthu, R. [Argonne National Laboratory (ANL); Foster, I. [Argonne National Laboratory (ANL); Settlemyer, Bradley [Los Alamos National Laboratory (LANL)

    2016-01-01

    The combination of analytical models and measurements provides practical configurations and parameters to achieve high data transport rates: (a) buffer sizes and numbers of parallel streams for improved memory and file transfer rates, (b) Hamilton and Scalable TCP congestion control modules for memory transfers in place of the default CUBIC, and (c) direct I/O mode on Lustre file systems for wide-area transfers. Conventional parameter selection using full sweeps is impractical in many cases since it takes months. By exploiting the unimodality of throughput profiles, we developed the d-w method, which significantly reduces the number of measurements needed for parameter identification. This heuristic method was effective in practice, reducing the measurements by about 90% for Lustre and XFS file transfers.
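    The abstract does not spell out the d-w method itself, but the underlying idea of exploiting unimodality to cut measurements can be illustrated with a generic stand-in: a ternary search over a discrete parameter (say, the number of parallel streams) needs only O(log n) throughput probes instead of a full sweep. The throughput profile below is hypothetical:

```python
def ternary_search_max(f, lo, hi):
    """Find the maximizer of a strictly unimodal function f on the
    integer range [lo, hi], counting how many probes were needed."""
    cache, calls = {}, 0

    def g(x):
        nonlocal calls
        if x not in cache:
            cache[x] = f(x)
            calls += 1
        return cache[x]

    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if g(m1) < g(m2):
            lo = m1 + 1   # maximum lies to the right of m1
        else:
            hi = m2 - 1   # maximum lies to the left of m2
    best = max(range(lo, hi + 1), key=g)
    return best, calls

# Hypothetical throughput profile peaking at 12 parallel streams
best, calls = ternary_search_max(lambda n: -(n - 12) ** 2, 1, 64)
```

    Here a 64-point sweep is replaced by roughly a dozen probes; in the paper's setting each probe is an expensive wide-area transfer measurement, which is where the ~90% saving comes from.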

  6. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...
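    The average class mutual information metric driving the clustering above can be sketched as follows. This is the textbook class-bigram mutual information (in bits); the paper's exact bookkeeping may differ, and the toy corpus is hypothetical:

```python
import math
from collections import Counter

def average_class_mutual_information(bigrams, word_class):
    """Average class mutual information of a bigram sample.

    bigrams: list of (w1, w2) word pairs observed in a corpus.
    word_class: dict mapping each word to a class id.
    """
    n = len(bigrams)
    joint = Counter((word_class[a], word_class[b]) for a, b in bigrams)
    left, right = Counter(), Counter()
    for (c1, c2), k in joint.items():
        left[c1] += k
        right[c2] += k
    # sum over class pairs: p(c1,c2) * log2( p(c1,c2) / (p(c1) p(c2)) )
    return sum((k / n) * math.log2(k * n / (left[c1] * right[c2]))
               for (c1, c2), k in joint.items())

# Toy corpus where the first word's class perfectly predicts the second's
bigrams = [("a", "x"), ("b", "y"), ("a", "x"), ("b", "y")]
word_class = {"a": 0, "b": 1, "x": 2, "y": 3}
ami = average_class_mutual_information(bigrams, word_class)
```

    A top-down clusterer of the kind described would repeatedly split classes so as to maximize this quantity; perfectly predictive class pairs, as in the toy corpus, yield the maximum of one bit per split.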

  7. Improving CASINO performance for models with large number of electrons

    Energy Technology Data Exchange (ETDEWEB)

    Anton, L; Alfe, D; Hood, R Q; Tanqueray, D

    2009-05-13

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of data related to the electronic orbitals. In addition, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributing the orbital data using MPI or Unix inter-process communication tools, and (2) a second level of parallelism for the configuration computation.

  8. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  9. Performance Evaluation Test of the Orbit Screen Model 68A and the Komplet Model 48-25 Rock Crusher

    Science.gov (United States)

    2008-08-01

    screen and on a Komplet Italia, s.r.l., Model 48-25 rock crusher. The test was conducted during August 2008 at a U.S. Army test site in central... [table-of-contents fragments: Komplet Italia, s.r.l., Model 48-25 Rock Crusher; ASV SR-80 Rubber-Tracked Skid-Steer Loader; Komplet Italia, s.r.l., Rock Crusher Testing; Crushing Performance Test]

  10. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture an abstract performance model of streaming-data real-time embedded systems (RTESs). Unified Modeling Language version 2 (UML2) is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN) model and is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and the platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the OMG profile for Modeling and Analysis of Real-Time and Embedded (MARTE) systems and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates of task response times and of processing element, memory, and on-chip network utilizations, among other information used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines a transformation between UML activity diagrams and streaming-data application workload meta-models and successfully adopts it for RTES performance evaluation.

  11. CLIMBER-2: a climate system model of intermediate complexity. Pt. 1. Model description and performance for present climate

    Energy Technology Data Exchange (ETDEWEB)

    Petoukhov, V.; Ganopolski, A.; Brovkin, V.; Claussen, M.; Eliseev, A.; Kubatzki, C.; Rahmstorf, S.

    1998-02-01

    A 2.5-dimensional climate system model of intermediate complexity, CLIMBER-2, and its performance for present climate conditions are presented. The model consists of modules describing the atmosphere, ocean, sea ice, land surface processes, terrestrial vegetation cover, and the global carbon cycle. The modules interact (on-line) through the fluxes of momentum, energy, water and carbon. The model has a coarse spatial resolution, which nevertheless captures the major features of the Earth's geography. The model describes temporal variability of the system on seasonal and longer time scales. Because the model does not employ any type of flux adjustment and has a fast turnaround time, it can be used to study climates significantly different from the present one and allows long-term (multimillennia) simulations. The constraints for coupling the atmosphere and ocean without flux adjustment are discussed. The results of a model validation against present climate data show that the model successfully describes the seasonal variability of a large set of characteristics of the climate system, including radiative balance, temperature, precipitation, ocean circulation and the cryosphere. (orig.) 62 refs.

  12. On the performance of a generic length scale turbulence model within an adaptive finite element ocean model

    Science.gov (United States)

    Hill, Jon; Piggott, M. D.; Ham, David A.; Popova, E. E.; Srokosz, M. A.

    2012-10-01

    Research into the use of unstructured mesh methods for ocean modelling has been growing steadily in the last few years. One advantage of using unstructured meshes is that one can concentrate resolution where it is needed. In addition, dynamic adaptive mesh optimisation (DAMO) strategies allow resolution to be concentrated when this is required. Despite the advantage that DAMO gives in terms of improving the spatial resolution where and when required, small-scale turbulence in the oceans still requires parameterisation. A two-equation, generic length scale (GLS) turbulence model (one equation for turbulent kinetic energy and another for a generic turbulence length-scale quantity) adds this parameterisation and can be used in conjunction with adaptive mesh techniques. In this paper, an implementation of the GLS turbulence parameterisation is detailed in a non-hydrostatic, finite-element, unstructured mesh ocean model, Fluidity-ICOM. The implementation is validated by comparing to both a laboratory-scale experiment and real-world observations, on both fixed and adaptive meshes. The model performs well, matching laboratory and observed data, with resolution being adjusted as necessary by DAMO. Flexibility in the prognostic fields used to construct the error metric used in DAMO is required to ensure best performance. Moreover, the adaptive mesh models perform as well as fixed mesh models in terms of root mean square error relative to observed or theoretical mixed layer depths, but use fewer elements and hence have a reduced computational cost.
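    The root-mean-square error used above to compare model runs against observed mixed layer depths is the standard RMSE = sqrt(mean((sim - obs)^2)); a minimal sketch with hypothetical depth values:

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between paired simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

# Hypothetical mixed-layer depths (m): model run vs. observations
sim = [10.2, 12.1, 15.0, 18.4]
obs = [10.0, 12.5, 14.6, 18.9]
error_m = rmse(sim, obs)
```

    Computing this metric for both the fixed-mesh and adaptive-mesh runs against the same observations is what supports the paper's claim that the adaptive model matches fixed-mesh accuracy with fewer elements.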

  13. Representing Microbial Dormancy in Soil Decomposition Models Improves Model Performance and Reveals Key Ecosystem Controls on Microbial Activity

    Science.gov (United States)

    He, Y.; Yang, J.; Zhuang, Q.; Wang, G.; Liu, Y.

    2014-12-01

    Climate feedbacks from soils can result from environmental change and the subsequent responses of plant and microbial communities and nutrient cycling. Explicit consideration of microbial life-history traits and strategies may be necessary to predict climate feedbacks due to microbial physiology and community changes and their associated effects on carbon cycling. In this study, we developed an explicit microbial-enzyme decomposition model and examined model performance with and without a representation of dormancy at six temperate forest sites, with observed soil efflux records ranging from 4 to 10 years across different forest types. We then extrapolated the model to all temperate forests in the Northern Hemisphere (25-50°N) to investigate spatial controls on microbial and soil C dynamics. Both models captured the observed soil heterotrophic respiration (RH), yet the no-dormancy model consistently exhibited a large seasonal amplitude and overestimated microbial biomass. Spatially, the total RH from temperate forests amounts to 6.88 Pg C/yr based on the dormancy model and 7.99 Pg C/yr based on the no-dormancy model. The no-dormancy model also notably overestimated the ratio of microbial biomass to SOC. Spatial correlation analysis revealed a key control of the soil C:N ratio on the active proportion of microbial biomass, whereas local dormancy is primarily controlled by soil moisture and temperature, indicating scale-dependent environmental and biotic controls on microbial and SOC dynamics. These developments should provide essential support for modeling future soil carbon dynamics and strengthen the avenue for collaboration between empirical soil experiments and modeling, in the sense that more microbial physiological measurements are needed to better constrain and evaluate the models.

  14. Performance model of a recirculating stack nickel hydrogen cell

    Science.gov (United States)

    Zimmerman, Albert H.

    1994-01-01

    A theoretical model of the nickel hydrogen battery cell has been utilized to describe the chemical and physical changes during charge and overcharge in a recirculating stack nickel hydrogen cell. In particular, the movement of gas and electrolyte has been examined as a function of the amount of electrolyte put into the cell stack during cell activation, and as a function of flooding in regions of the gas screen in this cell design. Additionally, a two-dimensional variation of this model has been utilized to describe the effects of non-uniform loading in the nickel electrode on the movement of gas and electrolyte within the recirculating stack nickel hydrogen cell. The type of non-uniform loading examined here is that associated with higher-than-average loading near the surface of the sintered nickel electrode, a condition present to some degree in many nickel electrodes made by electrochemical impregnation methods. The effects of high surface loading were examined primarily under conditions of overcharge, since it was in the overcharging condition that the greatest effects of non-uniform loading were typically found. The results indicate that significant changes in the capillary forces between cell components occur as the percentage of free volume in the stack filled by electrolyte becomes very high. These changes create large gradients in gas-filled space and oxygen concentration near the boundary between the separator and the hydrogen electrode when the electrolyte fill is much greater than about 95 percent of the stack free volume. At lower electrolyte fill levels, these gas and electrolyte gradients become less extreme and shift through the separator towards the nickel electrode. Similarly, flooding of areas in the gas screen causes higher concentrations of oxygen gas to approach the platinum/hydrogen electrode that is opposite the back side of the nickel electrode. These results illustrate the need for

  15. Operational Street Pollution Model (OSPM) - a review of performed validation studies, and future prospects

    DEFF Research Database (Denmark)

    Kakosimos K.E., Konstantinos E.; Hertel, Ole; Ketzel, Matthias

    2010-01-01

    Traffic emissions constitute a major source of health hazardous air pollution in urban areas. Models describing pollutant levels in urban streets are thus important tools in air pollution management as a supplement to measurements in routine monitoring programmes. A widely used model...... needs are outlined for traffic air pollution modelling in general but with outset in the research performed with OSPM....

  16. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian;

    2015-01-01

    two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...
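    Light use efficiency models of the kind compared in this (truncated) record typically share the core big-leaf formulation GPP = eps_max * f(T) * f(W) * APAR; two-leaf variants apply it separately to sunlit and shaded leaves. A minimal big-leaf sketch with hypothetical parameter values (the actual MOD17 and TL-LUE formulations differ in detail):

```python
def gpp_big_leaf(apar, eps_max, t_scalar, w_scalar):
    """Generic big-leaf light use efficiency formulation:
    GPP = eps_max * f(T) * f(W) * APAR.
    apar: absorbed PAR (MJ m-2 d-1); eps_max: maximum light use
    efficiency (g C MJ-1); t_scalar, w_scalar: temperature and
    moisture down-regulation scalars in [0, 1]. Values hypothetical."""
    return eps_max * t_scalar * w_scalar * apar

# Hypothetical daily values for one site
gpp = gpp_big_leaf(apar=8.0, eps_max=1.2, t_scalar=0.9, w_scalar=0.8)
```

    In a two-leaf model this function would be evaluated twice per time step, with separate APAR and eps_max for the sunlit and shaded leaf fractions, and the results summed.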

  17. A modular ducted rocket missile model for threat and performance assessment

    NARCIS (Netherlands)

    Mayer, A.E.H.J.; Halswijk, W.H.C.; Komduur, H.J.; Lauzon, M.; Stowe, R.A.

    2005-01-01

    A model was developed to predict the thrust of throttled ramjet propelled missiles. The model is called DRCORE and fulfils the growing need to predict the performance of air breathing missiles. Each subsystem of the propulsion unit of this model is coded by using engineering formulae and enables the

  18. Bayesian Comparison of Alternative Graded Response Models for Performance Assessment Applications

    Science.gov (United States)

    Zhu, Xiaowen; Stone, Clement A.

    2012-01-01

    This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…

  19. Beamforming in Ad Hoc Networks: MAC Design and Performance Modeling

    National Research Council Canada - National Science Library

    Fakih, Khalil; Diouris, Jean-Francois; Andrieux, Guillaume

    2009-01-01

    .... Our proposition performs jointly channel estimation and radio resource sharing. We validate the fruitfulness of the proposed MAC and we evaluate the effects of the channel estimation on the network performance...

  20. Performance Appraisals: One Step in a Comprehensive Staff Supervision Model

    Science.gov (United States)

    Kilbourne, Susan

    2007-01-01

    Performance reviews, while stressful, can prepare employees for the next stages of their career. The best performance reviews are those where the supervisor knows the employee's skills and talents and offers suggestions on how to use those talents to develop other areas of job performance and professional growth. In this article, the author…