WorldWideScience

Sample records for model performance model

  1. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics.

  2. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
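
    As a sketch of the kind of quantitative performance measures the paper discusses, the following computes two standard ones, accuracy and AUC, from a fitted logistic model's predicted probabilities; the data values are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical validation sample: observed outcomes and the model's
# predicted probabilities (invented values, not from the study).
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
p_hat = np.array([0.2, 0.6, 0.8, 0.7, 0.3, 0.9, 0.6, 0.1, 0.75, 0.35])

# Accuracy at the conventional 0.5 cut-off.
accuracy = float(np.mean((p_hat >= 0.5) == y_true))

# AUC via its rank interpretation: the probability that a randomly chosen
# event receives a higher predicted probability than a randomly chosen
# non-event (ties count as 0 here).
pos = p_hat[y_true == 1]
neg = p_hat[y_true == 0]
auc = float(np.mean([p > n for p in pos for n in neg]))
```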

  3. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST – 1 show details for our two models (Model A and Model B). Performance of Model A after adding 32 negative datasets from MiRTif to our testing set (MiRecords) ...

  4. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    As Li-S batteries are an emerging technology for various applications, there is a need for a Li-S battery performance model; however, developing such models represents a challenging task due to the batteries' complex ongoing chemical reactions. Therefore, a literature review was performed to summarize the electrical circuit models (ECMs) used for modeling the performance behavior of Li-S batteries. The studied Li-S pouch cell was tested in the laboratory in order to parametrize four basic ECM topologies. These topologies were compared by analyzing their voltage estimation accuracy values, which were obtained for different battery current profiles. Based on these results, the 3 R-C ECM was chosen and the Li-S battery cell discharging performance model with current-dependent parameters was derived and validated.
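
    A minimal sketch of how an R-C equivalent circuit model estimates terminal voltage under a constant discharge current; all parameter values below are assumed for illustration and are not the fitted parameters from the paper.

```python
# Assumed 3 R-C equivalent-circuit parameters (illustrative, not the
# paper's fitted values): open-circuit voltage, series resistance, and
# three parallel R-C branches.
ocv = 2.15                      # open-circuit voltage [V]
r0 = 0.05                       # ohmic (series) resistance [ohm]
r = [0.01, 0.02, 0.03]          # branch resistances [ohm]
c = [100.0, 1000.0, 5000.0]     # branch capacitances [F]

dt, i_load = 1.0, 1.0           # 1 s time step, 1 A constant discharge
v_rc = [0.0, 0.0, 0.0]          # voltage across each R-C branch
for _ in range(600):            # simulate 10 minutes
    for k in range(3):
        # explicit first-order update: dv/dt = I/C - v/(R*C)
        v_rc[k] += dt * (i_load / c[k] - v_rc[k] / (r[k] * c[k]))

# terminal voltage = OCV minus the ohmic and polarization drops
v_term = ocv - i_load * r0 - sum(v_rc)
```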

  5. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  6. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility
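
    One basic performability quantity for a Markov reward model is the steady-state expected reward rate; the sketch below computes it for a toy three-state CTMC with assumed rates and rewards (CSRL model checking itself is considerably richer than this).

```python
import numpy as np

# Toy 3-state availability model: fully up, degraded, down (rates assumed).
Q = np.array([[-0.2, 0.15, 0.05],
              [0.3, -0.5, 0.2],
              [1.0, 0.0, -1.0]])
reward = np.array([1.0, 0.5, 0.0])   # performance level earned per state

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Steady-state expected reward rate ("performability" in the simplest sense).
performability = float(pi @ reward)
```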

  7. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  8. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  9. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem as well as system elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  10. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
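
    The scoring step of such an approach can be sketched with the standard HMM forward algorithm: an "expert" model assigns a log-likelihood to an observation sequence, and a higher likelihood indicates gestures closer to expert behaviour. All model numbers below are invented for illustration.

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm with rescaling: log P(obs | model)."""
    alpha = start * emit[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

# Toy "expert" gesture model (assumed numbers): state 0 = smooth motion,
# state 1 = corrective motion; symbols 0/1 are quantized kinematic features.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])

# An expert-like sequence (mostly smooth) scores higher under the expert
# model than a novice-like sequence (mostly corrective).
ll_expert = log_likelihood([0, 0, 0, 1, 0, 0], start, trans, emit)
ll_novice = log_likelihood([1, 1, 0, 1, 1, 1], start, trans, emit)
```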

  11. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  12. Ion thruster performance model

    International Nuclear Information System (INIS)

    Brophy, J.R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr, and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature
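
    The headline quantity of the model, the beam ion energy cost, follows from the plasma ion energy cost and the fraction of ions extracted into the beam; the sketch below assumes the simple ratio form of that relation (a common reading of this model; the numbers are illustrative, not from the paper).

```python
def beam_ion_energy_cost(plasma_ion_cost, extracted_fraction):
    """Energy cost per beam ion [eV/ion]: the cost of producing an ion in
    the discharge chamber plasma divided by the fraction of those ions
    that are extracted into the beam (assumed ratio form)."""
    return plasma_ion_cost / extracted_fraction

# e.g. 150 eV per plasma ion with half the ions extracted -> 300 eV/beam ion
cost = beam_ion_energy_cost(150.0, 0.5)
```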

  13. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  14. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  15. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  16. Performance of different radiotherapy workload models

    International Nuclear Information System (INIS)

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models, with a mean |observed - predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would be useful for patient scheduling and for describing workloads and case complexity.

  17. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
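
    The core of such a performance model, mapping hourly wind speed through a turbine power curve to electrical output, can be sketched as follows; the power-curve points are invented for illustration and are not SAM data.

```python
import numpy as np

# Illustrative power curve for a single turbine (assumed values):
# wind speed [m/s] -> electrical output [kW], with cut-out at 25 m/s.
ws_points = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 25.0])
kw_points = np.array([0.0, 0.0, 300.0, 1200.0, 1500.0, 1500.0])

def hourly_output(wind_speed):
    """Interpolate output from the power curve; zero beyond cut-out."""
    p = np.interp(wind_speed, ws_points, kw_points)
    return np.where(wind_speed > 25.0, 0.0, p)

wind = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 26.0])  # six sample hours
energy_kwh = float(hourly_output(wind).sum())        # 1 h per sample
```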

  18. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  19. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  20. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
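
    A generic single-bucket model in the spirit of the simple water balance model mentioned above can be sketched as follows; the structure and parameter values are assumed for illustration and are not taken from the study.

```python
import numpy as np

def swbm_like(precip, pet, s_max=200.0, gamma=2.0, alpha=1.0):
    """Single-bucket water balance sketch: runoff fraction and actual
    evapotranspiration both scale with relative soil moisture
    (assumed functional forms and parameters)."""
    s, runoff, soilm = 100.0, [], []
    for p, e in zip(precip, pet):
        w = s / s_max                 # relative soil water content
        q = p * w ** gamma            # saturation-dependent runoff
        et = e * w ** alpha           # moisture-limited evapotranspiration
        s = min(max(s + p - q - et, 0.0), s_max)
        runoff.append(q)
        soilm.append(s)
    return np.array(runoff), np.array(soilm)

precip = np.array([5.0, 0.0, 20.0, 0.0, 10.0])  # daily precipitation [mm]
pet = np.array([3.0, 3.0, 2.0, 4.0, 3.0])       # potential ET [mm]
q, s = swbm_like(precip, pet)
```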

  1. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  2. Work domain constraints for modelling surgical performance.

    Science.gov (United States)

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the fit of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed significant agreement with the model, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.
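
    The fit assessment described, regressing surgical progression through anatomical structures against a polynomial and reporting a coefficient near 0.9, can be sketched with a least-squares polynomial fit; the progression data below are hypothetical.

```python
import numpy as np

# Hypothetical progression data: cumulative index of the anatomical
# structure being worked on vs. normalized surgical time (invented values).
t = np.linspace(0.0, 1.0, 9)
progress = np.array([0.0, 0.9, 2.1, 3.2, 4.1, 5.3, 6.2, 7.4, 8.0])

coeffs = np.polyfit(t, progress, deg=3)   # polynomial model of progression
fitted = np.polyval(coeffs, t)

# Coefficient of determination (R^2) as the goodness-of-fit indicator.
ss_res = np.sum((progress - fitted) ** 2)
ss_tot = np.sum((progress - progress.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```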

  3. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.

  4. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  5. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominating processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into ...
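
    A minimal version of such temporally resolved diagnostics is a performance measure evaluated in a sliding window, here the Nash-Sutcliffe efficiency; the synthetic series below (a model biased only in its second half) is invented for illustration.

```python
import numpy as np

def moving_nse(obs, sim, window):
    """Nash-Sutcliffe efficiency in a sliding window, yielding a time
    series of model performance rather than one global number."""
    out = []
    for i in range(len(obs) - window + 1):
        o, s = obs[i:i + window], sim[i:i + window]
        out.append(1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2))
    return np.array(out)

t = np.arange(60)
obs = np.sin(t / 5.0) + 2.0   # synthetic "observed" series
sim = obs.copy()
sim[30:] += 0.5               # model biased only in the second half
nse = moving_nse(obs, sim, window=10)
# windows before the bias score NSE = 1; windows covering it score poorly
```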

  6. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  7. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  8. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

    Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach for repository performance, based on limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision

  9. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony

  10. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  11. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data is limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.

  12. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  13. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    African Journals Online (AJOL)

    Performance Modelling of Steam Turbine Performance using Fuzzy Logic. Journal of Applied Sciences and Environmental Management. A Fuzzy Inference System for predicting the performance of a steam turbine

  14. Wave and Wind Model Performance Metrics Tools

    Science.gov (United States)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model development and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. For observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials, such as the planned experiment the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of altimeter wind and wave data accumulated over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided detailed performance of the wind and wave models by using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with the netCDF standards applicable to all model output, makes possible the fusion of these data and direct model verification. In addition, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a database.

  15. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  16. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC sponsored LLL assessment procedure for MC and A systems which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance

  17. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
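
The coupling described above (a performance model driven by sampled component failures and repairs) can be illustrated with a toy Monte Carlo sketch. This is not the actual PV-RPM implementation; the exponential failure model, parameter values and function names are all hypothetical:

```python
import random

def simulate_availability(n_years, mttf_years, repair_days, n_runs, seed=1):
    """Toy Monte Carlo reliability sketch (illustrative only): draw
    exponential times-to-failure for a single component, charge a fixed
    repair outage per failure, and average availability over many runs."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_runs):
        t, downtime = 0.0, 0.0
        while True:
            t += rng.expovariate(1.0 / mttf_years)  # time of next failure
            if t >= n_years:
                break
            downtime += repair_days / 365.0         # outage per failure
        fractions.append(1.0 - downtime / n_years)
    return sum(fractions) / n_runs

# hypothetical numbers: 25-year horizon, 10-year MTTF, 2-week repairs
avail = simulate_availability(n_years=25, mttf_years=10, repair_days=14,
                              n_runs=2000)
```

Each realization is one possible future outcome; a probabilistic model in this spirit reports the spread of `fractions` across realizations, not just the mean.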

  18. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  19. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.

  20. Performance modeling of Beamlet

    International Nuclear Information System (INIS)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-01-01

    Detailed modeling of beam propagation in Beamlet has been made to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions

  1. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab

  2. Assessing The Performance of Hydrological Models

    Science.gov (United States)

    van der Knijff, Johan

    The performance of hydrological models is often characterized using the coefficient of efficiency, E. The sensitivity of E to extreme streamflow values, and the difficulty of deciding what value of E should be used as a threshold to identify 'good' models or model parameterizations, have proven to be serious shortcomings of this index. This paper reviews some alternative performance indices that have appeared in the literature. Legates and McCabe (1999) suggested a more generalized form of E, E'(j,B). Here, j is a parameter that controls how much emphasis is put on extreme streamflow values, and B defines a benchmark or 'null hypothesis' against which the results of the model are tested. E'(j,B) was used to evaluate a large number of parameterizations of a conceptual rainfall-runoff model, using 6 different combinations of j and B. First, the effect of j and B is explained. Second, it is demonstrated how the index can be used to explicitly test hypotheses about the model and the data. This approach appears to be particularly attractive if the index is used as a likelihood measure within a GLUE-type analysis.
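
The generalized index described above is commonly written E'(j,B) = 1 − Σ|Oᵢ − Pᵢ|ʲ / Σ|Oᵢ − Bᵢ|ʲ, where O are the observations, P the predictions, and B the benchmark series. The following minimal Python sketch (sample data hypothetical) shows the idea; with j = 2 and B equal to the observed mean it reduces to the classic coefficient of efficiency E:

```python
def generalized_efficiency(obs, pred, j=2.0, benchmark=None):
    """Legates & McCabe style E'(j, B): 1 - sum|O-P|^j / sum|O-B|^j.
    With j=2 and B = mean(obs) this reduces to the classic coefficient
    of efficiency E (Nash-Sutcliffe)."""
    if benchmark is None:
        m = sum(obs) / len(obs)
        benchmark = [m] * len(obs)
    num = sum(abs(o - p) ** j for o, p in zip(obs, pred))
    den = sum(abs(o - b) ** j for o, b in zip(obs, benchmark))
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0, 10.0]   # hypothetical observed flows
pred = [1.1, 1.9, 3.2, 4.1, 8.0]   # hypothetical model output
e2 = generalized_efficiency(obs, pred, j=2.0)  # squared errors: peak-sensitive
e1 = generalized_efficiency(obs, pred, j=1.0)  # absolute errors: less so
```

Lowering j de-emphasizes the flood peak, and supplying a seasonal climatology as `benchmark` turns the score into a test against a non-trivial null hypothesis.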

  3. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up a fraction of about 5 × 10⁻⁴ of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW/t HTGR. (author)

  4. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions

  5. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  6. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
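
The Erlangian queuing analysis mentioned in this record can be illustrated with the textbook M/M/c (Erlang C) delay model, which gives the probability that an arriving request must wait and the mean queueing delay for c parallel service desks. This is a generic sketch, not the paper's layered model, and the traffic numbers are hypothetical:

```python
from math import factorial

def erlang_c(c, lam, mu):
    """M/M/c (Erlang C) sketch: probability an arriving request waits,
    and mean waiting time, for c service desks with Poisson arrivals at
    rate lam and exponential service at rate mu per desk."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # per-desk utilisation; must be < 1
    assert rho < 1, "unstable system: arrivals exceed total service capacity"
    p0_inv = sum(a**k / factorial(k) for k in range(c)) \
             + a**c / (factorial(c) * (1 - rho))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / p0_inv
    w_q = p_wait / (c * mu - lam)  # mean wait in queue
    return p_wait, w_q

# hypothetical load: 4 desks, 30 requests/s arriving, 10 served/s per desk
p_wait, w_q = erlang_c(4, 30.0, 10.0)
```

Sweeping `c` under a fixed resource budget is one simple way to frame the resource-allocation question the paper studies: each extra desk trades delay against cost.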

  7. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability based performance modeling; another prong was to investigate more state-of-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models to a System of Systems analytic environment.

  8. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using a pass rating and a global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed greater improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of their VR status.

  9. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMC) and continuous time Markov chains (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probabilities
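
The steady-state computation described above amounts to solving the balance equations π = πP with Σᵢ πᵢ = 1. A minimal power-iteration sketch for a DTMC follows (assuming an ergodic chain; the two-state up/down example and its rates are hypothetical):

```python
def steady_state(P, iters=10000):
    """Power-iteration sketch for the stationary distribution of a DTMC:
    start from the uniform vector and repeatedly apply pi <- pi * P.
    Assumes the chain is ergodic so the iteration converges."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# hypothetical two-state up/down chain: fail w.p. 0.1, repair w.p. 0.6
P = [[0.9, 0.1],
     [0.6, 0.4]]
pi = steady_state(P)  # long-run fraction of time in each state
```

For this chain the balance equation 0.1·π₀ = 0.6·π₁ gives π = (6/7, 1/7) ≈ (0.857, 0.143); the first component is the long-run availability, a typical performance metric derived this way.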

  10. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to be able to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  11. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a

  12. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  13. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  14. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  15. Driver Performance Model: 1. Conceptual Framework

    National Research Council Canada - National Science Library

    Heimerl, Joseph

    2001-01-01

    ...'. At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  16. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  17. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent development in tropospheric modeling relies on implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on ionosphere-free-based precise point positioning (PPP) solution. We generated 3 year long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, Hopfield model, and the International GNSS Services (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model. The IGS final tropospheric product was used as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the ionosphere-free-based PPP solution convergence and accuracy. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively

  18. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  19. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria are calculated, including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC). With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria

  20. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV-generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. There are other models included in the discussion that are not used by or were not adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  1. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  2. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.
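
    The magnitude-only versus sequence-only distinction above can be made concrete with a small sketch. The decomposition used here (sorting both series to strip ordering information, then attributing the remaining error to sequencing) is an illustrative assumption, not MPESA's actual algorithm:

```python
# Illustrative goodness-of-fit decomposition: combined, magnitude-only,
# and sequence error between observed and modelled series.
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def error_components(observed, modelled):
    combined = rmse(observed, modelled)
    # Sorting both series discards ordering, isolating magnitude error.
    magnitude = rmse(sorted(observed), sorted(modelled))
    # Error not explained by magnitude is attributed to sequencing (timing).
    sequence = math.sqrt(max(combined ** 2 - magnitude ** 2, 0.0))
    return {"combined": combined, "magnitude": magnitude, "sequence": sequence}

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
mod = [1.1, 2.0, 3.1, 4.0, 5.2]
print(error_components(obs, mod))
```

    Here the modelled series has nearly the right magnitudes but in the wrong order, so most of the combined error shows up in the sequence component.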

  3. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    Science.gov (United States)

    Tauxe, J.

    2004-12-01

    A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of potential future risk to human receptors of disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model which supports their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. This does not represent any particular site and is meant to be a generic example.
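
    The probabilistic approach described above can be sketched in a few lines: draw each input parameter from a distribution, run a release-and-dose model per realization, and summarize the output distribution against a performance objective. All distributions, the transport model, and the dose factor below are hypothetical toys, not GoldSim or any real PA:

```python
# Minimal Monte Carlo sketch of a probabilistic performance assessment:
# sample uncertain inputs, propagate through a toy dose model, and
# summarize the resulting dose distribution.
import random
import statistics

random.seed(42)

def one_realization():
    infiltration = random.lognormvariate(-1.0, 0.5)   # m/yr, assumed distribution
    kd = random.lognormvariate(2.0, 0.8)              # sorption coefficient, assumed
    inventory = random.uniform(0.8, 1.2)              # relative inventory, assumed
    release = inventory * infiltration / (1.0 + kd)   # toy release model
    return 25.0 * release                             # toy dose factor (mrem/yr)

doses = sorted(one_realization() for _ in range(10000))
mean_dose = statistics.fmean(doses)
p95 = doses[int(0.95 * len(doses))]
print(f"mean {mean_dose:.3f} mrem/yr, 95th percentile {p95:.3f} mrem/yr")
```

    The right-skewed output is typical: the 95th percentile sits well above the mean, which is exactly the information a deterministic analysis hides.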

  4. Shock circle model for ejector performance evaluation

    International Nuclear Information System (INIS)

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for the prediction of ejector performance at the critical mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant area chamber, a 2D exponential expression for velocity distribution is adopted to approximate the viscous flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and can predict the performance of critical mode operation ejectors much more accurately. The effectiveness of the method is validated by two experimental results reported earlier. The proposed modeling method using two coefficients is shown to produce entrainment ratio, efficiency and coefficient of performance (COP) accurately and much closer to experimental results than those of 1D analysis methods.

  5. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.
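
    The passive-plus-active combination can be illustrated with a deliberately simple estimator: an exponentially weighted moving average seeded from historical logs and refined by a few active probes. The APM project's actual estimation model is more sophisticated; the function name and throughput values below are invented:

```python
# Toy throughput predictor combining passive history with active probes.
def ewma_predict(history, live, alpha=0.3):
    # Seed the estimate from passive historical logs (no extra network load).
    estimate = sum(history) / len(history)
    # Refine with sparing active measurements as they stream in.
    for sample in live:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

history_gbps = [8.1, 7.9, 8.4, 8.0]   # from activity logs
live_gbps = [6.5, 6.8]                # judicious active probes
print(round(ewma_predict(history_gbps, live_gbps), 3))
```

    A couple of live probes pull the historical estimate down toward current conditions without discarding the passive baseline.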

  6. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was development and particularly verification of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die and laboratory experiments performed at Ohio State.

  7. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958...... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic....... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic...

  8. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
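
    The "large variance problem" the authors document can be reproduced on synthetic data: with a small sample, fitting far more effort multipliers than the data supports inflates the variance of the coefficient estimates. The data below are invented, not the paper's cost datasets:

```python
# Demonstrating coefficient-variance inflation when a small cost dataset
# is fit with too many predictors (effort multipliers).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 12, 500                    # small sample, typical of cost data

def slope_variance(n_predictors):
    slopes = []
    for _ in range(trials):
        X = rng.normal(size=(n, n_predictors))
        y = 2.0 * X[:, 0] + rng.normal(scale=2.0, size=n)   # only x0 matters
        coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
        slopes.append(coef[1])          # estimate of the true slope (2.0)
    return float(np.var(slopes))

lean_var = slope_variance(1)           # model sized to the data
bloated_var = slope_variance(8)        # more multipliers than the data supports
print(lean_var, bloated_var)
```

    The bloated model's slope estimates scatter far more widely around the true value, which is one reason the paper argues for rejection rules that prune variables.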

  9. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  10. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
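
    A generic overhead model of the kind described can be sketched as serial time, plus parallel work divided by processor count, plus a per-processor overhead term. The coefficients below are hypothetical placeholders, not TORT's fitted values:

```python
# Toy parallel performance model: T(p) = T_serial + T_parallel/p + c*p.
# The linear overhead term eventually dominates, so speedup saturates
# and then degrades as processors are added.
def predicted_time(p, t_serial=2.0, t_parallel=96.0, c_overhead=0.05):
    return t_serial + t_parallel / p + c_overhead * p

for p in (1, 4, 16):
    print(p, predicted_time(p))
```

    Comparing such predictions against measured timings, as the paper does, pinpoints which overhead term dominates on a given machine.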

  11. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  12. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
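
    The phenomenon of statistical multiplexing that the monograph analyzes can be demonstrated with a toy simulation: the capacity needed to serve N independent on/off sources at a small overflow probability is far below the sum of their peak rates. All parameters below are illustrative:

```python
# Statistical multiplexing demo: N on/off sources, each with peak rate 1
# and activity probability p. Capacity for overflow probability < eps is
# estimated as the (1 - eps)-quantile of the number of active sources.
import random

random.seed(1)
N, p, eps, samples = 100, 0.1, 0.01, 20000

active_counts = sorted(
    sum(random.random() < p for _ in range(N)) for _ in range(samples)
)
capacity_needed = active_counts[int((1 - eps) * samples)]
print(capacity_needed, "vs peak allocation", N)
```

    Even at a 1% overflow target, the required capacity is a small fraction of the peak allocation, which is the economic argument for multiplexed links.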

  13. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information Criteria provides an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
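
    For Gaussian errors, the information criteria compared in the study are simple functions of the residual sum of squares (RSS); the sketch below shows how BIC's heavier penalty can flip a model choice relative to AIC. The RSS values and parameter counts are invented, not the paper's simulation results:

```python
# AIC and BIC from RSS for a model with k parameters fit to n points:
#   AIC = n*ln(RSS/n) + 2k,   BIC = n*ln(RSS/n) + k*ln(n)
import math

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    return n * math.log(rss / n) + k * math.log(n)

n = 100
simple = {"rss": 52.0, "k": 3}    # e.g. a standard error-correction model
complex_ = {"rss": 48.0, "k": 7}  # e.g. a complex asymmetric model

for name, m in (("simple", simple), ("complex", complex_)):
    print(name, round(aic(m["rss"], n, m["k"]), 2), round(bic(m["rss"], n, m["k"]), 2))
```

    With these numbers AIC narrowly prefers the complex model while BIC clearly prefers the simple one, mirroring the study's finding that the criteria differ in how they trade fit against complexity.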

  14. Data Model Performance in Data Warehousing

    Science.gov (United States)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have become increasingly important in organizations that hold large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in those organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs interfaces that remain stable and consistent over a long period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, which has three main tasks: data collection and organization, analysis of data, and interpretation of data. The results, examined by statistical analysis, show that there is no statistically significant difference among the data models used in data warehousing. An organization can therefore utilize any of the four data models proposed when designing and developing a data warehouse.

  15. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance level introduced by a change of business model. The selected case is a small family business undergoing substantial changes in reflection of structural changes in its markets. The authors use the concept of the business model to describe value creation processes within the selected family business and, by contrasting the differences between value creation processes before and after the change, demonstrate the role of the business model as the performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by financial analysis of the business over the period 2006–2012, which shows ROA, ROE and ROS at their lowest levels before the change of business model was introduced and growing after its introduction, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  16. Confirming the Value of Swimming-Performance Models for Adolescents.

    Science.gov (United States)

    Dormehl, Shilo J; Robertson, Samuel J; Barker, Alan R; Williams, Craig A

    2017-10-01

    To evaluate the efficacy of existing performance models to assess the progression of male and female adolescent swimmers through a quantitative and qualitative mixed-methods approach. Fourteen published models were tested using retrospective data from an independent sample of Dutch junior national-level swimmers from when they were 12-18 y of age (n = 13). The degree of association by Pearson correlations was compared between the calculated differences from the models and quadratic functions derived from the Dutch junior national qualifying times. Swimmers were grouped based on their differences from the models and compared with their swimming histories that were extracted from questionnaires and follow-up interviews. Correlations of the deviations from both the models and quadratic functions derived from the Dutch qualifying times were all significant except for the 100-m breaststroke and butterfly and the 200-m freestyle for females (P motivation appeared to be synonymous with higher-level career performance. This mixed-methods approach helped confirm the validity of the models that were found to be applicable to adolescent swimmers at all levels, allowing coaches to track performance and set goals. The value of the models in being able to account for the expected performance gains during adolescence enables quantification of peripheral factors that could affect performance.
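
    The quadratic-benchmark idea can be sketched as follows: fit a quadratic to qualifying times across ages and score a swimmer by the deviations of their personal bests from the curve. The times below are invented, not the Dutch junior national standards:

```python
# Fit a quadratic progression benchmark to age/qualifying-time data and
# score a (hypothetical) swimmer's deviations from it.
import numpy as np

ages = np.array([12, 13, 14, 15, 16, 17, 18])
qual_times = np.array([68.0, 65.5, 63.4, 61.8, 60.6, 59.8, 59.4])  # 100 m, seconds

benchmark = np.poly1d(np.polyfit(ages, qual_times, 2))   # quadratic benchmark curve

swimmer = {14: 64.0, 16: 60.2, 18: 58.9}                 # hypothetical personal bests
deviations = {age: t - float(benchmark(age)) for age, t in swimmer.items()}
print({age: round(d, 2) for age, d in deviations.items()})
```

    Negative deviations mean the swimmer is ahead of the benchmark; tracking the sign and trend of these deviations is how such models support goal setting.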

  17. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
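
    The Monte Carlo error-propagation idea can be sketched with a univariate fit: perturb both the input variable and the response within assumed uncertainties, refit each time, and take the spread of the fitted coefficients as their uncertainty. The paper's DOE models are multivariate; the data and standard deviations below are invented:

```python
# Monte Carlo propagation of input and response uncertainty into the
# uncertainty of a fitted slope coefficient.
import random
import statistics

random.seed(7)
x_true = [1.0, 2.0, 3.0, 4.0, 5.0]
y_true = [2.1, 3.9, 6.2, 7.8, 10.1]   # nominal measurements
sx, sy = 0.05, 0.2                    # assumed input / response std devs

def fit_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

slopes = []
for _ in range(5000):
    xs = [a + random.gauss(0, sx) for a in x_true]
    ys = [b + random.gauss(0, sy) for b in y_true]
    slopes.append(fit_slope(xs, ys))

print(round(statistics.mean(slopes), 3), round(statistics.stdev(slopes), 3))
```

    The standard deviation of the refit slopes is a direct, simulation-based estimate of the coefficient uncertainty, which can then be compared against the regression-based estimate.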

  18. A Water Treatment Case Study for Quantifying Model Performance with Multilevel Flow Modelling

    DEFF Research Database (Denmark)

    Nielsen, Emil Krabbe; Bram, Mads Valentin; Frutiger, Jerome

    2018-01-01

    Decision support systems are a key focus of research on developing control rooms to aid operators in making reliable decisions, and reducing incidents caused by human errors. For this purpose, models of complex systems can be developed to diagnose causes or consequences for specific alarms. Models...... during operation, this work aims to synthesize a procedure to measure model performance according to diagnostic requirements. A simple procedure is proposed for validating and evaluating the concept of Multilevel Flow Modelling. For this purpose, expert statements, dynamic process simulations, and pilot...

  19. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
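
    A scalar toy version of the individualization scheme: treat one performance-model parameter as the Kalman state and update its estimate as each new PVT measurement arrives. The real algorithm is an extended Kalman filter over the full UMP parameter vector; the parameter, noise levels, and measurement model below are all invented for illustration:

```python
# Scalar Kalman-filter sketch of real-time model individualization:
# estimate a subject's performance-decline rate from streaming
# (simulated) PVT measurements.
import random

random.seed(3)
true_rate = 0.8            # hypothetical decline rate (lapses per hour awake)
est, var = 0.3, 1.0        # population prior for the parameter
q, r = 1e-4, 0.5 ** 2      # assumed process and measurement noise

for hour in range(2, 26, 2):                             # PVT every 2 h awake
    measured = true_rate * hour + random.gauss(0, 0.5)   # noisy lapse count
    var += q                                             # predict step
    h = hour                                             # d(measurement)/d(rate)
    k = var * h / (h * var * h + r)                      # Kalman gain
    est += k * (measured - est * h)                      # update step
    var *= (1 - k * h)

print(round(est, 3))
```

    The estimate starts at the population prior and converges toward the subject's true rate as measurements accumulate, which is the trait-learning behavior the paper describes.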

  20. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
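
    The AR-pole feature used by the model can be sketched as: fit an autoregressive model by least squares, take the roots (poles) of the AR polynomial, and average their magnitudes. The synthetic signal below merely stands in for an SEMG epoch; order 5 matches the abstract, but the fitting method and data are assumptions:

```python
# Fit an AR(5) model by least squares and compute the mean magnitude of
# its poles, the feature the study correlates with repetitions to failure.
import numpy as np

rng = np.random.default_rng(11)
# Synthetic stand-in for an SEMG epoch: damped oscillation plus noise.
t = np.arange(512)
x = np.sin(0.3 * t) * np.exp(-t / 400) + 0.1 * rng.normal(size=t.size)

order = 5
# Least-squares AR fit: x[n] ~ sum_k a[k] * x[n-k]
X = np.column_stack([x[order - k - 1 : -k - 1] for k in range(order)])
a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)

# Poles are the roots of z^p - a1*z^(p-1) - ... - ap
poles = np.roots(np.concatenate(([1.0], -a)))
mean_pole_magnitude = float(np.mean(np.abs(poles)))
print(round(mean_pole_magnitude, 3))
```

    Tracking this scalar per epoch across repetitions is the kind of feature series one would regress against the repetition count at failure.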

  1. Modeling the performance of low concentration photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    Reis, F. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Brito, M.C. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); Corregidor, V.; Wemans, J. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Sorasio, G. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Centro Richerche ISCAT, VS Pellico, 12037, Saluzzo (Italy)

    2010-07-15

    A theoretical model has been developed to describe the response of V-trough systems in terms of module temperature, power output and energy yield using as inputs the atmospheric conditions. The model was adjusted to DoubleSun® concentration technology, which integrates a dual-axis tracker and conventional mono-crystalline Si modules. The good agreement between model predictions and the results obtained at the WS Energia laboratory, Portugal, validated the model. It is shown that DoubleSun® technology increases the yearly energy yield of conventional modules by up to 86% relative to a fixed flat-plate system. The model was also used to perform a sensitivity analysis, in order to highlight the relevance of the leading working parameters (such as irradiance) in system performance (energy yield and module temperature). Model results show that the operation module temperature is always below the maximum working temperature defined by the module manufacturers. (author)

  2. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
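
    The two approaches can be contrasted on a toy model f(x, y) = x²y: first-order Taylor propagation of input variances versus Monte Carlo simulation. Input means and standard deviations are illustrative:

```python
# Deterministic (Taylor-series) vs statistical (Monte Carlo) uncertainty
# propagation through a simple nonlinear model.
import random
import statistics

def f(x, y):
    return x ** 2 * y

mx, sx = 3.0, 0.1
my, sy = 2.0, 0.2

# Deterministic approach: var(f) ~ (df/dx)^2 * sx^2 + (df/dy)^2 * sy^2
dfdx = 2 * mx * my          # evaluated at the means
dfdy = mx ** 2
taylor_sd = (dfdx ** 2 * sx ** 2 + dfdy ** 2 * sy ** 2) ** 0.5

# Statistical approach: sample the inputs and propagate through the model.
random.seed(5)
samples = [f(random.gauss(mx, sx), random.gauss(my, sy)) for _ in range(20000)]
mc_sd = statistics.stdev(samples)

print(round(taylor_sd, 3), round(mc_sd, 3))
```

    For mildly nonlinear models with small input uncertainties the two agree closely; they diverge as nonlinearity or input spread grows, which is one reason to keep both in the toolbox.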

  3. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to identify first where and when the model fails. Only by identifying where a model fails can we improve its performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.

  4. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  5. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    Science.gov (United States)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called the NTCM-Klobpar model for single frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and can consequently be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation, using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each, shows that the 24-hour prediction performance of NTCM-Klobpar is better than that of the GPS Klobuchar model in global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m less for NTCM-Klobpar compared to the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single frequency mass market devices with only little software modification.

  6. A water treatment case study for quantifying model performance with multilevel flow modeling

    Directory of Open Access Journals (Sweden)

    Emil K. Nielsen

    2018-05-01

    Full Text Available Decision support systems are a key focus of research on developing control rooms to aid operators in making reliable decisions and reducing incidents caused by human errors. For this purpose, models of complex systems can be developed to diagnose causes or consequences for specific alarms. Models applied in safety systems of complex and safety-critical systems require rigorous and reliable model building and testing. Multilevel flow modeling is a qualitative and discrete method for diagnosing faults and has previously only been validated by subjective and qualitative means. To ensure reliability during operation, this work aims to synthesize a procedure to measure model performance according to diagnostic requirements. A simple procedure is proposed for validating and evaluating the concept of multilevel flow modeling. For this purpose, expert statements, dynamic process simulations, and pilot plant experiments are used for validation of simple multilevel flow modeling models of a hydrocyclone unit for oil removal from produced water. Keywords: Fault Diagnosis, Model Validation, Multilevel Flow Modeling, Produced Water Treatment

  7. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  8. Charge-coupled-device X-ray detector performance model

    Science.gov (United States)

    Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.

    1987-01-01

    A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.

  9. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes

  10. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce from rough rainfall data that short time resolution information. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory as well as the performance of more sophisticated ones is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neuronal Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with a hourly rainfall amount greater than 5mm) were extracted from the 21 years chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. 
The models were also evaluated when applied to different rainfall classes depending on the season first and on the
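The baseline constant disaggregation model used as the benchmark in this study is simple enough to sketch in a few lines; the rainfall values below are invented for illustration, not taken from the Pully series:

```python
import numpy as np

def constant_disaggregation(hourly_mm):
    """Split each hourly rainfall total evenly into six 10-min amounts."""
    hourly_mm = np.asarray(hourly_mm, dtype=float)
    return np.repeat(hourly_mm / 6.0, 6)

# Three illustrative rainy hours (mm), each above the study's 5 mm threshold.
hours = [6.0, 12.0, 9.0]
ten_min = constant_disaggregation(hours)

# Total rainfall mass is conserved, but all sub-hourly variance is lost
# by construction, which is why richer models are compared against it.
assert np.isclose(ten_min.sum(), sum(hours))
print(ten_min)
```

The more sophisticated models in the comparison differ precisely in how they redistribute each hour's total unevenly among the six intervals.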

  11. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  12. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.

  13. Theoretical performance model for single image depth from defocus.

    Science.gov (United States)

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.

  14. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial scale features that tend to be averaged out over longer periods. These small spatial scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs)

  15. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.

  16. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and
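The absolute average fold error (AAFE) used above to score the three absorption models has a standard definition; a minimal Python sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred / obs)|)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 10.0 ** np.mean(np.abs(np.log10(predicted / observed)))

# Illustrative AUC values for three drugs (invented, not from the study).
obs = np.array([100.0, 50.0, 20.0])
pred_exact = obs.copy()
pred_2fold = obs * 2.0          # every prediction off by a factor of 2

print(aafe(pred_exact, obs))    # 1.0 (perfect prediction)
print(aafe(pred_2fold, obs))    # 2.0 (average 2-fold error)
```

An AAFE of 1.3, as reported for GI-Sim, thus means predictions were within about 1.3-fold of the observations on average; because the fold errors are taken as absolute values, over- and underpredictions do not cancel.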

  17. Performance modeling of neighbor discovery in proactive routing protocols

    Directory of Open Access Journals (Sweden)

    Andres Medina

    2011-07-01

    Full Text Available It is well known that neighbor discovery is a critical component of proactive routing protocols in wireless ad hoc networks. However there is no formal study on the performance of proposed neighbor discovery mechanisms. This paper provides a detailed model of key performance metrics of neighbor discovery algorithms, such as node degree and the distribution of the distance to symmetric neighbors. The model accounts for the dynamics of neighbor discovery as well as node density, mobility, radio and interference. The paper demonstrates a method for applying these models to the evaluation of global network metrics. In particular, it describes a model of network connectivity. Validation of the models shows that the degree estimate agrees, within 5% error, with simulations for the considered scenarios. The work presented in this paper serves as a basis for the performance evaluation of remaining performance metrics of routing protocols, vital for large scale deployment of ad hoc networks.

  18. Characterization uncertainty and its effects on models and performance

    International Nuclear Information System (INIS)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization

  19. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
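The mean absolute percentage error used above to compare the artificial neural network against the linear model is straightforward to compute; a minimal sketch with hypothetical start times (not the study's measurements):

```python
import numpy as np

def mape(predicted, observed):
    """Mean absolute percentage error, in percent."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.mean(np.abs((predicted - observed) / observed))

# Illustrative 5 m backstroke start times in seconds (invented values).
observed = np.array([2.10, 2.25, 2.05, 2.40])
ann_pred = np.array([2.11, 2.24, 2.06, 2.41])   # hypothetical ANN output
lin_pred = np.array([2.15, 2.20, 2.12, 2.35])   # hypothetical linear output

print(f"ANN MAPE:    {mape(ann_pred, observed):.2f}%")
print(f"Linear MAPE: {mape(lin_pred, observed):.2f}%")
```

As in the study, a smaller MAPE indicates the model's predicted start times deviate less, in relative terms, from the measured ones.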

  20. Switching performance of OBS network model under prefetched real traffic

    Science.gov (United States)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next generation optical Internet, so it is important to precisely evaluate the performance of the OBS network model. The performance of the OBS network model varies under different conditions, but the most important question is how it performs under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and through this the performance of the OBS network is evaluated. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are doubtful. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. So it is easy to conclude that this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate in evaluating the performance of the OBS network system, and its results are closer to the actual situation.

  1. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages for static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system are discussed.

  2. Proposal for a Method for Business Model Performance Assessment: Toward an Experimentation Tool for Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Antonio Batocchio

    2017-04-01

    Full Text Available The representation of business models has been recently widespread, especially in the pursuit of innovation. However, defining a company’s business model is sometimes limited to discussion and debates. This study observes the need for performance measurement so that business models can be data-driven. To meet this goal, the work proposed as a hypothesis the creation of a method that combines the practices of the Balanced Scorecard with a method of business models representation – the Business Model Canvas. Such a combination was based on study of conceptual adaptation, resulting in an application roadmap. A case study application was performed to check the functionality of the proposition, focusing on startup organizations. It was concluded that based on the performance assessment of the business model it is possible to propose the search for change through experimentation, a path that can lead to business model innovation.

  3. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  4. Modelling Client Satisfaction Levels: The Impact of Contractor Performance

    Directory of Open Access Journals (Sweden)

    Robby Soetanto

    2012-11-01

    Full Text Available The performance of contractors is known to be a key determinant of client satisfaction. Here, using factor analysis, clients' satisfaction is defined in several dimensions. Based on clients' assessment of contractor performance, a number of satisfaction models developed using the multiple regression (MR) technique are presented. The models identify a range of variables encompassing contractor performance, project performance and respondent (i.e. client) attributes as useful predictors of satisfaction levels. Contractor performance attributes were found to be of utmost importance, indicating that client satisfaction levels are mainly dependent on the performance of the contractor. Furthermore, findings suggest that subjectivity is to some extent prevalent in clients' performance assessment. The models demonstrate accurate and reliable predictive power as confirmed by validation tests. Contractors could use the models to help improve their performance, leading to more satisfied clients. This would also promote the development of harmonious working relationships within the construction project coalition.

  5. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate......, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively....... The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler....

  6. Models for Automated Tube Performance Calculations

    International Nuclear Information System (INIS)

    Brunkhorst, C.

    2002-01-01

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance

  7. Data modelling and performance of data base systems

    International Nuclear Information System (INIS)

    Rossiter, B.N.

    1984-01-01

    The three main methods of data modelling, hierarchical, network, and relational are described together with their advantages and disadvantages. The hierarchical model has strictly limited applicability, but the other two are of general use, although the network model in many respects defines a storage structure whilst the relational model defines a logical structure. Because of this, network systems are more difficult to use than relational systems but are easier to tune to obtain efficient performance. More advanced models have been developed to capture more semantic detail, and two of these RM/T and the role model are discussed. (orig.)

  8. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  9. Performance engineering in the community atmosphere model

    International Nuclear Information System (INIS)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-01-01

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years

  10. Enhancing pavement performance prediction models for the Illinois Tollway System

    Directory of Open Access Journals (Sweden)

    Laxmikanth Premkumar

    2016-01-01

    Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and the inclusion of newer pavement types, there was a need to calibrate the existing pavement performance models, as well as develop models for the newer pavement types. This study presents the results of calibrating the existing models and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating their remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management
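A deterioration model of the kind described, projecting a condition rating from pavement age and solving for remaining life, can be sketched with a simple curve fit; the CRS values, quadratic form, and failure threshold of 4.5 below are illustrative assumptions, not Tollway data:

```python
import numpy as np

# Illustrative pavement age (years) vs. condition rating pairs (invented).
age = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
crs = np.array([9.0, 8.6, 8.0, 7.3, 6.5, 5.6, 4.6])

# Fit a quadratic deterioration curve: CRS = a*age**2 + b*age + c.
a, b, c = np.polyfit(age, crs, deg=2)

# Remaining service life: age at which the fitted CRS reaches a
# hypothetical failure threshold of 4.5.
roots = np.roots([a, b, c - 4.5])
failure_age = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(f"predicted failure at {failure_age:.1f} years")
```

In a real calibration exercise the functional form and threshold would come from the agency's own data and policy, and separate curves would be fit per pavement type.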

  11. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Starting from research on the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: Based on the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: An efficient and effective service supply chain model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing different models of development. Originality/value: This paper offers a new definition of the service-oriented catering supply chain, and it provides a model to evaluate the performance of this catering supply chain.
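    As a rough illustration of the fuzzy AHP weighting step, the sketch below uses Buckley's geometric-mean method with triangular fuzzy numbers (l, m, u) and centroid defuzzification. This is one common fuzzy AHP variant; the paper does not specify which variant it applies, so treat the details as assumptions.

```python
import numpy as np

def buckley_fuzzy_weights(M):
    """Buckley's geometric-mean method for a fuzzy AHP comparison matrix.
    M[i][j] is a triangular fuzzy number (l, m, u) comparing criterion i to j.
    Returns crisp, normalised weights after centroid defuzzification."""
    M = np.asarray(M, float)                  # shape (n, n, 3)
    n = M.shape[0]
    # Fuzzy geometric mean of each row, component-wise over (l, m, u).
    r = np.prod(M, axis=1) ** (1.0 / n)       # (n, 3)
    total = r.sum(axis=0)                     # (3,)
    # Fuzzy division: l by the sum of u's, m by m's, u by l's.
    w = r / total[::-1]
    crisp = w.mean(axis=1)                    # centroid of a triangle
    return crisp / crisp.sum()
```

With an all-(1,1,1) matrix (every criterion judged equal) the weights come out uniform, which is a quick sanity check on any implementation.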

  12. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  13. A Procurement Performance Model for Construction Frameworks

    Directory of Open Access Journals (Sweden)

    Terence Y M Lam

    2015-07-01

    Full Text Available Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer term relationships between clients and suppliers in order to improve project outcomes. Research undertaken into highways maintenance, set within a major county council, has confirmed that such collaborative procurement methods can improve the time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers across the whole project delivery process, including the pre- and post-contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central tendency statistics from the questionnaires, as well as content analysis of the interview transcripts, were conducted. It was confirmed that long term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively and collaboratively manage contractor performance through procurement measures, including the use of longer-term contracts and KPIs, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering. It is suggested that building-based projects, or other projects that share its characteristics, be grouped together and used for further research of the phenomena discovered.

  14. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  15. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.
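    The instruction-throughput gains the project targets come from operating on many particles per instruction rather than one at a time. As a loose analogy (this is NumPy, not GeantV code), the sketch below propagates a whole batch of particle positions in one fused array expression, the structure-of-arrays layout that maps naturally onto SIMD lanes:

```python
import numpy as np

def propagate_batch(pos, dirs, steps):
    """Advance many particles at once: pos and dirs are (N, 3) arrays,
    steps is (N,). One vectorised expression replaces a per-particle loop,
    mirroring how SIMD/SIMT hardware processes particles in lock-step."""
    return pos + dirs * steps[:, None]
```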

  16. Performance of GeantV EM Physics Models

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  17. Performance of GeantV EM Physics Models

    CERN Document Server

    Amadio, G; Apostolakis, J; Aurora, A; Bandieramonte, M; Bhattacharyya, A; Bianchini, C; Brun, R; Canal P; Carminati, F; Cosmo, G; Duhem, L; Elvira, D; Folger, G; Gheata, A; Gheata, M; Goulas, I; Iope, R; Jun, S Y; Lima, G; Mohanty, A; Nikitina, T; Novak, M; Pokorski, W; Ribon, A; Seghal, R; Shadura, O; Vallecorsa, S; Wenzel, S; Zhang, Y

    2017-01-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  18. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    Science.gov (United States)

    Harvey, David Benjamin Paul

    A one-dimensional, multi-scale, coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the five layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model, and a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically based experimental performance data; it represents the first stochastic-input-driven unit cell performance model. The model was used to identify the optimal ionomer content of the cathode catalyst layer, demonstrate the role of material variation in potentially low-performing MEA materials, explain the performance of low-Pt-loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
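    The stochastic-input idea is generic enough to sketch: instead of running a deterministic model once at nominal material properties, each property is drawn from its own distribution and the model is run many times, yielding a performance population rather than a single curve. The sketch below is a hedged illustration of that driver loop only; the parameter names and Gaussian distributions are assumptions, not the thesis model.

```python
import random

def stochastic_performance(model, param_dists, n=1000, seed=0):
    """Drive a deterministic performance model with stochastically sampled
    inputs. `param_dists` maps each parameter name to (mean, stddev);
    returns the list of model outputs over n sampled input sets."""
    rng = random.Random(seed)  # seeded for reproducible populations
    results = []
    for _ in range(n):
        params = {name: rng.gauss(mu, sigma)
                  for name, (mu, sigma) in param_dists.items()}
        results.append(model(**params))
    return results
```

Comparing the spread of such a population against statistically characterised experimental data is what distinguishes this validation style from single-point comparison.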

  19. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
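    A minimal analytic hardware model in the same spirit (though far simpler than the SKOPE KNL model) bounds kernel time by compute and memory roofs and reports projection error against a measurement. The function names and units below are assumptions for illustration:

```python
def roofline_time(gflop, gbytes, peak_gflops, bw_gbs):
    """Simplified analytic projection: execution time is limited by either
    the compute roof or the memory-bandwidth roof, whichever dominates.
    Inputs in GFLOP, GB, GFLOP/s, GB/s; returns seconds."""
    return max(gflop / peak_gflops, gbytes / bw_gbs)

def prediction_error(predicted, measured):
    """Relative error of a model projection against a measured runtime,
    the 10-20% figure quoted in validations of such models."""
    return abs(predicted - measured) / measured
```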

  20. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the study on a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of the relevant files, creation and submission of JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  1. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories doing research on LVCSR, including other ARPA contracting sites. Another goal of the consistency modeling project is to attack difficult modeling problems where there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  2. Planetary Suit Hip Bearing Model for Predicting Design vs. Performance

    Science.gov (United States)

    Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar

    2011-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. In order to verify that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit's hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that using solid models of planetary suit hard segments as a performance design tool is feasible.

  3. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING: Daar is ‘n klemverskuiwing vanaf onderhoudsbestuur na batebestuur, waar daar gefokus word op betroubare en operasionele toerusting, asook effektiewe bates teen optimum lewensikluskoste. ‘n Uitdaging in die vervaardigingsindustrie is die ontwikkeling van ‘n prestasiemodel vir bates, wat geïntegreer is met besigheidsprosesse en –strategieë. Die outeurs het die APM2 model ontwikkel om in hierdie behoefte te voorsien. Die model het ‘n generiese verwysingsstruktuur, wat ondersteun word deur operasionele instruksies wat operasionele bestuur bevorder. Dit fasiliteer prestasiebestuur, besigheidsintegrasie en voortdurende verbetering, terwyl dit die industrie ook blootstel aan die nuutste ontwikkelinge in prestasiebestuur van bates.

  4. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
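    The AUC comparison that drives the model selection can be reproduced with a small rank-based estimator. This is a generic Mann-Whitney implementation, not the NHATS analysis code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case (faller) receives a
    higher risk score than a randomly chosen negative case, ties at 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Comparing `auc` for a self-report-only model against a model augmented with performance-test scores is exactly the "marginal additional value" check the abstract describes.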

  5. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
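    The parametric-correspondence idea reduces to learning a map between two model parameter spaces from paired training frames. The sketch below uses regularised least squares as a stand-in for the sparse linear regression the paper found best; all names and the regularisation choice are illustrative assumptions:

```python
import numpy as np

def fit_parameter_mapping(X_src, Y_tgt, lam=1e-3):
    """Learn a linear map W from one face model's parameter space to
    another's, given rows of paired frames (source params -> target params).
    Ridge-regularised normal equations: (X'X + lam*I) W = X'Y."""
    X = np.asarray(X_src, float)
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d),
                        X.T @ np.asarray(Y_tgt, float))
    return W  # transfer a new source frame x as x @ W
```

Once fitted, driving the target model is a single matrix multiply per frame, which is what makes real-time transfer plausible.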

  6. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  7. A model for evaluating the social performance of construction waste management

    International Nuclear Information System (INIS)

    Yuan Hongping

    2012-01-01

    Highlights: ► Scant attention is paid to the social performance of construction waste management (CWM). ► We develop a model for assessing the social performance of CWM. ► With the model, the social performance of CWM can be quantitatively simulated. - Abstract: Existing literature shows that considerable research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to its social performance. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified, and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable in reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects.
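    A system-dynamics model of this kind integrates stocks and flows over time. The toy Euler sketch below is not the paper's iThink model; it only shows the mechanism, with one waste stock, an assumed constant generation inflow, and a recycling outflow proportional to the stock:

```python
def simulate_waste_stock(generation, recycle_rate, steps=12, dt=1.0):
    """Toy stock-and-flow simulation (explicit Euler) of the kind an SD
    tool like iThink solves numerically: accumulated on-site waste rises
    with a generation inflow and falls with a recycling outflow.
    Returns the stock trajectory over `steps` periods."""
    stock, path = 0.0, []
    for _ in range(steps):
        inflow = generation                 # tonnes generated per period
        outflow = recycle_rate * stock      # fraction of the stock recycled
        stock += dt * (inflow - outflow)
        path.append(stock)
    return path
```

A real model would couple several such stocks (waste, public complaints, worker safety) through the feedback loops the abstract mentions.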

  8. Off gas condenser performance modelling

    International Nuclear Information System (INIS)

    Cains, P.W.; Hills, K.M.; Waring, S.; Pratchett, A.G.

    1989-12-01

    A suite of three programmes has been developed to model the ruthenium decontamination performance of a vitrification plant off-gas condenser. The stages of the model are: condensation of water vapour, NOₓ absorption in the condensate, and RuO₄ absorption in the condensate. Juxtaposition of these stages gives a package that may be run on an IBM-compatible desktop PC. Experimental work indicates that the criterion [HNO₂] > 10[RuO₄] used to determine RuO₄ destruction in solution is probably realistic under condenser conditions. Vapour pressures of RuO₄ over aqueous solutions at 70–90 °C are slightly lower than the values given by extrapolating the ln Kₚ vs. T⁻¹ relation derived from lower-temperature data. (author)

  9. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.
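    A cartoon of such an analytical curve: sustained performance at each working-set size is taken from the slowest memory level the set spills into, which produces the characteristic step-shaped HINT curve. The capacities and rates in the test are made-up inputs, not measured hardware specifications:

```python
def hint_curve(sizes, levels):
    """Sketch of an analytic HINT-style performance curve. `levels` is a
    list of (capacity_bytes, sustained_perf) pairs ordered fastest/smallest
    first (e.g. L1, L2, DRAM); each working-set size gets the performance
    of the first level large enough to hold it."""
    perf = []
    for s in sizes:
        p = levels[-1][1]                 # default: slowest (largest) level
        for capacity, level_perf in levels:
            if s <= capacity:
                p = level_perf
                break
        perf.append(p)
    return perf
```

Fitting measured curves with such a model runs the inference in the other direction, recovering effective cache sizes and bandwidths from the benchmark data, as the abstract describes.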

  10. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the large gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realized zakat collection of only around three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and quality of zakat management organizations have to be improved, which means a performance evaluation model is needed as an evaluation tool. The construct of the study is a performance evaluation model that can be implemented by zakat management organizations. Organizational performance evaluation with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO and TQM. This research uses an explanatory method and the data analysis tool SEM/PLS. Data collection techniques are questionnaires, interviews and documentation. The result of this research shows that PI, BO and TQM, simultaneously and partially, have a significant effect on organizational performance.

  11. Team performance modeling for HRA in dynamic situations

    International Nuclear Information System (INIS)

    Shu Yufei; Furuta, Kazuo; Kondo, Shunsuke

    2002-01-01

    This paper proposes a team behavior network model that can simulate and analyze the response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models that describe the context of team performance: a task model, an event model, a team model and a human-machine interface model. Each operator demonstrates aspects of his or her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. Individual human factors, which determine the basis of communication and interaction between individuals, and the cognitive processes of an operator, such as information acquisition, state recognition, decision making and action execution during the development of an event scenario, are modeled. A case of feed-and-bleed operation in a pressurized water reactor under an emergency situation was studied, and the result was compared with an experiment to check the validity of the proposed model.

  12. Human performance modeling for system of systems analytics: soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). Toward this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC), established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex, network-centric SoS. This report presents research and development in the area of HPM in a system of systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  13. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs
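    A linear compartment model is the usual mathematical core of such biospheric transfer codes: inventories move between compartments (soil, water, plants, animals) at first-order rates. The sketch below is a generic explicit-Euler transfer step, not DETRA's actual implementation, and the rate matrix is purely illustrative:

```python
def compartment_step(inv, K, dt):
    """One Euler step of a linear compartment (biospheric transfer) model.
    inv[i] is the inventory (e.g. Bq) in compartment i; K[i][j] is the
    first-order transfer rate (1/time) from compartment i to j.
    Fluxes use the start-of-step inventories, so total mass is conserved."""
    n = len(inv)
    new = list(inv)
    for i in range(n):
        for j in range(n):
            if i != j and K[i][j]:
                flux = K[i][j] * inv[i] * dt
                new[i] -= flux
                new[j] += flux
    return new
```

Chaining such steps over a scenario, with source terms and decay added, yields the terrestrial, aquatic and food-chain pathways the abstract mentions.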

  14. ExaSAT: An exascale co-design tool for performance modeling

    International Nuclear Information System (INIS)

    Unat, Didem; Chan, Cy; Zhang, Weiqun; Williams, Samuel; Bachan, John

    2015-01-01

    One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework’s ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
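    The data-movement metric that the framework emphasises can be sketched as a parameterised byte count per sweep, the kind of closed-form expression such tools extract from loop nests. The stencil parameters below are hypothetical, not taken from the combustion proxy application:

```python
def data_movement_bytes(n_cells, arrays_read, arrays_written, word_bytes=8):
    """Parameterised DRAM-traffic estimate for one sweep over a grid:
    every cell streams each read and written array once (i.e. assuming
    perfect cache reuse within the sweep)."""
    return n_cells * (arrays_read + arrays_written) * word_bytes

def sweep_time(n_cells, arrays_read, arrays_written, bw_bytes_per_s):
    """Memory-bound time projection: traffic divided by sustained bandwidth."""
    return data_movement_bytes(n_cells, arrays_read, arrays_written) / bw_bytes_per_s
```

Because the expression is symbolic in the hardware and software parameters, sweeping bandwidth or cache-blocking assumptions gives the fast design-space feedback the abstract argues for.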

  15. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-01-01

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  16. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  17. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  18. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions.

  19. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

Full Text Available This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet-domain by fitting the estimated wavelet-power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time-domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet-domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time-domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
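The core idea above, scoring a simulation by the distance between wavelet-power spectra rather than between the raw time series, can be sketched in a few lines. The Ricker wavelet, the chosen widths, and the squared-distance objective below are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width a, sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def wavelet_power(series, widths):
    """|CWT|^2 of a 1-D series: one row per wavelet width, one column per time step."""
    power = np.empty((len(widths), len(series)))
    for i, a in enumerate(widths):
        kernel = ricker(min(10 * a, len(series)), a)
        power[i] = np.convolve(series, kernel, mode="same") ** 2
    return power

def wavelet_objective(observed, simulated, widths):
    """Squared distance between the wavelet-power spectra of two discharge series."""
    return float(np.sum((wavelet_power(observed, widths)
                         - wavelet_power(simulated, widths)) ** 2))

# toy check: a simulation with the wrong dominant frequency is penalised
t = np.linspace(0.0, 10.0, 256)
obs = np.sin(2.0 * np.pi * 1.0 * t)
sim = np.sin(2.0 * np.pi * 1.3 * t)   # mis-calibrated frequency
widths = [2, 4, 8, 16]
```

A calibration loop would minimise `wavelet_objective` over the model parameters instead of a time-domain error such as RMSE.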

  20. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)
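The activity-based costing mechanics described above, where overhead pools are charged to programs in proportion to their consumption of activity cost drivers, can be illustrated with a toy calculation; all pools, drivers, and figures below are invented for illustration:

```python
# Minimal activity-based costing sketch (illustrative figures, not from the paper):
# each activity's pool cost is turned into a rate per cost-driver unit, then
# charged to programs in proportion to the driver units each program consumes.

activity_cost = {"registration": 60000.0, "advising": 40000.0}   # annual pool cost
driver_volume = {"registration": 12000, "advising": 2000}        # total driver units

# driver units consumed by each (hypothetical) academic program
usage = {
    "ProgramA": {"registration": 8000, "advising": 1200},
    "ProgramB": {"registration": 4000, "advising": 800},
}

rate = {a: activity_cost[a] / driver_volume[a] for a in activity_cost}

program_cost = {
    p: sum(rate[a] * units for a, units in consumed.items())
    for p, consumed in usage.items()
}
```

Because the programs together exhaust each driver's volume, the allocated costs sum exactly to the overhead pools.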

  1. Modeling and Performance Analysis of Manufacturing Systems in ...

    African Journals Online (AJOL)

    Modeling and Performance Analysis of Manufacturing Systems in Footwear Industry. ... researcher to experiment with different variables and controls the manufacturing process ... In this study Arena simulation software is employed to model and measure ... for Authors · for Policy Makers · about Open Access · Journal Quality.

  2. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    Science.gov (United States)

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  3. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    Science.gov (United States)

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

The distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model with particular focus on model deterministic identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC). The ν estimate was 1.46 (CV 16.1%), compared with ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than the classic model with significantly longer running time.
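The convolution-integral replacement for transit compartments described above can be sketched numerically. The shape ν = 1.46 matches the estimate reported in the abstract, while the mean transit time and input signal are illustrative assumptions:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_delay_kernel(t, nu, mtt):
    """Gamma maturation-delay kernel with shape nu and mean transit time mtt.
    For integer nu this is the Erlang density of a chain of nu transit
    compartments, so the distributed delay and classic models coincide."""
    k = nu / mtt                                    # rate constant
    return (k ** nu) * t ** (nu - 1.0) * np.exp(-k * t) / gamma_fn(nu)

def delayed_output(t, inp, nu, mtt):
    """Discrete approximation of the convolution integral on a uniform grid."""
    dt = t[1] - t[0]
    kernel = gamma_delay_kernel(t, nu, mtt)
    return np.convolve(inp, kernel)[: len(t)] * dt

# illustrative run: a pulse input is delayed and spread by the gamma kernel
t = np.linspace(1e-6, 60.0, 6000)
pulse = np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)       # input pulse around t = 5
out = delayed_output(t, pulse, 1.46, 12.0)          # mtt = 12 is an assumption
```

The kernel integrates to one on a sufficiently long grid, and the output peak lags the input peak, which is the delay behaviour the transit compartments encode.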

  4. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    textabstractObjective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  5. How motivation affects academic performance: a structural equation modelling analysis.

    Science.gov (United States)

    Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G

    2013-03-01

Few studies in medical education have studied the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM) that originates within an individual and Controlled Motivation (CM) that originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort were collected from 383 medical students of VU University Medical Center Amsterdam and their academic performance results were obtained from the student administration. The Structural Equation Modelling analysis technique was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit well with the data, Chi square = 1.095, df = 3, p = 0.778, RMSEA model fit = 0.000. This model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups, as expected. In conclusion, RAM positively correlated with academic performance through deep strategy towards study and higher study effort. This model seems valid in medical education in subgroups such as males, females, and students selected by qualitative and weighted lottery selection.

  6. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  7. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    Lifetime cost and performance (LCP) model assists in assessment of design options for photovoltaic systems. LCP is simulation of performance, cost, and revenue streams associated with photovoltaic power systems connected to electric-utility grid. LCP provides user with substantial flexibility in specifying technical and economic environment of application.

  8. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy by evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.
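For context, the conventional clipped-optimal baseline mentioned above amounts to commanding the actively designed force only when the damper can actually produce it, i.e. when the force would dissipate energy. A minimal sketch, with the sign convention (force applied to the structure, dissipative when it opposes the device's relative velocity) as an assumption:

```python
import numpy as np

def clip_to_dissipative(f_desired, velocity, f_max):
    """Clipped-'optimal' rule for a controllable damper (sketch):
    pass the desired force through only when it is dissipative (f * v < 0,
    an assumed sign convention), otherwise command zero, and saturate the
    magnitude at the device capacity f_max."""
    f = np.where(f_desired * velocity < 0.0, f_desired, 0.0)
    return np.clip(f, -f_max, f_max)
```

The hybrid MPC strategy in the abstract instead encodes this dissipative/non-dissipative switching as binary states inside the optimization, rather than clipping after the fact.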

  9. New model performance index for engineering design of control systems

    Science.gov (United States)

    1970-01-01

    Performance index includes a model representing linear control-system design specifications. Based on a geometric criterion for approximation of the model by the actual system, the index can be interpreted directly in terms of the desired system response model without actually having the model's time response.

  10. A Practical Model to Perform Comprehensive Cybersecurity Audits

    Directory of Open Access Journals (Sweden)

    Regner Sabillon

    2018-03-01

Full Text Available These days organizations are continually facing being targets of cyberattacks and cyberthreats; the sophistication and complexity of modern cyberattacks and the modus operandi of cybercriminals, including Techniques, Tactics and Procedures (TTP), keep growing at unprecedented rates. Cybercriminals are always adopting new strategies to plan and launch cyberattacks based on existing cybersecurity vulnerabilities and exploiting end users by using social engineering techniques. Cybersecurity audits are extremely important to verify that information security controls are in place and to detect weaknesses of inexistent cybersecurity or obsolete controls. This article presents an innovative and comprehensive cybersecurity audit model. The CyberSecurity Audit Model (CSAM) can be implemented to perform internal or external cybersecurity audits. This model can be used to perform single cybersecurity audits or can be part of any corporate audit program to improve cybersecurity controls. Any information security or cybersecurity audit team has the option either to perform a full audit for all cybersecurity domains or to select specific domains to audit certain areas that need control verification and hardening. The CSAM has 18 domains; Domain 1 is specific to Nation States and Domains 2-18 can be implemented at any organization. The organization can be any small, medium or large enterprise; the model is also applicable to any Non-Profit Organization (NPO).

  11. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
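Two of the three performance measures used above are straightforward to compute from scratch for binary outcomes; a minimal sketch follows (calibration curves, the third measure, need binning choices and are omitted):

```python
import numpy as np
from itertools import combinations

def brier_score(y_true, p_hat):
    """Mean squared difference between 0/1 outcomes and predicted probabilities."""
    y_true, p_hat = np.asarray(y_true, float), np.asarray(p_hat, float)
    return float(np.mean((p_hat - y_true) ** 2))

def concordance_index(y_true, p_hat):
    """Fraction of event/non-event pairs ranked correctly (ties count half)."""
    pairs = concordant = 0.0
    for i, j in combinations(range(len(y_true)), 2):
        if y_true[i] == y_true[j]:
            continue                      # only discordant-outcome pairs count
        pairs += 1
        hi = i if y_true[i] > y_true[j] else j   # index with the event
        lo = j if hi == i else i
        if p_hat[hi] > p_hat[lo]:
            concordant += 1
        elif p_hat[hi] == p_hat[lo]:
            concordant += 0.5
    return concordant / pairs
```

Comparing a full-covariate model against a propensity-score-adjusted one then reduces to computing these measures on out-of-fold predictions from each model.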

  12. A Designer’s Guide to Human Performance Modelling (La Modelisation des Performances Humaines: Manuel du Concepteur).

    Science.gov (United States)

    1998-12-01

[Extraction residue from the scanned report: table-of-contents fragments ("... into the Systems Engineering Process", "Validation of HPMs", "Commercialisation of human performance modelling software", "Include human performance in system test") interleaved with body-text fragments noting that inappropriate models/tools should not be offered and that the WG saw design guides as another means of educating designers in the use of models.]

  13. Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.

    Science.gov (United States)

    El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L

    2013-12-01

This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) model and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effect of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than in phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.

  14. Some concepts of model uncertainty for performance assessments of nuclear waste repositories

    International Nuclear Information System (INIS)

    Eisenberg, N.A.; Sagar, B.; Wittmeyer, G.W.

    1994-01-01

Models of the performance of nuclear waste repositories will be central to making regulatory decisions regarding the safety of such facilities. The conceptual model of repository performance is represented by mathematical relationships, which are usually implemented as one or more computer codes. A geologic system may allow many conceptual models, which are consistent with the observations. These conceptual models may or may not have the same mathematical representation. Experiences in modeling the performance of a waste repository (which is, in part, a geologic system) show that this non-uniqueness of conceptual models is a significant source of model uncertainty. At the same time, each conceptual model has its own set of parameters and, usually, it is not possible to completely separate model uncertainty from parameter uncertainty for the repository system. Issues related to the origin of model uncertainty, its relation to parameter uncertainty, and its incorporation in safety assessments are discussed from a broad regulatory perspective. An extended example in which these issues are explored numerically is also provided.

  15. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis and optimize its network performances. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facilities model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective to minimize total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even for small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
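The network-flow formulation described above can be illustrated on a toy single-product instance. The paper's actual model is mixed-integer and multi-period, so this single-period LP with invented costs and capacities is only a sketch (solved here with SciPy rather than CPLEX or LINDO):

```python
from scipy.optimize import linprog

# Toy single-product network: two warehouses each ship over one arc to a
# single customer. Decision variables x = [x1, x2] = units shipped per arc.
cost = [2.0, 3.0]                       # per-unit transport cost on each arc
A_eq = [[1.0, 1.0]]                     # flow-balance row: total shipped ...
b_eq = [10.0]                           # ... must equal customer demand of 10
bounds = [(0.0, 6.0), (0.0, 6.0)]       # arc capacities

res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
# optimum: saturate the cheap arc (6 units), send the remaining 4 on the other
```

The total-cost objective and flow-balance/capacity constraints are exactly the ingredients the abstract describes; a multi-period, multi-facility model adds more variables and balance rows but the same structure.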

  16. Predicting detection performance with model observers: Fourier domain or spatial domain?

    Science.gov (United States)

    Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia

    2016-02-27

The use of a Fourier domain model observer is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated with human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated, including a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images.
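The spatial-domain channelized Hotelling observer mentioned above can be sketched on synthetic white-noise images. The Gaussian channel set, insert contrast, and image size below are assumptions for illustration; the study's CT data and channel choices are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, side = 200, 16
xx, yy = np.meshgrid(np.arange(side) - side / 2.0, np.arange(side) - side / 2.0)
r2 = xx ** 2 + yy ** 2
signal = np.exp(-r2 / 8.0).ravel()            # low-contrast Gaussian insert

# Gaussian channels of increasing radial width (one of many possible channel sets)
channels = np.stack([np.exp(-r2 / (2.0 * s ** 2)).ravel() for s in (1, 2, 4, 8)],
                    axis=1)

absent = rng.normal(0.0, 1.0, (n, side * side))               # noise-only images
present = rng.normal(0.0, 1.0, (n, side * side)) + 0.8 * signal

va, vp = absent @ channels, present @ channels                # channel outputs
S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
w = np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))     # Hotelling template
ta, tp = va @ w, vp @ w                                       # decision statistics
dprime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))
```

Reducing each image to a handful of channel outputs is what keeps the spatial-domain observer tractable; the Hotelling template then operates on the channel covariance rather than the full pixel covariance.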

  17. A human capital predictive model for agent performance in contact centres

    Directory of Open Access Journals (Sweden)

    Chris Jacobs

    2011-10-01

Research purpose: The primary focus of this article was to develop a theoretically derived human capital predictive model for agent performance in contact centres and Business Process Outsourcing (BPO) based on a review of current empirical research literature. Motivation for the study: The study was motivated by the need for a human capital predictive model that can predict agent and overall business performance. Research design: A nonempirical (theoretical) research paradigm was adopted for this study and, more specifically, a theory- or model-building approach was followed. A systematic review of published empirical research articles (for the period 2000-2009) in scholarly search portals was performed. Main findings: Eight building blocks of the human capital predictive model for agent performance in contact centres were identified. Forty-two of the human capital contact centre related articles are detailed in this study. Key empirical findings suggest that person-environment fit, job demands-resources, human resources management practices, engagement, agent well-being, agent competence, turnover intention and agent performance are related to contact centre performance. Practical/managerial implications: The human capital predictive model serves as an operational management model that has performance implications for agents and ultimately influences the contact centre's overall business performance. Contribution/value-add: This research can contribute to the fields of human resource management (HRM), human capital and performance management within the contact centre and BPO environment.

  18. Tree-based flood damage modeling of companies: Damage processes and model performance

    Science.gov (United States)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.

  19. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high......, modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%....

  20. A model for evaluating the social performance of construction waste management.

    Science.gov (United States)

    Yuan, Hongping

    2012-06-01

    Existing literature indicates that considerable research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to investigating the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into a SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable to reflect the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM of the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating effects of management measures on improving the social performance of CWM of construction projects. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming increasingly correlated with national competitiveness, the issue of port performance evaluation has gained significant attention. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port’s performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.
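
    A stochastic treatment of port service levels along these lines can be sketched with a minimal single-berth queue simulation. The arrival and service rates below are illustrative assumptions, not values from the paper, and Python stands in for the MATLAB/Simulink implementation.

```python
import random

def simulate_berth_queue(arrival_rate, service_rate, n_ships, seed=1):
    """Single-berth port queue with exponential interarrival and service
    times (M/M/1). Returns the mean waiting time between a ship's arrival
    and the start of its berthing service."""
    rng = random.Random(seed)
    t_arrival = 0.0
    berth_free_at = 0.0
    waits = []
    for _ in range(n_ships):
        t_arrival += rng.expovariate(arrival_rate)
        start_service = max(t_arrival, berth_free_at)
        waits.append(start_service - t_arrival)
        berth_free_at = start_service + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

# Higher utilisation (arrival rate closer to service rate) -> longer waits,
# a variation a deterministic evaluation would not capture
low_util = simulate_berth_queue(arrival_rate=0.5, service_rate=1.0, n_ships=20000)
high_util = simulate_berth_queue(arrival_rate=0.9, service_rate=1.0, n_ships=20000)
```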

  2. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-system (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that might also affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify, by consensus, the most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  3. CFD modelling of hydrogen stratification in enclosures: Model validation and application to PAR performance

    Energy Technology Data Exchange (ETDEWEB)

    Hoyes, J.R., E-mail: james.hoyes@hsl.gsi.gov.uk; Ivings, M.J.

    2016-12-15

    Highlights: • The ability of CFD to predict hydrogen stratification phenomena is investigated. • Contrary to expectation, simulations on tetrahedral meshes under-predict mixing. • Simulations on structured meshes give good agreement with experimental data. • CFD model used to investigate the effects of stratification on PAR performance. • Results show stratification can have a significant effect on PAR performance. - Abstract: Computational Fluid Dynamics (CFD) models are maturing into useful tools for supporting safety analyses. This paper investigates the capabilities of CFD models for predicting hydrogen stratification in a containment vessel using data from the NEA/OECD SETH2 MISTRA experiments. Further simulations are then carried out to illustrate the qualitative effects of hydrogen stratification on the performance of Passive Autocatalytic Recombiner (PAR) units. The MISTRA experiments have well-defined initial and boundary conditions which makes them well suited for use in a validation study. Results are presented for the sensitivity to mesh resolution and mesh type. Whilst the predictions are shown to be largely insensitive to the mesh resolution they are surprisingly sensitive to the mesh type. In particular, tetrahedral meshes are found to induce small unphysical convection currents that result in molecular diffusion and turbulent mixing being under-predicted. This behaviour is not unique to the CFD model used here (ANSYS CFX) and furthermore, it may affect simulations run on other non-aligned meshes (meshes that are not aligned perpendicular to gravity), including non-aligned structured meshes. Following existing best practice guidelines can help to identify potential unphysical predictions, but as an additional precaution consideration should be given to using gravity-aligned meshes for modelling stratified flows. CFD simulations of hydrogen recombination in the Becker Technologies THAI facility are presented with high and low PAR positions

  4. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all...

  5. A Bibliometric Analysis and Review on Performance Modeling Literature

    Directory of Open Access Journals (Sweden)

    Barbara Livieri

    2015-04-01

    Full Text Available In management practice, performance indicators are considered a prerequisite for making informed decisions in line with the organization’s goals. On the other hand, indicators summarize compound phenomena in a few digits, which can lead to inadequate decisions, biased by information loss and conflicting values. Model-driven approaches in enterprise engineering can be very effective in avoiding these pitfalls, or in keeping them under control. For that reason, “performance modeling” is well placed to play a primary role in the “model driven enterprise” scenario, together with process, information and other enterprise-related aspects. In this perspective, we propose a systematic review of the literature on performance modeling in order to retrieve, classify, and summarize existing research, identify the core authors and define areas and opportunities for future research.

  6. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
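
    As a toy illustration of the calibration idea — inferring an uncertain model parameter from plant measurements instead of treating it as deterministic — the sketch below calibrates one parameter of a hypothetical efficiency model with a grid-evaluated posterior. The model form, parameter values, and noise level are all invented for demonstration; the study itself uses the far richer Kennedy–O'Hagan framework.

```python
import math, random

# Toy plant model: predicted efficiency vs ambient temperature, with one
# uncertain calibration offset theta (all numbers purely illustrative)
def model_efficiency(t_amb, theta):
    return 0.58 - 0.001 * t_amb + theta

rng = random.Random(42)
true_theta, noise_sd = 0.012, 0.004
temps = [5, 10, 15, 20, 25, 30]
measured = [model_efficiency(t, true_theta) + rng.gauss(0, noise_sd)
            for t in temps]

# Grid posterior: Gaussian likelihood times a flat prior over a plausible range
thetas = [i / 10000 for i in range(-100, 301)]   # theta in [-0.01, 0.03]
log_post = []
for th in thetas:
    ll = sum(-0.5 * ((m - model_efficiency(t, th)) / noise_sd) ** 2
             for t, m in zip(temps, measured))
    log_post.append(ll)

m = max(log_post)
weights = [math.exp(lp - m) for lp in log_post]   # normalise for stability
theta_mean = sum(w * th for w, th in zip(weights, thetas)) / sum(weights)
```

The posterior mean recovers the "true" offset to within the measurement noise, and the posterior spread quantifies the remaining parameter uncertainty.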

  7. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essential to properly evaluate its restoring force characteristics under dynamic loading conditions and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain basic information on the dynamic restoring force characteristics and damping performance of shear walls, a dynamic test using a large shaking table, a static displacement-control test and a pseudo-dynamic test were conducted on shear wall models. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  8. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  9. A measurement-based performability model for a multiprocessor system

    Science.gov (United States)

    Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development, from the raw error data to the estimation of cumulative reward, is presented. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simply exponential and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
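
    A minimal reward-model calculation in the spirit described above might look as follows. The states, rates, and reward values are hypothetical, and exponential holding times are assumed for simplicity, whereas the measured data in the paper specifically required a semi-Markov treatment.

```python
import numpy as np

# Hypothetical 3-state performability model: full service, degraded, failed.
# Q is the generator matrix of a continuous-time Markov chain (rates per hour).
Q = np.array([
    [-0.02,  0.015, 0.005],   # full -> degraded / failed
    [ 0.50, -0.51,  0.01],    # degraded -> repaired / failed
    [ 1.00,  0.00, -1.00],    # failed -> repaired
])
# Reward function: service rate earned while residing in each state
reward = np.array([1.0, 0.6, 0.0])

# Steady-state probabilities: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Long-run expected reward rate, a simple performability measure
expected_reward_rate = float(pi @ reward)
```

A cumulative-reward estimate over an interval then follows by multiplying this rate by the mission time, or by transient analysis when steady state is not reached.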

  10. Rising to the challenge : A model of contest performance

    OpenAIRE

    DesAutels, Philip; Berthon, Pierre; Salehi-Sangari, Esmail

    2011-01-01

    Contests are a ubiquitous form of promotion widely adopted by financial services advertisers, yet, paradoxically, academic research on them is conspicuous in its absence. This work addresses this gap by developing a model of contest engagement and performance. Using motivation theory, factors that drive participant engagement are modeled, and engagement's effect on experience and marketing success of the contest specified. Measures of contest performance, in-contest engagement and post-contes...

  11. Application of air pollution dispersion modeling for source-contribution assessment and model performance evaluation at integrated industrial estate-Pantnagar

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, T., E-mail: tirthankaronline@gmail.com [Department of Environmental Science, G.B. Pant University of Agriculture and Technology, Pantnagar, U.S. Nagar, Uttarakhand 263 145 (India); Barman, S.C., E-mail: scbarman@yahoo.com [Department of Environmental Monitoring, Indian Institute of Toxicology Research, Post Box No. 80, Mahatma Gandhi Marg, Lucknow-226 001, Uttar Pradesh (India); Srivastava, R.K., E-mail: rajeevsrivastava08@gmail.com [Department of Environmental Science, G.B. Pant University of Agriculture and Technology, Pantnagar, U.S. Nagar, Uttarakhand 263 145 (India)

    2011-04-15

    Source-contribution assessment of ambient NO₂ concentration was performed at Pantnagar, India through simulation of two urban mathematical dispersion models, namely the Gaussian Finite Line Source Model (GFLSM) and the Industrial Source Complex Model (ISCST-3), and model performance was evaluated. The principal approaches were development of a comprehensive emission inventory, monitoring of traffic density and regional air quality and, conclusively, simulation of the urban dispersion models. Initially, 18 industries were found responsible for emission of 39.11 kg/h of NO₂ through 43 elevated stacks. Further, vehicular emission potential in terms of NO₂ was computed as 7.1 kg/h. Air quality monitoring delineates an annual average NO₂ concentration of 32.6 µg/m³. Finally, GFLSM and ISCST-3 were simulated in conjunction with the developed emission inventories and existing meteorological conditions. Model simulations indicated that the contribution of NO₂ from industrial and vehicular sources was in a range of 45-70% and 9-39%, respectively. Further, statistical analysis revealed satisfactory model performance with an aggregate accuracy of 61.9%. - Research highlights: > Application of dispersion modeling for source-contribution assessment of ambient NO₂. > Inventorization revealed emission from industry and vehicles was 39.11 and 7.1 kg/h. > GFLSM revealed that vehicular pollution contributes a range of 9.0-38.6%. > Source-contribution of 45-70% was found for industrial emission through ISCST-3. > Aggregate performance of both models shows good agreement with an accuracy of 61.9%. - Development of industrial and vehicular inventory in terms of ambient NO₂ for model simulation at Pantnagar, India and model validation revealed satisfactory outcome.

  12. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
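
    A mathematical performance model of this kind typically expresses run time as a computation term that shrinks with the processor count plus a communication term that does not, so that parallel efficiency deteriorates as processors are added. The sketch below uses invented cost coefficients, not the iPSC/860 or Sequent parameters derived in the paper.

```python
def message_passing_time(n, p, t_flop=1e-8, t_msg=1e-4, t_word=1e-6):
    """Hypothetical cost model for a mesh solve on p processors:
    compute scales as n/p; communication adds a fixed message latency
    plus a boundary-exchange term that shrinks more slowly."""
    compute = n / p * t_flop
    communicate = t_msg + (n / p) ** 0.5 * t_word
    return compute + communicate

def efficiency(n, p, time_model):
    """Parallel efficiency: speedup over p processors divided by p."""
    t1 = time_model(n, 1)
    return t1 / (p * time_model(n, p))

# For a fixed mesh size, efficiency deteriorates as processors are added,
# mirroring the behaviour reported for the message passing architecture
effs = [efficiency(10**6, p, message_passing_time) for p in (1, 4, 16, 64)]
```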

  13. Computational Modeling of Human Multiple-Task Performance

    National Research Council Canada - National Science Library

    Kieras, David E; Meyer, David

    2005-01-01

    This is the final report for a project that was a continuation of an earlier, long-term project on the development and validation of the EPIC cognitive architecture for modeling human cognition and performance...

  14. Investigation into the performance of different models for predicting stutter.

    Science.gov (United States)

    Bright, Jo-Anne; Curran, James M; Buckleton, John S

    2013-07-01

    In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. A two-component normal mixture model was chosen with different behaviours of variance; at each locus SR was described with two distributions, both with the same mean. The distributions have difference variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex(®) 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single source profiles were compared to the calculated SR after application of the models. The model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
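
    The preferred model — two normal components sharing a mean but differing in variance — can be fitted with a short EM routine and compared against a single normal by AIC, as sketched below on synthetic "stutter ratio" data. All numerical values are invented for illustration and are not drawn from the profiles analysed in the paper.

```python
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_mixture_same_mean(xs, iters=200):
    """EM for a two-component normal mixture sharing one mean: a narrow
    component for well-behaved observations, a wide one for the rest."""
    mu = sum(xs) / len(xs)
    w, s1, s2 = 0.8, 0.01, 0.05   # initial guesses (hypothetical scales)
    for _ in range(iters):
        r = [w * normal_pdf(x, mu, s1) /
             (w * normal_pdf(x, mu, s1) + (1 - w) * normal_pdf(x, mu, s2))
             for x in xs]
        w = sum(r) / len(xs)
        s1 = math.sqrt(sum(ri * (x - mu) ** 2 for ri, x in zip(r, xs)) / sum(r))
        s2 = math.sqrt(sum((1 - ri) * (x - mu) ** 2 for ri, x in zip(r, xs))
                       / (len(xs) - sum(r)))
    ll = sum(math.log(w * normal_pdf(x, mu, s1) + (1 - w) * normal_pdf(x, mu, s2))
             for x in xs)
    return ll, 4   # log-likelihood, free parameters (mu, w, s1, s2)

def fit_single_normal(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    ll = sum(math.log(normal_pdf(x, mu, math.sqrt(var))) for x in xs)
    return ll, 2

def aic(ll, k):
    return 2 * k - 2 * ll

# Synthetic stutter ratios: mostly well-behaved, a minority with inflated variance
rng = random.Random(0)
data = ([rng.gauss(0.08, 0.01) for _ in range(450)] +
        [rng.gauss(0.08, 0.05) for _ in range(50)])
aic_mix = aic(*fit_mixture_same_mean(data))
aic_one = aic(*fit_single_normal(data))
```

Despite its two extra parameters, the mixture attains the lower AIC on such data, matching the paper's finding that the two-component model systematically outperformed the alternatives.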

  15. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.

    Science.gov (United States)

    Afifi, Akram; El-Rabbany, Ahmed

    2015-06-19

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.

  16. Pavement Pre- and Post-Treatment Performance Models Using LTPP Data

    OpenAIRE

    Lu, Pan; Tolliver, Denver

    2012-01-01

    This paper determines that pavement performance, measured by the International Roughness Index (IRI), is affected by exogenous interventions such as pavement age, precipitation level, freeze-thaw level, and lower level preservation maintenance strategies. An exponential function of pavement age was used to represent pavement IRI performance curves. Moreover, this paper demonstrates a method which calculates short-term post-treatment pavement performance models from maintenance-effect models and pre-treatment performa...
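
    An exponential IRI-versus-age curve with a treatment-induced drop, as described above, can be written down directly. The coefficients below are illustrative assumptions, not the LTPP-calibrated values from the paper.

```python
import math

def iri(age_years, iri0=0.9, growth=0.045):
    """Hypothetical exponential roughness progression: IRI(t) = IRI0 * e^(b*t),
    with IRI in m/km and age in years."""
    return iri0 * math.exp(growth * age_years)

def post_treatment_iri(age_years, treatment_age, iri_drop=0.4, growth=0.045):
    """Short-term post-treatment curve: the maintenance effect lowers IRI by a
    fixed amount, after which exponential deterioration resumes from the
    reduced level (floored at a minimum achievable roughness)."""
    if age_years < treatment_age:
        return iri(age_years)
    reduced = max(iri(treatment_age) - iri_drop, 0.6)
    return reduced * math.exp(growth * (age_years - treatment_age))
```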

  17. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    Energy Technology Data Exchange (ETDEWEB)

    No, Young Gyu; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2015-10-15

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as Aux. feedwater pump, high pressure safety injection (HPSI) pump, etc. using smart sensing models based on AI techniques. In this work, the smart sensing model is developed at first to predict the performance of Aux. feedwater pump by estimating flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with the simulation data of the MARS code for station blackout (SBO) events. In this study, the smart sensing model for the prediction performance of Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm is a way to find a function that can well express a dependent variable from independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of Aux.
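
    The core of GMDH is fitting quadratic "partial descriptions" of pairs of input variables by least squares and chaining the best of them into layers. A single partial description can be sketched as follows; the pump variables and coefficients are invented for illustration and are not the MARS-code data used in the study.

```python
import numpy as np

def gmdh_partial(x1, x2, y):
    """Least-squares fit of one GMDH 'partial description':
    y ~ a + b*x1 + c*x2 + d*x1*x2 + e*x1^2 + f*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

rng = np.random.default_rng(3)
# Hypothetical inputs: pump speed and discharge pressure; target: flowrate
speed = rng.uniform(0.5, 1.0, 200)
pressure = rng.uniform(0.2, 0.8, 200)
flow = (2.0 * speed - 0.5 * pressure + 0.3 * speed * pressure
        + rng.normal(0, 0.01, 200))

coeffs, fitted = gmdh_partial(speed, pressure, flow)
rmse = float(np.sqrt(np.mean((fitted - flow) ** 2)))
```

A full GMDH network would fit such descriptions for every input pair, keep the best-performing ones on held-out data, and feed their outputs into the next layer.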

  18. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    International Nuclear Information System (INIS)

    No, Young Gyu; Seong, Poong Hyun

    2015-01-01

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as Aux. feedwater pump, high pressure safety injection (HPSI) pump, etc. using smart sensing models based on AI techniques. In this work, the smart sensing model is developed at first to predict the performance of Aux. feedwater pump by estimating flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with the simulation data of the MARS code for station blackout (SBO) events. In this study, the smart sensing model for the prediction performance of Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm is a way to find a function that can well express a dependent variable from independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of Aux

  19. Development of a Generic Performance Measurement Model in an Emergency Department

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Lundager Forberg, Jakob

    , and the use of triage. All of the mentioned initiatives are new and not well validated to date. It would be desirable to enable measurement of each initiative's effects. The goal of this PhD project was to develop a performance measurement model for EDs. The new model comprises only the most important performance measures that provide an estimate for overall ED performance levels. Furthermore, a thorough analysis of the interdependencies between the included performance measures was conducted in order to gain deeper knowledge of the ED as a system. The model enables monitoring of how well the ED performs over time, including how performance is impacted by the various initiatives. In the end, the developed model will be an important management tool to meet the management’s vision of providing the best possible care for the acute patient while achieving the highest possible utilisation of resources.

  20. Data harmonization and model performance

    Science.gov (United States)

    The Joint Committee on Urban Storm Drainage of the International Association for Hydraulic Research (IAHR) and International Association on Water Pollution Research and Control (IAWPRC) was formed in 1982. The current committee members are (no more than two from a country): B. C. Yen, Chairman (USA); P. Harremoes, Vice Chairman (Denmark); R. K. Price, Secretary (UK); P. J. Colyer (UK), M. Desbordes (France), W. C. Huber (USA), K. Krauth (FRG), A. Sjoberg (Sweden), and T. Sueishi (Japan).The IAHR/IAWPRC Joint Committee is forming a Task Group on Data Harmonization and Model Performance. One objective is to promote international urban drainage data harmonization for easy data and information exchange. Another objective is to publicize available models and data internationally. Comments and suggestions concerning the formation and charge of the Task Group are welcome and should be sent to: B. C. Yen, Dept. of Civil Engineering, Univ. of Illinois, 208 N. Romine St., Urbana, IL 61801.

  1. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents the service-oriented catering supply chain model based on the platform of logistics and information. At last, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  2. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  3. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  4. Modelling and measurement of a moving magnet linear compressor performance

    International Nuclear Information System (INIS)

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified
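The 'P–V' loop comparison above rests on a standard calculation: the indicated work per cycle is the area enclosed by the pressure-volume loop. A minimal sketch, using a synthetic elliptical loop rather than measured compressor data:

```python
import math

def pv_loop_work(P, V):
    """Indicated work per cycle: the area enclosed by the P-V loop,
    evaluated as a trapezoidal contour integral of P dV."""
    W = 0.0
    n = len(P)
    for i in range(n):
        j = (i + 1) % n
        W += 0.5 * (P[i] + P[j]) * (V[j] - V[i])
    return W

# Synthetic elliptical loop (illustrative, not measured data):
# volume swings around 2e-5 m^3, pressure around 3e5 Pa.
N = 360
V = [1e-5 * (2.0 + math.cos(2.0 * math.pi * i / N)) for i in range(N)]
P = [1e5 * (3.0 + math.sin(2.0 * math.pi * i / N)) for i in range(N)]
print(abs(pv_loop_work(P, V)))  # ellipse area, close to pi * 1e-5 * 1e5
```

The same integral applied to the reconstructed cylinder pressure gives the indicated power when multiplied by the operating frequency.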

  5. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, the stability condition, CPU utilization of interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
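The livelock behavior described above can be illustrated with a much cruder model than the paper's Markov analysis: if interrupt handling preempts protocol processing, delivered throughput collapses once interrupts saturate the CPU. The rates and per-packet costs below are hypothetical.

```python
def effective_throughput(lam, t_int, t_proto):
    """Delivered packet rate under interrupt-driven receipt (generic
    sketch, not the paper's exact model).
    lam: packet arrival rate (pkt/s); t_int: per-packet interrupt
    handling time (s); t_proto: per-packet protocol processing time (s).
    Interrupts preempt protocol processing, so only the leftover CPU
    fraction does useful work."""
    util_int = min(1.0, lam * t_int)     # CPU fraction spent in interrupts
    cpu_left = 1.0 - util_int            # fraction left for the stack
    return min(lam, cpu_left / t_proto)  # can't deliver more than arrives

# Throughput rises, plateaus, then collapses toward livelock as load grows:
for lam in (10_000, 50_000, 100_000, 200_000):
    print(lam, round(effective_throughput(lam, 5e-6, 10e-6)))
```

Note the non-monotone shape: beyond the saturation point, offered load only steals more CPU from protocol processing.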

  6. A refined index of model performance: a rejoinder

    Science.gov (United States)

    Legates, David R.; McCabe, Gregory J.

    2013-01-01

    Willmott et al. [Willmott CJ, Robeson SM, Matsuura K. 2012. A refined index of model performance. International Journal of Climatology, forthcoming. DOI: 10.1002/joc.2419.] recently suggested a refined index of model performance (dr) that they purport to be superior to other methods. Their refined index ranges from −1.0 to 1.0 to resemble a correlation coefficient, but it is merely a linear rescaling of our modified coefficient of efficiency (E1) over the positive portion of the domain of dr. We disagree with Willmott et al. (2012) that dr provides a better interpretation; rather, E1 is more easily interpreted, such that a value of E1 = 1.0 indicates a perfect model (no errors) while E1 = 0.0 indicates a model that is no better than the baseline comparison (usually the observed mean). Negative values of E1 (and, correspondingly, values of dr below 0.5) indicate a model that performs worse than the baseline, a point discussed previously by Legates and McCabe [Legates DR, McCabe GJ. 1999. Evaluating the use of "goodness-of-fit" measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.] and by Schaefli and Gupta [Schaefli B, Gupta HV. 2007. Do Nash values have value? Hydrological Processes 21: 2075-2080. DOI: 10.1002/hyp.6825.]. This important discussion focuses on the appropriate baseline comparison to use, and why the observed mean often may be an inadequate choice for model evaluation and development.
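The rescaling relationship at issue can be checked directly from the published definitions of E1 (Legates and McCabe, 1999) and dr (Willmott et al., 2012): over the region where dr is non-negative, dr = (E1 + 1)/2. A short sketch with made-up observation and prediction series:

```python
def e1(obs, pred):
    """Modified coefficient of efficiency (Legates & McCabe 1999),
    based on absolute rather than squared deviations."""
    obar = sum(obs) / len(obs)
    err = sum(abs(o - p) for o, p in zip(obs, pred))
    ref = sum(abs(o - obar) for o in obs)
    return 1.0 - err / ref

def dr(obs, pred, c=2.0):
    """Refined index of agreement (Willmott et al. 2012), with the
    scaling constant c = 2 used in the paper."""
    obar = sum(obs) / len(obs)
    err = sum(abs(p - o) for o, p in zip(obs, pred))
    ref = c * sum(abs(o - obar) for o in obs)
    if err <= ref:
        return 1.0 - err / ref
    return ref / err - 1.0

# Made-up series, purely to exercise the formulas:
obs  = [2.0, 3.0, 5.0, 7.0, 8.0]
pred = [2.5, 2.5, 5.5, 6.5, 8.5]
# Where dr >= 0, dr is a linear rescaling of E1:
assert abs(dr(obs, pred) - (e1(obs, pred) + 1.0) / 2.0) < 1e-12
print(e1(obs, pred), dr(obs, pred))
```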

  7. Modelling performance of a small array of Wave Energy Converters: Comparison of Spectral and Boussinesq models

    International Nuclear Information System (INIS)

    Greenwood, Charles; Christie, David; Venugopal, Vengatesan; Morrison, James; Vogler, Arne

    2016-01-01

    This paper presents results from numerical simulations of three Oscillating Wave Surge Converters (OWSCs) using two different computational models, the Boussinesq wave (BW) and spectral wave (SW) models of the commercial software suite MIKE. The simulation of a shallow water wave farm applies alternative methods for implementing a frequency-dependent absorption in both the BW and SW models, where energy extraction is based on experimental data from a scaled Oyster device. The effect of including wave diffraction within the SW model is tested using diffraction smoothing steps and various directional wave conditions. The results of this study reveal important information on the models' realms of validity, which depend heavily on the incident sea state and on the removal of diffraction from the SW model; removing diffraction entirely yields an increase in simulation accuracy for far-field disturbances. This highlights specific conditions where the BW and SW models may thrive, but also regions where reduced performance is observed. The results presented in this paper have not been validated against wave device array performance at a real sea site; however, the methodology described would be useful to device developers in arriving at preliminary decisions on array configurations and minimising negative environmental impacts.

  8. DMFC performance and methanol cross-over: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-10-15

    A combined experimental and modelling approach is proposed to analyze methanol cross-over and its effect on DMFC performance. The experimental analysis is performed in order to allow an accurate investigation of methanol cross-over influence on DMFC performance, hence measurements were characterized in terms of uncertainty and reproducibility. The findings suggest that methanol cross-over is mainly determined by diffusion transport and affects cell performance partly via methanol electro-oxidation at the cathode. The modelling analysis is carried out to further investigate methanol cross-over phenomenon. A simple model evaluates the effectiveness of two proposed interpretations regarding methanol cross-over and its effects. The model is validated using the experimental data gathered. Both the experimental analysis and the proposed and validated model allow a substantial step forward in the understanding of the main phenomena associated with methanol cross-over. The findings confirm the possibility to reduce methanol cross-over by optimizing anode feeding. (author)

  9. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  10. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

    This paper presents a neural network method to model nanometer FinFET performance. The principle of the method is first introduced, and its application to modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that the method requires no parameter extraction routine, while its prediction of transistor performance has a small relative error, within 1%, compared with measured data; thus the new method is as accurate as the physics-based surface potential model.

  11. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    This article presents an efficient method to capture an abstract performance model of streaming-data real-time embedded systems (RTESs). Unified Modeling Language version 2 (UML2) is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN) model and is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and the platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Real-time and Embedded (MARTE) systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates of task response times and of processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines a transformation between UML activity diagrams and streaming-data application workload meta-models and successfully adopts it for RTES performance evaluation.

  12. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models the marketing strategy-performance relationship.

  13. Modelling fuel cell performance using artificial intelligence

    Science.gov (United States)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened, covering such endeavours as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network has been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the results of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effect on the effectiveness of the performance modelling, are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed.

  14. Modelling fuel cell performance using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Ogaji, S.O.T.; Singh, R.; Pilidis, P.; Diacakis, M. [Power Propulsion and Aerospace Engineering Department, Centre for Diagnostics and Life Cycle Costs, Cranfield University (United Kingdom)

    2006-03-09

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened, covering such endeavours as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network has been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the results of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effect on the effectiveness of the performance modelling, are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed. (author)

  15. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  16. A multilateral modelling of Youth Soccer Performance Index (YSPI)

    Science.gov (United States)

    Bisyri Husin Musawi Maliki, Ahmad; Razali Abdullah, Mohamad; Juahir, Hafizan; Abdullah, Farhana; Ain Shahirah Abdullah, Nurul; Muazu Musa, Rabiu; Musliha Mat-Rasid, Siti; Adnan, Aleesha; Azura Kosni, Norlaila; Muhamad, Wan Siti Amalina Wan; Afiqah Mohamad Nasir, Nur

    2018-04-01

    This study aims to identify the most dominant factors influencing the performance of soccer players and to predict group performance for soccer players. A total of 184 youth soccer players from a Malaysian sports school and six soccer academies served as respondents of the study. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were computed to identify the most dominant factors, reducing the initial 26 parameters using the recommended factor loading of >0.5. Group performance was then predicted with a regression model. CFA revealed that sit and reach, vertical jump, VO2max, age, weight, height, sitting height, calf circumference (cc), medial upper arm circumference (muac), maturation, bicep, triceps, subscapular, suprailiac, and 5 m, 10 m, and 20 m speed were the most dominant factors. Further index analysis formed the Youth Soccer Performance Index (YSPI), categorizing players into three groups: high, moderate, and low. The regression model for this study was significant at p < 0.001, with R2 = 0.8222, indicating that the model explains 82% of the variance in the set of variables. The parameters contributing significantly to the prediction of the YSPI are discussed. In conclusion, the precision of prediction models that integrate multilateral factors is promising for identifying potential soccer players and, hopefully, for fostering competitive soccer games.
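The reported R2 of 0.8222 is the usual coefficient of determination. As a reminder of the computation (with illustrative numbers, not the study's data):

```python
def r_squared(y, y_hat):
    """Coefficient of determination: the share of variance in y
    explained by the predictions y_hat."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))  # residual SS
    ss_tot = sum((a - ybar) ** 2 for a in y)              # total SS
    return 1.0 - ss_res / ss_tot

# Illustrative index values and regression predictions; an R^2 of
# 0.8222 as in the study would mean the regression accounts for
# about 82% of the variance in the performance index.
y     = [10.0, 12.0, 14.0, 16.0, 18.0]
y_hat = [10.5, 11.5, 14.5, 15.5, 18.5]
print(round(r_squared(y, y_hat), 3))
```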

  17. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  18. A personality trait-based interactionist model of job performance.

    Science.gov (United States)

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  19. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation through dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models like COST-based models, 3GPP SCM, WINNER, and ITU have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performance of advanced signal processing techniques over real-like channels. Given that existing channels are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  20. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems, errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  1. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems, errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  2. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  3. Modelling of Box Type Solar Cooker Performance in a Tropical ...

    African Journals Online (AJOL)

    Thermal performance model of box type solar cooker with loaded water is presented. The model was developed using the method of Funk to estimate cooking power in terms of climatic and design parameters for box type solar cooker in a tropical environment. Coefficients for each term used in the model were determined ...

  4. Statistical modelling of networked human-automation performance using working memory capacity.

    Science.gov (United States)

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
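Of the three approaches compared above, classical linear regression is the simplest. A minimal one-predictor least-squares sketch, with hypothetical WM-capacity and accuracy values (the study's actual models also include task load and network message quality):

```python
def ols_fit(x, y):
    """Ordinary least-squares fit of y ~ b0 + b1 * x (one predictor)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx            # slope
    b0 = ybar - b1 * xbar     # intercept
    return b0, b1

# Hypothetical data: working-memory capacity score vs. supervisory
# task accuracy (not the experiment's measurements).
wm  = [2.0, 3.0, 4.0, 5.0, 6.0]
acc = [0.55, 0.62, 0.71, 0.78, 0.84]
b0, b1 = ols_fit(wm, acc)
print(round(b0, 3), round(b1, 3))
```

The Gaussian-process and Bayesian-network approaches trade this closed-form simplicity for robustness beyond the observed conditions and for probabilistic inference over unknowns.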

  5. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
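The formalism's core step, maximizing Shannon's entropy subject to the known constraints, can be sketched for a discrete support with a prescribed mean: the solution is exponential-family, p_i proportional to exp(lam * x_i), with the multiplier lam chosen to satisfy the constraint. The support and target mean below are illustrative.

```python
import math

def maxent_mean(values, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution over discrete support `values`
    subject to a prescribed mean. The Lagrange multiplier lam is
    found by bisection, since the mean is monotone in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean constrained to the midpoint of the support, the
# maximum-entropy answer reduces to the uniform distribution:
p = maxent_mean([1.0, 2.0, 3.0], 2.0)
print([round(x, 6) for x in p])
```

With additional constraints (e.g. a known variance or bounds) the same Lagrangian construction yields other familiar families, which is how distributions such as the beta in the abstract arise.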

  6. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life-cycle cost analysis procedure, which includes both the agency cost and road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model was described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and to identify the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index

  7. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves to examine the robustness of the predictive power of these factors.
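The modelling approach can be sketched as a default-probability logit on firm-level and macroeconomic covariates, scored by the area under the ROC curve. The covariate names follow the factors listed in the abstract, but the data and coefficients below are synthetic stand-ins, not the 10 349-observation Taiwanese sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: firm-level (asset growth) plus macro (GDP growth)
# covariates and a default indicator drawn from an assumed true logit.
rng = np.random.default_rng(1)
n = 2000
asset_growth = rng.normal(0.05, 0.10, n)
gdp_growth = rng.normal(0.03, 0.02, n)
logit = -2.0 - 8.0 * asset_growth - 30.0 * gdp_growth   # assumed true model
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([asset_growth, gdp_growth])
model = LogisticRegression().fit(X, default)
auc = roc_auc_score(default, model.predict_proba(X)[:, 1])
# An AUC well above 0.5 indicates the covariates carry discriminatory power,
# which is what the paper's robustness checks examine.
```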

  8. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min and thereby decrease pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including stormwater management model, soil water atmosphere and plant, SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow, and delay of peak flow, however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.
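The retention behaviour summarized above can be sketched with a minimal single-event water balance: runoff begins once the growth medium's available storage is filled. All parameter values below are illustrative placeholders, not figures from the reviewed studies.

```python
# Minimal event water balance for a green roof (illustrative values only).
def event_runoff(rain_mm, storage_capacity_mm, initial_storage_mm):
    """Return (runoff_mm, final_storage_mm) for a single precipitation event."""
    available = storage_capacity_mm - initial_storage_mm
    runoff = max(0.0, rain_mm - available)
    final_storage = min(storage_capacity_mm, initial_storage_mm + rain_mm)
    return runoff, final_storage

runoff, store = event_runoff(rain_mm=30.0, storage_capacity_mm=25.0,
                             initial_storage_mm=5.0)
retention_pct = 100 * (1 - runoff / 30.0)   # fraction of rainfall retained
```

Antecedent conditions enter through `initial_storage_mm`, which is why evapotranspiration between events matters so much for seasonal performance.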

  9. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

Every individual channels information differently, based on their preference for a sensory modality or representational system (visual, auditory or kinesthetic), the one we tend to favor most being our primary representational system (PRS). Therefore, some of us access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch); which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and may be summed and averaged for a generalization of behavior(s). However, without a basic understanding of how the behavior was predicated on a decision-making strategy, predictive models are inefficient in their analysis of the means by which behavior was generated. What is seen is the end result

  10. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

Brown-VanHoozer, S. A.

    1999-07-23

Every individual channels information differently, based on their preference for a sensory modality or representational system (visual, auditory or kinesthetic), the one we tend to favor most being our primary representational system (PRS). Therefore, some of us access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch); which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and may be summed and averaged for a generalization of behavior(s). However, without a basic understanding of how the behavior was predicated on a decision-making strategy, predictive models are inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  11. A review of performance measurement’s maturity models

    Directory of Open Access Journals (Sweden)

    María Paula Bertolli

    2017-01-01

Full Text Available Introduction: In a context as dynamic as today's, SMEs need performance measurement systems (PMS) that are able to generate useful, relevant and reliable information for management. Measuring the maturity of a PMS is an essential step in its evolution toward an ideal state that allows better control of results and the ability to act on them, improving management and decision making. Objective: To develop a bibliographic review to identify and characterize PMS maturity models, recognizing among them the models most feasible to apply in SMEs, in order to contribute to the strengthening of such systems and facilitate effective and timely decision making in organizations. Methodology: The research question defined is: which existing PMS maturity model can be used by industrial SMEs? The Google Scholar database was consulted, using certain search parameters. Based on a previously defined set of criteria, the selected models are compared. Finally, conclusions about these models are elaborated. Results: From the results obtained through the bibliographic search in Google Scholar, different criteria were used to select the models to be characterized and compared. The four models selected were those proposed by Wettstein and Kueng, Van Aken, Tangen and Aho. Conclusions: The models considered most adequate are those proposed by Wettstein and Kueng (2002) and Aho (2012), due to their easy application and low resource requirements. However, as these models do not include an evaluation tool, one has to be defined by the company.

  12. A cost-performance model for ground-based optical communications receiving telescopes

    Science.gov (United States)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
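The trade the abstract describes can be sketched as a cost power law against an aperture-gain performance measure. The exponent (~2.7) is a commonly cited historical scaling for ground-based telescope cost; the coefficient and all numbers below are assumed placeholders, not the values fitted in the paper.

```python
import math

def telescope_cost(diameter_m, k=1.0e6, exponent=2.7):
    """Rough capital cost (arbitrary currency units) vs. primary diameter."""
    return k * diameter_m ** exponent

def aperture_gain_db(diameter_m, ref_diameter_m=1.0):
    """Collected-signal gain relative to a reference aperture (area ratio in dB)."""
    return 20 * math.log10(diameter_m / ref_diameter_m)

# Doubling the diameter buys ~6 dB of signal at ~6.5x the cost: the kind of
# trade a cost-versus-communication-performance curve captures.
for d in (1.0, 2.0, 4.0):
    print(d, round(telescope_cost(d) / 1e6, 2), round(aperture_gain_db(d), 1))
```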

  13. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  14. Performance evaluation of Maxwell and Cercignani-Lampis gas-wall interaction models in the modeling of thermally driven rarefied gas transport

    KAUST Repository

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2013-01-01

    A systematic study on the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key

  15. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  16. Analysis report for WIPP colloid model constraints and performance assessment parameters

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul E.; Sassani, David Carl

    2014-03-01

    An analysis of the Waste Isolation Pilot Plant (WIPP) colloid model constraints and parameter values was performed. The focus of this work was primarily on intrinsic colloids, mineral fragment colloids, and humic substance colloids, with a lesser focus on microbial colloids. Comments by the US Environmental Protection Agency (EPA) concerning intrinsic Th(IV) colloids and Mg-Cl-OH mineral fragment colloids were addressed in detail, assumptions and data used to constrain colloid model calculations were evaluated, and inconsistencies between data and model parameter values were identified. This work resulted in a list of specific conclusions regarding model integrity, model conservatism, and opportunities for improvement related to each of the four colloid types included in the WIPP performance assessment.

  17. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1992-01-01

Recently several articles have been published that question the appropriateness of the distribution coefficient, Rd, concept to quantify radionuclide migration. Several distinct issues are raised by various critics concerning the modeling of nuclide retardation. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing what attributes that affect adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass-action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution ratio concept, the adequacy of transport calculations that rely on constant retardation factors and the status of incorporating sophisticated adsorption models into transport codes. 86 refs., 1 fig., 1 tab

  18. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  19. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  20. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in a food industry. This sector has high significance in the economy of Brazil. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet their needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  1. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

Full Text Available The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics), and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (a multilayer perceptron with three neurons in a single hidden layer). The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach for the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports
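The architecture named in the abstract, a feed-forward multilayer perceptron with a single hidden layer of three neurons, can be sketched directly in scikit-learn. The predictor variables below are invented stand-ins for the kinanthropometric and functional measures, not the study's actual test battery.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 138 "swimmers", 4 standardized predictors, and an
# assumed linear-plus-noise relation to event time in seconds.
rng = np.random.default_rng(42)
X = rng.normal(size=(138, 4))
y = 150 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 138)

net = MLPRegressor(hidden_layer_sizes=(3,),   # one hidden layer, 3 neurons
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)
r2 = net.score(X, y)   # in-sample fit; the study reports <0.8% error
```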

  2. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

Full Text Available Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. The integration of the package in NewAge-JGrass uses other components, such as geographic information system tools, to manage input–output processes, and automatic calibration algorithms to estimate model parameters. The system was applied for a case study in Calabria (Italy) along the Salerno–Reggio Calabria highway, between Cosenza and Altilia. The area is extensively subject to rainfall-induced shallow landslides mainly because of its complex geology and climatology. The analysis was carried out considering all the combinations of the eight optimized indices and the three models. Parameter calibration, verification, and model performance assessment were performed by a comparison with a detailed landslide
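The pixel-by-pixel verification step can be sketched as a confusion-matrix computation over binary susceptibility maps. The four indices below are common choices and stand in for the paper's own set of eight, whose exact definitions may differ.

```python
import numpy as np

# Flattened binary maps: 1 = predicted/observed unstable pixel, 0 = stable.
def gof_indices(predicted, observed):
    tp = np.sum((predicted == 1) & (observed == 1))
    tn = np.sum((predicted == 0) & (observed == 0))
    fp = np.sum((predicted == 1) & (observed == 0))
    fn = np.sum((predicted == 0) & (observed == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "true_positive_rate": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "critical_success_index": tp / (tp + fp + fn),
    }

pred = np.array([1, 1, 0, 0, 1, 0, 0, 1])
obs = np.array([1, 0, 0, 0, 1, 1, 0, 1])
scores = gof_indices(pred, obs)
```

Optimizing model parameters against different indices generally selects different "best" models, which is why the paper evaluates all index-model combinations.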

  3. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  4. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 
One
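The prompt flux drop after a beam trip that the abstract contrasts with delayed-neutron behaviour can be sketched with one-group point kinetics plus an external source. All parameter values below are generic illustrations, not SIMMER or KIN3D inputs.

```python
from scipy.integrate import solve_ivp

# One-group point kinetics for a source-driven subcritical core. Parameters:
# reactivity rho, delayed fraction beta, generation time Lam, precursor decay
# constant lam, external source S (all illustrative placeholders).
rho, beta, Lam, lam = -0.03, 0.007, 1e-5, 0.08
S = 1.0e5

def rhs(t, y, source):
    n, c = y
    dn = (rho - beta) / Lam * n + lam * c + source
    dc = beta / Lam * n - lam * c
    return [dn, dc]

# Steady state with the beam on: rho/Lam * n0 + S = 0  =>  n0 = -S*Lam/rho
n0 = -S * Lam / rho
c0 = beta * n0 / (Lam * lam)          # precursor equilibrium

# Beam trip: the source switches off and the flux falls promptly toward the
# level sustained by delayed neutrons alone (a stiff transient, hence Radau).
sol = solve_ivp(rhs, (0.0, 1.0), [n0, c0], args=(0.0,),
                method="Radau", dense_output=True)
n_after_trip = sol.sol(0.01)[0]       # well below n0, but not zero
```

Because the single amplitude equation carries no spatial information, a fixed-shape point model cannot capture the flux-shape changes near the source during such trips, which is the limitation the abstract's correction-factor technique addresses.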

  5. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
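A minimal SVR sketch of this approach follows. The features mimic the control variables named in the abstract (reagent dose, pH, residence time), but the data and the assumed response model are synthetic, not the laboratory set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data for flotation deinking performance.
rng = np.random.default_rng(7)
n = 120
reagent = rng.uniform(0.1, 1.0, n)          # assumed dose, arbitrary units
ph = rng.uniform(7.0, 10.0, n)
residence = rng.uniform(2.0, 12.0, n)       # minutes
# Assumed response: ink removal efficiency (%) with mild noise.
y = 40 + 30 * reagent + 2 * (ph - 7) + 1.5 * residence + rng.normal(0, 2, n)

X = np.column_stack([reagent, ph, residence])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(X, y)
r2 = model.score(X, y)
```

Scaling the inputs before the RBF kernel matters here because the three control variables live on very different ranges.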

  6. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
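The core of such a protocol, scoring a candidate model against a trivial reference, can be sketched as follows. Persistence (the forecast for t+1 is the value at t) is a standard reference model; the power series and the candidate model below are synthetic illustrations.

```python
import numpy as np

def nmae(forecast, measured, capacity):
    """Normalized mean absolute error, in % of installed capacity."""
    return 100 * np.mean(np.abs(forecast - measured)) / capacity

# Synthetic power output of a hypothetical 20 MW farm.
rng = np.random.default_rng(3)
t = np.linspace(0, 12, 200)
power = 10 + 8 * np.sin(t) + rng.normal(0, 0.3, 200)     # MW

persistence = power[:-1]                          # reference forecast for t+1
candidate = power[1:] + rng.normal(0, 0.2, 199)   # hypothetical good model

score_ref = nmae(persistence, power[1:], capacity=20.0)
score_cand = nmae(candidate, power[1:], capacity=20.0)
improvement = 100 * (1 - score_cand / score_ref)  # % improvement over persistence
```

Reporting improvement over the same reference model is what makes scores from different sites and prediction systems comparable.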

  7. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    Science.gov (United States)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA

  8. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  9. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  10. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1991-10-01

    Recently several articles have been published that question the appropriateness of the distribution coefficient, Rd, concept to quantify radionuclide migration. Several distinct issues are raised by various critics. In this paper I provide some perspective on issues surrounding the modeling of nuclide retardation. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing what attributes that affect adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass-action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution ratio concept, the adequacy of transport calculations that rely on constant retardation factors and the status of incorporating sophisticated adsorption models into transport codes
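The constant Rd model discussed above reduces, in transport codes, to a single retardation factor that uniformly slows a nuclide relative to the groundwater. The sketch below uses the standard saturated-porous-medium form; the density, porosity and Rd values are generic illustrations, not site data.

```python
# Constant-Rd retardation: R = 1 + (bulk density / porosity) * Rd.
def retardation_factor(rd_ml_per_g, bulk_density_g_cm3=1.6, porosity=0.3):
    """Retardation factor for saturated porous-medium transport."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * rd_ml_per_g

def nuclide_velocity(water_velocity_m_yr, rd_ml_per_g):
    """Effective nuclide transport velocity under the constant-Rd model."""
    return water_velocity_m_yr / retardation_factor(rd_ml_per_g)

print(round(retardation_factor(10.0), 1))   # 54.3: Rd = 10 mL/g slows ~54x
print(nuclide_velocity(1.0, 0.0))           # 1.0: non-sorbing tracer
```

The critiques the abstract reviews amount to the observation that a single Rd cannot respond to changing chemistry along the flow path, which the mass-action and surface-complexation models accommodate.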

  11. Modeling electrochemical performance in large scale proton exchange membrane fuel cell stacks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., NM (United States); Lalk, T R [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering; Appleby, A J [Center for Electrochemical Studies and Hydrogen Research, Texas Engineering Experimentation Station, Texas A and M Univ., College Station, TX (United States)

    1998-02-01

    The processes, losses, and electrical characteristics of a Membrane-Electrode Assembly (MEA) of a Proton Exchange Membrane Fuel Cell (PEMFC) are described. In addition, a technique for numerically modeling the electrochemical performance of a MEA, developed specifically to be implemented as part of a numerical model of a complete fuel cell stack, is presented. The technique of calculating electrochemical performance was demonstrated by modeling the MEA of a 350 cm{sup 2}, 125 cell PEMFC and combining it with a dynamic fuel cell stack model developed by the authors. Results from the demonstration that pertain to the MEA sub-model are given and described. These include plots of the temperature, pressure, humidity, and oxygen partial pressure distributions for the middle MEA of the modeled stack as well as the corresponding current produced by that MEA. The demonstration showed that models developed using this technique produce results that are reasonable when compared to established performance expectations and experimental results. (orig.)
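
    The MEA losses described above are often summarized by a generic polarization curve combining activation (Tafel), ohmic, and concentration terms. The sketch below uses that textbook form, not the paper's MEA sub-model; every constant is illustrative.

```python
import math

# Generic PEMFC polarization curve: open-circuit voltage minus activation
# (Tafel), ohmic, and concentration losses. NOT the paper's sub-model;
# all constants below are illustrative only.

def cell_voltage(i, E_oc=1.2, b=0.05, i0=1e-3, r=0.2, m=3e-5, n=8.0):
    """Cell voltage [V] at current density i [A/cm^2]."""
    return E_oc - b * math.log(i / i0) - r * i - m * math.exp(n * i)

# Rough stack power for a geometry like the modeled one (125 cells of
# 350 cm^2), assuming every cell behaves identically.
i = 0.5  # A/cm^2
stack_power = 125 * cell_voltage(i) * i * 350.0  # [W]
```

    The temperature, pressure, and humidity distributions computed by the stack model would, in a real implementation, modulate these loss terms cell by cell rather than leaving them constant.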

  12. A strategic management model for evaluation of health, safety and environmental performance.

    Science.gov (United States)

    Abbaspour, Majid; Toutounchian, Solmaz; Roayaei, Emad; Nassiri, Parvin

    2012-05-01

    Strategic health, safety, and environmental management system (HSE-MS) involves systematic and cooperative planning in each phase of the lifecycle of a project to ensure that interaction among the industry group, client, contractor, stakeholder, and host community exists with the highest level of health, safety, and environmental standard performances. Therefore, it seems necessary to assess the HSE-MS performance of contractor(s) by a comparative strategic management model with the aim of continuous improvement. The present Strategic Management Model (SMM) has been illustrated by a case study and the results show that the model is a suitable management tool for decision making in a contract environment, especially in oil and gas fields and based on accepted international standards within the framework of management deming cycle. To develop this model, a data bank has been created, which includes the statistical data calculated by converting the HSE performance qualitative data into quantitative values. Based on this fact, the structure of the model has been formed by defining HSE performance indicators according to the HSE-MS model. Therefore, 178 indicators have been selected which have been grouped into four attributes. Model output provides quantitative measures of HSE-MS performance as a percentage of an ideal level with maximum possible score for each attribute. Defining the strengths and weaknesses of the contractor(s) is another capability of this model. On the other hand, this model provides a ranking that could be used as the basis for decision making at the contractors' pre-qualification phase or during the execution of the project.

  13. SR 97 - Alternative models project. Discrete fracture network modelling for performance assessment of Aberg

    International Nuclear Information System (INIS)

    Dershowitz, B.; Eiben, T.; Follin, S.; Andersson, Johan

    1999-08-01

    As part of studies into the siting of a deep repository for nuclear waste, Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modeling approaches for geosphere performance assessment for a single hypothetical site. The hypothetical site, arbitrarily named Aberg is based on parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The Aberg model domain, boundary conditions and canister locations are defined as a common reference case to facilitate comparisons between approaches. This report presents the results of a discrete fracture pathways analysis of the Aberg site, within the context of the SR 97 performance assessment exercise. The Aberg discrete fracture network (DFN) site model is based on consensus Aberg parameters related to the Aespoe HRL site. Discrete fracture pathways are identified from canister locations in a prototype repository design to the surface of the island or to the sea bottom. The discrete fracture pathways analysis presented in this report is used to provide the following parameters for SKB's performance assessment transport codes FARF31 and COMP23: * F-factor: Flow wetted surface normalized with regards to flow rate (yields an appreciation of the contact area available for diffusion and sorption processes) [TL⁻¹]. * Travel Time: Advective transport time from a canister location to the environmental discharge [T]. * Canister Flux: Darcy flux (flow rate per unit area) past a representative canister location [LT⁻¹]. In addition to the above, the discrete fracture pathways analysis in this report also provides information about: additional pathway parameters such as pathway length, pathway width, transport aperture, reactive surface area and transmissivity, percentage of canister locations with pathways to the surface discharge, spatial pattern of pathways and pathway discharges, visualization of pathways, and statistical
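
    The pathway parameters listed above can be illustrated with simple segment-by-segment bookkeeping: advective travel time accumulates L·W·e/Q over the segments of a pathway, and the F-factor accumulates the flow-wetted surface (two fracture faces, 2·L·W) per flow rate. This is a generic sketch of those definitions with invented segment values, not SKB's FARF31/COMP23 code.

```python
# Generic bookkeeping for discrete-fracture pathway parameters of the kind
# listed in the abstract. Common definitions: travel time t = sum(L*W*e/Q),
# F-factor F = sum(2*L*W/Q) over pathway segments. Segment values below are
# hypothetical; this is not the report's actual DFN code.

segments = [
    # length L [m], width W [m], transport aperture e [m], flow rate Q [m^3/yr]
    (12.0, 1.0, 1e-4, 0.05),
    (30.0, 0.5, 5e-5, 0.02),
]

travel_time = sum(L * W * e / Q for L, W, e, Q in segments)    # [yr]
f_factor    = sum(2.0 * L * W / Q for L, W, e, Q in segments)  # [yr/m]
```

    A long travel time and a large F-factor both favor retention, since they increase the contact time and contact area available for diffusion and sorption.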

  14. SR 97 - Alternative models project. Discrete fracture network modelling for performance assessment of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Dershowitz, B.; Eiben, T. [Golder Associates Inc., Seattle (United States); Follin, S.; Andersson, Johan [Golder Grundteknik KB, Stockholm (Sweden)

    1999-08-01

    As part of studies into the siting of a deep repository for nuclear waste, Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modeling approaches for geosphere performance assessment for a single hypothetical site. The hypothetical site, arbitrarily named Aberg is based on parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The Aberg model domain, boundary conditions and canister locations are defined as a common reference case to facilitate comparisons between approaches. This report presents the results of a discrete fracture pathways analysis of the Aberg site, within the context of the SR 97 performance assessment exercise. The Aberg discrete fracture network (DFN) site model is based on consensus Aberg parameters related to the Aespoe HRL site. Discrete fracture pathways are identified from canister locations in a prototype repository design to the surface of the island or to the sea bottom. The discrete fracture pathways analysis presented in this report is used to provide the following parameters for SKB's performance assessment transport codes FARF31 and COMP23: * F-factor: Flow wetted surface normalized with regards to flow rate (yields an appreciation of the contact area available for diffusion and sorption processes) [TL{sup -1}]. * Travel Time: Advective transport time from a canister location to the environmental discharge [T]. * Canister Flux: Darcy flux (flow rate per unit area) past a representative canister location [LT{sup -1}]. In addition to the above, the discrete fracture pathways analysis in this report also provides information about: additional pathway parameters such as pathway length, pathway width, transport aperture, reactive surface area and transmissivity, percentage of canister locations with pathways to the surface discharge, spatial pattern of pathways and pathway discharges, visualization of pathways, and

  15. Modeling and analysis to quantify MSE wall behavior and performance.

    Science.gov (United States)

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  16. Approach to modeling of human performance for purposes of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Swain, A.D.

    1983-01-01

    This paper describes the general approach taken in NUREG/CR-1278 to model human performance in sufficient detail to permit probabilistic risk assessments of nuclear power plant operations. To show the basis for the more specific models in the above NUREG, a simplified model of the human component in man-machine systems is presented, the role of performance shaping factors is discussed, and special problems in modeling the cognitive aspect of behavior are described.
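
    The role of performance shaping factors can be sketched numerically: in THERP-style human reliability analysis a nominal human error probability (HEP) is scaled by PSF multipliers, and subtask HEPs combine into a task failure probability. The sketch below shows only that arithmetic; all numeric values are illustrative, not entries from the NUREG/CR-1278 tables.

```python
# Sketch in the spirit of THERP-style human reliability analysis: a nominal
# human error probability (HEP) is adjusted by performance-shaping-factor
# (PSF) multipliers, and independent subtask HEPs combine into a task
# failure probability. All numbers are illustrative.

def adjusted_hep(nominal_hep, psf_multipliers):
    hep = nominal_hep
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)  # a probability cannot exceed 1

def task_failure_prob(heps):
    """P(at least one subtask fails), assuming independent subtasks."""
    p_success = 1.0
    for p in heps:
        p_success *= (1.0 - p)
    return 1.0 - p_success

heps = [adjusted_hep(1e-3, [2.0, 5.0]),  # e.g. high stress, poor interface
        adjusted_hep(3e-3, [1.0])]
p_fail = task_failure_prob(heps)
```

    Dependence between subtasks, which THERP treats explicitly, would raise this independent-subtask estimate.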

  17. Comparative study of turbulence model performance for axisymmetric sudden expansion flow

    International Nuclear Information System (INIS)

    Bae, Youngmin; Kim, Young In; Kim, Keung Koo; Yoon, Juhyeon

    2013-01-01

    In this study, the performance of turbulence models in predicting the turbulent flow in an axisymmetric sudden expansion with an expansion ratio of 4 is assessed for a Reynolds number of 5.6 × 10⁴. The comparisons show that the standard k-ε and RSM models provide the best agreement with the experimental data, whereas the standard k-ω model gives poor predictions. Owing to its computational efficiency, the Reynolds Averaged Navier-Stokes (RANS) approach has been widely used for the prediction of turbulent flows and associated pressure losses in a variety of internal flow systems such as a diffuser, orifice, converging nozzle, and pipes with sudden expansion. However, the lack of a general turbulence model often leads to limited applications of a RANS approach, i.e., the accuracy and validity of solutions obtained from RANS equations vary with the turbulence model, flow regime, near-wall treatment, and configuration of the problem. In light of the foregoing, a large amount of turbulence research has been conducted to assess the performance of existing turbulence models for different flow fields. In this paper, the turbulent flow in an axisymmetric sudden expansion is numerically investigated for a Reynolds number of 5.6 × 10⁴, with the aim of examining the performance of several turbulence models.
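
    A classical sanity check for the pressure losses such RANS predictions are judged against is the Borda-Carnot relation for a sudden expansion: the irreversible loss is ½ρ(V1 − V2)², with V2 fixed by continuity. The numbers below are illustrative, not conditions from the study.

```python
# Borda-Carnot estimate of the irreversible pressure loss across a sudden
# expansion: dP_loss = 0.5 * rho * (V1 - V2)^2, with V2 = V1 * A1 / A2 from
# continuity. Fluid properties and velocities are illustrative only.

rho = 998.0        # density [kg/m^3] (water)
V1 = 1.0           # inlet velocity [m/s]
area_ratio = 4.0   # A2 / A1

V2 = V1 / area_ratio
dp_loss = 0.5 * rho * (V1 - V2) ** 2  # [Pa]
```

    A turbulence model that badly mispredicts the recirculation zone behind the step will also miss this loss, which is one reason sudden-expansion flow is a popular benchmark.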

  18. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  19. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performances for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, so-called commercially THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A fully mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that has the ability to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, dips, etc. For model validation, the measurements of the free dynamic response per unit voltage and passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal that there is a good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic

  20. An integrated radar model solution for mission level performance and cost trades

    Science.gov (United States)

    Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia

    2017-05-01

    A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR and D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models together using commercial off-the-shelf software, ModelCenter, for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models, and with the aid of subject matter experts (SMEs) understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.

  1. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  2. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs

  3. FARMLAND: Model description and evaluation of model performance

    Energy Technology Data Exchange (ETDEWEB)

    Attwood, C; Fayers, C; Mayall, A; Brown, J; Simmonds, J R [National Radiological Protection Board, Chilton (United Kingdom)

    1996-09-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs.

  4. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and it is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario. However, it is based upon an earlier model designed for calculating relations between released amounts of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of released radioactivity from routine operation of Swedish nuclear power plants (Bergstroem och Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting with deposition values. 14 refs, 16 figs, 7 tabs
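
    The compartment theory the model is built on reduces to a linear system dC/dt = K·C, where the off-diagonal entries of K are transfer rates between compartments. A minimal two-compartment sketch, with hypothetical rates (not ECOPATH's actual compartments or parameters):

```python
# Minimal first-order compartment model: dC/dt = K @ C, where off-diagonal
# entries of K are transfer rates between compartments and each column sums
# to zero (so total activity is conserved). Two compartments and all rate
# values are hypothetical; this is not ECOPATH itself.

def step(C, K, dt):
    """One forward-Euler step of dC/dt = K @ C (plain lists, no numpy)."""
    n = len(C)
    return [C[i] + dt * sum(K[i][j] * C[j] for j in range(n)) for i in range(n)]

k12, k21 = 0.2, 0.05  # transfer rates 1 -> 2 and 2 -> 1 [1/d]
K = [[-k12,  k21],
     [ k12, -k21]]
C = [1.0, 0.0]        # initial inventory: all activity in compartment 1
for _ in range(1000): # integrate 10 days with dt = 0.01 d
    C = step(C, K, dt=0.01)
```

    Because the columns of K sum to zero, total activity is conserved at every step; the inventory relaxes toward the equilibrium ratio k21 : k12 between the two compartments. Radioactive decay would add an equal loss term to every diagonal entry.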

  5. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-117 prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented

  6. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance; no set differed from another by more than 0.3 log units in RMSE or 0.7 in MCC. Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information leads to a small but consistent improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that

  7. Bayesian Comparison of Alternative Graded Response Models for Performance Assessment Applications

    Science.gov (United States)

    Zhu, Xiaowen; Stone, Clement A.

    2012-01-01

    This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…

  8. Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula

    Science.gov (United States)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the rainy season of mainland Portugal. The heavy to extremely heavy rainfall periods were due to several low surface pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location is included (RunObsN); (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but model performance of the other two runs was also good, so the selected extreme rainfall episode was successfully reproduced.
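
    Evaluation against station data of the kind described above typically starts from mean bias (positive bias = overestimation, as reported for WRF here) and RMSE. A generic sketch with invented sample values, not the study's statistics:

```python
import math

# Generic model-vs-observation skill statistics of the kind used to score
# weather model runs against rain gauge data: mean bias and RMSE.
# The sample values below are invented, not data from the study.

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

obs   = [0.0, 1.2, 3.5, 0.4]  # observed hourly precipitation [mm]
model = [0.1, 1.8, 4.0, 0.3]  # simulated hourly precipitation [mm]

b, r = bias(model, obs), rmse(model, obs)  # b > 0 means overestimation
```
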

  9. Titan I propulsion system modeling and possible performance improvements

    Science.gov (United States)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements that would increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs, including, for example, bi-propellant combinations, pressures, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and specific impulse (Isp). Experimental data are provided to compare the original engine configuration models to the derivative suggested-improvement models.

  10. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current
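
    The receding-horizon idea behind such a controller can be sketched in a toy form: at each step, search candidate production changes over a short horizon, score them with an ℓ1 tracking error plus a linear production cost, apply only the first move, and repeat. The sketch enumerates a tiny discrete move set purely for illustration; a real controller of the kind described would solve an LP/QP instead, and every number below is invented.

```python
from itertools import product

# Toy receding-horizon (MPC) balancing sketch: l1-norm tracking error plus a
# linear production cost, minimized by brute-force enumeration over a small
# discrete move set. Illustrative only; a real MPC solves an LP/QP here.

def mpc_step(x, reference, moves=(-1.0, 0.0, 1.0), horizon=3, cost_per_unit=0.01):
    best_u, best_score = 0.0, float("inf")
    for plan in product(moves, repeat=horizon):
        xs, score = x, 0.0
        for k, u in enumerate(plan):
            xs += u
            score += abs(xs - reference[k]) + cost_per_unit * abs(u)
        if score < best_score:
            best_u, best_score = plan[0], score
    return best_u  # receding horizon: apply only the first move

x = 0.0                                   # current total production
reference = [2.0, 3.0, 3.0, 3.0, 3.0]     # consumption to track
trajectory = []
for t in range(4):
    u = mpc_step(x, reference[t:t + 3] + [reference[-1]] * 3)
    x += u
    trajectory.append(x)
# the state ramps toward the reference: [1.0, 2.0, 3.0, 3.0]
```
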

  11. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
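
    The discrete-event idea can be sketched with a FIFO shelf: deliveries and sales determine how long each unit sits in storage, and those emergent storage times (rather than an assumed distribution) drive microbial growth. The schedules and growth rate below are invented, not parameters from the Listeria case study.

```python
# Discrete-event sketch: a FIFO retail shelf where deliveries and sales
# determine per-unit storage times, which then drive exponential microbial
# growth. Schedules and the growth rate are invented for illustration.

from collections import deque

shelf = deque()            # day each unit was stocked, FIFO order
storage_times = []
deliveries = {0: 3, 2: 3}  # day -> units delivered
sales      = {1: 2, 3: 3}  # day -> units sold

for day in range(5):
    for _ in range(deliveries.get(day, 0)):
        shelf.append(day)
    for _ in range(sales.get(day, 0)):
        if shelf:
            storage_times.append(day - shelf.popleft())

growth_rate = 0.5  # ln CFU per day of storage, hypothetical
log_increase = [growth_rate * t for t in storage_times]
```

    Note how the unit stocked on day 0 but sold on day 3 accumulates three days of growth: exactly the kind of tail behavior that an independent storage-time distribution would miss.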

  12. Critical research issues in development of biomathematical models of fatigue and performance.

    Science.gov (United States)

    Dinges, David F

    2004-03-01

    This article reviews the scientific research needed to ensure the continued development, validation, and operational transition of biomathematical models of fatigue and performance. These models originated from the need to ascertain the formal underlying relationships among sleep and circadian dynamics in the control of alertness and neurobehavioral performance capability. Priority should be given to research that further establishes their basic validity, including the accuracy of the core mathematical formulae and parameters that instantiate the interactions of sleep/wake and circadian processes. Since individuals can differ markedly and reliably in their responses to sleep loss and to countermeasures for it, models must incorporate estimates of these inter-individual differences, and research should identify predictors of them. To ensure models accurately predict recovery of function with sleep of varying durations, dose-response curves for recovery of performance as a function of prior sleep homeostatic load and the number of days of recovery are needed. It is also necessary to establish whether the accuracy of models is affected by using work/rest schedules as surrogates for sleep/wake inputs to models. Given the importance of light as both a circadian entraining agent and an alerting agent, research should determine the extent to which light input could incrementally improve model predictions of performance, especially in persons exposed to night work, jet lag, and prolonged work. Models seek to estimate behavioral capability and/or the relative risk of adverse events in a fatigued state. Research is needed on how best to scale and interpret metrics of behavioral capability, and incorporate factors that amplify or diminish the relationship between model predictions of performance and risk outcomes.
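
    The "formal underlying relationships among sleep and circadian dynamics" referred to above are, in many of these models, variants of the classic two-process structure: a homeostatic pressure S that rises during wake and decays during sleep, summed with an oscillating circadian process C. The sketch below shows only that structure; the time constants and amplitudes are illustrative, not fitted values from any specific model.

```python
import math

# Minimal sketch of the two-process structure underlying many biomathematical
# fatigue/performance models: homeostatic pressure S rises toward an upper
# asymptote during wake and decays during sleep, while circadian process C
# oscillates with a ~24 h period. All constants are illustrative.

def simulate(hours_awake_per_day=16, days=2, dt=0.1):
    S, t, trace = 0.3, 0.0, []
    while t < days * 24:
        awake = (t % 24) < hours_awake_per_day
        if awake:
            S += dt * (1.0 - S) / 18.2  # rise toward 1 (tau ~ 18.2 h)
        else:
            S *= math.exp(-dt / 4.2)    # exponential decay (tau ~ 4.2 h)
        C = 0.12 * math.sin(2 * math.pi * (t - 16.0) / 24.0)
        trace.append((t, S + C))        # higher value = more sleep pressure
        t += dt
    return trace

trace = simulate()
```

    Sleep restriction is modeled by lengthening the wake fraction, which prevents S from fully discharging; inter-individual differences, which the article calls for, would appear as person-specific time constants and amplitudes.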

  13. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, detailed quantitative behavioral information may not be available. The present paper introduces a Bayesian Network (BN) to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information despite sparse behavioral data. In this paper, the causal factors are selected based on an analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two ways of BN inference, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn that could provide data support for interventions in human error management in aviation safety.
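    For binary variables, the leaky noisy-MAX model named in the abstract reduces to the well-known leaky noisy-OR gate, which keeps the conditional probability table compact by assuming causes act independently. A minimal sketch, with link probabilities invented purely for illustration:

    ```python
    def leaky_noisy_or(link_probs, active, leak=0.0):
        """Probability of the effect under a leaky noisy-OR gate.

        link_probs[i] -- probability that cause i alone triggers the effect
        active[i]     -- 1 if cause i is present, 0 otherwise
        leak          -- probability the effect occurs with no cause present
        """
        p_not = 1.0 - leak
        for p, x in zip(link_probs, active):
            if x:
                p_not *= 1.0 - p  # each active cause independently fails to trigger
        return 1.0 - p_not

    # Two hypothetical causal factors with invented link probabilities.
    p_none = leaky_noisy_or([0.8, 0.6], [0, 0], leak=0.05)  # leak only
    p_both = leaky_noisy_or([0.8, 0.6], [1, 1], leak=0.05)
    ```

    With no causes active only the leak term remains; activating causes can only raise the probability of the effect.
    
    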

  14. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    BLM since the feasibility of the BLM has been investigated in many papers[4][5][6]. Instead of fitting the parameters directly to the wafer critical dimensions (CD), we used the aerial image (AI) from a rigorous simulator with an electromagnetic field (EMF) solver. This kind of method is usually known as staged modeling. To see the advantages of this method, we conducted several experiments and compared the results against fitting directly to the wafer CD. The tests yielded some remarkable results and confirmed that staged modeling performs better in many respects.

  15. Fracture modelling of a high performance armour steel

    Science.gov (United States)

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulations the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, ongoing work will further characterise the fracture behaviour of Armox 500T.

  16. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  17. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance in information technology adoption. The new model was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts may lack sound theoretical assumptions. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  18. Performing arts medicine: A research model for South Africa

    Directory of Open Access Journals (Sweden)

    Karendra Devroop

    2014-11-01

    Full Text Available Performing arts medicine has developed into a highly specialised field over the past three decades. The Performing Arts Medical Association (PAMA) has been the leading proponent of this unique and innovative field, with ground-breaking research studies, symposia, conferences and journals dedicated specifically to the medical problems of performing artists. Similar to sports medicine, performing arts medicine caters specifically for the medical problems of performing artists, including musicians and dancers. In South Africa there is a tremendous lack of knowledge of the field and, unlike our international counterparts, we do not have specialised clinical settings that cater for the medical problems of performing artists. There is also a tremendous lack of research on performance-related medical problems of performing artists in South Africa. Accordingly, the purpose of this paper is to present an overview of the field of performing arts medicine, highlight some of the significant findings from recent research studies and present a model for conducting research in the field of performing arts medicine. It is hoped that this research model will lead to increased research on the medical problems of performing artists in South Africa.

  19. Comparative study of turbulence model performance for axisymmetric sudden expansion flow

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Youngmin; Kim, Young In; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    In this study, the performance of turbulence models in predicting the turbulent flow in an axisymmetric sudden expansion with an expansion ratio of 4 is assessed for a Reynolds number of 5.6 × 10⁴. The comparisons show that the standard k-ε and RSM models provide the best agreement with the experimental data, whereas the standard k-ω model gives poor predictions. Owing to its computational efficiency, the Reynolds Averaged Navier-Stokes (RANS) approach has been widely used for the prediction of turbulent flows and associated pressure losses in a variety of internal flow systems such as a diffuser, orifice, converging nozzle, and pipes with sudden expansion. However, the lack of a general turbulence model often leads to limited applications of a RANS approach, i.e., the accuracy and validity of solutions obtained from RANS equations vary with the turbulence model, flow regime, near-wall treatment, and configuration of the problem. In light of the foregoing, a large amount of turbulence research has been conducted to assess the performance of existing turbulence models for different flow fields. In this paper, the turbulent flow in an axisymmetric sudden expansion is numerically investigated for a Reynolds number of 5.6 × 10⁴, with the aim of examining the performance of several turbulence models.

  20. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  1. Modelling of PEM Fuel Cell Performance: Steady-State and Dynamic Experimental Validation

    Directory of Open Access Journals (Sweden)

    Idoia San Martín

    2014-02-01

    Full Text Available This paper reports on the modelling of a commercial 1.2 kW proton exchange membrane fuel cell (PEMFC), based on interrelated electrical and thermal models. The electrical model proposed is based on the integration of the thermodynamic and electrochemical phenomena taking place in the FC, whilst the thermal model is established from the FC thermal energy balance. The combination of both models makes it possible to predict the FC voltage, based on the current demanded and the ambient temperature. Furthermore, an experimental characterization is conducted and the parameters for the models associated with the FC electrical and thermal performance are obtained. The models are implemented in Matlab Simulink and validated in a number of operating environments, for steady-state and dynamic modes alike. In turn, the FC models are validated in an actual microgrid operating environment, through the series connection of four PEMFCs. The simulations of the models precisely and accurately reproduce the FC electrical and thermal performance.
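    The electrical side of such a model typically reduces to a polarization curve: open-circuit voltage minus activation (Tafel), ohmic, and concentration losses. A minimal sketch with illustrative placeholder parameters, not the values fitted to the 1.2 kW stack in the paper:

    ```python
    import math

    def cell_voltage(i, E0=1.2, A=0.05, i0=1e-6, R=0.15, m=3e-5, n=4.0):
        """Single-cell voltage [V] at current density i [A/cm^2].

        E0 -- open-circuit voltage; A, i0 -- Tafel slope and exchange
        current density (activation loss); R -- area-specific resistance
        [ohm.cm^2] (ohmic loss); m, n -- empirical concentration-loss
        constants. All parameter values are illustrative placeholders.
        """
        v_act = A * math.log(i / i0)   # activation loss (Tafel equation)
        v_ohm = R * i                  # ohmic loss
        v_conc = m * math.exp(n * i)   # concentration loss
        return E0 - v_act - v_ohm - v_conc

    curve = [cell_voltage(i) for i in (0.2, 0.5, 1.0)]
    ```

    The curve decreases monotonically with current density, as expected of a fuel cell polarization characteristic.
    
    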

  2. MODELING SIMULATION AND PERFORMANCE STUDY OF GRIDCONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    OpenAIRE

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents the modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: the PV model, the power conditioning system and the grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load using MATLAB.

  3. Indonesian Private University Lecturer Performance Improvement Model to Improve a Sustainable Organization Performance

    Science.gov (United States)

    Suryaman

    2018-01-01

    Lecturer performance affects the quality and carrying capacity of the sustainability of an organization, in this case the university. Many models have been developed to measure lecturer performance, but few discuss the influence of lecturer performance itself on the sustainability of an organization. This study was conducted in…

  4. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals in Southeast Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision-making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).
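    The PCA half of such a composite ranking can be sketched as follows: standardize the indicator matrix, weight each principal-component score by that component's share of explained variance, and sum. The terminal data and indicator names below are invented for illustration:

    ```python
    import numpy as np

    def pca_composite(X):
        """Composite score per DMU: principal-component scores of the
        standardized indicator matrix, weighted by each component's
        share of explained variance."""
        Z = (X - X.mean(axis=0)) / X.std(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
        order = np.argsort(vals)[::-1]          # largest variance first
        vals, vecs = vals[order], vecs[:, order]
        weights = vals / vals.sum()             # variance-share weights
        return (Z @ vecs) @ weights             # one score per row of X

    # Five hypothetical terminals x three indicators
    # (throughput, berths, cranes) -- invented numbers.
    X = np.array([[120., 4., 10.],
                  [ 80., 3.,  7.],
                  [200., 6., 15.],
                  [ 60., 2.,  5.],
                  [150., 5., 12.]])
    scores = pca_composite(X)
    ```

    Because the scores are linear combinations of standardized (zero-mean) columns, they sum to zero across DMUs; ranking uses their order, not their absolute values.
    
    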

  5. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  6. Evaluation of the performance of DIAS ionospheric forecasting models

    Directory of Open Access Journals (Sweden)

    Tsagouri Ioanna

    2011-08-01

    Full Text Available Nowcasting and forecasting ionospheric products and services for the European region have been regularly provided since August 2006 through the European Digital upper Atmosphere Server (DIAS, http://dias.space.noa.gr). Currently, DIAS ionospheric forecasts are based on the online implementation of two models: (i) the solar wind driven autoregression model for ionospheric short-term forecast (SWIF), which combines historical and real-time ionospheric observations with solar-wind parameters obtained in real time at the L1 point from the NASA ACE spacecraft, and (ii) the geomagnetically correlated autoregression model (GCAM), which is a time series forecasting method driven by a synthetic geomagnetic index. In this paper we investigate the operational ability and the accuracy of both DIAS models, carrying out a metrics-based evaluation of their performance under all possible conditions. The analysis was based on the systematic comparison between the models' predictions and actual observations obtained over almost one solar cycle (1998–2007) at four European ionospheric locations (Athens, Chilton, Juliusruh and Rome), and on the comparison of the models' performance against two simple prediction strategies, median- and persistence-based predictions, during storm conditions. The results verify the operational validity of both models and quantify their prediction accuracy under all possible conditions, in support of operational applications as well as comparative studies assessing or expanding current ionospheric forecasting capabilities.
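    Benchmarking a forecast against a persistence baseline, as done above, is commonly expressed as a skill score: the relative reduction in mean squared error over the reference strategy. A minimal sketch with invented numbers (not DIAS data):

    ```python
    import numpy as np

    def skill_score(pred, ref, obs):
        """1 - MSE(model)/MSE(reference): positive when the model beats
        the reference strategy, 1 for a perfect forecast."""
        obs = np.asarray(obs, float)
        mse = lambda f: np.mean((np.asarray(f, float) - obs) ** 2)
        return 1.0 - mse(pred) / mse(ref)

    obs   = [5.1, 5.3, 6.0, 6.4, 6.1]   # observed series (illustrative)
    model = [5.0, 5.4, 5.9, 6.5, 6.0]   # model forecast (illustrative)
    persi = [5.0, 5.1, 5.3, 6.0, 6.4]   # persistence: previous observed value
    s = skill_score(model, persi, obs)
    ```

    A score between 0 and 1 means the model improves on persistence without being perfect; a negative score would mean persistence is the better predictor.
    
    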

  7. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  8. A Fluid Model for Performance Analysis in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Coupechoux Marceau

    2010-01-01

    Full Text Available We propose a new framework to study the performance of cellular networks using a fluid model, and we derive from this model analytical formulas for interference, outage probability, and spatial outage probability. The key idea of the fluid model is to consider the discrete base station (BS) entities as a continuum of transmitters that are spatially distributed in the network. This model allows us to obtain simple analytical expressions that reveal the main characteristics of the network. In this paper, we focus on the downlink other-cell interference factor (OCIF), which is defined for a given user as the ratio of its outer cell received power to its inner cell received power. A closed-form formula for the OCIF is provided in this paper. From this formula, we are able to obtain the global outage probability as well as the spatial outage probability, which depends on the location of a mobile station (MS) initiating a new call. Our analytical results are compared to Monte Carlo simulations performed in a traditional hexagonal network. Furthermore, we demonstrate an application of the outage probability related to cell breathing and densification of cellular networks.
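    The discrete quantity the fluid model approximates can be computed directly on a hexagonal layout: sum the received powers from all non-serving base stations and divide by the serving-cell power. A deterministic sketch under a pure power-law path loss, with all geometry parameters chosen only for illustration:

    ```python
    def ocif_hex(r, eta=3.0, rings=4, d=2.0):
        """Other-cell interference factor for a user at distance r
        (0 < r < d/2) from its serving base station at the origin:
        sum of other-cell received powers over the serving-cell power,
        with path loss dist**-eta. Base stations sit on a hexagonal
        lattice with inter-site distance d (illustrative geometry)."""
        serving = r ** -eta
        interference = 0.0
        for u in range(-rings, rings + 1):
            for v in range(-rings, rings + 1):
                if u == 0 and v == 0:
                    continue  # skip the serving station
                x = d * (u + 0.5 * v)              # hex lattice coordinates
                y = d * (3 ** 0.5 / 2) * v
                dist = ((x - r) ** 2 + y ** 2) ** 0.5
                interference += dist ** -eta
        return interference / serving

    f_near, f_edge = ocif_hex(0.2), ocif_hex(0.9)
    ```

    As expected, the OCIF grows as the user moves from cell center toward the cell edge, where serving power drops and other-cell power rises.
    
    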

  9. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    Science.gov (United States)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial pattern oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial pattern oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote sensing based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three spaef components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit the spatial observations made available by remote sensing.
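    A metric with the three equally weighted components named above can be sketched as 1 − √((α−1)² + (β−1)² + (γ−1)²), where α is the Pearson correlation, β the ratio of coefficients of variation, and γ the histogram overlap of the z-scored fields; the exact published formulation of spaef may differ in detail:

    ```python
    import numpy as np

    def spaef(obs, sim, bins=20):
        """Sketch of a SPAtial EFficiency-style metric; 1 = perfect match."""
        obs = np.asarray(obs, float).ravel()
        sim = np.asarray(sim, float).ravel()
        alpha = np.corrcoef(obs, sim)[0, 1]                     # correlation
        beta = ((np.std(sim) / np.mean(sim)) /
                (np.std(obs) / np.mean(obs)))                   # CV ratio
        zo = (obs - obs.mean()) / obs.std()                     # z-score both
        zs = (sim - sim.mean()) / sim.std()
        lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
        ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
        hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
        gamma = np.minimum(ho, hs).sum() / ho.sum()             # overlap share
        return 1 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

    obs = np.arange(1.0, 101.0)          # synthetic positive field
    sim = obs + 10 * np.sin(obs)         # distorted simulation
    ```

    z-scoring before the histogram step is what makes the overlap component insensitive to bias and unit, in line with the abstract's argument.
    
    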

  10. Econometric model as a regulatory tool in electricity distribution - Case Network Performance Assessment Model

    International Nuclear Information System (INIS)

    Honkapuro, S.; Lassila, J.; Viljainen, S.; Tahvanainen, K.; Partanen, J.

    2004-01-01

    Electricity distribution companies operate as natural monopolies, since building parallel networks is not cost-effective. Monopoly companies face no pressure from open markets to keep their prices and costs at a reasonable level, so regulation is needed to prevent misuse of the monopoly position. Regulation usually focuses either on the profit of the company or on the price of electricity. In this document, the usability of an econometric model in the regulation of electricity distribution companies is evaluated. A regulation method that determines the allowed income of each company with a generic computation model can be seen as an econometric model. As a special case of such a model, the method called the Network Performance Assessment Model, NPAM (Naetnyttomodellen in Swedish), is analysed. NPAM was developed by the Swedish Energy Agency (STEM) for the regulation of electricity distribution companies. Both a theoretical analysis and calculations for an example network area are presented in this document to identify the major directing effects of the model. The NPAM parameters used in the calculations of this research report were dated 30 March 2004, the most recent available at the time the analysis was done. However, since NPAM is under development, the parameters have been changing constantly, so slight changes in the results could occur if the calculations were made with the latest parameters. The main conclusions, however, are the same and do not depend on the exact parameters. (orig.)

  11. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
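    The simplest family of such communication models is the latency-bandwidth (postal, or α-β) model: time = messages × α + bytes × β, summed over communication phases. The sketch below applies it to a per-process halo exchange across octree levels; the cell counts, message sizes, and neighbor counts are simplifying assumptions for illustration, not the paper's calibrated model:

    ```python
    def fmm_comm_time(levels, procs, alpha=2e-6, beta=1.0 / 10e9,
                      cell_bytes=1024, neighbors=26):
        """Postal-model estimate T = sum_l (messages*alpha + bytes*beta)
        of per-process multipole halo exchange. Assumes an octree with
        8**l cells at level l, split evenly over procs, and a fixed set
        of neighbor processes per level -- illustrative assumptions."""
        total = 0.0
        for level in range(levels):
            local_cells = max(1, 8 ** level // procs)
            # boundary cells scale like the surface of the local volume
            boundary = max(1, int(6 * local_cells ** (2.0 / 3.0)))
            total += neighbors * (alpha + boundary * cell_bytes * beta)
        return total

    t_shallow, t_deep = fmm_comm_time(3, 64), fmm_comm_time(6, 64)
    ```

    Even this crude model reproduces the qualitative behavior one expects: deeper trees cost more, and setting the bandwidth term to zero isolates the latency floor.
    
    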

  12. A modular ducted rocket missile model for threat and performance assessment

    NARCIS (Netherlands)

    Mayer, A.E.H.J.; Halswijk, W.H.C.; Komduur, H.J.; Lauzon, M.; Stowe, R.A.

    2005-01-01

    A model was developed to predict the thrust of throttled ramjet propelled missiles. The model is called DRCORE and fulfils the growing need to predict the performance of air breathing missiles. Each subsystem of the propulsion unit of this model is coded by using engineering formulae and enables the

  13. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. Objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  14. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle nonlinear, time-dependent problems.

  15. Technical performance of percutaneous and laminectomy leads analyzed by modeling

    NARCIS (Netherlands)

    Manola, L.; Holsheimer, J.

    2004-01-01

    The objective of this study was to compare the technical performance of laminectomy and percutaneous spinal cord stimulation leads with similar contact spacing by computer modeling. Monopolar and tripolar (guarded cathode) stimulation with both lead types in a low-thoracic spine model was simulated

  16. Modeling take-over performance in level 3 conditionally automated vehicles.

    Science.gov (United States)

    Gold, Christian; Happee, Riender; Bengler, Klaus

    2017-11-28

    Taking over vehicle control from a Level 3 conditionally automated vehicle can be a demanding task for a driver. The take-over determines the controllability of automated vehicle functions and thereby also traffic safety. This paper presents models predicting the main take-over performance variables: take-over time, minimum time-to-collision, brake application and crash probability. These variables are considered in relation to the situational and driver-related factors time-budget, traffic density, non-driving-related task, repetition, the current lane and driver's age. Regression models were developed using 753 take-over situations recorded in a series of driving simulator experiments. The models were validated with data from five other driving simulator experiments, mostly by unrelated authors, with another 729 take-over situations. The models accurately captured take-over time, time-to-collision and crash probability, and moderately predicted brake application. The time-budget, traffic density and repetition especially strongly influenced take-over performance, while the non-driving-related tasks, the lane and drivers' age explained a minor portion of the variance in take-over performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
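    The regression-model idea can be sketched as an ordinary least-squares fit of take-over time to the three strongest predictors named above. The design, functional form, and coefficients below are invented for illustration, not the paper's fitted values; with noise-free synthetic data, OLS recovers the coefficients exactly:

    ```python
    import numpy as np

    # Crossed deterministic design: time budget [s], traffic density
    # [vehicles/km], repetition count (all combinations).
    tb, td, rep = np.meshgrid(np.linspace(3, 9, 5),
                              np.linspace(0, 30, 5),
                              np.arange(5.0))
    X = np.column_stack([np.ones(tb.size), tb.ravel(), td.ravel(),
                         np.log1p(rep.ravel())])       # design matrix
    true_beta = np.array([2.0, 0.2, 0.01, -0.3])       # invented coefficients
    y = X @ true_beta                                  # noise-free response

    # Ordinary least squares fit of take-over time.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    ```

    The log term for repetition encodes diminishing returns of practice, a modeling choice assumed here rather than taken from the paper.
    
    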

  17. Performance evaluation of Maxwell and Cercignani-Lampis gas-wall interaction models in the modeling of thermally driven rarefied gas transport

    KAUST Repository

    Liang, Tengfei

    2013-07-16

    A systematic study on the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key macroscopic quantities such as temperature, density, and pressure, in three benchmark thermal problems, namely the Fourier thermal problem, the Knudsen force problem, and the thermal transpiration problem. The reference solutions are obtained from a validated hybrid DSMC-MD algorithm developed in-house. It has been found that while both models predict temperature and density reasonably well in the Fourier thermal problem, the pressure profile obtained from Maxwell model exhibits a trend that opposes that from the reference solution. As a consequence, the Maxwell model is unable to predict the orientation change of the Knudsen force acting on a cold cylinder embedded in a hot cylindrical enclosure at a certain Knudsen number. In the simulation of the thermal transpiration coefficient, although all three models overestimate the coefficient, the coefficient obtained from CL model is the closest to the reference solution. The Maxwell model performs the worst. The cause of the overestimated coefficient is investigated and its link to the overly constrained correlation between the tangential momentum accommodation coefficient and the tangential energy accommodation coefficient inherent in the models is pointed out. Directions for further improvement of models are suggested.

  18. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects as well as flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  19. The Quadruple Helix Model Enhancing Innovative Performance Of Indonesian Creative Industry

    Directory of Open Access Journals (Sweden)

    Sri Wahyu Lelly Hana Setyanti

    2017-11-01

    Full Text Available The creative industry in Indonesia has contributed positively to national economic growth. The creative industry grows from the creativity and innovation performance of its business actors. The challenge for the creative industry is how to fully understand the creative and innovative processes in business management. Therefore, it requires an approach that combines the synergy between academicians, entrepreneurs, government and society in a quadruple helix model. The objective of this research is to develop a creativity model through a quadruple helix model to improve the innovation performance of the creative industry.

  20. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers.
The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
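
Of the model observers listed, the nonprewhitening (NPW) observer is the simplest to sketch: its template is the known signal itself, and in white noise its detectability reduces to d' = ||s||/sigma. A toy Monte Carlo check of that identity on an SKE disk-detection task (white noise, not the stochastic DPC-CT noise texture studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal NPW model-observer sketch for a signal-known-exactly task:
# a faint disk in white Gaussian noise (illustrative sizes and contrast).
n, n_trials = 32, 2000
yy, xx = np.mgrid[:n, :n]
disk = (((yy - n / 2) ** 2 + (xx - n / 2) ** 2) <= 4 ** 2).astype(float)
signal = 0.5 * disk            # radius-4 disk, contrast 0.5

template = signal.ravel()      # NPW: the template equals the signal
sigma = 1.0                    # white-noise standard deviation

r_absent = np.array([template @ rng.normal(0, sigma, n * n)
                     for _ in range(n_trials)])
r_present = np.array([template @ (signal.ravel() + rng.normal(0, sigma, n * n))
                      for _ in range(n_trials)])

# Detectability index from the two response distributions.
d_mc = (r_present.mean() - r_absent.mean()) / np.sqrt(
    0.5 * (r_present.var() + r_absent.var()))

# Analytical NPW result in white noise: d' = ||s|| / sigma.
d_theory = np.linalg.norm(signal) / sigma
print(d_mc, d_theory)
```

The eye-filtered variants (NPWEi, PWEi) and the CHO modify this template with frequency filters, internal noise, or channel responses, which is what the paper evaluates against human two-alternative forced choice data.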

  1. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    Science.gov (United States)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes

  2. Numerical investigation on thermal-hydraulic performance of new printed circuit heat exchanger model

    International Nuclear Information System (INIS)

    Kim, Dong Eok; Kim, Moo Hwan; Cha, Jae Eun; Kim, Seong O.

    2008-01-01

Three-dimensional numerical analysis was performed to investigate the heat transfer and pressure drop characteristics of supercritical CO2 flow in a new Printed Circuit Heat Exchanger (PCHE) model using the commercial CFD code Fluent 6.3. First, numerical analysis of a conventional zigzag-channel PCHE model was performed and compared with previous experimental data. The maximum deviation of the inlet-outlet temperature difference and pressure drop from the experimental data is about 10%. A new PCHE model has been designed to optimize the thermal-hydraulic performance of the PCHE. The new PCHE model has several airfoil-shaped fins (NACA 0020 profile), designed with a streamlined shape. Simulation results showed that in the airfoil-fin PCHE, the total heat transfer rate per unit volume was almost the same as in the zigzag-channel PCHE, while the pressure drop was reduced to one-twentieth of that in the zigzag-channel PCHE. In the airfoil-fin PCHE model, the enlarged heat transfer area and the uniform flow configuration contributed to obtaining the same heat transfer performance as the zigzag-channel PCHE model. The reduction of pressure drop in the airfoil-fin PCHE model was caused by suppressing the generation of separated flow, owing to the streamlined shape of the airfoil fins

  3. Isotopic modelling using the ENIGMA-B fuel performance code

    International Nuclear Information System (INIS)

    Rossiter, G.D.; Cook, P.M.A.; Weston, R.

    2001-01-01

A number of experimental programmes by BNFL and other MOX fabricators have now shown that the in-pile performance of MOX fuel is generally similar to that of conventional UO2 fuel. Models based on UO2 fuel experience form a good basis for a description of MOX fuel behaviour. However, an area where the performance of MOX fuel is sufficiently different from that of UO2 to warrant model changes is in the radial power and burnup profile. The differences in radial power and burnup profile arise from the presence of significant concentrations of plutonium in MOX fuel at beginning of life, and their subsequent evolution with burnup. Amongst other effects, plutonium has a greater neutron absorption cross-section than uranium. This paper focuses on the development of a new model for the radial power and burnup profile within a UO2 or MOX fuel rod, in which the underlying fissile isotope concentration distributions are tracked during irradiation. The new model has been incorporated into the ENIGMA-B fuel performance code and has been extended to track the isotopic concentrations of the fission gases xenon and krypton. The calculated distributions have been validated against results from rod puncture measurements and electron probe micro-analysis (EPMA) linescans, performed during the M501 post irradiation examination (PIE) programme. The predicted gas inventory of the fuel/clad gap is compared with the isotopic composition measured during rod puncture, and the measured radial distributions of burnup (from neodymium measurements) and plutonium in the fuel are compared with the calculated distributions. It is shown that there is good agreement between the code predictions and the measurements. (author)

  4. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottlenecks for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI0 (CPI without memory effects), and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.
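
The CPI decomposition described, separating a memory-free CPI0 from memory-induced stalls, can be illustrated with hypothetical counter values and an assumed average miss penalty (the paper's actual formulas and counter set are not reproduced here):

```python
# Hypothetical sketch: estimating CPI and a memory-free CPI0 from
# hardware-counter style event counts. All numbers are illustrative.
counters = {
    "cycles": 1_500_000_000,
    "instructions": 1_000_000_000,
    "l2_misses": 5_000_000,
}
MISS_PENALTY_CYCLES = 100  # assumed average penalty per memory miss

# Observed CPI = total cycles / retired instructions.
cpi = counters["cycles"] / counters["instructions"]

# Stall cycles attributed to memory, expressed per instruction.
mem_stall_cpi = (counters["l2_misses"] * MISS_PENALTY_CYCLES
                 / counters["instructions"])

# CPI0: CPI with the memory effect removed.
cpi0 = cpi - mem_stall_cpi
print(cpi, cpi0)  # 1.5 and 1.0 with these illustrative counts
```

With these numbers the memory system accounts for a third of the observed CPI, which is the kind of architectural-bottleneck statement the counter-based methodology supports.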

  5. Evaluation of CFVS Performance with SPARC Model and Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Il; Na, Young Su; Ha, Kwang Soon; Cho, Song Won [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

The Containment Filtered Venting System (CFVS) is one of the important safety features for reducing the amount of fission products released into the environment by depressurizing the containment. KAERI has conducted an integrated performance verification test of the CFVS as part of a Ministry of Trade, Industry and Energy (MOTIE) project. Several codes, such as SPARC, BUSCA, and SUPRA, are generally used in the case of wet-type filters. In particular, the SPARC model is included in MELCOR to calculate the fission product removal rate through pool scrubbing. In this study, CFVS performance is evaluated using the SPARC model in MELCOR according to the steam fraction in the containment. The calculation mainly focuses on the effect of the steam fraction in the containment, and the result is explained with the aerosol removal model in SPARC. A previous study on the OPR 1000 is applied to the result. There were two CFVS valve opening periods, and it was found that the CFVS performance differed in each case. The results of the study provide fundamental data that can be used to decide the CFVS operation time; however, more calculation data are necessary to generalize the results.

  6. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  7. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables, and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable, and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.

  8. Multiscale modeling and characterization for performance and safety of lithium-ion batteries

    International Nuclear Information System (INIS)

    Pannala, S.; Turner, J. A.; Allu, S.; Elwasif, W. R.; Kalnaus, S.; Simunovic, S.; Kumar, A.; Billings, J. J.; Wang, H.; Nanda, J.

    2015-01-01

Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. Gaining an understanding of the role of these processes, as well as developing predictive capabilities for the design of better performing batteries, requires synergy between theory, modeling, and simulation, and fundamental experimental work to support the models. This paper presents an overview of the work performed by the authors, spanning both experimental and computational efforts. We describe a new, open source computational environment for battery simulations with an initial focus on lithium-ion systems but designed to support a variety of model types and formulations. This system has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. This paper also provides an overview of the experimental techniques used to obtain crucial validation data to benchmark the simulations at various scales for performance as well as abuse. We detail some initial validation using characterization experiments such as infrared and neutron imaging and micro-Raman mapping. In addition, we identify opportunities for future integration of theory, modeling, and experiments.

  9. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    Science.gov (United States)

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
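
A minimal sketch of the COM-Poisson likelihood and its maximum-likelihood fit (intercept-only case, with a truncated normalizing constant; not the authors' implementation) can make the dispersion parameter concrete. For Poisson data the fitted dispersion parameter nu should come out near 1:

```python
import math
import numpy as np
from scipy.optimize import minimize

# COM-Poisson pmf: P(Y=j) = lam^j / (j!)^nu / Z(lam, nu).
# Sketch of an intercept-only MLE fit; Z is truncated at jmax terms.
def com_poisson_logZ(lam, nu, jmax=200):
    j = np.arange(jmax)
    terms = j * np.log(lam) - nu * np.array([math.lgamma(k + 1) for k in j])
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())  # log-sum-exp for stability

def negloglik(params, y):
    lam, nu = np.exp(params)  # optimize on the log scale to keep both positive
    logZ = com_poisson_logZ(lam, nu)
    lgam = np.array([math.lgamma(k + 1) for k in y])
    return -np.sum(y * np.log(lam) - nu * lgam - logZ)

rng = np.random.default_rng(2)
y = rng.poisson(3.0, 500)  # nu = 1 reduces the COM-Poisson to a Poisson

res = minimize(negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat, nu_hat = np.exp(res.x)
print(lam_hat, nu_hat)  # nu_hat near 1 for equidispersed data
```

A full GLM version would replace the constant log-lambda with a linear predictor in covariates; the equi-dispersion check above is the degenerate case.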

  10. Techniques for Modeling Human Performance in Synthetic Environments: A Supplementary Review

    National Research Council Canada - National Science Library

    Ritter, Frank E; Shadbolt, Nigel R; Elliman, David; Young, Richard M; Gobet, Fernand; Baxter, Gordon D

    2003-01-01

    Selected recent developments and promising directions for improving the quality of models of human performance in synthetic environments are summarized, beginning with the potential uses and goals for behavioral models...

  11. An analytical model on thermal performance evaluation of counter flow wet cooling tower

    Directory of Open Access Journals (Sweden)

    Wang Qian

    2017-01-01

Full Text Available This paper proposes an analytical model for the simultaneous heat and mass transfer processes in a counter flow wet cooling tower, with the assumption that the enthalpy of the saturated air is a linear function of the water surface temperature. The performance of the proposed analytical model is validated for several typical cases. The validation reveals that, when the cooling range lies within a certain interval, the proposed model is not only comparable with the accurate model but can also reduce computational complexity. In addition, the thermal performance of counter flow wet cooling towers in power plants is calculated with the proposed analytical model. The results show that the proposed analytical model can be applied to evaluate and predict the thermal performance of counter flow wet cooling towers.
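
The linearized saturation-enthalpy assumption can be exercised numerically with a Merkel-type counterflow integration and a shooting method for the cold-water temperature. All coefficients, flow rates, and boundary values below are illustrative, not the paper's case data:

```python
# Numerical sketch of Merkel-type counterflow tower equations with the
# saturated-air enthalpy linearized as h_s(T) = a + b*T.
# Water flows down (hot at the top), air flows up; illustrative values.
a, b = -15.0, 4.0            # kJ/kg and kJ/(kg K), rough linear fit near 30 C
cw = 4.186                   # kJ/(kg K), water specific heat
mw, ma = 1.0, 1.0            # kg/s water and dry-air flow rates
KaV = 1.5                    # transfer coefficient * volume, kg/s
T_in, h_air_in = 40.0, 60.0  # hot-water inlet (C) and inlet air enthalpy (kJ/kg)

def water_top_temp(T_out_guess, n=2000):
    # March upward from the bottom: water at T_out_guess, air at h_air_in.
    T, h = T_out_guess, h_air_in
    dz = 1.0 / n
    for _ in range(n):
        drive = (a + b * T) - h              # enthalpy driving potential
        T += KaV * drive / (mw * cw) * dz    # water energy balance
        h += KaV * drive / ma * dz           # air enthalpy balance
    return T

# Shooting via bisection: find the outlet water temperature whose
# upward integration reproduces the known hot-water inlet at the top.
lo, hi = 5.0, T_in
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if water_top_temp(mid) < T_in:
        lo = mid
    else:
        hi = mid
T_out = 0.5 * (lo + hi)
print(T_out)  # cooled-water temperature, below the 40 C inlet
```

Because h_s is linear in T, the driving potential obeys a single linear ODE, which is exactly what lets the paper close the solution analytically instead of marching as done here.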

  12. TQM and firms performance: An EFQM excellence model research based survey

    Directory of Open Access Journals (Sweden)

    Santos-Vijande, M. L.

    2007-01-01

Full Text Available The purpose of this article is to develop an instrument for measuring TQM implementation following the European Foundation for Quality Management Excellence Model and to provide empirical evidence on the relationship between management practices and measures of business performance in the model. To this end, the study employs survey data collected from Spanish manufacturing and service firms. Confirmatory factor analysis is used to test the psychometric properties of the measurement scales, and the hypothesized relationships between total quality management practices and organizational performance are examined using structural equation modeling. The findings of the research indicate that the adoption of the TQM practices suggested in the EFQM Excellence Model allows firms to outperform their competitors in the results criteria included in the Model. Therefore, this paper provides valuable benchmarking data for firms, as it substantiates the contribution of the EFQM Enablers to the attainment of competitive advantage.

  13. Contribution to the modelling and analysis of logistics system performance by Petri nets and simulation models: Application in a supply chain

    Science.gov (United States)

    Azougagh, Yassine; Benhida, Khalid; Elfezazi, Said

    2016-02-01

In this paper, the focus is on studying the performance of complex systems in a supply chain context by developing a structured modelling approach based on the ASDI methodology (Analysis, Specification, Design and Implementation), combining Petri net modelling with simulation using ARENA. The linear approach typically followed for this kind of problem faces modelling difficulties due to the complexity and the number of parameters of concern. Therefore, the approach used in this work structures the modelling in a way that covers all aspects of the performance study. The structured modelling approach is first introduced and then applied to the case of an industrial system in the field of phosphate. The performance indicators obtained from the developed models made it possible to test the behaviour and fluctuations of this system and to develop improved models of the current situation. In addition, this paper shows how the Arena software can be adopted to simulate complex systems effectively. The method in this research can be applied to investigate various improvement scenarios and their consequences before implementing them in reality.

  14. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    International Nuclear Information System (INIS)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-01

A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2,541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model, as well as simulations performed with the final version of the model

  15. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  16. Modelling the Performance of Product Integrated Photovoltaic (PIPV) Cells Indoors

    NARCIS (Netherlands)

    Apostolou, G.; Verwaal, M.; Reinders, Angelina H.M.E.

    2014-01-01

In this paper we present a model which has been developed for estimating the performance of PV product cells in an indoor environment. The model computes the efficiency and power production of PV technologies as a function of distance from natural and artificial light sources. It intends

  17. Building Information Modeling (BIM) for Indoor Environmental Performance Analysis

    DEFF Research Database (Denmark)

The report is part of a research assignment carried out by students in the 5 ECTS course “Project Byggeri – [entitled as: Building Information Modeling (BIM) – Modeling & Analysis]”, during the 3rd semester of the master degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. It includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environmental Performance Analysis.

  18. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and knowhow of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines of the performance criteria in managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for the virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  19. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a mailed survey to parents (n = 1,030). The tailored design method guided survey development and data collection. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children.
Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable
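
The multiplicative importance-performance idea can be sketched on synthetic survey data: when importance truly moderates performance, regressing overall quality on importance times performance recovers more variance than performance alone. All numbers below are simulated, not the study's survey data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical comparison of a performance-only model with an
# importance-performance (multiplicative) model on simulated ratings.
n, k = 400, 5                       # respondents, attribute dimensions
perf = rng.uniform(1, 5, (n, k))    # performance ratings, 1-5 scale
imp = rng.uniform(1, 5, (n, k))     # importance ratings, 1-5 scale
w = np.array([0.5, 0.4, 0.3, 0.2, 0.1])

# Simulated overall quality: importance moderates performance.
quality = (imp * perf) @ w + rng.normal(0, 1.0, n)

def r_squared(X, y):
    # Ordinary least squares with an intercept; return R^2.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_perf = r_squared(perf, quality)        # performance-only model
r2_ip = r_squared(imp * perf, quality)    # importance x performance model
print(r2_perf, r2_ip)
```

The gap between the two R-squared values is the quantitative sense in which the multiplicative model is "the better predictor" when the moderation effect is real.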

  20. Econometric model as a regulatory tool in electricity distribution. Case network performance assessment model

    International Nuclear Information System (INIS)

    Honkapuro, S.; Lassila, J.; Viljainen, S.; Tahvanainen, K.; Partanen, J.

    2004-01-01

Electricity distribution companies operate as natural monopolies, since building parallel networks is not cost-effective. Monopoly companies face no pressure from open markets to keep their prices and costs at a reasonable level, so regulation of these companies is needed to prevent misuse of the monopoly position. Regulation is usually focused either on the profit of the company or on the price of electricity. A regulation method that determines the allowed income of each company with a generic computation model can be seen as an econometric model. In this document, the usability of an econometric model in the regulation of electricity distribution companies is evaluated. As a special case of an econometric model, the method called the Network Performance Assessment Model, NPAM (Naetnyttomodellen in Swedish), is analysed. NPAM was developed by the Swedish Energy Agency (STEM) for the regulation of electricity distribution companies. Both a theoretical analysis and calculations for an example network area are presented in this document to find the major directing effects of the model. The parameters of NPAM used in the calculations of this research report were dated 30 March 2004 and were the most recent ones available at the time the analysis was done. However, since NPAM has been under development, the parameters have been changing constantly. Therefore, slight changes might occur in the numerical results of the calculations if they were made with the latest set of parameters; however, the main conclusions are the same and do not depend on the exact parameters

  1. Modeling the Performance of Water-Zeolite 13X Adsorption Heat Pump

    Science.gov (United States)

    Kowalska, Kinga; Ambrożek, Bogdan

    2017-12-01

The dynamic performance of a cylindrical double-tube adsorption heat pump is numerically analysed using a non-equilibrium model that takes into account both heat and mass transfer processes. The model includes conservation equations for heat transfer in the heating/cooling fluids, heat transfer in the metal tube, and heat and mass transfer in the adsorbent. The mathematical model is solved numerically using the method of lines. Numerical simulations are performed for water-zeolite 13X, chosen as the working pair. The effect of the evaporator and condenser temperatures on the adsorption and desorption kinetics is examined. The results of the numerical investigation show that both of these parameters have a significant effect on adsorption heat pump performance. Based on the computer simulation results, the coefficients of performance for heating and cooling are calculated. The results show that adsorption heat pumps have relatively low efficiency compared to other heat pumps, and that the coefficient of performance for heating is higher than that for cooling

  2. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with a choice of five different metamodeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression splines (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
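As a sketch of the second workflow, space-filling sampling plus a quadratic polynomial response surface can be illustrated with a basic (non-maximin) Latin hypercube and ordinary least squares. The two-variable "simulator" below is an invented stand-in for a full-physics compositional run, not the reservoir model from the report:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1]^d: one point per row-stratum."""
    u = rng.random((n_samples, n_dims))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(0)
X = latin_hypercube(97, 2, rng)          # 97 samples, 2 design variables

# Synthetic "simulator" response standing in for a full-physics run
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]

def quad_features(X):
    """Full quadratic basis: 1, x_i, x_i^2, x_i*x_j."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1*x2])

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
y_hat = quad_features(X) @ coef
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse)   # essentially zero here: the true surface is itself quadratic
```

In practice the response surface would be validated on held-out simulation runs, as the report does with an independent data set and cross-validation.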

  3. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  4. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document

  5. URBAN MODELLING PERFORMANCE OF NEXT GENERATION SAR MISSIONS

    Directory of Open Access Journals (Sweden)

    U. G. Sefercik

    2017-09-01

Full Text Available In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. Because their generation is independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms, with an absolute accuracy of 8–10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.

  6. Integrated performance assessment model for waste package behavior and radionuclide release

    International Nuclear Information System (INIS)

    Kossik, R.; Miller, I.; Cunnane, M.

    1992-01-01

    Golder Associates Inc. (GAI) has developed a probabilistic total system performance assessment and strategy evaluation model (RIP) which can be applied in an iterative manner to evaluate repository site suitability and guide site characterization. This paper describes one component of the RIP software, the waste package behavior and radionuclide release model. The waste package component model considers waste package failure by various modes, matrix alteration/dissolution, and radionuclide mass transfer. Model parameters can be described as functions of local environmental conditions. The waste package component model is coupled to component models for far-field radionuclide transport and disruptive events. The model has recently been applied to the proposed repository at Yucca Mountain

  7. An evaluation of the performance of chemistry transport models by comparison with research aircraft observations. Part 1: Concepts and overall model performance

    Directory of Open Access Journals (Sweden)

    D. Brunner

    2003-01-01

Full Text Available A rigorous evaluation of five global Chemistry-Transport and two Chemistry-Climate Models, operated by several different groups in Europe, was performed. The models were compared with trace gas observations from a number of research aircraft measurement campaigns during the four-year period 1995-1998. Whenever possible the models were run over the same four-year period, and at each simulation time step the instantaneous tracer fields were interpolated to all coinciding observation points. This approach allows a very close comparison with observations and fully accounts for the specific meteorological conditions during the measurement flights, which is important considering the often limited availability and representativity of such trace gas measurements. A new extensive database including all major research and commercial aircraft measurements between 1995 and 1998, as well as ozone soundings, was established specifically to support this type of direct comparison. Quantitative methods were applied to judge model performance, including the calculation of average concentration biases and the visualization of correlations and RMS errors in the form of so-called Taylor diagrams. We present the general concepts applied, the structure and content of the database, and an overall analysis of model skills over four distinct regions. These regions were selected to represent various atmospheric conditions and to cover large geographical domains such that sufficient observations are available for comparison. The comparison of model results with the observations revealed specific problems for each individual model. This study suggests the further improvements needed and serves as a benchmark for re-evaluations of such improvements. In general all models show deficiencies with respect to both mean concentrations and vertical gradients of important trace gases, including ozone, CO and NOx at the tropopause. Too strong two-way mixing across the
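The statistics summarized in a Taylor diagram (bias, correlation, and centered RMS error) can be computed directly; the observation/model arrays below are invented illustration values, not data from the campaign database:

```python
import numpy as np

def evaluation_stats(model, obs):
    """Bias, correlation, and centered RMS error, the quantities a Taylor
    diagram summarizes for a model-observation comparison."""
    bias = np.mean(model) - np.mean(obs)
    m = model - np.mean(model)              # model anomalies
    o = obs - np.mean(obs)                  # observation anomalies
    corr = np.sum(m * o) / np.sqrt(np.sum(m ** 2) * np.sum(o ** 2))
    crmse = np.sqrt(np.mean((m - o) ** 2))  # bias-free (centered) RMS error
    return bias, corr, crmse

obs = np.array([50.0, 60.0, 55.0, 70.0, 65.0])   # invented "observed" mixing ratios
mod = np.array([45.0, 58.0, 60.0, 75.0, 60.0])   # invented "modelled" values
bias, corr, crmse = evaluation_stats(mod, obs)

# Taylor's relation ties the three statistics together geometrically:
# crmse^2 = s_model^2 + s_obs^2 - 2 * s_model * s_obs * corr
print(bias, corr, crmse)
```

The geometric identity in the final comment is what lets a single diagram display correlation, standard deviation, and centered RMS error at once.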

  8. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

Full Text Available Focusing on identifying the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was established. The relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy exert considerable influence and play a key role in determining the level of performance, but that industry should be considered when analyzing performance variation, whether or not it acts as a moderator of the relations between corporate strategic factors and performance.

  9. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
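A minimal sketch of the tanks-in-series view of the reactor, assuming steady state and simple first-order substrate removal in each compartment (the full model adds dispersion, advection, mass-transfer resistances, and biomass dynamics); all parameter values are illustrative:

```python
def tanks_in_series(s_in, k, total_hrt, n_tanks):
    """Steady-state substrate profile through N equal CSTRs in series
    with first-order removal (rate constant k, per hour)."""
    tau = total_hrt / n_tanks            # residence time per tank, h
    s = s_in
    profile = []
    for _ in range(n_tanks):
        s = s / (1.0 + k * tau)          # CSTR mass balance, first-order kinetics
        profile.append(s)
    return profile

# Influent 1000 mg/L COD, k = 0.5 1/h, 8 h total HRT split over 4 tanks
profile = tanks_in_series(s_in=1000.0, k=0.5, total_hrt=8.0, n_tanks=4)
print(profile[-1])   # effluent concentration: 62.5 for these parameters
```

Increasing the number of tanks with the same total residence time pushes the behaviour from perfectly mixed toward plug flow, which is the point of the CSTR-in-series representation.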

  10. Statistical multi-model approach for performance assessment of cooling tower

    International Nuclear Information System (INIS)

    Pan, Tian-Hong; Shieh, Shyan-Shu; Jang, Shi-Shang; Tseng, Wen-Hung; Wu, Chan-Wei; Ou, Jenq-Jang

    2011-01-01

This paper presents a data-driven, model-based assessment strategy to investigate the performance of a cooling tower. To achieve this objective, the operation of the cooling tower is first characterized using a data-driven multiple-model method, which represents the system as a set of local models in the form of linear equations. A fuzzy c-means clustering algorithm is used to classify the operating data into several groups from which the local models are built. The developed models are then applied to predict the performance of the system based on the design input parameters provided by the manufacturer. The tower characteristics are also investigated using the proposed models via the effects of the water/air flow ratio. The predicted results agree well with the tower characteristics calculated from actual operating data measured at an industrial plant. By comparison with the design characteristic curve provided by the manufacturer, the effectiveness of the cooling tower can be obtained. A case study conducted in a commercial plant demonstrates the validity of the proposed approach. It should be noted that this is the first attempt to assess the deviation of cooling efficiency from its original design value using operating data for an industrial-scale process. Moreover, the evaluation need not interrupt the normal operation of the cooling tower, which should be of particular interest in industrial applications.
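A rough sketch of the clustering step: a from-scratch fuzzy c-means partitions invented two-regime operating data, and a membership-weighted linear fit gives one local model per regime. This illustrates the multiple-model idea only; the paper's actual model structure and plant data are not reproduced:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns the membership matrix U and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return U, centers

# Invented two-regime operating data: the input-output slope switches at x = 5
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, 2.0 * x + 1.0, -x + 16.0)
data = np.column_stack([x, y])

U, centers = fuzzy_c_means(data, n_clusters=2)

# Membership-weighted local linear model y ≈ a*x + b for each cluster
A = np.column_stack([x, np.ones_like(x)])
for k in range(2):
    w = U[:, k]
    coef = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    print(coef)  # one (slope, intercept) pair per operating regime
```

Soft memberships let operating points near a regime boundary contribute to both local models, which is what distinguishes this from hard clustering.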

  11. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

Hydrological models are useful in predicting system behaviour and developing management strategies to control it. Specifically, they can be used for evaluating streamflow at ungaged catchments, assessing the effects of climate change or best management practices on water resources, or identifying pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources in the Ergene Watershed are first studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the performance of the calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), the correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and the main purpose of the hydrological model should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
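The calibration metrics named above can be sketched directly; the observed and simulated flow series below are invented illustration data, not Ergene Watershed measurements:

```python
import numpy as np

def calibration_metrics(obs, sim):
    """Nash-Sutcliffe efficiency, percent bias, and RMSE for a calibration period."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)   # % of total observed volume
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return nse, pbias, rmse

# Invented daily flows (m3/s) for illustration only
obs = [10.0, 20.0, 30.0, 25.0, 15.0]
sim = [12.0, 18.0, 33.0, 22.0, 14.0]
nse, pbias, rmse = calibration_metrics(obs, sim)
print(nse, pbias, rmse)
```

NSE compares the model against the "predict the mean" baseline (1 is perfect, 0 matches the mean), which is why it is sensitive to whether the calibration period contains extreme flows.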

  12. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation of up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model reduces the average root-mean-square error (RMSE) by a minimum of 9 dB. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system using various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks with cell sizes below a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
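For a concrete sense of visibility-based fog attenuation, the widely used Kim model can be sketched as below. This is a standard textbook model, not the unified model proposed in the paper:

```python
def kim_q(visibility_km):
    """Particle size-distribution exponent q in the Kim model."""
    v = visibility_km
    if v > 50.0:
        return 1.6
    if v > 6.0:
        return 1.3
    if v > 1.0:
        return 0.16 * v + 0.34
    if v > 0.5:
        return v - 0.5
    return 0.0            # dense fog: attenuation is wavelength-independent

def fog_attenuation_db_per_km(visibility_km, wavelength_nm=1550.0):
    """Specific attenuation (dB/km) from visibility via the Kim model."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

print(fog_attenuation_db_per_km(0.2))   # ~19.6 dB/km at 200 m visibility
print(fog_attenuation_db_per_km(10.0))  # clear air: a fraction of a dB/km
```

Pushing visibility below ~50 m drives the attenuation into the hundreds of dB/km range quoted in the abstract, which is why FSO link budgets collapse in dense fog.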

  13. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  14. The performance of FLake in the Met Office Unified Model

    Directory of Open Access Journals (Sweden)

    Gabriel Gerard Rooney

    2013-12-01

Full Text Available We present results from the coupling of FLake to the Met Office Unified Model (MetUM). The coupling and initialisation are first described, and the results of testing the coupled model in local and global model configurations are presented. These show that FLake has a small statistical impact on screen temperature, but has the potential to modify the weather in the vicinity of areas of significant inland water. Examination of FLake lake ice has revealed that the behaviour of lakes in the coupled model is unrealistic in some areas of significant sub-grid orography. Tests of various modifications to ameliorate this behaviour are presented. The results indicate which of the possible model changes best improve the annual cycle of lake ice. As FLake has been developed and tuned entirely outside the Unified Model system, these results can be interpreted as a useful objective measure of the performance of the Unified Model in terms of its near-surface characteristics.

  15. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

This paper presents a new model to simulate the energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided, but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperatures in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode, and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with the different specifications of each indoor unit, so the modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system, in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in the Fresno climate, followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. Actual performance of the VRF systems for real
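The capacity-modifier idea can be sketched as a biquadratic curve in evaporating and condensing temperature, the general polynomial form such performance curves take in EnergyPlus. The coefficients below are purely illustrative placeholders, not manufacturer data or values from the paper:

```python
def capacity_modifier(t_evap, t_cond, coef):
    """Biquadratic capacity-modifier curve in evaporating/condensing temperature
    (degC), the independent variables the new VRF model introduces in place of
    room/outdoor air temperatures."""
    a, b, c, d, e, f = coef
    return (a + b * t_evap + c * t_evap ** 2
              + d * t_cond + e * t_cond ** 2
              + f * t_evap * t_cond)

# Illustrative coefficients only (hypothetical, for demonstration)
coef = (1.0, 0.02, 0.0005, -0.01, -0.0002, 0.0003)
print(capacity_modifier(7.0, 45.0, coef))   # rated capacity fraction at 7/45 degC
```

In a full simulation this modifier multiplies the rated capacity of each indoor unit, which is how the model accommodates units with different specifications.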

  16. Kinetic Hydration Heat Modeling for High-Performance Concrete Containing Limestone Powder

    Directory of Open Access Journals (Sweden)

    Xiao-Yong Wang

    2017-01-01

Full Text Available Limestone powder is increasingly used in producing high-performance concrete in the modern concrete industry. Limestone powder blended concrete has many advantages, such as increasing the early-age strength, reducing the setting time, improving the workability, and reducing the heat of hydration. This study presents a kinetic model for the hydration heat of limestone blended concrete. First, an improved hydration model is proposed that considers the dilution effect and nucleation effect of limestone powder addition, and the degree of hydration is calculated using this improved model. Second, the hydration heat is calculated from the degree of hydration, and the effects of the water-to-binder ratio and the limestone replacement ratio on hydration heat are clarified. Third, the temperature history and temperature distribution of hardening limestone blended concrete are calculated by combining the hydration model with the finite element method. The analysis results generally agree with experimental results for high-performance concrete with various mixing proportions.
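A minimal sketch of a degree-of-hydration heat model with a dilution effect. It assumes the common three-parameter exponential hydration curve and the Mills formula for the ultimate degree of hydration; the paper's own kinetic model is more detailed, and all parameter values here are illustrative:

```python
import numpy as np

def hydration_heat(t_days, w_b, limestone_ratio, q_pot=500.0, tau=5.0, beta=1.0):
    """Cumulative hydration heat (J per gram of binder) from a simple
    degree-of-hydration model. The dilution effect enters by letting only
    the cement fraction (1 - limestone_ratio) of the binder react."""
    alpha_u = 1.031 * w_b / (0.194 + w_b)                 # ultimate degree (Mills)
    alpha = alpha_u * np.exp(-((tau / t_days) ** beta))   # 3-parameter time curve
    return (1.0 - limestone_ratio) * q_pot * alpha

print(hydration_heat(28.0, 0.5, 0.0))   # plain cement binder at 28 days
print(hydration_heat(28.0, 0.5, 0.2))   # 20% limestone: lower heat (dilution)
```

The nucleation effect of limestone, which accelerates early hydration of the remaining cement, would modify the time parameters rather than the scaling shown here.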

  17. Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D

    Institute of Scientific and Technical Information of China (English)

    KONG Li-feng; ZHAO Hai-ying; XU Guang-mei

    2014-01-01

Mobile platforms are now widely seen as a promising multimedia service with a favorable user group and market prospects. To study the influence of mobile terminal models on the quality of scene roaming, a parameter-setting platform for mobile terminal models is established in this paper to examine parameter selection and performance indices on different mobile platforms. The test platform is built on the model-optimality principle: it analyzes the performance curves of mobile terminals for different scene models and then deduces the external parameters for model establishment. Simulation results show that the established test platform is able to analyze the parameter and performance matching list of a mobile terminal model.

  18. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

Full Text Available This study extends the prior research (Zahra and Das 1993) by examining the association between a company's innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 individual usable questionnaires were returned and qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducted structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met the goodness-of-fit criteria, thus the interpretation of the sequential models fits the data. The results of SEM and hierarchical multiple regression (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual

  19. Developing Performance Management in State Government: An Exploratory Model for Danish State Institutions

    DEFF Research Database (Denmark)

    Nielsen, Steen; Rikhardsson, Pall M.

    . The question remains how and if accounting departments in central government can deal with these challenges. This exploratory study proposes and tests a model depicting different areas, elements and characteristics within a government accounting departments and their association with a perceived performance...... management model. The findings are built on a questionnaire study of 45 high level accounting officers in central governmental institutions. Our statistical model consists of five explored constructs: improvements; initiatives and reforms, incentives and contracts, the use of management accounting practices......, and cost allocations and their relations to performance management. Findings based on structural equation modelling and partial least squares regression (PLS) indicates a positive effect on the latent depending variable, called performance management results. The models/theories explain a significant...

  20. Computerized heat balance models to predict performance of operating nuclear power plants

    International Nuclear Information System (INIS)

    Breeding, C.L.; Carter, J.C.; Schaefer, R.C.

    1983-01-01

    The use of computerized heat balance models has greatly enhanced the decision making ability of TVA's Division of Nuclear Power. These models are utilized to predict the effects of various operating modes and to analyze changes in plant performance resulting from turbine cycle equipment modifications with greater speed and accuracy than was possible before. Computer models have been successfully used to optimize plant output by predicting the effects of abnormal condenser circulating water conditions. They were utilized to predict the degradation in performance resulting from installation of a baffle plate assembly to replace damaged low-pressure blading, thereby providing timely information allowing an optimal economic judgement as to when to replace the blading. Future use will be for routine performance test analysis. This paper presents the benefits of utility use of computerized heat balance models

  1. INFORMATION SYSTEM FOR MODELING ECONOMIC AND FINANCIAL PERFORMANCES

    Directory of Open Access Journals (Sweden)

    Boldeanu Dana Maria

    2009-05-01

    Full Text Available The analysis of the most important financial and economic indicators at the level of some organizations from the same sector of activity, the selection of performance ratios and generating a particular analysis model help companies to move from the desire

  2. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive antenna and a finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D-Massive MIMO systems.
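The MI distribution for the single-receive-antenna case can be explored by Monte Carlo. The sketch below assumes an i.i.d. Rayleigh channel with equal power allocation across transmit antennas, a simplification rather than the paper's maximum-entropy 3D model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, snr_db, n_trials = 8, 10.0, 20000
snr = 10.0 ** (snr_db / 10.0)

# i.i.d. Rayleigh rows h ~ CN(0, I); equal power split across transmit antennas
h = (rng.standard_normal((n_trials, n_tx))
     + 1j * rng.standard_normal((n_trials, n_tx))) / np.sqrt(2.0)
mi = np.log2(1.0 + (snr / n_tx) * np.sum(np.abs(h) ** 2, axis=1))  # bits/s/Hz

# Empirical CDF of the mutual information; the median as a point summary
mi_sorted = np.sort(mi)
median_mi = mi_sorted[n_trials // 2]
print(median_mi)
```

Plotting `mi_sorted` against its empirical quantiles gives the CDF whose analytical counterpart the paper derives; a 3D model would replace the i.i.d. entries with angle-dependent correlation.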

  3. Performance of chromatographic systems to model soil-water sorption.

    Science.gov (United States)

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

A systematic approach for evaluating the goodness of chromatographic systems to model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions of variance are easily predicted through the characterization of the systems by the solvation parameter model. Following this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in emulating soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models, and have been shown to predict the soil-water sorption coefficients of several tested herbicides well. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients. Copyright © 2012 Elsevier B.V. All rights reserved.
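The solvation parameter model underlying this characterization is a linear free-energy relationship, log k = c + eE + sS + aA + bB + vV, whose system coefficients are obtained by multiple linear regression over solutes with known descriptors. A least-squares fit can be sketched on synthetic data; all descriptor values and coefficients below are invented for illustration, not measured system constants:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented Abraham solute descriptors E, S, A, B, V for 30 hypothetical solutes
X = rng.random((30, 5))
true_coef = np.array([0.4, -0.5, -0.3, -1.2, 2.1])   # e, s, a, b, v (illustrative)
c = 0.2
log_k = c + X @ true_coef                             # noise-free for the sketch

# Fit the system constants (c, e, s, a, b, v) by ordinary least squares
A_mat = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A_mat, log_k, rcond=None)
print(coef)   # recovers [c, e, s, a, b, v] exactly in the noise-free case
```

Comparing the fitted system constants of a chromatographic system with those of the soil-water partition system is what predicts the "dissimilarity" variance term discussed in the abstract.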

  4. Rotary engine performance limits predicted by a zero-dimensional model

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

A parametric study was performed to determine the performance limits of a rotary combustion engine, showing how increasing the combustion rate, insulating, and turbocharging affect brake power and fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine; performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980s. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single-rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  5. Software life cycle dynamic simulation model: The organizational performance submodel

    Science.gov (United States)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
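The parameterized first-order differential equations governing product, staffing, and funding levels are not reproduced in the abstract; a minimal sketch of how such first-order flows can be integrated for one phase, using hypothetical illustrative dynamics (the parameter names `rho`, `tau`, `s_target`, and `p_goal` are assumptions, not the submodel's actual parameters), might look like this:

```python
def simulate_phase(rho, tau, s_target, p_goal, dt=0.1, t_max=200.0):
    """Euler-integrate illustrative first-order flows for one life-cycle phase:
    staffing S relaxes toward its target with time constant tau, and the
    product level P accumulates at a rate proportional to staff level."""
    s, p, t = 0.0, 0.0, 0.0
    while t < t_max and p < p_goal:
        ds = (s_target - s) / tau  # staffing ramp-up (first-order lag)
        dp = rho * s               # production proportional to staff level
        s += ds * dt
        p += dp * dt
        t += dt
    return t, s, p
```

Calling `simulate_phase(rho=1.0, tau=5.0, s_target=10.0, p_goal=100.0)` returns the simulated time at which the (hypothetical) product goal is met, along with the staffing and product levels at that point.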

  6. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

Full Text Available The productivity of a gas well declines over its production life, eventually falling to rates that can no longer cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, least square support vector machine (LSSVM) approach, adaptive neuro-fuzzy inference system (ANFIS), and decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time as a function of the Arps' decline curve exponent and the ratio of initial gas flow rate over total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
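For context, the Arps decline relations that underlie this kind of analysis can be sketched as follows. This is the classical textbook form, not the study's AI models, and the parameter values in the usage example are purely illustrative:

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline: flow rate at time t from initial rate qi, initial
    decline rate di, and decline exponent b (b=0 is exponential decline)."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def arps_cumulative(qi, di, b, t):
    """Cumulative production up to time t (analytical integral of the rate)."""
    q = arps_rate(qi, di, b, t)
    if b == 0:
        return (qi - q) / di
    if b == 1:
        return (qi / di) * math.log(qi / q)
    return qi ** b / (di * (1.0 - b)) * (qi ** (1.0 - b) - q ** (1.0 - b))
```

For example, with an illustrative initial rate of 1000, initial decline rate of 0.1 per period, and b = 0.5, `arps_rate(1000.0, 0.1, 0.5, 12.0)` evaluates to 390.625.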

  7. Models for the financial-performance effects of Marketing

    NARCIS (Netherlands)

    Hanssens, D.M.; Dekimpe, Marnik; Wierenga, B.; van der Lans, R.

    We consider marketing-mix models that explicitly include financial performance criteria. These financial metrics are not only comparable across the marketing mix, they also relate well to investors’ evaluation of the firm. To that extent, we treat marketing as an investment in customer value

  8. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.

  9. Individualized Next-Generation Biomathematical Modeling of Fatigue and Performance

    National Research Council Canada - National Science Library

    Van Dongen, Hans P

    2006-01-01

    .... This project employed a cutting-edge technique called Bayesian forecasting to develop a novel biomathematical performance model to predict responses to sleep loss and circadian displacement for individual subjects...

  10. Predicting the Consequences of Workload Management Strategies with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Samma, Charneta

    2011-01-01

Human performance modelers at the US Army Research Laboratory have developed an approach for establishing Soldier high workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the control and alternative design conditions to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools or conducting experiments or field tests can use the same approach.

  11. Effect of calibration data series length on performance and optimal parameters of hydrological model

    Directory of Open Access Journals (Sweden)

    Chuan-zhe Li

    2010-12-01

Full Text Available In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (data are non-continuous and fragmental in some catchments), we used non-continuous calibration periods to obtain more independent streamflow data for SIMHYD (simple hydrology model) calibration. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Different lengths of data series ranging from one year to ten years, randomly sampled, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia with daily precipitation, potential evapotranspiration, and streamflow data were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain a good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficiency of using limited observation data for hydrological model calibration in different climates.
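The two performance measures named above have simple closed forms; a minimal sketch of their generic definitions, independent of SIMHYD itself:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
    than predicting the observed mean, negative is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def water_balance_error(obs, sim):
    """Percentage water balance error: signed bias in total simulated volume."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)
```

A simulation that only ever predicts the observed mean scores exactly 0 on Nash-Sutcliffe efficiency, which is why values well above 0 are usually required before a calibration is considered acceptable.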

  12. Modeling the Performance of Water-Zeolite 13X Adsorption Heat Pump

    Directory of Open Access Journals (Sweden)

    Kowalska Kinga

    2017-12-01

Full Text Available The dynamic performance of a cylindrical double-tube adsorption heat pump is numerically analysed using a non-equilibrium model, which takes into account both heat and mass transfer processes. The model includes conservation equations for heat transfer in the heating/cooling fluids, heat transfer in the metal tube, and heat and mass transfer in the adsorbent. The mathematical model is numerically solved using the method of lines. Numerical simulations are performed for the system water-zeolite 13X, chosen as the working pair. The effect of the evaporator and condenser temperatures on the adsorption and desorption kinetics is examined. The results of the numerical investigation show that both of these parameters have a significant effect on the adsorption heat pump performance. Based on computer simulation results, the values of the coefficients of performance for heating and cooling are calculated. The results show that adsorption heat pumps have relatively low efficiency compared to other heat pumps. The value of the coefficient of performance for heating is higher than for cooling.

  13. Wind farms providing secondary frequency regulation: evaluating the performance of model-based receding horizon control

    Directory of Open Access Journals (Sweden)

    C. R. Shapiro

    2018-01-01

Full Text Available This paper is an extended version of our paper presented at the 2016 TORQUE conference (Shapiro et al., 2016). We investigate the use of wind farms to provide secondary frequency regulation for a power grid using a model-based receding horizon control framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and wake interactions, both of which play an important role in wind farm power production. In order to test the control strategy, it is implemented in a large-eddy simulation (LES) model of an 84-turbine wind farm using the actuator disk turbine representation. Rotor-averaged velocity measurements at each turbine are used to provide feedback for error correction. The importance of including the dynamics of wake advection in the underlying wake model is tested by comparing the performance of this dynamic-model control approach to a comparable static-model control approach that relies on a modified Jensen model. We compare the performance of both control approaches using two types of regulation signals, RegA and RegD, which are used by PJM, an independent system operator in the eastern United States. The poor performance of the static-model control relative to the dynamic-model control demonstrates that modeling the dynamics of wake advection is key to providing the proposed type of model-based coordinated control of large wind farms. We further explore the performance of the dynamic-model control via composite performance scores used by PJM to qualify plants for regulation services or markets. Our results demonstrate that the dynamic-model-controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower timescales, the dynamic-model control leads to average performance that surpasses the qualification threshold, but further
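The modified Jensen model used by the static-model controller is not specified in the abstract; in its classical top-hat form, the Jensen wake model reduces to a single algebraic velocity-deficit formula. A sketch of that classical form (the rotor radius, induction factor, and wake expansion coefficient below are illustrative defaults, not values from the paper):

```python
def jensen_deficit(x, r0, a, k):
    """Classical Jensen wake: fractional velocity deficit a distance x
    downstream of a turbine with rotor radius r0, induction factor a,
    and wake expansion coefficient k."""
    return 2.0 * a / (1.0 + k * x / r0) ** 2

def waked_speed(u_inf, x, r0=50.0, a=1.0 / 3.0, k=0.075):
    """Hub-height wind speed at a downstream point, assuming full wake
    overlap (hypothetical default parameters for illustration)."""
    return u_inf * (1.0 - jensen_deficit(x, r0, a, k))
```

The deficit decays quadratically with downstream distance and the waked speed recovers toward the free-stream value; crucially, this static formula carries no advection time delay, which is the limitation the time-varying wake model above addresses.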

  14. The Social Responsibility Performance Outcomes Model: Building Socially Responsible Companies through Performance Improvement Outcomes.

    Science.gov (United States)

    Hatcher, Tim

    2000-01-01

    Considers the role of performance improvement professionals and human resources development professionals in helping organizations realize the ethical and financial power of corporate social responsibility. Explains the social responsibility performance outcomes model, which incorporates the concepts of societal needs and outcomes. (LRW)

  15. Performance Estimation of Networked Business Models: Case Study on a Finnish eHealth Service Project

    Directory of Open Access Journals (Sweden)

    Marikka Heikkilä

    2014-08-01

Full Text Available Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in the Health & Wellbeing sector aiming to jointly provide a new service for business and private customers. The duration of the research study is 3 years. Findings: We propose that a balanced set of performance indicators can be defined by paying attention to all main components of the business model, enriched with measures of network collaboration. The results highlight the importance of measuring all main components of the business model and also the business network partners’ views on trust, contracts and fairness. Research implications: This article contributes to the business model literature by combining business modelling with performance evaluation. The article points out that it is essential to create metrics that can be applied to evaluate and improve the business model blueprints, but it is also important to measure business collaboration aspects. Practical implications: Companies have already adopted the Business Model Canvas or similar business model tools to innovate new business models. We suggest that companies continue their business model innovation work by agreeing on a set of performance metrics, building on the business model components model enriched with measures of network collaboration. Originality/value: This article contributes to the business model literature and praxis by combining business modelling with performance evaluation.

  16. Modelling of performance of the ATLAS SCT detector

    International Nuclear Information System (INIS)

    Kazi, S.

    2000-01-01

Full text: The ATLAS detector being built at the LHC will use the SCT (semiconductor tracking) module for particle tracking in the inner core of the detector. An analytical/numerical model of the discriminator threshold dependence and the temperature dependence of the SCT module was derived. Measurements were conducted on the performance of the SCT module versus temperature and these results were compared with the predictions made by the model. The effect of radiation damage on the SCT detector was also investigated. The detector will operate for approximately 10 years, so a study was carried out on the effects of 10 years of radiation exposure on the SCT.

  17. The Effect of Covert Modeling on Communication Apprehension, Communication Confidence, and Performance.

    Science.gov (United States)

    Nimocks, Mittie J.; Bromley, Patricia L.; Parsons, Theron E.; Enright, Corinne S.; Gates, Elizabeth A.

    This study examined the effect of covert modeling on communication apprehension, public speaking anxiety, and communication competence. Students identified as highly communication apprehensive received covert modeling, a technique in which one first observes a model doing a behavior, then visualizes oneself performing the behavior and obtaining a…

  18. Evaluating Internal Model Strength and Performance of Myoelectric Prosthesis Control Strategies.

    Science.gov (United States)

    Shehata, Ahmed W; Scheme, Erik J; Sensinger, Jonathon W

    2018-05-01

On-going developments in myoelectric prosthesis control have provided prosthesis users with an assortment of control strategies that vary in reliability and performance. Many studies have focused on improving performance by providing feedback to the user but have overlooked the effect of this feedback on internal model development, which is key to improving long-term performance. In this paper, the strength of the internal models developed for two commonly used myoelectric control strategies, raw control with raw feedback (using a regression-based approach) and filtered control with filtered feedback (using a classifier-based approach), was evaluated using two psychometric measures: trial-by-trial adaptation and just-noticeable difference. The performance of both strategies was also evaluated using Schmidt's style target acquisition task. Results obtained from 24 able-bodied subjects showed that although filtered control with filtered feedback had better short-term performance in path efficiency, raw control with raw feedback resulted in stronger internal model development, which may lead to better long-term performance. Despite inherent noise in the control signals of the regression controller, these findings suggest that the rich feedback associated with regression control may be used to improve human understanding of the myoelectric control system.

  19. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    Science.gov (United States)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In flight condition at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  20. VALIDITY OF THE DIMENSIONAL CONFIGURATION OF THE REDUCED POTENTIAL PERFORMANCE MODEL IN SKI JUMPING

    OpenAIRE

    Ulaga, Maja; Čoh, Milan; Jošt, Bojan

    2007-01-01

The aim of the study was to establish the validity of the dimensional configuration of the reduced potential performance model in ski jumping. Two performance models were prepared (models A and B), differing only in terms of their method of determining the weights (dimensional configuration). Model A involves the dependent determination of weights while model B includes the independent determination of weights. The sample consisted of 104 Slovenian ski jumpers from the senior-men’s categor...

  1. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts are addressing a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data that has been collected indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for application to a number of diverse applications. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that has previously not existed.

  2. Acoustic/seismic signal propagation and sensor performance modeling

    Science.gov (United States)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  3. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. This thesis suggested a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation, together with quantification methods using a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. This model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knows the accident sequences. Simulated results of team dynamic task performance, together with the behavior of reference key plant parameters and the team-specific organizational center of gravity and cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, in a cost-benefit manner, when recruiting new operating teams for new plants. Also, this model can be utilized as a systematic analysis tool for

  5. Delay model and performance testing for FPGA carry chain TDC

    International Nuclear Information System (INIS)

    Kang Xiaowen; Liu Yaqiang; Cui Junjian Yang Zhangcan; Jin Yongjie

    2011-01-01

Time-of-flight (TOF) information would improve the performance of PET (positron emission tomography), and TDC design is a key technique for obtaining it. This work proposed a carry-chain TDC delay model. By varying the significant delay parameters of the model, the paper compared the resulting differences in TDC performance, and finally realized a time-to-digital converter (TDC) based on the carry-chain method using an FPGA (EP2C20Q240C8N) with a 69 ps LSB and a maximum error below 2 LSB. Such a result could meet the TOF demand. A coaxial-cable measuring method was also proposed for TDC testing, without high-precision test equipment. (authors)
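The coaxial-cable testing method is specific to this work, but a common complementary way to characterize carry-chain bin widths is a code-density test: feed the TDC hits uncorrelated with the clock and infer each delay cell's width from its share of the counts. A minimal sketch (not the paper's method):

```python
def bin_widths_ps(hits, window_ps):
    """Code-density calibration: with hits uniformly distributed over one
    clock window of length window_ps, each carry-chain cell's delay is
    proportional to the fraction of hits that landed in its bin."""
    total = sum(hits)
    return [window_ps * h / total for h in hits]
```

With a 400 ps window and counts `[1, 1, 2]`, the inferred widths are `[100.0, 100.0, 200.0]` ps; nonuniformity of the carry chain, a major error source in such TDCs, shows up directly as unequal widths.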

  6. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  7. Achievable ADC Performance by Postcorrection Utilizing Dynamic Modeling of the Integral Nonlinearity

    Directory of Open Access Journals (Sweden)

    Peter Händel

    2008-03-01

Full Text Available There is a need for a universal dynamic model of analog-to-digital converters (ADCs) aimed at postcorrection. However, it is complicated to fully describe the properties of an ADC with a single model. An alternative is to split the ADC model into different components, where each component has unique properties. In this paper, a model based on three components is used, and a performance analysis for each component is presented. Each component can be postcorrected individually, by the method that best suits the application. The purpose of postcorrection of an ADC is to improve its performance. Hence, for each component, expressions for the potential improvement have been developed. The measures of performance are total harmonic distortion (THD) and signal to noise and distortion (SINAD), and to some extent spurious-free dynamic range (SFDR).
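As an illustration of one of these measures, SINAD can be estimated by fitting the fundamental with a three-parameter sine fit (IEEE Std 1057 style) at a known test frequency and treating the residual as noise plus distortion. This is a generic sketch of the measurement, not the paper's model-based postcorrection:

```python
import numpy as np

def sinad_db(x, f0, fs):
    """Estimate SINAD in dB: least-squares fit of amplitude, phase, and DC
    offset of a sinusoid at the known test frequency f0 (three-parameter
    sine fit); everything left in the residual counts as noise + distortion."""
    n = np.arange(len(x))
    w = 2.0 * np.pi * f0 / fs
    basis = np.column_stack([np.cos(w * n), np.sin(w * n), np.ones(len(x))])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    fit = basis @ coef
    p_signal = np.mean((fit - coef[2]) ** 2)  # fundamental power (DC removed)
    p_resid = np.mean((x - fit) ** 2)         # noise + distortion power
    return 10.0 * np.log10(p_signal / p_resid)
```

Unlike an FFT-based estimate, the sine fit does not require coherent sampling; THD would additionally require isolating the harmonic components from the residual.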

  8. Improved Fuzzy Modelling to Predict the Academic Performance of Distance Education Students

    Directory of Open Access Journals (Sweden)

    Osman Yildiz

    2013-12-01

    Full Text Available It is essential to predict distance education students’ year-end academic performance early during the course of the semester and to take precautions using such prediction-based information. This will, in particular, help enhance their academic performance and, therefore, improve the overall educational quality. The present study was on the development of a mathematical model intended to predict distance education students’ year-end academic performance using the first eight-week data on the learning management system. First, two fuzzy models were constructed, namely the classical fuzzy model and the expert fuzzy model, the latter being based on expert opinion. Afterwards, a gene-fuzzy model was developed optimizing membership functions through genetic algorithm. The data on distance education were collected through Moodle, an open source learning management system. The data were on a total of 218 students who enrolled in Basic Computer Sciences in 2012. The input data consisted of the following variables: When a student logged on to the system for the last time after the content of a lesson was uploaded, how often he/she logged on to the system, how long he/she stayed online in the last login, what score he/she got in the quiz taken in Week 4, and what score he/she got in the midterm exam taken in Week 8. A comparison was made among the predictions of the three models concerning the students’ year-end academic performance.
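The fuzzy models above rest on membership functions over inputs such as login frequency. A minimal sketch of a two-rule Sugeno-style predictor with piecewise-linear memberships (the breakpoints and consequent scores here are hypothetical illustrations, not the study's fitted or GA-optimized values):

```python
def ramp_down(x, a, b):
    """Left-shoulder membership: 1 below a, falling linearly to 0 at b."""
    return min(1.0, max(0.0, (b - x) / (b - a)))

def ramp_up(x, a, b):
    """Right-shoulder membership: 0 below a, rising linearly to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def predict_score(logins_per_week):
    """Weighted average of two hypothetical rule consequents:
    'rarely logs in -> score 35' and 'often logs in -> score 75'.
    The overlapping supports guarantee a nonzero denominator."""
    mu_low = ramp_down(logins_per_week, 0.0, 20.0)
    mu_high = ramp_up(logins_per_week, 10.0, 40.0)
    return (mu_low * 35.0 + mu_high * 75.0) / (mu_low + mu_high)
```

Optimizing the breakpoints of such membership functions with a genetic algorithm, as in the gene-fuzzy model above, amounts to searching over the `(a, b)` pairs that minimize prediction error on the training data.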

  9. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

DOSDIM was developed to assess the impact on man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, as opposed to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs

  10. Teacher characteristics and student performance: An analysis using hierarchical linear modelling

    Directory of Open Access Journals (Sweden)

    Paula Armstrong

    2015-12-01

Full Text Available This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for the Sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002) and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts, nor to control for teachers’ level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.

  11. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of the layout design schemes of the action station in a multitasking operational room. This model was constructed in order to calculate and compare the theoretical value of team performance in multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experiment results, the proposed approach is conducive to the prediction and ergonomic evaluation of the layout design schemes of the action station during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization design of layout design schemes.

  12. Performance prediction of a proton exchange membrane fuel cell using the ANFIS model

    Energy Technology Data Exchange (ETDEWEB)

    Vural, Yasemin; Ingham, Derek B.; Pourkashanian, Mohamed [Centre for Computational Fluid Dynamics, University of Leeds, Houldsworth Building, LS2 9JT Leeds (United Kingdom)

    2009-11-15

    In this study, the performance (current-voltage curve) prediction of a Proton Exchange Membrane Fuel Cell (PEMFC) is performed for different operational conditions using an Adaptive Neuro-Fuzzy Inference System (ANFIS). First, ANFIS is trained with a set of input and output data. The trained model is then tested with an independent set of experimental data. The trained and tested model is then used to predict the performance curve of the PEMFC under various operational conditions. The model shows very good agreement with the experimental data and this indicates that ANFIS is capable of predicting fuel cell performance (in terms of cell voltage) with a high accuracy in an easy, rapid and cost effective way for the case presented. Finally, the capabilities and the limitations of the model for the application in fuel cells have been discussed. (author)

  13. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  14. Performance assessment of the RANS turbulence models in nuclear fuel rod bundles

    International Nuclear Information System (INIS)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan

    2005-02-01

    The three experiments for turbulent flow in a rod bundle geometry were simulated in this CFD analysis using various RANS models. The CFD predictions were compared with the experimental and DNS results. The RANS models used here are the nonlinear quadratic/cubic κ-ε models and the second-order closure models (SSG, LRR, RSM-ω). The anisotropic models predicted the secondary flow and showed significantly improved agreement with the measurements compared with the standard κ-ε model. In particular, the SSG model performed best, showing the closest agreement with the experimental results. However, the RANS models could not predict the very high anisotropy observed in a rod bundle with a small pitch-to-diameter ratio

  15. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  16. Performance analysis and dynamic modeling of a single-spool turbojet engine

    Science.gov (United States)

    Andrei, Irina-Carmen; Toader, Adrian; Stroe, Gabriela; Frunzulica, Florin

    2017-01-01

    The purposes of modeling and simulation of a turbojet engine are steady-state analysis and transient analysis. The steady-state analysis investigates the equilibrium operating regimes and is based on appropriate modeling of the turbojet engine at design and off-design regimes; it yields the performance analysis, summarized by the engine's operational maps (i.e. the altitude map, velocity map and speed map) and the engine's universal map. The mathematical model that allows the calculation of the design and off-design performances of a single-spool turbojet is detailed. An in-house code was developed and calibrated for the J85 turbojet engine as the test case. The dynamic model of the turbojet engine is obtained from the energy balance equations for the compressor, combustor and turbine, as the engine's main parts. The transient analysis, which is based on appropriate modeling of the engine and its main parts, expresses the dynamic behavior of the turbojet engine and, further, provides details regarding the engine's control. The aim of the dynamic analysis is to determine a control program for the turbojet, based on the results provided by the performance analysis. In the case of a single-spool turbojet engine with fixed nozzle geometry, the thrust is controlled by one parameter, the fuel flow rate. The design and management of the aircraft engine controls are based on the results of the transient analysis. The construction of the design model is complex, since it is based on both steady-state and transient analysis, further allowing flight path cycle analysis and optimization. This paper presents numerical simulations for a single-spool turbojet engine (J85 as the test case), with appropriate modeling for steady-state and dynamic analysis.
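
    A dynamic model of this kind rests on the spool energy balance: the excess of turbine power over compressor power accelerates the shaft, J·ω·dω/dt = P_t − P_c. The minimal sketch below uses invented placeholder values for the inertia, speed and powers, not J85 engine data.

```python
# Spool acceleration from the energy balance J*omega*d(omega)/dt = P_t - P_c.
# All numerical values are illustrative placeholders, not J85 engine data.
J = 0.05        # spool moment of inertia, kg*m^2
omega = 2000.0  # initial shaft speed, rad/s

def spool_step(omega, P_turbine, P_compressor, dt=1e-3):
    """Explicit-Euler update of shaft speed from the power imbalance."""
    return omega + dt * (P_turbine - P_compressor) / (J * omega)

# Fuel-flow step: turbine power temporarily exceeds compressor demand,
# so the spool accelerates toward a new equilibrium speed.
for _ in range(1000):  # 1 s of transient at dt = 1 ms
    omega = spool_step(omega, P_turbine=1.2e6, P_compressor=1.0e6)
```

    Because the thrust of a fixed-nozzle single-spool engine is controlled only through fuel flow, the control program amounts to scheduling the power imbalance in such a recursion.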

  17. Performance and Cognitive Assessment in 3-D Modeling

    Science.gov (United States)

    Fahrer, Nolan E.; Ernst, Jeremy V.; Branoff, Theodore J.; Clark, Aaron C.

    2011-01-01

    The purpose of this study was to investigate identifiable differences between performance and cognitive assessment scores in a 3-D modeling unit of an engineering drafting course curriculum. The study aimed to provide further investigation of the need of skill-based assessments in engineering/technical graphics courses to potentially increase…

  18. Use of GIS and 3D Modeling for Development and Conceptualization of a Performance Assessment Model for Decommissioning of a Complex Site

    International Nuclear Information System (INIS)

    Esh, D. W.; Gross, A. J.; Thaggard, M.

    2006-01-01

    Geographic Information Systems (GIS) and 3D geo-spatial modeling were employed to facilitate development and conceptualization of a performance assessment (PA) model that will be used to evaluate the health impacts of residual radioactivity at a former nuclear materials processing facility site in New York. Previous operations have resulted in a number of different sources of radiological contamination that must be assessed during site decommissioning. A performance assessment model is being developed to estimate radiological dose to potential receptors through the simulation of the release and transport of radionuclides, and exposure to residual contamination for hundreds to thousands of years in the future. A variety of inputs are required to parameterize the performance assessment model, such as: distance from the waste to surface water bodies, thickness of geologic units for saturated transport, saturated thickness of the geologic units, and spatial and temporal average of percent of waste that is saturated. GIS and 3D modeling are used to analyze and abstract aleatory uncertainty associated with the dimensionality of the geologic system into epistemic uncertainty for one- and two-dimensional process models for flow and transport of radionuclides. Three-dimensional geo-spatial modeling was used to develop the geologic framework and the geometrical representation of the residual contamination within the geologic framework. GIS was used in the initial development and parameterization of the transport pathways, to provide spatial context to the PA model, and to link it to the 3D geologic framework and contamination geometry models. Both the GIS and 3-D modeling were used to interpret the results of runs of the PA model. (authors)

  19. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi Dai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
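
    The three winning rules can be written down compactly: a prospect utility u(x) = x^α for gains and −λ|x|^α for losses, a decay-reinforcement update in which every deck expectancy decays and the chosen deck is reinforced by the utility, and a softmax choice rule with a constant (trial-independent) sensitivity. The parameter values in the sketch below are illustrative, not the estimates reported in the study.

```python
import math

# Illustrative parameters (not the individual estimates from the study).
alpha, lam = 0.5, 2.0  # prospect-utility shape and loss aversion
decay = 0.8            # decay-reinforcement memory parameter
theta = 1.0            # constant (trial-independent) choice sensitivity

def utility(x):
    """Prospect utility treating gains and losses separately."""
    return x ** alpha if x >= 0 else -lam * abs(x) ** alpha

def update(expectancies, chosen, payoff):
    """Decay-reinforcement: every expectancy decays; the chosen one is reinforced."""
    new = [decay * e for e in expectancies]
    new[chosen] += utility(payoff)
    return new

def choice_probs(expectancies):
    """Softmax over deck expectancies with sensitivity theta."""
    exps = [math.exp(theta * e) for e in expectancies]
    z = sum(exps)
    return [e / z for e in exps]

E = [0.0, 0.0, 0.0, 0.0]             # four decks, as in the IGT/SGT
E = update(E, chosen=0, payoff=100)  # deck 0 pays a gain of 100
probs = choice_probs(E)
```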

  20. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Directory of Open Access Journals (Sweden)

    Jeon Soohong

    2014-12-01

    Full Text Available This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. The CAE modeling and simulation therefore incorporates commercial software: CATIA for geometry modeling, MSC/PATRAN for FE meshing and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  2. WWER reactor fuel performance, modelling and experimental support. Proceedings

    International Nuclear Information System (INIS)

    Stefanova, S.; Chantoin, P.; Kolev, I.

    1994-01-01

    This publication is a compilation of 36 papers presented at the International Seminar on WWER Reactor Fuel Performance, Modelling and Experimental Support, organised by the Institute for Nuclear Research and Nuclear Energy (BG), in cooperation with the International Atomic Energy Agency. The Seminar was attended by 76 participants from 16 countries, including representatives of all major Russian plants and institutions responsible for WWER reactor fuel manufacturing, design and research. The reports are grouped in four chapters: 1) WWER Fuel Performance and Economics: Status and Improvement Prospects: 2) WWER Fuel Behaviour Modelling and Experimental Support; 3) Licensing of WWER Fuel and Fuel Analysis Codes; 4) Spent Fuel of WWER Plants. The reports from the corresponding four panel discussion sessions are also included. All individual papers are recorded in INIS as separate items

  3. WWER reactor fuel performance, modelling and experimental support. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, S; Chantoin, P; Kolev, I [eds.

    1994-12-31

    This publication is a compilation of 36 papers presented at the International Seminar on WWER Reactor Fuel Performance, Modelling and Experimental Support, organised by the Institute for Nuclear Research and Nuclear Energy (BG), in cooperation with the International Atomic Energy Agency. The Seminar was attended by 76 participants from 16 countries, including representatives of all major Russian plants and institutions responsible for WWER reactor fuel manufacturing, design and research. The reports are grouped in four chapters: (1) WWER Fuel Performance and Economics: Status and Improvement Prospects: (2) WWER Fuel Behaviour Modelling and Experimental Support; (3) Licensing of WWER Fuel and Fuel Analysis Codes; (4) Spent Fuel of WWER Plants. The reports from the corresponding four panel discussion sessions are also included. All individual papers are recorded in INIS as separate items.

  4. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    Science.gov (United States)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to testbed's high order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  5. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.
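
    The key mechanism, transmit prefiltering with the time-reversed channel impulse response, makes the effective end-to-end response the autocorrelation of the channel, concentrating the received energy in a single dominant peak. The sketch below uses a synthetic decaying multipath response as a stand-in for a realisation of the IEEE 802.15.3a channel model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multipath channel impulse response with exponential power decay
# (illustrative only; not the IEEE 802.15.3a model itself).
h = rng.normal(size=32) * np.exp(-0.2 * np.arange(32))

# Time-reversed prefilter at the transmitter (energy-normalised), followed
# by the physical channel: the effective response is h's autocorrelation.
prefilter = h[::-1] / np.linalg.norm(h)
effective = np.convolve(prefilter, h)

peak = int(np.argmax(np.abs(effective)))  # full overlap occurs at delay len(h)-1
```

    The peak value equals the channel energy (normalised here to ||h||), which is what lets a simple, low-complexity receiver capture most of the multipath energy at a single sampling instant.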

  6. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    Full Text Available This research focuses on the board role evolution analysis which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide understanding of key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between the board structure and processes on the one hand, and board role performance on the other.

  7. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  8. Forecasting Performance of Asymmetric GARCH Stock Market Volatility Models

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-12-01

    Full Text Available We investigate the asymmetry between positive and negative returns in their effect on conditional variance of the stock market index and incorporate the characteristics to form an out-of-sample volatility forecast. Contrary to prior evidence, however, the results in this paper suggest that no asymmetric GARCH model is superior to the basic GARCH(1,1) model. It is our prior knowledge that, for equity returns, it is unlikely that positive and negative shocks have the same impact on the volatility. In order to reflect this intuition, we implement three diagnostic tests for volatility models: the Sign Bias Test, the Negative Size Bias Test, and the Positive Size Bias Test and the tests against the alternatives of QGARCH and GJR-GARCH. The asymmetry test results indicate that the sign and the size of the unexpected return shock do not influence current volatility differently, which contradicts our presumption that there are asymmetric effects in the stock market volatility. This result is in line with various diagnostic tests which are designed to determine whether the GARCH(1,1) volatility estimates adequately represent the data. The diagnostic tests in section 2 indicate that the GARCH(1,1) model for weekly KOSPI returns is robust to the misspecification test. We also investigate two representative asymmetric GARCH models, the QGARCH and GJR-GARCH models, for our out-of-sample forecasting performance. The out-of-sample forecasting ability test reveals that no single model is clearly outperforming. It is seen that the GJR-GARCH and QGARCH models give mixed results in forecasting ability on all four criteria across all forecast horizons considered. Also, the predictive accuracy test of Diebold and Mariano based on both absolute and squared prediction errors suggests that the forecasts from the linear and asymmetric GARCH models need not be significantly different from each other.
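
    The models being compared differ only in the conditional-variance recursion: GARCH(1,1) treats positive and negative shocks symmetrically, while GJR-GARCH adds a leverage term after negative shocks. A one-step sketch with illustrative parameter values (not estimates from the weekly KOSPI data):

```python
# One-step GARCH(1,1) and GJR-GARCH variance recursions; the parameter
# values are illustrative, not estimates from the KOSPI series in the paper.
omega, alpha, beta, gamma = 0.01, 0.05, 0.90, 0.10

def garch_var(prev_var, prev_eps):
    """sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1} (symmetric)."""
    return omega + alpha * prev_eps ** 2 + beta * prev_var

def gjr_var(prev_var, prev_eps):
    """Adds gamma*eps_{t-1}^2 only after a negative shock (leverage effect)."""
    leverage = gamma * prev_eps ** 2 if prev_eps < 0 else 0.0
    return garch_var(prev_var, prev_eps) + leverage
```

    Under GARCH(1,1), shocks of +0.5 and −0.5 imply the same next-period variance; under GJR-GARCH the negative shock implies more. The paper's sign and size bias tests probe exactly this difference in the data.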

  9. Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing

    Energy Technology Data Exchange (ETDEWEB)

    Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David

    2013-09-01

    The overarching goal of this work is to advance the capabilities of technology evaluators in evaluating the building-level baseline modeling capabilities of Energy Management and Information System (EMIS) software. Through their customer engagement platforms and products, EMIS software products have the potential to produce whole-building energy savings through multiple strategies: building system operation improvements, equipment efficiency upgrades and replacements, and inducement of behavioral change among the occupants and operations personnel. Some offerings may also automate the quantification of whole-building energy savings, relative to a baseline period, using empirical models that relate energy consumption to key influencing parameters, such as ambient weather conditions and building operation schedule. These automated baseline models can be used to streamline the whole-building measurement and verification (M&V) process, and therefore are of critical importance in the context of multi-measure whole-building focused utility efficiency programs. This report documents the findings of a study that was conducted to begin answering critical questions regarding quantification of savings at the whole-building level, and the use of automated and commercial software tools. To evaluate the modeling capabilities of EMIS software particular to the use case of whole-building savings estimation, four research questions were addressed: 1. What is a general methodology that can be used to evaluate baseline model performance, both in terms of a) overall robustness, and b) relative to other models? 2. How can that general methodology be applied to evaluate proprietary models that are embedded in commercial EMIS tools? How might one handle practical issues associated with data security, intellectual property, appropriate testing ‘blinds’, and large data sets? 3. How can buildings be pre-screened to identify those that are the most model-predictable, and therefore those

  10. A Model for Effective Performance in the Indonesian Navy.

    Science.gov (United States)

    1987-06-01

    NAVY LEADERSHIP AND MANAGEMENT COMPETENCY MODEL .................................. 15 D. MCBER COMPETENT MANAGERS MODEL ................ IS E. SUMM... leadership and managerial skills which emphasize effective performance of the officers in managing the human resources under their command and supervision. By effective performance we mean officers who not only know about management theories, but who possess the characteristics, knowledge, skill, and

  11. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    Science.gov (United States)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  12. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  13. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While many empirical works have centered on market orientation, the generalisability of its impact on the performance of Food and Beverages organisations in the Nigerian context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approach). Data was collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the use of the Statistical Package for Social Sciences (SPSS) version 10. Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of

  14. A model to decompose the performance of supplementary private health insurance markets.

    Science.gov (United States)

    Leidl, Reiner

    2008-09-01

    For an individual insurance firm offering supplementary private health insurance, a model is developed to decompose market performance in terms of insurer profits. For the individual contract, the model specifies the conditions under which adverse selection, cream skimming, and moral hazard occur, shows the impact of information on contracting, and the profit contribution. Contracts are determined by comparing willingness to pay for insurance with the individual's risk position, and information on both sides of the market. Finally, performance is aggregated up to the total market. The model provides a framework to explain the attractiveness of supplementary markets to insurers.

  15. Evaluating the Impact of Prescription Fill Rates on Risk Stratification Model Performance.

    Science.gov (United States)

    Chang, Hsien-Yen; Richards, Thomas M; Shermock, Kenneth M; Elder Dalpoas, Stacy; J Kan, Hong; Alexander, G Caleb; Weiner, Jonathan P; Kharrazi, Hadi

    2017-12-01

    Risk adjustment models are traditionally derived from administrative claims. Prescription fill rates, extracted by comparing electronic health record prescriptions with pharmacy claims fills, represent a novel measure of medication adherence and may improve the performance of risk adjustment models. We evaluated the impact of prescription fill rates on claims-based risk adjustment models in predicting both concurrent and prospective costs and utilization. We conducted a retrospective cohort study of 43,097 primary care patients from the HealthPartners network between 2011 and 2012. Diagnosis and/or pharmacy claims of 2011 were used to build 3 base models using the Johns Hopkins ACG system, in addition to demographics. Model performances were compared before and after adding 3 types of prescription fill rates: primary 0-7 days, primary 0-30 days, and overall. Overall fill rates utilized all ordered prescriptions from the electronic health record, while primary fill rates excluded refill orders. The overall, primary 0-7, and 0-30 days fill rates were 72.30%, 59.82%, and 67.33%, respectively. The fill rates were similar between sexes but varied across different medication classifications, whereas the youngest had the highest rate. Adding fill rates modestly improved the performance of all models in explaining medical costs (improving concurrent R² by 1.15% to 2.07%), followed by total costs (0.58% to 1.43%), and pharmacy costs (0.07% to 0.65%). The impact was greater for concurrent costs compared with prospective costs. Base models without diagnosis information showed the highest improvement using prescription fill rates. Prescription fill rates can modestly enhance claims-based risk prediction models; however, population-level improvements in predicting utilization are limited.
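
    A fill rate of this kind is the share of EHR prescription orders matched by a pharmacy claim fill for the same drug within a time window. A minimal sketch with hypothetical records (the drugs, dates, and record layout are illustrative, not the study's data model):

```python
from datetime import date

# Hypothetical EHR orders and pharmacy claim fills as (drug, date) pairs.
orders = [("metformin", date(2011, 3, 1)),
          ("lisinopril", date(2011, 3, 5)),
          ("atorvastatin", date(2011, 4, 2))]
fills = [("metformin", date(2011, 3, 3)),
         ("atorvastatin", date(2011, 4, 20))]

def fill_rate(orders, fills, window_days):
    """Share of orders with a same-drug fill within window_days of the order."""
    filled = sum(
        any(f_drug == drug and 0 <= (f_date - ordered_on).days <= window_days
            for f_drug, f_date in fills)
        for drug, ordered_on in orders)
    return filled / len(orders)

rate_0_7 = fill_rate(orders, fills, 7)    # a "primary 0-7 days" style window
rate_0_30 = fill_rate(orders, fills, 30)  # a "primary 0-30 days" style window
```

    Widening the window can only raise the rate, which matches the ordering of the reported 0-7 day and 0-30 day rates.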

  16. Integrated performance assessment model for waste package behavior and radionuclide release

    International Nuclear Information System (INIS)

    Kossik, R.; Miller, I.; Cunnane, M.

    1992-01-01

    Golder Associates Inc. (GAI) has developed a probabilistic total system performance assessment and strategy evaluation model (RIP) which can be applied in an iterative manner to evaluate repository site suitability and guide site characterization. This paper describes one component of the RIP software, the waste package behavior and radionuclide release model. The waste package component model considers waste package failure by various modes, matrix alteration/dissolution, and radionuclide mass transfer. Model parameters can be described as functions of local environmental conditions. The waste package component model is coupled to component models for far-field radionuclide transport and disruptive events. The model has recently been applied to the proposed repository at Yucca Mountain.

  17. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
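The general shape of such a model (compute time, plus memory access time under bandwidth contention, plus a latency/bandwidth communication term) can be sketched as follows. This is a toy illustration of the idea only, not Wu and Taylor's calibrated framework; all parameter names are assumptions.

```python
def predicted_time(compute_s, mem_bytes, sustained_bw, cores_sharing,
                   n_msgs, msg_bytes, latency_s, net_bw):
    """Toy runtime estimate in the spirit of the framework: compute time,
    plus memory access time under contention (cores on a node divide the
    sustained memory bandwidth), plus an alpha-beta communication term
    (per-message latency plus bytes over network bandwidth)."""
    memory_s = mem_bytes / (sustained_bw / cores_sharing)
    comm_s = n_msgs * latency_s + msg_bytes / net_bw
    return compute_s + memory_s + comm_s
```

The contention term is why weak-scaling performance degrades as more cores per node share the same sustained bandwidth.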

  19. Assessing the performance of prediction models: a framework for traditional and novel measures

    DEFF Research Database (Denmark)

    Steyerberg, Ewout W; Vickers, Andrew J; Cook, Nancy R

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver...
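The two traditional measures named in this abstract can be sketched in a few lines of dependency-free Python. These are the textbook definitions, not code from the paper.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and the
    0/1 outcome; lower is better (0 is perfect)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def c_statistic(y_true, y_prob):
    """Concordance (c) statistic: the probability that a randomly chosen
    event receives a higher predicted risk than a randomly chosen
    non-event, counting ties as one half. For binary outcomes this
    equals the area under the ROC curve."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    nonevents = [p for y, p in zip(y_true, y_prob) if y == 0]
    concordant = sum((e > n) + 0.5 * (e == n)
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))
```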

  20. Assessing the performance of prediction models: A framework for traditional and novel measures

    NARCIS (Netherlands)

    E.W. Steyerberg (Ewout); A.J. Vickers (Andrew); N.R. Cook (Nancy); T.A. Gerds (Thomas); M. Gonen (Mithat); N. Obuchowski (Nancy); M. Pencina (Michael); M.W. Kattan (Michael)

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the

  1. Modelling of proton exchange membrane fuel cell performance based on semi-empirical equations

    Energy Technology Data Exchange (ETDEWEB)

    Al-Baghdadi, Maher A.R. Sadiq [Babylon Univ., Dept. of Mechanical Engineering, Babylon (Iraq)

    2005-08-01

    Using semi-empirical equations for modeling a proton exchange membrane fuel cell is proposed to provide a tool for the design and analysis of total fuel cell systems. The focus of this study is to derive an empirical model, including process variations, to estimate the performance of a fuel cell without extensive calculations. The model takes into account not only the current density but also process variations, such as gas pressure, temperature, humidity, and utilization across operating processes, which are important factors in determining the real performance of a fuel cell. The modeling results compare well with known experimental results; the comparison shows good agreement between the modeling results and the experimental data. The model can be used to investigate the influence of process variables for design optimization of fuel cells, stacks, and complete fuel cell power systems. (Author)

  2. A framework for performance evaluation of model-based optical trackers

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.

    2008-01-01

    We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera

  3. Application of ANN-SCE model on the evaluation of automatic generation control performance

    Energy Technology Data Exchange (ETDEWEB)

    Chang-Chien, L.R.; Lo, C.S.; Lee, K.S. [National Cheng Kung Univ., Tainan, Taiwan (China)

    2005-07-01

    An accurate evaluation of load frequency control (LFC) performance is needed to balance minute-to-minute electricity generation and demand. In this study, an artificial neural network-based system control error (ANN-SCE) model was used to assess the performance of automatic generation controls (AGC). The model was used to identify system dynamics for control references in supplementing AGC logic. The artificial neural network control error model was used to track a single area's LFC dynamics in Taiwan. The model was used to gauge the impacts of regulation control. Results of the training, evaluating, and projecting processes showed that the ANN-SCE model could be algebraically decomposed into components corresponding to different impact factors. The SCE information obtained from testing of various AGC gains provided data for the creation of a new control approach. The ANN-SCE model was used in conjunction with load forecasting and scheduled generation data to create an ANN-SCE identifier. The model successfully simulated SCE dynamics. 13 refs., 10 figs.

  4. Radionuclide release rates from spent fuel for performance assessment modeling

    International Nuclear Information System (INIS)

    Curtis, D.B.

    1994-01-01

    In a scenario of aqueous transport from a high-level radioactive waste repository, the concentration of radionuclides in water in contact with the waste constitutes the source term for transport models, and as such represents a fundamental component of all performance assessment models. Many laboratory experiments have been done to characterize release rates and understand processes influencing radionuclide release rates from irradiated nuclear fuel. Natural analogues of these waste forms have been studied to obtain information regarding the long-term stability of potential waste forms in complex natural systems. This information from diverse sources must be brought together to develop and defend methods used to define source terms for performance assessment models. In this manuscript, examples of measures of radionuclide release rates from spent nuclear fuel or analogues of nuclear fuel are presented. Each example represents a very different approach to obtaining a numerical measure, and each has its limitations. There is no way to obtain an unambiguous measure of this or any parameter used in performance assessment codes for evaluating the effects of processes operative over many millennia. The examples are intended to suggest that, in the absence of the ability to evaluate accuracy and precision, consistency of a broadly based set of data can be used as circumstantial evidence to defend the choice of parameters used in performance assessments.

  5. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently based on hybrid shared-memory BSP. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared to previous work.

  6. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  7. Evaluation model based on FAHP for nuclear power project contract performance

    International Nuclear Information System (INIS)

    Liu Bohang; Cheng Jing

    2012-01-01

    Fuzzy comprehensive evaluation is a common tool for comprehensive integration analysis, and the fuzzy analytic hierarchy process (FAHP) is an improvement on the analytic hierarchy process. The paper first introduces the concept of FAHP and then uses FAHP to set up an evaluation system model for nuclear power project contract performance. Based on this model, each evaluation factor is assigned a weight. By weighting the score of each factor, the output is a result that can evaluate the contract performance. On the basis of this research, the paper gives principles for evaluating the contract performance of nuclear power suppliers, which can assure the procurement process. (authors)

  8. A risk-return based model to measure the performance of portfolio management

    Directory of Open Access Journals (Sweden)

    Hamid Reza Vakili Fard

    2014-10-01

    Full Text Available The primary concern in all portfolio management systems is to find a good tradeoff between risk and expected return, and a good balance between accepted risk and actual return indicates the performance of a particular portfolio. This paper develops the “A-Y Model” to measure the performance of a portfolio and analyzes it during bull and bear markets. The paper considers daily information from one year before and one year after Iran's 2013 presidential election. The proposed model uses lost profit and unrealized loss to measure portfolio performance. The study first ranks the resulting data and then uses some non-parametric methods to see whether changes in the markets affected the performance of the portfolio. The results indicate that despite increasing profitable opportunities in the bull market, the performance of the portfolio did not match the target risk. As a result, using the A-Y Model as a risk-and-return-based model to measure portfolio management performance appears to reduce risk and increase portfolio return.

  9. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approach and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in sub-threshold domain to develop efficient compact models. The proposed analytical approach gives physical insight of the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  10. Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.

    Science.gov (United States)

    Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A

    2018-04-01

    A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model is comprised of a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only the studies using dim light (performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having stronger homeostatic influence compared with the objective measures of performance. Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.
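The abstract does not give the model's equations. As a loose illustration of how a homeostatic drive and a circadian drive might be combined with measure-specific weights (the abstract reports KSS weighting the homeostatic component more strongly than objective measures), here is a classical two-process-style sketch; every functional form and parameter value below is invented for illustration and is not the authors' model.

```python
import math

def sleepiness(hours_awake, clock_hour, w_h=1.0, w_c=0.5, tau=18.0):
    """Toy two-process-style sleepiness score: a saturating homeostatic
    drive that grows with time awake, plus a sinusoidal circadian drive
    peaking in the early morning. The weights w_h and w_c would differ
    by measure (e.g., KSS vs. a vigilance task), as the abstract reports."""
    homeostatic = 1.0 - math.exp(-hours_awake / tau)
    # Illustrative circadian drive: maximum sleepiness near ~4:30 am.
    circadian = math.cos(2 * math.pi * (clock_hour - 4.5) / 24.0)
    return w_h * homeostatic + w_c * circadian
```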

  11. The performance indicators of model projects. A special evaluation

    International Nuclear Information System (INIS)

    1995-11-01

    As a result of the acknowledgment of the key role of the Model Project concept in the Agency's Technical Co-operation Programme, the present review of the objectives of the model projects now in operation was undertaken, as recommended by the Board of Governors, to determine at an early stage: the extent to which the present objectives have been defined in a measurable way; whether objectively verifiable performance indicators and success criteria had been identified for each project; and whether mechanisms to obtain feedback on the achievements had been foreseen. The overall budget for the 23 model projects, as approved from 1994 to 1998, amounts to $32,557,560, of which 45% is funded by the Technical Co-operation Fund. This represents an average investment of about $8 million per year, that is, over 15% of the annual TC budget. The conceptual importance of the Model Project initiative, as well as the significant funds allocated to these projects, led the Secretariat to plan the methods to be used to determine their socio-economic impact. 1 tab

  12. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    … the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysical data for improving groundwater model prediction performance before actually … and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced. … A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision…

  13. A Structural Model of Business Performance: An Empirical Study on Tobacco Farmers

    Directory of Open Access Journals (Sweden)

    Sony Heru Priyanto

    2006-01-01

    The results of the analysis indicate that factors like personal aspects, together with physical, economic and institutional environments, affect farmers’ entrepreneurship. Personal aspects turn out to be the dominant factor that determines entrepreneurship and farm performance. This study also shows that farmers’ entrepreneurship is affected by their management capacity, which, in turn, affects the farmers’ farm performance. While there is no doubt about the adequacy of the model to estimate farm performance, this finding invites further investigation to validate it in other fields and scales of business, such as in small and medium enterprises and other companies. Furthermore, in order to evaluate the goodness of fit of the model in various contexts, further research in both cross-cultural and cross-national contexts using this model should be conducted.

  14. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    Science.gov (United States)

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
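A minimal, dependency-free sketch of L2-regularized (penalized) logistic regression of the kind this abstract describes, fit by batch gradient descent. The features, labels, and hyperparameters below are illustrative, not the paper's data or settings.

```python
import math

def train_logistic(X, y, l2=0.1, lr=0.5, epochs=2000):
    """L2-regularized logistic regression fit by batch gradient descent.
    X: list of feature vectors; y: 0/1 labels. Returns (weights, bias)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            gb += err
            for j in range(d):
                gw[j] += err * xi[j]
        # Gradient step on the penalized average loss (bias unpenalized).
        w = [wj - lr * (gj / n + l2 * wj) for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted probability of success for feature vector x."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Example: 1-D separable data; a higher feature value means success.
w, b = train_logistic([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```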

  15. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automated performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  16. Modeling of high-density U-MO dispersion fuel plate performance

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Rest, J.; Snelgrove, J.L.

    2002-01-01

    Results from postirradiation examinations (PIE) of highly loaded U-Mo/Al dispersion fuel plates over the past several years have shown that the interaction between the metallic fuel particles and the matrix aluminum can be extensive, reducing the volume of the high-conductivity matrix phase and producing a significant volume of low-conductivity reaction-product phase. This phenomenon results in a significant decrease in fuel meat thermal conductivity during irradiation. PIE has further shown that the fuel-matrix interaction rate is a sensitive function of irradiation temperature. The interplay between fuel temperature and fuel-matrix interaction makes the development of a simple empirical correlation between the two difficult. For this reason a comprehensive thermal model has been developed to calculate temperatures throughout the fuel plate over its lifetime, taking into account the changing volume fractions of fuel, matrix and reaction-product phases within the fuel meat owing to fuel-matrix interaction; this thermal model has been incorporated into the dispersion fuel performance code designated PLATE. Other phenomena important to fuel thermal performance that are also treated in PLATE include: gas generation and swelling in the fuel and reaction-product phases, incorporation of matrix aluminum into solid solution with the unreacted metallic fuel particles, matrix extrusion resulting from fuel swelling, and cladding corrosion. The phenomena modeled also make possible a prediction of fuel plate swelling. This paper presents a description of the models and empirical correlations employed within PLATE as well as validation of code predictions against fuel performance data for U-Mo experimental fuel plates from the RERTR-3 irradiation test. (author)

  17. Maintenance Personnel Performance Simulation (MAPPS) model: a human reliability analysis tool

    International Nuclear Information System (INIS)

    Knee, H.E.

    1985-01-01

    The Maintenance Personnel Performance Simulation (MAPPS) model is a computerized, stochastic, task-oriented human behavioral model developed to provide estimates of nuclear power plant (NPP) maintenance team performance measures. It is capable of addressing person-machine, person-environment, and person-person relationships, and accounts for interdependencies that exist between the subelements that make up the maintenance task of interest. The primary measures of performance estimated by MAPPS are: (1) the probability of successfully completing the task of interest; and (2) the task duration time. MAPPS also estimates a host of other performance indices, including the probability of an undetected error, identification of the most- and least-likely error-prone subelements, and maintenance team stress profiles during task execution. The MAPPS model was subjected to a number of evaluation efforts that focused upon its practicality, acceptability, usefulness, and validity. Methods used for these efforts included a case method approach, consensus estimation, and comparison with observed task performance measures at a NPP. Favorable results, such as close agreement between task duration times for two tasks observed in the field (67.0 and 119.8 minutes, respectively), and estimates by MAPPS (72.0 and 124.0 minutes, respectively) enhance the confidence in the future use of MAPPS. 8 refs., 1 fig

  18. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    Science.gov (United States)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  19. Decline Curve Based Models for Predicting Natural Gas Well Performance

    OpenAIRE

    Kamari, Arash; Mohammadi, Amir H.; Lee, Moonyong; Mahmood, Tariq; Bahadori, Alireza

    2016-01-01

    The productivity of a gas well declines over its production life, eventually to the point where it cannot cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, and adaptive neuro-fuzzy ...
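For context, the classical decline-curve relation that such data-driven models are typically benchmarked against is the Arps equation. The sketch below is the textbook Arps form, not the paper's AI models.

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline-curve production rate at time t.
    qi: initial rate; di: initial decline rate (1/time); b: decline
    exponent (b = 0 exponential, 0 < b < 1 hyperbolic, b = 1 harmonic)."""
    if b == 0.0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)
```

Hyperbolic decline (b > 0) falls off more slowly than exponential decline at late times, which is why the choice of b matters for long-range forecasts.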

  20. Evaluating the performance and utility of regional climate models

    DEFF Research Database (Denmark)

    Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku

    2007-01-01

    This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective...... of the PRUDENCE project was to provide high resolution climate change scenarios for Europe at the end of the twenty-first century by means of dynamical downscaling (regional climate modelling) of global climate simulations. The first part of the issue comprises seven overarching PRUDENCE papers on: (1) the design...... of the model simulations and analyses of climate model performance, (2 and 3) evaluation and intercomparison of simulated climate changes, (4 and 5) specialised analyses of impacts on water resources and on other sectors including agriculture, ecosystems, energy, and transport, (6) investigation of extreme...

  1. Development of models for use in the assessment of waste repository performance

    International Nuclear Information System (INIS)

    Dickson, A.G.; Weare, J.H.

    1989-09-01

    The work outlined in this proposal is intended both to provide thermodynamic data that is needed to assist in the assessment of waste repository performance and the modeling necessary to ascertain to what extent the data produced is consistent, both with itself and with other published data on related systems. During this stage of the research we shall endeavor to develop a model of the chemistry of aluminum in aqueous solution which is consistent with a wide variety of experimental data including data generated as part of this project together with data that has been published previously in the research literature. We propose a program of research designed to enable us to model the interaction of canister materials (e.g. copper and iron) with natural waters. Both experimental work and a modeling program are outlined. In the experimental program e.m.f. measurements and spectroscopic measurements will be made so as to determine the various association equilibria of iron and copper with the anions OH⁻, HCO₃⁻, and CO₃²⁻. The initial stages of the modeling program will concentrate on the identification and use of existing experimental data to produce a preliminary model. This will allow us to identify those areas where special emphasis should be placed to meet the needs of the waste disposal program objectives. The objective of this research is to produce thermodynamic data for use in the assessment of waste repository performance that has been measured using experimental procedures performed in accord with the Level 1 quality assurance requirements detailed in the L.L.N.L. Yucca Mountain Project Quality Procedures Manual. The modeling approach used in experimental planning and data assessment is a Level 3 activity. In addition to the establishment of the thermodynamic data base proposed here, results should lead to improved consistency in the overall modeling effort. 29 refs., 2 tabs

  2. The irradiance and temperature dependent mathematical model for estimation of photovoltaic panel performances

    International Nuclear Information System (INIS)

    Barukčić, M.; Ćorluka, V.; Miklošević, K.

    2015-01-01

    Highlights: • A temperature and irradiance dependent model for I–V curve estimation is presented. • A purely mathematical model based on the analysis of the I–V curve shape is presented. • The model includes the Gompertz function with temperature and irradiance dependent parameters. • The input data are extracted from the data sheet I–V curves. - Abstract: A temperature and irradiance dependent mathematical model for estimating photovoltaic panel performance is proposed in the paper. The base of the model is the mathematical function of the photovoltaic panel current–voltage curve, which is based on a sigmoid function with temperature and irradiance dependent parameters. The temperature and irradiance dependencies of the parameters are proposed in the form of analytic functions involving constant parameters. These constant parameters need to be estimated to obtain the temperature and irradiance dependent current–voltage curve. The mathematical model contains 12 constant parameters, which are estimated using an evolutionary algorithm. An optimization problem is defined for this purpose; its objective function is based on estimated and extracted (measured) current and voltage values, where the current and voltage values are extracted from the current–voltage curves given in the datasheets of the photovoltaic panels. A new procedure for estimating the open circuit voltage at any temperature and irradiance is proposed in the model. The performance of the proposed mathematical model is presented for three different photovoltaic panel technologies. The simulation results indicate that the proposed mathematical model is acceptable for estimating the temperature and irradiance dependent current–voltage curve and photovoltaic panel performance within the temperature and irradiance ranges.
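A hedged sketch of a Gompertz-shaped I–V curve with irradiance- and temperature-dependent parameters, in the spirit of the abstract. The scalings, constants, and knee placement below are illustrative assumptions, not the paper's 12 fitted parameters or its analytic functions.

```python
import math

def panel_current(v, g, t_c, isc_ref=8.0, voc_ref=37.0, g_ref=1000.0,
                  t_ref=25.0, alpha=5e-4, beta=-0.11, c=25.0, knee=0.8):
    """Gompertz-shaped panel current at voltage v (illustrative form).
    Short-circuit current scales roughly linearly with irradiance g
    (W/m^2) and weakly with cell temperature t_c (deg C); open-circuit
    voltage drifts down with temperature. The double exponential keeps
    the current flat up to the knee, then drops it toward zero at voc."""
    isc = isc_ref * (g / g_ref) * (1.0 + alpha * (t_c - t_ref))
    voc = voc_ref + beta * (t_c - t_ref)
    shape = 1.0 - math.exp(-math.exp(c * (knee - v / voc)))
    return isc * shape
```

Sweeping v from 0 to voc and multiplying by v gives a P–V curve whose maximum locates the maximum power point for the given irradiance and temperature.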

  3. Hydrological Modeling in Northern Tunisia with Regional Climate Model Outputs: Performance Evaluation and Bias-Correction in Present Climate Conditions

    Directory of Open Access Journals (Sweden)

    Asma Foughali

    2015-07-01

Full Text Available This work aims to evaluate the performance of a hydrological balance model in a watershed located in northern Tunisia (wadi Sejnane, 378 km2) in present climate conditions, using input variables provided by four regional climate models. A modified version (MBBH) of the lumped, single-layer surface model BBH (Bucket with Bottom Hole), in which pedo-transfer parameters estimated from watershed physiographic characteristics are introduced, is adopted to simulate the water balance components. Only two parameters, representing respectively the water retention capacity of the soil and the vegetation resistance to evapotranspiration, are calibrated using rainfall-runoff data. The evaluation criteria for the MBBH model calibration are relative bias, mean square error and the ratio of mean actual evapotranspiration to mean potential evapotranspiration. Daily air temperature, rainfall and runoff observations are available from 1960 to 1984; the period 1960–1971 is selected for calibration while the period 1972–1984 is chosen for validation. Air temperature and precipitation series are provided by four regional climate models (DMI, ARP, SMH and ICT) from the European program ENSEMBLES, forced by two global climate models (GCMs: ECHAM and ARPEGE). The regional climate model outputs (precipitation and air temperature) are compared to the observations in terms of statistical distribution; for precipitation the analysis was performed at the seasonal scale. We found that RCM precipitation must be corrected before being introduced as MBBH input. Thus, a non-parametric quantile-quantile bias correction method together with a dry-day correction is employed. Finally, runoff simulated using corrected precipitation from the regional climate model SMH is found to be the most acceptable, by comparison with runoff simulated using observed precipitation data, at reproducing the temporal variability of mean monthly runoff. The SMH model is the most accurate to
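The non-parametric quantile-quantile correction with a dry-day step can be sketched as follows. This is a minimal empirical version (the function names and the 0.1 mm wet-day threshold are illustrative assumptions), not the exact method used in the study.

```python
import bisect

def quantile_map(x, rcm_sorted, obs_sorted):
    # Empirical quantile-quantile mapping: locate x's quantile in the sorted
    # RCM series and return the observed value at the same quantile.
    q = bisect.bisect_left(rcm_sorted, x) / (len(rcm_sorted) - 1)
    idx = min(round(q * (len(obs_sorted) - 1)), len(obs_sorted) - 1)
    return obs_sorted[idx]

def bias_correct(rcm_series, obs_series, wet_threshold=0.1):
    # Map every RCM precipitation value onto the observed distribution, then
    # apply a dry-day correction: values below the wet-day threshold (an
    # illustrative 0.1 mm here) are set to zero.
    rcm_sorted, obs_sorted = sorted(rcm_series), sorted(obs_series)
    corrected = (quantile_map(x, rcm_sorted, obs_sorted) for x in rcm_series)
    return [0.0 if y < wet_threshold else y for y in corrected]
```

In practice the mapping is built separately per season, matching the seasonal-scale analysis described above.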

  4. Asymptotic performance modelling of DCF protocol with prioritized channel access

    Science.gov (United States)

    Choi, Woo-Yong

    2017-11-01

    Recently, the modification of the DCF (Distributed Coordination Function) protocol by the prioritized channel access was proposed to resolve the problem that the DCF performance worsens exponentially as more nodes exist in IEEE 802.11 wireless LANs. In this paper, an asymptotic analytical performance model is presented to analyze the MAC performance of the DCF protocol with the prioritized channel access.

  5. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  6. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    Science.gov (United States)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and explicit shared memory model) are available to the users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times less than using data parallel methods. The issues involved (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) while programming in the explicit shared memory model are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  7. Modeling and experimental verification of proof mass effects on vibration energy harvester performance

    International Nuclear Information System (INIS)

    Kim, Miso; Hoegen, Mathias; Dugundji, John; Wardle, Brian L

    2010-01-01

    An electromechanically coupled model for a cantilevered piezoelectric energy harvester with a proof mass is presented. Proof masses are essential in microscale devices to move device resonances towards optimal frequency points for harvesting. Such devices with proof masses have not been rigorously modeled previously; instead, lumped mass or concentrated point masses at arbitrary points on the beam have been used. Thus, this work focuses on the exact vibration analysis of cantilevered energy harvester devices including a tip proof mass. The model is based not only on a detailed modal analysis, but also on a thorough investigation of damping ratios that can significantly affect device performance. A model with multiple degrees of freedom is developed and then reduced to a single-mode model, yielding convenient closed-form normalized predictions of device performance. In order to verify the analytical model, experimental tests are undertaken on a macroscale, symmetric, bimorph, piezoelectric energy harvester with proof masses of different geometries. The model accurately captures all aspects of the measured response, including the location of peak-power operating points at resonance and anti-resonance, and trends such as the dependence of the maximal power harvested on the frequency. It is observed that even a small change in proof mass geometry results in a substantial change of device performance due not only to the frequency shift, but also to the effect on the strain distribution along the device length. Future work will include the optimal design of devices for various applications, and quantification of the importance of nonlinearities (structural and piezoelectric coupling) for device performance

  8. Evolution in performance assessment modeling as a result of regulatory review

    Energy Technology Data Exchange (ETDEWEB)

Rowat, J.H.; Dolinar, G.M.; Stephens, M.E. [AECL Chalk River Labs., Ontario (Canada)] [and others]

    1995-12-31

AECL is planning to build the IRUS (Intrusion Resistant Underground Structure) facility for near-surface disposal of LLRW. The PSAR (preliminary safety assessment report) was subject to an initial regulatory review during mid-1992. The regulatory authority provided comments on many aspects of the safety assessment documentation, including a number of questions on specific PA (Performance Assessment) modelling assumptions. As a result of these comments, as well as a separate detailed review of the IRUS disposal concept, changes were made to the conceptual and mathematical models. The original disposal concept included a non-sorbing vault backfill, with a strong reliance on the wasteform as a barrier. This concept was altered to decrease reliance on the wasteform by replacing the original backfill with a sand/clinoptilolite mix, which is a better sorber of metal cations. This change led to changes in the PA models, which in turn altered the safety case for the facility. This change, and others that impacted performance assessment modelling, are the subject of this paper.

  9. Comparison of HSPF and SWAT models performance for runoff and sediment yield prediction.

    Science.gov (United States)

    Im, Sangjun; Brannan, Kevin M; Mostaghimi, Saied; Kim, Sang Min

    2007-09-01

A watershed model can be used to better understand the relationship between land use activities and the hydrologic/water quality processes that occur within a watershed. A physically based, distributed-parameter model (SWAT) and a conceptual, lumped-parameter model (HSPF) were selected, and their performances were compared in simulating runoff and sediment yields from the Polecat Creek watershed in Virginia, which is 12,048 ha in size. A monitoring project was conducted in the Polecat Creek watershed during the period of October 1994 to June 2000, and the observed data (stream flow and sediment yield) from the monitoring project were used in the calibration/validation of the models. The period of September 1996 to June 2000 was used for calibration and October 1994 to December 1995 for validation. The outputs from the models were compared to the observed data at several sub-watershed outlets and at the watershed outlet of the Polecat Creek watershed. The results indicated that both models were generally able to simulate stream flow and sediment yields well during both the calibration and validation periods. For annual and monthly loads, HSPF simulated hydrology and sediment yield more accurately than SWAT at all monitoring sites within the watershed. The results of this study indicate that both the SWAT and HSPF watershed models performed sufficiently well in the simulation of stream flow and sediment yield, with HSPF performing moderately better than SWAT for simulation time-steps greater than a month.
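The abstract does not name its goodness-of-fit statistic, but model comparisons like this are commonly quantified with the Nash–Sutcliffe efficiency; a minimal sketch:

```python
def nash_sutcliffe(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
    # better than predicting the observed mean; negative is worse than that.
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Computed on monthly stream flow or sediment loads, the metric gives a single number per model and site, making "HSPF moderately better than SWAT" a directly comparable statement.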

  10. Using physical models to study the gliding performance of extinct animals.

    Science.gov (United States)

    Koehl, M A R; Evangelista, Dennis; Yang, Karen

    2011-12-01

    Aerodynamic studies using physical models of fossil organisms can provide quantitative information about how performance of defined activities, such as gliding, depends on specific morphological features. Such analyses allow us to rule out hypotheses about the function of extinct organisms that are not physically plausible and to determine if and how specific morphological features and postures affect performance. The purpose of this article is to provide a practical guide for the design of dynamically scaled physical models to study the gliding of extinct animals using examples from our research on the theropod dinosaur, †Microraptor gui, which had flight feathers on its hind limbs as well as on its forelimbs. Analysis of the aerodynamics of †M. gui can shed light on the design of gliders with large surfaces posterior to the center of mass and provide functional information to evolutionary biologists trying to unravel the origins of flight in the dinosaurian ancestors and sister groups to birds. Measurements of lift, drag, side force, and moments in pitch, roll, and yaw on models in a wind tunnel can be used to calculate indices of gliding and parachuting performance, aerodynamic static stability, and control effectiveness in maneuvering. These indices permit the aerodynamic performance of bodies of different shape, size, stiffness, texture, and posture to be compared and thus can provide insights about the design of gliders, both biological and man-made. Our measurements of maximum lift-to-drag ratios of 2.5-3.1 for physical models of †M. gui suggest that its gliding performance was similar to that of flying squirrels and that the various leg postures that might have been used by †M. gui make little difference to that aspect of aerodynamic performance. We found that body orientation relative to the movement of air past the animal determines whether it is difficult or easy to maneuver.

  11. Performance Evaluation of Sadoghi Hospital Based on «EFQM» Organizational Excellence Model

    Directory of Open Access Journals (Sweden)

    A Sanayeei

    2013-04-01

Full Text Available Introduction: The health care environment that organizations have faced in recent years is characterized by a high level of dynamism and development. To survive in such conditions, performance evaluation can play an effective role in ensuring adequate service quality. This study aimed to evaluate the performance of Shahid Sadoghi Yazd hospital through the EFQM approach. Methods: This was a descriptive cross-sectional study. The data collection instrument was the EFQM Organizational Excellence Model questionnaire, which was completed by all the managers. The research data were gathered from a sample of 302 patients, staff, personnel and medical staff working in different parts of the hospital. Stratified random samples were selected, and descriptive statistics were utilized to analyze the data. Results: The results revealed that Shahid Sadoughi hospital acquired 185.41 points out of the total 500 points considered in the EFQM model. In other words, this rating reflects the fact that, with regard to the defined desired position, the hospital has not achieved the desired rating. Conclusion: Since the hospital's performance lies in the low-middle range, much more attention is required with regard to therapeutic management in this hospital. Therefore, codifying an efficient and effective program to improve the hospital's performance is necessary. Furthermore, it seems that the EFQM model can be considered a comprehensive model for performance evaluation in hospitals.

  12. Impact of Supply Chain Alignment on Construction Performance: A developed model for Vietnam

    Directory of Open Access Journals (Sweden)

    Huy Troung Quang

    2017-12-01

Full Text Available There are many articles mentioning the advantages and benefits of supply chain alignment; none, however, describes how to model such alignment in the supply chain. This paper offers a framework for examining and understanding the impact supply chain alignment has on performance. Based on a supply chain mapping approach, a model describing alignment between processes/flows in the supply chain network is developed. The model is then validated using a dataset of 316 enterprises operating in the Vietnamese construction sector. Evidence indicates that the supply chain processes and flows were aligned. According to the results, the proposed supply chain alignment model is able to explain a 59.9% variance in operational performance, 58.9% in customer satisfaction, 34.5% in operating costs and 67.4% in business performance. To successfully align the supply chain network, companies can use the proposed model as a “road-map” to reduce high costs and to avoid the loss of control, management difficulties and/or vulnerability to opportunistic action, all of which may hinder efforts to align the supply chains.

  13. Towards a realistic approach to validation of reactive transport models for performance assessment

    International Nuclear Information System (INIS)

    Siegel, M.D.

    1993-01-01

Performance assessment calculations are based on geochemical models that assume that interactions among radionuclides, rocks and groundwaters under natural conditions can be estimated or bounded by data obtained from laboratory-scale studies. The data include radionuclide distribution coefficients, measured in saturated batch systems of powdered rocks, and retardation factors measured in short-term column experiments. Traditional approaches to model validation cannot be applied in a straightforward manner to the simple reactive transport models that use these data. An approach to model validation in support of performance assessment is described in this paper. It is based on a recognition of different levels of model validity and is compatible with the requirements of current regulations for high-level waste disposal. Activities being carried out in support of this approach include (1) laboratory and numerical experiments to test the validity of important assumptions inherent in current performance assessment methodologies, (2) integrated transport experiments, and (3) development of a robust coupled reaction/transport code for sensitivity analyses using massively parallel computers.

  14. Representing Microbial Dormancy in Soil Decomposition Models Improves Model Performance and Reveals Key Ecosystem Controls on Microbial Activity

    Science.gov (United States)

    He, Y.; Yang, J.; Zhuang, Q.; Wang, G.; Liu, Y.

    2014-12-01

Climate feedbacks from soils can result from environmental change and the subsequent responses of plant and microbial communities and nutrient cycling. Explicit consideration of microbial life-history traits and strategies may be necessary to predict climate feedbacks due to microbial physiology and community changes and their associated effect on carbon cycling. In this study, we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of dormancy at six temperate forest sites of different forest types, with 4 to 10 years of observed soil efflux data. We then extrapolated the model to all temperate forests in the Northern Hemisphere (25-50°N) to investigate spatial controls on microbial and soil C dynamics. Both models captured the observed soil heterotrophic respiration (RH), yet the no-dormancy model consistently exhibited large seasonal amplitude and overestimation in microbial biomass. Spatially, the total RH from temperate forests amounts to 6.88 Pg C/yr based on the dormancy model and 7.99 Pg C/yr based on the no-dormancy model. However, the no-dormancy model notably overestimated the ratio of microbial biomass to SOC. Spatial correlation analysis revealed key controls of the soil C:N ratio on the active proportion of microbial biomass, whereas local dormancy is primarily controlled by soil moisture and temperature, indicating scale-dependent environmental and biotic controls on microbial and SOC dynamics. These developments should provide essential support to modeling future soil carbon dynamics and enhance the avenue for collaboration between empirical soil experiments and modeling, in the sense that more microbial physiological measurements are needed to better constrain and evaluate the models.
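As a toy illustration of the dormancy mechanism the abstract describes (biomass switching between active and dormant pools under moisture and temperature control), one explicit time step might look like this; the thresholds, transfer rate, and function name are illustrative assumptions, not the paper's parameterization:

```python
def step_microbes(active, dormant, moisture, temperature, rate=0.2):
    # Toy two-pool dormancy step: biomass moves to the dormant pool under
    # unfavorable moisture/temperature and reactivates otherwise.
    # Total biomass is conserved; only the active fraction respires.
    favorable = moisture > 0.3 and temperature > 5.0
    if favorable:
        transfer = rate * dormant            # dormant -> active
        return active + transfer, dormant - transfer
    transfer = rate * active                 # active -> dormant
    return active - transfer, dormant + transfer
```

Iterating this step through a dry spell shrinks the active pool while conserving total biomass, which is how a dormancy term damps the exaggerated seasonal swings a no-dormancy model produces.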

  15. An evaluation of sex-age-kill (SAK) model performance

    Science.gov (United States)

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  16. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  17. SUMO, System performance assessment for a high-level nuclear waste repository: Mathematical models

    International Nuclear Information System (INIS)

    Eslinger, P.W.; Miley, T.B.; Engel, D.W.; Chamberlain, P.J. II.

    1992-09-01

    Following completion of the preliminary risk assessment of the potential Yucca Mountain Site by Pacific Northwest Laboratory (PNL) in 1988, the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) requested the Performance Assessment Scientific Support (PASS) Program at PNL to develop an integrated system model and computer code that provides performance and risk assessment analysis capabilities for a potential high-level nuclear waste repository. The system model that has been developed addresses the cumulative radionuclide release criteria established by the US Environmental Protection Agency (EPA) and estimates population risks in terms of dose to humans. The system model embodied in the SUMO (System Unsaturated Model) code will also allow benchmarking of other models being developed for the Yucca Mountain Project. The system model has three natural divisions: (1) source term, (2) far-field transport, and (3) dose to humans. This document gives a detailed description of the mathematics of each of these three divisions. Each of the governing equations employed is based on modeling assumptions that are widely accepted within the scientific community

  18. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to faults modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  19. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) by the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions to within 6%, and accounting for caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
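The multiplicative caffeine factor described above can be sketched as below. The one-compartment pharmacokinetics, the rate constants, the linear concentration term, and the form of the factor are all illustrative assumptions, not the published parameterization:

```python
import math

def caffeine_concentration(dose_mg, t_hr, ka=1.9, ke=0.14):
    # One-compartment oral PK curve (rate constants per hour are
    # illustrative): rises with absorption rate ka, decays with
    # elimination rate ke; units are relative, not plasma mg/L.
    return dose_mg * ka / (ka - ke) * (math.exp(-ke * t_hr) - math.exp(-ka * t_hr))

def adjusted_deficit(baseline_deficit, dose_mg, t_hr, m=0.002):
    # Dose-dependent multiplicative caffeine factor applied to the
    # caffeine-free performance-deficit prediction, mirroring the
    # multiplicative-effect hypothesis (the gain m is illustrative).
    g = 1.0 / (1.0 + m * caffeine_concentration(dose_mg, t_hr))
    return baseline_deficit * g
```

With dose zero the factor is exactly 1, so the model reduces to the caffeine-free UMP prediction; larger doses shrink the predicted deficit while caffeine is on board.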

  20. A hybrid condenser model for real-time applications in performance monitoring, control and optimization

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun; Zhang Guiqing

    2009-01-01

In this paper, a simple yet accurate hybrid modeling technique for condensers is presented. The method starts with fundamental physical principles but captures only a few key operational characteristic parameters to predict the system performance. The advantage of the method is that linear or non-linear least-squares methods can be directly used to determine the no more than four key operational characteristic parameters in the model, which can significantly reduce the computational burden. The developed model is verified with experimental data taken from a pilot system. The testing results confirm that the proposed model can accurately predict the performance of a real-time operating condenser, with a maximum error of less than ±10%. The proposed modeling technique will have wide applications not only in condenser operating optimization, but also in performance assessment and fault detection and diagnosis.
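The direct least-squares step the abstract highlights can be sketched for a generic linear characteristic y = a·x + b (the actual condenser parameters are not given in the abstract, so the variables here are placeholders):

```python
def linear_least_squares(xs, ys):
    # Ordinary least squares for y = a*x + b via the normal equations --
    # the kind of closed-form fit the hybrid model permits because it
    # exposes only a handful of characteristic parameters.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Because the fit is closed-form, it can run inside a real-time monitoring loop with negligible cost, which is the computational advantage the abstract claims.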

  1. Range performance calculations using the NVEOL-Georgia Tech Research Institute 0.1- to 100-GHz radar performance model

    Science.gov (United States)

    Rodak, S. P.; Thomas, N. I.

    1983-05-01

A computer model that can be used to calculate radar range performance at any frequency in the 0.1- to 100-GHz electromagnetic spectrum is described. Different numerical examples are used to demonstrate how to use the radar range performance model. Input/output documentation is included for each case that was run on the MERADCOM CDC 6600 computer at Fort Belvoir, Virginia.

  2. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  3. Modelling Flat Spring performance using FEA

    International Nuclear Information System (INIS)

    Fatola, B O; Keogh, P; Hicks, B

    2009-01-01

    This paper reports how the stiffness of a Flat Spring can be predicted using nonlinear Finite Element Analysis (FEA). The analysis of a Flat Spring is a nonlinear problem involving contact mechanics, geometric nonlinearity and material property nonlinearity. Research has been focused on improving the accuracy of the model by identifying and exploring the significant assumptions contributing to errors. This paper presents results from some of the models developed using FEA software. The validation process is shown to identify where improvements can be made to the model assumptions to increase the accuracy of prediction. The goal is to achieve an accuracy level of ±10 % as the intention is to replace practical testing with FEA modelling, thereby reducing the product development time and cost. Results from the FEA models are compared with experimental results to validate the accuracy.

  4. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  5. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural network based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system-efficiency, and stack-efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias to either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)
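    The virtual-sensor idea can be illustrated with a simplified first-order Takagi-Sugeno fuzzy system (fixed Gaussian memberships, least-squares consequents). This is a minimal sketch, not the study's ANFIS (which also tunes the premise parameters); the data, membership centres, and the hypothetical map from stack current to hydrogen flow are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical electrolyzer data: input = stack current (A), output = H2 flow.
    x = rng.uniform(0.0, 100.0, 200)
    y_true = 0.04 * x + 0.5 * np.sin(x / 15.0)      # assumed nonlinear map
    y = y_true + rng.normal(0.0, 0.05, x.size)      # measurement noise

    centres = np.linspace(0.0, 100.0, 5)            # fixed Gaussian MF centres
    sigma = 15.0

    def memberships(x):
        m = np.exp(-0.5 * ((x[:, None] - centres) / sigma) ** 2)
        return m / m.sum(axis=1, keepdims=True)     # normalized firing strengths

    # First-order consequents: each rule contributes a_i * x + b_i, weighted by
    # its firing strength; the linear parameters are fit by least squares.
    W = memberships(x)
    A = np.hstack([W * x[:, None], W])              # design matrix [w*x, w]
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(x):
        W = memberships(x)
        return np.hstack([W * x[:, None], W]) @ theta

    rmse = np.sqrt(np.mean((predict(x) - y_true) ** 2))
    print(f"RMSE vs noise-free signal: {rmse:.3f}")
    ```

    A full ANFIS would additionally adapt the membership centres and widths by backpropagation; the least-squares consequent step above is the same in both.
    
    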

  6. Linear models to perform treaty verification tasks for enhanced information security

    International Nuclear Information System (INIS)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.

    2017-01-01

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
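    The Hotelling observer described above can be sketched on a small synthetic example; the binned-data dimension, class means, and covariance below are illustrative assumptions, not the GEANT4-simulated measurements from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic binned detector data for two object classes -- an illustrative
    # stand-in for simulated fission-neutron measurements, not study data.
    n, d = 500, 8
    mu0, mu1 = np.zeros(d), np.full(d, 0.6)
    cov = 0.5 * np.eye(d) + 0.1 * np.ones((d, d))   # assumed shared covariance
    L = np.linalg.cholesky(cov)
    g0 = mu0 + rng.standard_normal((n, d)) @ L.T
    g1 = mu1 + rng.standard_normal((n, d)) @ L.T

    # Hotelling observer: optimal linear weights w = S^-1 (mu1 - mu0)
    S = np.cov(np.vstack([g0 - g0.mean(0), g1 - g1.mean(0)]).T)
    w = np.linalg.solve(S, g1.mean(0) - g0.mean(0))

    # Test statistic t = w^T g, thresholded to make the binary decision
    t0, t1 = g0 @ w, g1 @ w

    # Area under the ROC curve as the probability of correct pairwise ordering
    auc = (t1[:, None] > t0[None, :]).mean()
    print(f"AUC = {auc:.3f}")
    ```

    The channelized variant applies a channelizing matrix to the data first, so the monitor sees only a low-dimensional projection; the weight computation is then performed in channel space.
    
    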

  7. Linear models to perform treaty verification tasks for enhanced information security

    Energy Technology Data Exchange (ETDEWEB)

    MacGahan, Christopher J., E-mail: cmacgahan@optics.arizona.edu [College of Optical Sciences, The University of Arizona, 1630 E. University Blvd, Tucson, AZ 85721 (United States); Sandia National Laboratories, Livermore, CA 94551 (United States); Kupinski, Matthew A. [College of Optical Sciences, The University of Arizona, 1630 E. University Blvd, Tucson, AZ 85721 (United States); Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A. [Sandia National Laboratories, Livermore, CA 94551 (United States)

    2017-02-01

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  8. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    CHIDOZIE CHUKWUEMEKA NWOBI-OKOYE

    2017-11-16

    In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of processes. The investigation hub is a local brewing company, and the intended users include Industrial Engineers, Systems Engineers, and Operations Managers, with whom responsibility for the overall management of the new system lies.

  9. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    The accuracy-based model performance metrics not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as complementary tool for hydrologic m...
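    One such information-theory metric can be sketched as the Kullback-Leibler divergence between the histograms of measured and simulated streamflow. The synthetic series below are illustrative, not real streamflow data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    obs = rng.lognormal(mean=1.0, sigma=0.6, size=2000)   # "measured" flows
    sim = rng.lognormal(mean=1.1, sigma=0.5, size=2000)   # "simulated" flows

    # Common bin edges so the two histograms are directly comparable.
    bins = np.histogram_bin_edges(np.concatenate([obs, sim]), bins=30)
    p, _ = np.histogram(obs, bins=bins)
    q, _ = np.histogram(sim, bins=bins)
    p = (p + 1e-9) / (p + 1e-9).sum()                     # smooth, normalize
    q = (q + 1e-9) / (q + 1e-9).sum()

    kl = np.sum(p * np.log(p / q))                        # D_KL(p || q), in nats
    print(f"D_KL(observed || simulated) = {kl:.3f} nats")
    ```

    Unlike squared-error metrics, this compares the whole distribution of flows, which is one way such metrics can complement accuracy-based scores.
    
    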

  10. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  11. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    High-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  12. Off-design performance loss model for radial turbines with pivoting, variable-area stators

    Science.gov (United States)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

    An off-design performance loss model was developed for variable stator (pivoted vane), radial turbines through analytical modeling and experimental data analysis. Stator loss is determined by a viscous loss model; stator vane end-clearance leakage effects are determined by a clearance flow model. Rotor loss coefficients were obtained by analyzing the experimental data from a turbine rotor previously tested with six stators having throat areas from 20 to 144 percent of design area and were correlated with stator-to-rotor throat area ratio. An incidence loss model was selected to obtain best agreement with experimental results. Predicted turbine performance is compared with experimental results for the design rotor as well as with results for extended and cutback versions of the rotor. Sample calculations were made to show the effects of stator vane end-clearance leakage.

  13. Prioritizing Public- Private Partnership Models for Public Hospitals of Iran Based on Performance Indicators

    Directory of Open Access Journals (Sweden)

    Mohammad Asghari Jaafarabadi

    2012-12-01

    Background: The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators in order to select appropriate models for Iran hospitals. Methods: In this mixed (quantitative-qualitative) study, a systematic review and expert panel were conducted to identify varied models of PPP as well as performance indicators. In the second step we prioritized performance indicators and PPP models based on selected performance indicators by the Analytical Hierarchy Process (AHP) technique. The data were analyzed by Excel 2007 and Expert Choice 11 software. Results: In the quality-effectiveness area, indicators like the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) have been chosen as the most key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate), and non-clinical outsourcing models achieved high priority for the various performance indicator areas. Conclusion: This study provided the most common PPP options in the field of public hospitals and gathered suitable evidence from experts for choosing an appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ based on which PPP options are undertaken.

  14. Prioritizing public- private partnership models for public hospitals of iran based on performance indicators.

    Science.gov (United States)

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

    The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators in order to select appropriate models for Iran hospitals. In this mixed (quantitative-qualitative) study, a systematic review and expert panel were conducted to identify varied models of PPP as well as performance indicators. In the second step we prioritized performance indicators and PPP models based on selected performance indicators by the Analytical Hierarchy Process (AHP) technique. The data were analyzed by Excel 2007 and Expert Choice 11 software. In the quality-effectiveness area, indicators like the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) have been chosen as the most key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate), and non-clinical outsourcing models achieved high priority for various performance indicator areas. This study provided the most common PPP options in the field of public hospitals and gathered suitable evidence from experts for choosing an appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ based on which PPP options are undertaken.
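    The AHP prioritization step can be sketched as follows: priority weights are the principal eigenvector of a pairwise comparison matrix, with a consistency check. The 3×3 matrix below contains made-up comparisons of the three indicator areas, not the study's expert judgments:

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons (Saaty 1-9 scale) of three criteria:
    # quality-effectiveness vs accessibility-equity vs financial-efficiency.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    # Priority weights = normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency: CI = (lambda_max - n) / (n - 1); RI = 0.58 for n = 3.
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / 0.58                      # judgments acceptable if CR < 0.1
    print("weights:", np.round(w, 3), "CR =", round(cr, 3))
    ```

    The same computation is applied at each level of the hierarchy (criteria, then PPP models under each criterion), and the weights are combined multiplicatively.
    
    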

  15. [The Brazilian National Health Surveillance Agency performance evaluation at the management contract model].

    Science.gov (United States)

    Moreira, Elka Maltez de Miranda; Costa, Ediná Alves

    2010-11-01

    The Brazilian National Health Surveillance Agency (Anvisa) is supervised by the Ministry of Health by means of a management contract, a performance evaluation tool. This case study was aimed at describing and analyzing Anvisa's performance evaluation model based on the agency's institutional purpose, according to the following analytical categories: the management contract formalization, evaluation tools, evaluators and institutional performance. Semi-structured interviews and document analysis revealed that Anvisa signed only one management contract with the Ministry of Health in 1999, updated by four additive terms. The Collegiate Board of Directors and the Advisory Center for Strategic Management play the role of Anvisa's internal evaluators and an Assessing Committee, comprising the Ministry of Health, constitutes its external evaluator. Three phases were identified in the evaluation model: the structuring of the new management model (1999-2000), legitimation regarding the productive segment (2001-2004) and widespread legitimation (2005). The best performance was presented in 2000 (86.05%) and the worst in 2004 (40.00%). The evaluation model was shown to have contributed little towards the agency's institutional purpose and the effectiveness measurement of the implemented actions.

  16. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  17. Model Complexity and Out-of-Sample Performance: Evidence from S&P 500 Index Returns

    NARCIS (Netherlands)

    Kaeck, Andreas; Rodrigues, Paulo; Seeger, Norman J.

    We apply a range of out-of-sample specification tests to more than forty competing stochastic volatility models to address how model complexity affects out-of-sample performance. Using daily S&P 500 index returns, model confidence set estimations provide strong evidence that the most important model

  18. User's Manual for Data for Validating Models for PV Module Performance

    Energy Technology Data Exchange (ETDEWEB)

    Marion, W.; Anderberg, A.; Deline, C.; Glick, S.; Muller, M.; Perrin, G.; Rodriguez, J.; Rummel, S.; Terwilliger, K.; Silverman, T. J.

    2014-04-01

    This user's manual describes performance data measured for flat-plate photovoltaic (PV) modules installed in Cocoa, Florida, Eugene, Oregon, and Golden, Colorado. The data include PV module current-voltage curves and associated meteorological data for approximately one-year periods. These publicly available data are intended to facilitate the validation of existing models for predicting the performance of PV modules, and for the development of new and improved models. For comparing different modeling approaches, using these public data will provide transparency and more meaningful comparisons of the relative benefits.

  19. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
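    Steps 2 and 3 of the model above (monitor the indicator, then check for special-cause variation before benchmarking) can be sketched with an individuals (XmR) control chart. The monthly rates below are made-up illustration, not the hospital's HAI data:

    ```python
    import numpy as np

    # Hypothetical monthly indicator values (e.g., HAI rate per 1000 patient-days).
    rates = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2, 6.5, 3.0, 2.9, 3.1])

    mean = rates.mean()
    # XmR chart: estimate short-term sigma from the average moving range (d2 = 1.128).
    mr = np.abs(np.diff(rates)).mean()
    sigma = mr / 1.128
    ucl, lcl = mean + 3 * sigma, max(mean - 3 * sigma, 0.0)

    special_cause = np.where((rates > ucl) | (rates < lcl))[0]
    if special_cause.size:
        print("special-cause variation at months:", (special_cause + 1).tolist())
    else:
        print("process stable; ready for internal/external benchmark comparison")
    ```

    Only when no point falls outside the limits (i.e., the process shows random variation only) does the model proceed to the internal-threshold and benchmarking steps.
    
    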

  20. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G. Saulnier and W. Statham

    2006-04-16

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Performance Assessment Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following analogous characteristics as compared to the Yucca Mountain repository site: (1) Analogous source--UO{sub 2} uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geology--fractured, welded, and altered rhyolitic ash-flow tuffs; (3) Analogous climate--Semiarid to arid; (4) Analogous setting--Volcanic tuffs overlie carbonate rocks; (5) Analogous geochemistry--Oxidizing conditions; and (6) Analogous hydrogeology--The ore deposit lies in the unsaturated zone above the water table.

  1. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    International Nuclear Information System (INIS)

    G. Saulnier; W. Statham

    2006-01-01

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Performance Assessment Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following analogous characteristics as compared to the Yucca Mountain repository site: (1) Analogous source--UO2 uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geology--fractured, welded, and altered rhyolitic ash-flow tuffs; (3) Analogous climate--Semiarid to arid; (4) Analogous setting--Volcanic tuffs overlie carbonate rocks; (5) Analogous geochemistry--Oxidizing conditions; and (6) Analogous hydrogeology--The ore deposit lies in the unsaturated zone above the water table.

  2. On the performance of satellite precipitation products in riverine flood modeling: A review

    Science.gov (United States)

    Maggioni, Viviana; Massari, Christian

    2018-03-01

    This work is meant to summarize lessons learned on using satellite precipitation products for riverine flood modeling and to propose future directions in this field of research. Firstly, the most common satellite precipitation products (SPPs) during the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Mission (GPM) eras are reviewed. Secondly, we discuss the main errors and uncertainty sources in these datasets that have the potential to affect streamflow and runoff model simulations. Thirdly, past studies that focused on using SPPs for predicting streamflow and runoff are analyzed. As the impact of floods depends not only on the characteristics of the flood itself, but also on the characteristics of the region (population density, land use, geophysical and climatic factors), a regional analysis is required to assess the performance of hydrologic models in monitoring and predicting floods. The performance of SPP-forced hydrological models was shown to largely depend on several factors, including precipitation type, seasonality, hydrological model formulation, and topography. Across several basins around the world, the bias in SPPs was recognized as a major issue and bias correction methods of different complexity were shown to significantly reduce streamflow errors. Model re-calibration was also raised as a viable option to improve SPP-forced streamflow simulations, but caution is necessary when recalibrating models with SPP, which may result in unrealistic parameter values. From a general standpoint, there is significant potential for using satellite observations in flood forecasting, but the performance of SPP in hydrological modeling is still inadequate for operational purposes.
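    The simplest of the bias correction methods mentioned above, a bulk multiplicative factor anchored to a gauge reference, can be sketched as follows. The gauge and satellite series are synthetic illustrations, not real SPP data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic daily rainfall: a gauge reference and a satellite estimate that
    # underestimates by ~30% with random multiplicative noise (assumed, for show).
    gauge = rng.gamma(shape=2.0, scale=3.0, size=365)
    spp = 0.7 * gauge * rng.lognormal(0.0, 0.2, 365)

    # Bulk multiplicative bias correction: scale so totals match the gauge.
    factor = gauge.sum() / spp.sum()
    spp_corrected = spp * factor

    bias_before = spp.mean() / gauge.mean()
    bias_after = spp_corrected.mean() / gauge.mean()
    print(f"bias ratio: {bias_before:.2f} -> {bias_after:.2f}")
    ```

    More elaborate schemes (quantile mapping, space-time varying factors) correct the distribution rather than just the total, but the principle of anchoring the SPP to a reference is the same.
    
    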

  3. Long‐Term Post‐CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions

    Science.gov (United States)

    Carr, Brendan M.; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C.; Zhu, Wei

    2015-01-01

    Abstract Background/aim Clinical risk models are commonly used to predict short‐term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long‐term mortality. The added value of long‐term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long‐term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Methods Long‐term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c‐index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Results Mortality rates were 3%, 9%, and 17% at one‐, three‐, and five years, respectively (median follow‐up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long‐term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Conclusions Long‐term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long‐term mortality risk can be accurately assessed and subgroups of higher‐risk patients can be identified for enhanced follow‐up care. More research appears warranted to refine long‐term CABG clinical risk models. doi: 10.1111/jocs.12665 (J Card Surg 2016;31:23–30) PMID:26543019
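    The observed/expected (O/E) ratio used above is the count of observed deaths divided by the sum of model-predicted death probabilities. A minimal sketch with made-up predicted risks (not Hannan-model output):

    ```python
    # Hypothetical per-patient predicted mortality risks from some risk model.
    predicted = [0.02, 0.05, 0.10, 0.08, 0.20, 0.15, 0.03, 0.12]
    observed_deaths = 1

    expected = sum(predicted)          # expected number of deaths in the cohort
    oe = observed_deaths / expected
    print(f"O/E = {oe:.2f}")           # >1: more deaths than the model predicts
    ```

    An O/E near 1 indicates good calibration of the risk model for that cohort; the abstract's comparison asks whether the clinical model or the actuarial model keeps O/E closer to 1 as long-term risk grows.
    
    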

  4. A biomathematical model of the restoring effects of caffeine on cognitive performance during sleep deprivation.

    Science.gov (United States)

    Ramakrishnan, Sridhar; Rajaraman, Srinivasan; Laxminarayan, Srinivas; Wesensten, Nancy J; Kamimori, Gary H; Balkin, Thomas J; Reifman, Jaques

    2013-02-21

    While caffeine is widely used as a countermeasure to sleep loss, mathematical models are lacking. Develop a biomathematical model for the performance-restoring effects of caffeine in sleep-deprived subjects. We hypothesized that caffeine has a multiplicative effect on performance during sleep loss. Accordingly, we first used a phenomenological two-process model of sleep regulation to estimate performance in the absence of caffeine, and then multiplied a caffeine-effect factor, which relates the pharmacokinetic-pharmacodynamic effects through the Hill equation, to estimate the performance-restoring effects of caffeine. We validated the model on psychomotor vigilance test data from two studies involving 12 subjects each: (1) single caffeine dose of 600 mg after 64.5 h of wakefulness and (2) repeated doses of 200 mg after 20, 22, and 24 h of wakefulness. Individualized caffeine models produced overall errors that were 19% and 42% lower than their population-average counterparts for the two studies. Had we not accounted for the effects of caffeine, the individualized model errors would have been 117% and 201% larger, respectively. The presented model captured the performance-enhancing effects of caffeine for most subjects in the single- and repeated-dose studies, suggesting that the proposed multiplicative factor is a feasible solution. Copyright © 2012 Elsevier Ltd. All rights reserved.
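    The multiplicative structure described above can be sketched generically: a one-compartment oral PK curve feeds a Hill-form effect factor that multiplies the baseline impairment estimate. All parameter values below are illustrative assumptions, not the fitted values from the study:

    ```python
    import numpy as np

    t = np.linspace(0.0, 12.0, 121)        # hours after a 200 mg dose

    # One-compartment oral PK for caffeine concentration (arbitrary units);
    # ka, ke are assumed absorption/elimination rate constants (1/h).
    dose, ka, ke = 200.0, 2.0, 0.14
    c = dose * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

    # Hill-form caffeine effect factor in (0, 1]; smaller = stronger restoration.
    c50, gamma = 100.0, 1.0
    g = 1.0 / (1.0 + (c / c50) ** gamma)

    # Baseline impairment (e.g., PVT lapses) rising with time awake (illustrative),
    # multiplied by the caffeine factor as the model's central hypothesis.
    impairment = 5.0 + 0.8 * t
    restored = impairment * g

    print(f"peak concentration at t = {t[np.argmax(c)]:.1f} h; "
          f"min factor g = {g.min():.2f}")
    ```

    Because g is at most 1 and returns toward 1 as the drug clears, the predicted benefit is largest near peak concentration and decays thereafter, matching the multiplicative hypothesis.
    
    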

  5. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non expressive music scores for polyphonic guitar. We treated guitar as an hexaphonic instrument, obtaining ...

  6. Photovoltaic array performance simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.

  7. Performance of models for estimating absolute risk difference in multicenter trials with binary outcome

    Directory of Open Access Journals (Sweden)

    Claudia Pedroza

    2016-08-01

    Full Text Available Abstract Background Reporting of absolute risk difference (RD) is recommended for clinical and epidemiological prospective studies. In analyses of multicenter studies, adjustment for center is necessary when randomization is stratified by center or when there is large variation in patient outcomes across centers. While regression methods are used to estimate RD adjusted for baseline predictors and clustering, no formal evaluation of their performance has been previously conducted. Methods We performed a simulation study to evaluate 6 regression methods fitted under a generalized estimating equation framework: binomial identity, Poisson identity, Normal identity, log binomial, log Poisson, and logistic regression model. We compared the model estimates to unadjusted estimates. We varied the true response function (identity or log), number of subjects per center, true risk difference, control outcome rate, effect of baseline predictor, and intracenter correlation. We compared the models in terms of convergence, absolute bias and coverage of 95 % confidence intervals for RD. Results The 6 models performed very similarly to each other for the majority of scenarios. However, the log binomial model did not converge for a large portion of the scenarios including a baseline predictor. In scenarios with an outcome rate close to the parameter boundary, the binomial and Poisson identity models had the best performance, but differences from the other models were negligible. The unadjusted method introduced little bias into the RD estimates, but its coverage was larger than the nominal value in some scenarios with an identity response. Under the log response, coverage from the unadjusted method was well below the nominal value (<80 %) for some scenarios. Conclusions We recommend the use of a binomial or Poisson GEE model with identity link to estimate RD for correlated binary outcome data. If these models fail to run, then either a logistic regression, log Poisson
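
The unadjusted estimator that the simulation uses as a baseline is simple to state: a difference of sample proportions with a Wald interval. The sketch below shows that baseline only (the GEE-fitted identity-link models the authors recommend require a dedicated statistics package); the event counts are invented.

```python
import math

def risk_difference(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Unadjusted absolute risk difference with a Wald 95% confidence interval."""
    p1, p0 = events_trt / n_trt, events_ctl / n_ctl
    rd = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n_trt + p0 * (1 - p0) / n_ctl)
    return rd, (rd - z * se, rd + z * se)
```

For example, 30/100 events versus 20/100 gives RD = 0.10 with an interval that spans zero, so the difference would not reach significance at this sample size.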

  8. FRAPCON-3: Modifications to fuel rod material properties and performance models for high-burnup application

    International Nuclear Information System (INIS)

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1997-12-01

    This volume describes the fuel rod material and performance models that were updated for the FRAPCON-3 steady-state fuel rod performance code. The property and performance models were changed to account for behavior at extended burnup levels up to 65 Gwd/MTU. The property and performance models updated were the fission gas release, fuel thermal conductivity, fuel swelling, fuel relocation, radial power distribution, solid-solid contact gap conductance, cladding corrosion and hydriding, cladding mechanical properties, and cladding axial growth. Each updated property and model was compared to well characterized data up to high burnup levels. The installation of these properties and models in the FRAPCON-3 code along with input instructions are provided in Volume 2 of this report and Volume 3 provides a code assessment based on comparison to integral performance data. The updated FRAPCON-3 code is intended to replace the earlier codes FRAPCON-2 and GAPCON-THERMAL-2. 94 refs., 61 figs., 9 tabs

  9. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  10. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. 
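
The Nash-Sutcliffe statistic criticized above is easy to compute, and a toy series shows one reason it can fail to discriminate: a simulation that predicts the observed mean everywhere scores exactly 0 while missing every event, so realistic-looking dynamics are not guaranteed by a moderate score. The data below are invented.

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - SS_res / SS_tot; 1 is a perfect fit, 0 matches the mean-only model."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 1.0, 1.0, 10.0, 1.0, 1.0, 1.0]    # one storm-like peak
flat = [sum(obs) / len(obs)] * len(obs)        # ignores the peak entirely
```

Here `nash_sutcliffe(obs, obs)` is 1.0, while the peak-blind `flat` simulation still scores 0.0 rather than something strongly negative.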
(3) Model structural inadequacies, whereby model structure may inadequately represent

  11. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...
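
A minimal simulation in the spirit of the comparison above: an exponential regressor, AR(1) disturbances, and two estimators, plain OLS and a GLS-style fit obtained by Prais-Winsten/Cochrane-Orcutt quasi-differencing (here with the autocorrelation coefficient treated as known, which a feasible GLS would estimate). All numbers are hypothetical, not the study's design.

```python
import math
import random

def ols_slope(x, y):
    """Closed-form simple-regression slope (intercept absorbed by centering)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(0)
rho, a, b, n = 0.6, 1.0, 2.0, 200
x = [math.exp(0.02 * t) for t in range(n)]     # exponential regressor
e, errs = 0.0, []
for _ in range(n):                              # AR(1) disturbance process
    e = rho * e + random.gauss(0.0, 1.0)
    errs.append(e)
y = [a + b * xi + ei for xi, ei in zip(x, errs)]

b_ols = ols_slope(x, y)
# Quasi-differencing removes the AR(1) structure before refitting.
x_star = [x[t] - rho * x[t - 1] for t in range(1, n)]
y_star = [y[t] - rho * y[t - 1] for t in range(1, n)]
b_gls = ols_slope(x_star, y_star)
```

Both estimators recover the true slope here; the practical difference lies in efficiency (standard errors), which is what such comparisons typically measure.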

  12. Modeling the Effect of Bandwidth Allocation on Network Performance

    African Journals Online (AJOL)

    ... The proposed model showed improved performance for CDMA networks, but further increase in the bandwidth did not benefit the network; (iii) A reliability measure such as the spectral efficiency is therefore useful to redeem the limitation in (ii). Keywords: Coverage Capacity, CDMA, Mobile Network, Network Throughput ...

  13. Reference Material Properties and Standard Problems to Verify the Fuel Performance Models Ver 1.0

    International Nuclear Information System (INIS)

    Yang, Yong Sik; Kim, Jae Yong; Koo, Yang Hyun

    2010-12-01

    All fuel performance models must be validated by in-pile and out-of-pile tests. However, model validation requires much effort and time to confirm a model's correctness. In many fields, new performance models and codes are verified by a code-to-code benchmarking process using simplified standard problems. At present, the DUOS project, which develops the steady-state fuel performance analysis code for dual-cooled annular fuel, is in progress, and a new FEM module has been developed to analyze fuel performance during transient periods. In addition, a verification process is planned to examine the correctness of the new models and module by comparison with commercial finite element analysis codes such as ADINA, ABAQUS and ANSYS. This report contains the results of the unification of material properties and the establishment of standard problems to verify the newly developed models against commercial FEM code

  14. Hypothetical model of factors determining performance and sports achievement in team sports

    Directory of Open Access Journals (Sweden)

    Trninić Marko

    2011-01-01

    Full Text Available The objective of this paper is the formation of a comprehensive hypothetical dynamic interactional process model structured by assumed constructs, i.e. processes or mechanisms that obtain real features and influence an athlete's performance and athletic achievement. Reciprocal relations are thus formed and assumed between high training- and competition-based stress as the input variable, cognitive appraisal and interpretation as the mediator, and mood state as the moderator, based on the development of dynamic systems theory. The proposed model also uses basic assumptions of the Action-Theory approach and is in accordance with the contemporary social-cognitive view of team functioning in sports. Within the process model, the output variables are measures of efficacy evident through an athlete's individual and team performance and athletic achievement. The situation, the team and athlete attributes, the performance and the athletic achievement are joined variables, and individual and collective efficacy are the consequence of their reciprocal interaction. There are therefore complex and reciprocal interactive processes in real sports and explorative situations among the attributes of athlete and team and the behaviour and situation that determine performance and athletic achievement. This is probably the result of an integrated network of reciprocal multi-causal activity of a set of assumed constructs from different theories. The hypothetical model is thus an effort to describe elaborate correlations and/or interdependencies between the internal and external determinants which presumably affect an athlete's performance and athletic achievement.

  15. Performance Assessment of a Low-Level Radioactive Waste Disposal Site using GoldSim Integrated Systems Model

    Science.gov (United States)

    Merrell, G.; Singh, A.; Tauxe, J.; Perona, R.; Dornsife, W.; Grisak, G. E.; Holt, R. M.

    2011-12-01

    Texas Commission on Environmental Quality has approved licenses for four landfills at the Waste Control Specialists (WCS) site located in Andrews County, West Texas. The site includes a hazardous waste landfill and three landfills for radioactive waste. An updated performance assessment is necessary prior to acceptance of waste at the landfills. The updated performance assessment a) provides for more realistic and flexible dose modeling capabilities, b) addresses all plausible release and accident scenarios as they relate to the performance objectives, c) includes impact of climate and hydrologic scenarios that may impact long-term performance of the landfill, d) addresses impact of cover naturalization and degradation on the landfill, and e) incorporates uncertainty and sensitivity analysis for critical parameters. For the updated performance assessment, WCS has developed an integrated systems level performance assessment model using the GoldSim platform. GoldSim serves as a model for integrating all of the major components of a performance assessment, which include the radionuclide source term, facility design, environmental transport pathways, exposure scenarios, and radiological doses. Unlike many computer models that are based on first principles, GoldSim is a systems level model that can be used to integrate and abstract more complex sub-models into one system. This can then be used to assess the results into a unified model of the disposal system and environment. In this particular application, the GoldSim model consists of a) hydrogeologic model that simulates flow and transport through the Dockum geologic unit that underlies all of the waste facilities, b) waste cells that represent the containment unit and simulate degradation of waste forms, radionuclide leaching, and partitioning into the liquid and vapor phase within the waste unit, c) a cover system model that simulates upward diffusive transport from the underground repository to the atmosphere. 

  16. A Fuzzy Knowledge Representation Model for Student Performance Assessment

    DEFF Research Database (Denmark)

    Badie, Farshad

    Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

  17. Performance Analysis of Receive Diversity in Wireless Sensor Networks over GBSBE Models

    OpenAIRE

    Goel, Shivali; Abawajy, Jemal H.; Kim, Tai-hoon

    2010-01-01

    Wireless sensor networks have attracted a lot of attention recently. In this paper, we develop a channel model based on the elliptical model for multipath components involving randomly placed scatterers in the scattering region with sensors deployed on a field. We verify that in a sensor network, the use of receive diversity techniques improves the performance of the system. Extensive performance analysis of the system is carried out for both single and multiple antennas with the applied rece...

  18. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite

  19. Performance of SVM, k-NN and NBC classifiers for text-independent speaker identification with and without modelling through merging models

    Directory of Open Access Journals (Sweden)

    Yussouf Nahayo

    2016-04-01

    Full Text Available This paper proposes some methods of robust text-independent speaker identification based on the Gaussian Mixture Model (GMM). We implemented a combination of the GMM model with a set of classifiers such as Support Vector Machine (SVM), K-Nearest Neighbour (K-NN), and Naive Bayes Classifier (NBC). In order to improve the identification rate, we developed a combination of hybrid systems by using a validation technique. The experiments were performed on the dialect DR1 of the TIMIT corpus. The results showed better performance for the developed technique compared to the individual techniques.
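
One common way to combine base classifiers such as SVM, k-NN and NBC is a majority vote over their per-utterance speaker decisions. The paper's hybrid combination uses a validation technique and is more elaborate; the sketch below shows only the plain-vote baseline, with invented speaker labels.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent label; ties broken deterministically (lexicographic)."""
    counts = Counter(predictions)
    best = max(counts.values())
    return min(label for label, c in counts.items() if c == best)
```

For example, if three classifiers return `["spk3", "spk1", "spk3"]` for an utterance, the combined decision is `"spk3"`.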

  20. The Relationship between entrepreneurial orientation, entrepreneurial competencies, entrepreneurial leadership, and firm performance: A proposed model

    Directory of Open Access Journals (Sweden)

    Chijioke Nwachukwu

    2017-06-01

    Full Text Available This study develops a conceptual model and propositions for researchers to explore the direct and indirect relationships between entrepreneurial orientation, entrepreneurial competencies, entrepreneurial leadership and firm performance. The authors searched various databases, including ProQuest, EBSCOhost and Scopus, for peer-reviewed journals, books, and other relevant publications on the subject. A conceptual review provides direction for researchers to empirically examine the direct relationships between entrepreneurial orientation (EO), entrepreneurial competencies (EC), and firm performance and the mediating effect of entrepreneurial leadership (EL) in the relationship between EO, EC, and firm performance. We suggest the use of the entrepreneurial orientation scale (EOS), the EntreComp framework (2016), the Renko et al. (2015) entrepreneurial leadership styles scale (ENTRELEAD), and the Santos & Brito (2012) subjective measurement model for firm performance for measurement of the constructs of EO, EC, EL and performance. For researchers and academics, the model provides a basis for further research by testing its validity empirically. Testing of this model could provide a better understanding of the EO and EC constructs that better predict strategic and financial performance.

  1. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-01-01

    In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. Our ultimate aim of this thesis

  2. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever increasing computing power available at affordable prices. This paper describes this process from several angles: First the operational requirements which are more critical from the point of view of model performance, both for normal and off-normal operating conditions; A second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  3. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    International Nuclear Information System (INIS)

    Kelly, Dana L.; Boring, Ronald L.; Mosleh, Ali; Smidts, Carol

    2011-01-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  4. Enhancing pavement performance prediction models for the Illinois Tollway System

    OpenAIRE

    Laxmikanth Premkumar; William R. Vavrik

    2016-01-01

    Accurate pavement performance prediction represents an important role in prioritizing future maintenance and rehabilitation needs, and predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway) with over 2000 lane miles of pavement utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by th...

  5. A forecasting performance comparison of dynamic factor models based on static and dynamic methods

    Directory of Open Access Journals (Sweden)

    Marra Fabio Della

    2017-03-01

    Full Text Available We present a comparison of the forecasting performances of three Dynamic Factor Models on a large monthly data panel of macroeconomic and financial time series for the EU economy. The first model relies on static principal components and was introduced by Stock and Watson (2002a, b). The second is based on generalized principal components and was introduced by Forni, Hallin, Lippi and Reichlin (2000, 2005). The last model has been recently proposed by Forni, Hallin, Lippi and Zaffaroni (2015, 2016). The data panel is split into two parts: the calibration sample, from February 1986 to December 2000, is used to select the best-performing specification for each class of models in an in-sample environment, and the proper sample, from January 2001 to November 2015, is used to compare the performances of the selected models in an out-of-sample environment. The methodological approach is analogous to Forni, Giovannelli, Lippi and Soccorsi (2016), but the size of the rolling window is also empirically estimated in the calibration process to achieve more robustness. We find that, on the proper sample, the last model performs best for inflation. However, mixed evidence appears over the proper sample for industrial production.

  6. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    Directory of Open Access Journals (Sweden)

    Yong-Nyuo Shin

    2008-01-01

    Full Text Available Due to usability features, practical applications, and its lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
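
Two of the standard quantities in such a performance evaluation are the false accept rate (FAR) and the false reject rate (FRR) at a decision threshold. The sketch below assumes higher scores mean stronger matches; the score lists are invented, not from any real system.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: impostors accepted; FRR: genuine users rejected (higher score = match)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr
```

Sweeping the threshold trades FAR against FRR; the crossing point of the two curves is the equal error rate often quoted for biometric systems.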

  7. Relative performance of different numerical weather prediction models for short term prediction of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Giebel, G; Landberg, L [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark); Moennich, K; Waldl, H P [Carl von Ossietzky Univ., Faculty of Physics, Dept. of Energy and Semiconductor, Oldenburg (Germany)

    1999-03-01

    In several approaches presented in other papers in this conference, short term forecasting of wind power for a time horizon covering the next two days is done on the basis of Numerical Weather Prediction (NWP) models. This paper explores the relative merits of HIRLAM, which is the model used by the Danish Meteorological Institute, the Deutschlandmodell from the German Weather Service and the Nested Grid Model used in the US. The performance comparison will be mainly done for a site in Germany which is in the forecasting area of both the Deutschlandmodell and HIRLAM. In addition, a comparison of measured data with the forecasts made for one site in Iowa will be included, which allows conclusions on the merits of all three models. Differences in the relative performances could be due to a better tailoring of one model to its country, or to a tighter grid, or could be a function of the distance between the grid points and the measuring site. Also the amount, in which the performance can be enhanced by the use of model output statistics (topic of other papers in this conference) could give insights into the performance of the models. (au)

  8. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    Full Text Available This work presents a hybrid model of high performance computations. The model is based on membrane systems (P systems), where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieving.
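
For reference alongside the hybrid membrane/quantum approach: the classical brute-force decision procedure for SAT that such models aim to outperform, over DIMACS-style clauses (positive/negative integers as literals). It is exponential in the number of variables, which is exactly the cost non-classical paradigms try to avoid.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment as a tuple of booleans, or None if unsatisfiable."""
    for assign in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals matches the assignment.
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None
```

For example, `brute_force_sat([[1, -2], [2]], 2)` finds x1 = x2 = True, while `[[1], [-1]]` is reported unsatisfiable.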

  9. Performance prediction for a magnetostrictive actuator using a simplified model

    Science.gov (United States)

    Yoo, Jin-Hyeong; Jones, Nicholas J.

    2018-03-01

    Iron-Gallium alloys (Galfenol) are promising transducer materials that combine high magnetostriction, desirable mechanical properties, high permeability, and a wide operational temperature range. Most of all, the material is capable of operating under tensile stress, and is relatively resistant to shock. These materials are generally characterized using a solid, cylindrically-shaped specimen under controlled compressive stress and magnetization conditions. Because the magnetostriction strongly depends on both the applied stress and magnetization, the characterization of the material is usually conducted under controlled conditions so each parameter is varied independently of the other. However, in a real application the applied stress and magnetization will not be maintained constant during operation. Even though the controlled characterization measurement gives insight into standard material properties, usage of this data in an application, while possible, is not straightforward. This study presents an engineering modeling methodology for magnetostrictive materials based on a piezo-electric governing equation. This model suggests phenomenological, nonlinear, three-dimensional functions for strain and magnetic flux density responses as functions of applied stress and magnetic field. Load line performances as a function of maximum magnetic field input were simulated based on the model. To verify the modeling performance, a polycrystalline magnetostrictive rod (Fe-Ga alloy, Galfenol) was characterized under compressive loads using a dead-weight test setup, with strain gages on the rod and a magnetic field driving coil around the sample. The magnetic flux density through the Galfenol rod was measured with a sensing coil; the compressive loads were measured using a load cell on the bottom of the Galfenol rod. The experimental results are compared with the simulation results using the suggested model, showing good agreement.
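
The piezoelectric-style governing equation the model starts from is, in linearized form, the coupled pair ε = s^H·σ + d·H and B = d·σ + μ^σ·H; the paper replaces the constant coefficients with nonlinear functions of stress and field. The coefficient values below are illustrative placeholders, not Galfenol data.

```python
def linear_magnetostrictive(stress_pa, field_a_per_m,
                            s_h=4.0e-11, d=2.0e-8, mu=3.0e-4):
    """Linearized constitutive pair: strain and flux density from stress and field.

    s_h: compliance at constant field, d: piezomagnetic coefficient,
    mu: permeability at constant stress (all hypothetical values).
    """
    strain = s_h * stress_pa + d * field_a_per_m        # epsilon = s^H*sigma + d*H
    flux_density = d * stress_pa + mu * field_a_per_m   # B = d*sigma + mu^sigma*H
    return strain, flux_density
```

The coupling coefficient d appearing in both equations is what links the mechanical and magnetic responses; the nonlinear model makes s_h, d and mu state-dependent.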

  10. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context has been developed, sensitivity tested, and evaluated. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its generous input parameters and output variables, allows its usefulness to extend beyond its input to PRA

  11. Modeling Friction Performance of Drill String Torsional Oscillation Using Dynamic Friction Model

    Directory of Open Access Journals (Sweden)

    Xingming Wang

    2017-01-01

    Full Text Available Drill string torsional and longitudinal oscillation can significantly reduce axial drag in horizontal drilling. An improved theoretical model for the analysis of the frictional force was proposed based on microscopic contact deformation theory and a bristle model. The established model, an improved dynamic friction model for drill strings in a wellbore, was used to determine the relationship between friction force changes and drill string torsional vibration. The model results were in good agreement with the experimental data, verifying the accuracy of the established model. The analysis of the influence of drilling mud properties indicated that there is an approximately linear relationship between the axial friction force and the dynamic shear and viscosity. The influence of drill string torsional oscillation on the axial friction force is also discussed. The results indicated that a transverse drill string velocity is a prerequisite for reducing axial friction, and that a low amplitude of torsional vibration speed can significantly reduce axial friction. Further increases in the amplitude of the transverse vibration speed, however, do not significantly improve the axial friction reduction. In addition, by incorporating general field drilling parameters, this model can accurately describe the friction behavior and quantitatively predict the frictional resistance in horizontal drilling.
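
A bristle model of the kind referenced above can be sketched with the classic LuGre formulation: an internal bristle-deflection state z driven by sliding velocity, with friction force σ0·z + σ1·ż + σ2·v and a Stribeck curve g(v). The parameters and the explicit-Euler step below are illustrative, not the paper's calibrated drill-string model.

```python
import math

def lugre_friction(velocities, dt, sigma0=1.0e5, sigma1=300.0, sigma2=0.4,
                   f_c=1.0, f_s=1.5, v_s=0.01):
    """Friction force history for a velocity history (explicit Euler on bristle state z)."""
    z, forces = 0.0, []
    for v in velocities:
        g = f_c + (f_s - f_c) * math.exp(-(v / v_s) ** 2)  # Stribeck curve
        z_dot = v - sigma0 * abs(v) * z / g                 # bristle dynamics
        z += z_dot * dt                                     # explicit Euler step
        forces.append(sigma0 * z + sigma1 * z_dot + sigma2 * v)
    return forces
```

At a constant sliding speed well above the Stribeck velocity, the force settles near the Coulomb level f_c plus the viscous term σ2·v, after an initial damping transient. Note that the explicit Euler step is only stable for dt small relative to g/(σ0·|v|).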

  12. Exploring performance and power properties of modern multicore chips via simple machine models

    OpenAIRE

    Hager, Georg; Treibig, Jan; Habich, Johannes; Wellein, Gerhard

    2012-01-01

    Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the wel...
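    A minimal bottleneck estimate in the spirit of such simple machine models can be sketched as follows. This is a plain roofline-style simplification, not the ECM model the abstract refers to, and the kernel and machine numbers are assumed for illustration:

```python
def bottleneck_time(flop_count, bytes_moved, peak_flops, peak_bandwidth):
    """Execution-time lower bound from the tighter of compute and memory.

    A roofline-style simplification: the ECM model refines this by
    decomposing the memory time across the cache hierarchy.
    """
    return max(flop_count / peak_flops, bytes_moved / peak_bandwidth)

# Illustrative STREAM-triad-like kernel: a[i] = b[i] + s * c[i]
# (2 flops and 24 bytes of traffic per iteration; write-allocate ignored)
n = 10**8
t = bottleneck_time(2 * n, 24 * n, peak_flops=5e10, peak_bandwidth=4e10)
```

    For this streaming kernel the memory term dominates, which is the typical situation the abstract's single- and multi-core analysis addresses.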

  13. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However in some applications, we seek to obtain enhanced performance at the low range, therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
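    The distinction the abstract draws can be made concrete with a small sketch; the transducer range and error values below are hypothetical, not from the Mars Entry Atmospheric Data System calibration:

```python
def pct_of_full_scale(abs_error, full_scale):
    """Error expressed as a percent of the instrument's full-scale range."""
    return 100.0 * abs(abs_error) / full_scale

def pct_of_reading(abs_error, reading):
    """Error expressed as a percent of the current reading."""
    return 100.0 * abs(abs_error) / abs(reading)

# A fixed 2-unit error on a 1000-unit transducer is a constant 0.2% of
# full scale, but grows sharply as a percent of reading at the low range.
for reading in (1000.0, 200.0, 50.0):
    fs = pct_of_full_scale(2.0, 1000.0)
    rd = pct_of_reading(2.0, reading)
    print(f"reading={reading:7.1f}  %FS={fs:.2f}  %reading={rd:.2f}")
```

    This is why a percent-of-reading requirement drives the calibration model choice when an instrument is used well below its designed capacity.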

  14. Contributions of Organizational Performance Measurement Model Performance Prism to the Balanced Scorecard: a study from the stakeholder’s perspective

    Directory of Open Access Journals (Sweden)

    Sady Darcy da Silva Jr

    2013-12-01

    Full Text Available Measuring organizational performance is a great challenge for companies, and the Balanced Scorecard (BSC) is a prominent organizational performance measurement model. The Performance Prism (PP) model, however, emphasizes the organizational stakeholders and argues that the BSC treats them superficially, giving more importance to shareholders and customers. The objective of this research is to identify the contributions of the PP to the BSC from the stakeholders' perspective. To this end, semi-structured interviews were conducted with professionals from the strategy area. In parallel, the structures of the two models were compared to enrich the results and to complement the analysis of the respondents' perceptions. The results revealed relevant contributions of the PP to the BSC, in contrast to the PP's original criticisms of the BSC, which became questionable in light of the respondents' perceptions and the comparison between the models.

  15. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  16. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
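    The contention idea behind such a framework can be sketched schematically: cores on a socket share memory bandwidth, so the effective per-core bandwidth shrinks with the core count. This is a simplified stand-in with an assumed linear sharing rule, not the paper's fitted model:

```python
def predicted_runtime(t_compute, bytes_per_core, socket_bandwidth,
                      cores_per_socket, t_comm):
    """Schematic hybrid-application time estimate.

    t_mem models memory bandwidth contention: each core sees the socket
    bandwidth divided by the number of active cores (assumed linear
    sharing); t_comm stands in for the parameterized communication model.
    """
    t_mem = bytes_per_core / (socket_bandwidth / cores_per_socket)
    return t_compute + t_mem + t_comm
```

    Under weak scaling the per-core workload is fixed, so doubling the active cores per socket doubles the contention term while the compute term stays constant.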

  18. The Maintenance Personnel Performance Simulation (MAPPS) model: A human reliability analysis tool

    International Nuclear Information System (INIS)

    Knee, H.E.

    1985-01-01

    The Maintenance Personnel Performance Simulation (MAPPS) model is a computerized, stochastic, task-oriented human behavioral model developed to provide estimates of nuclear power plant (NPP) maintenance team performance measures. It is capable of addressing person-machine, person-environment, and person-person relationships, and accounts for interdependencies that exist between the subelements that make up the maintenance task of interest. The primary measures of performance estimated by MAPPS are: 1) the probability of successfully completing the task of interest and 2) the task duration time. MAPPS also estimates a host of other performance indices, including the probability of an undetected error, identification of the most- and least-likely error-prone subelements, and maintenance team stress profiles during task execution.

  19. Cassini Radar EQM Model: Instrument Description and Performance Status

    Science.gov (United States)

    Borgarelli, L.; Faustini, E. Zampolini; Im, E.; Johnson, W. T. K.

    1996-01-01

    The spacecraft of the Cassini Mission is planned to be launched towards Saturn in October 1997. The mission is designed to study the physical structure and chemical composition of Titan. The results of the tests performed on the Cassini radar engineering qualification model (EQM) are summarized. The approach followed in the verification and evaluation of the performance of the radio frequency subsystem EQM is presented. The results show that the instrument satisfies the relevant mission requirements.

  20. FAST: a combined NOC and transient fuel performance model using a commercial FEM environment

    Energy Technology Data Exchange (ETDEWEB)

    Prudil, A.; Bell, J.; Oussoren, A.; Chan, P. [Royal Military College of Canada, Kingston, ON (Canada); Lewis, B. [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

    The Fuel And Sheath modelling Tool (FAST) is a combined normal operating conditions (NOC) and transient fuel performance code developed on the COMSOL Multiphysics finite-element platform. The FAST code has demonstrated excellent performance in proof-of-concept validation tests against experimental data and in comparison to the ELESIM, ELESTRES and ELOCA fuel performance codes. In this paper we discuss ongoing efforts to expand the capabilities of the code to include multiple pellet geometries, to model stress-corrosion cracking phenomena, and to model advanced fuels composed of mixed oxides of thorium, uranium, and plutonium for the Canadian Supercritical Water Reactor (SCWR). (author)

  1. Performance assessment of sealing systems. Conceptual and integrated modelling of plugs and seals

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Andre; Buhmann, Dieter; Kindlein, Jonathan; Lauke, Thomas

    2016-08-15

    The long-time isolation of radionuclides from the biosphere is the goal of the storage of radioactive waste in deep geological repositories. For repositories in rock salt, this goal is achieved on the one hand by the impermeable undisturbed part of the salt host rock formation and on the other hand by crushed salt, which is used to backfill the mine openings in the emplacement areas and galleries created during the construction of the repository. The crushed salt backfill is compacted over time and achieves a sufficiently high hydraulic resistance to avoid inflow of brines into the emplacement areas of the repository in the long-term. Plugs and seals must additionally provide their sealing function during the early post closure phase, until the compaction of the backfill is adequate and the permeability of the backfill is sufficiently low. To assess the future development of the waste repository, an adequate knowledge of the material behaviour is necessary and related mathematical models must be developed to be able to perform predictions on the long-term safety of the repository. An integrated performance assessment model was formulated that describes the long-term behaviour of a sealing built from salt concrete. The average permeability of the sealing changes with time after its emplacement from various processes of which two were regarded in a constitutive model: first, the healing of the EDZ in the host rock around the sealing, and second, the corrosion of the salt concrete material resulting from brine attack. Empirical parameter model functions were defined for both processes to reflect the actual behaviour. The mathematical model was implemented in the integrated performance assessment model LOPOS which is used by GRS as near-field model for repositories in salt. 
    Deterministic and probabilistic calculations were performed with realistic parameters, showing how the permeability of the sealing decreases during the first 2000 years due to the healing of the EDZ.
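    The empirical parameter functions themselves are not given in the abstract; a hypothetical healing law of the following general shape illustrates how a time-dependent seal permeability could enter such a performance assessment model. All parameter values here are placeholders, not the GRS/LOPOS values:

```python
import math

def seal_permeability(t_years, k_initial=1e-17, k_healed=1e-20, tau=500.0):
    """Hypothetical EDZ-healing law (units: m^2).

    Permeability decays exponentially from its initial value toward a
    fully healed value over a characteristic time tau (years). The real
    model additionally accounts for salt-concrete corrosion, which would
    act in the opposite direction.
    """
    return k_healed + (k_initial - k_healed) * math.exp(-t_years / tau)
```

    A probabilistic calculation would sample k_initial, k_healed, and tau from distributions rather than using fixed values.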

  3. Seasonal versus Episodic Performance Evaluation for an Eulerian Photochemical Air Quality Model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Ling; Brown, Nancy J.; Harley, Robert A.; Bao, Jian-Wen; Michelson, Sara A; Wilczak, James M

    2010-04-16

    This study presents detailed evaluation of the seasonal and episodic performance of the Community Multiscale Air Quality (CMAQ) modeling system applied to simulate air quality at a fine grid spacing (4 km horizontal resolution) in central California, where ozone air pollution problems are severe. A rich aerometric database collected during the summer 2000 Central California Ozone Study (CCOS) is used to prepare model inputs and to evaluate meteorological simulations and chemical outputs. We examine both temporal and spatial behaviors of ozone predictions. We highlight synoptically driven high-ozone events (exemplified by the four intensive operating periods (IOPs)) for evaluating both meteorological inputs and chemical outputs (ozone and its precursors) and compare them to the summer average. For most of the summer days, cross-domain normalized gross errors are less than 25% for modeled hourly ozone, and normalized biases are between ±15% for both hourly and peak (1 h and 8 h) ozone. The domain-wide aggregated metrics indicate similar performance between the IOPs and the whole summer with respect to predicted ozone and its precursors. Episode-to-episode differences in ozone predictions are more pronounced at a subregional level. The model performs consistently better in the San Joaquin Valley than other air basins, and episodic ozone predictions there are similar to the summer average. Poorer model performance (normalized peak ozone biases <-15% or >15%) is found in the Sacramento Valley and the Bay Area and is most noticeable in episodes that are subject to the largest uncertainties in meteorological fields (wind directions in the Sacramento Valley and timing and strength of onshore flow in the Bay Area) within the boundary layer.
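    The bias and gross-error statistics quoted here follow a standard definition in photochemical model evaluation; a sketch of that common form is below (observation cutoff thresholds, often applied in practice before computing these metrics, are omitted):

```python
def normalized_bias_pct(model, obs):
    """Mean of (model - obs) / obs, in percent; sign shows over/underprediction."""
    return 100.0 * sum((m - o) / o for m, o in zip(model, obs)) / len(obs)

def normalized_gross_error_pct(model, obs):
    """Mean of |model - obs| / obs, in percent; magnitude of error regardless of sign."""
    return 100.0 * sum(abs(m - o) / o for m, o in zip(model, obs)) / len(obs)

# Two paired hourly ozone values (ppb), purely illustrative:
nb = normalized_bias_pct([55.0, 45.0], [50.0, 50.0])    # errors cancel
nge = normalized_gross_error_pct([55.0, 45.0], [50.0, 50.0])  # they do not
```

    The example shows why both metrics are reported: offsetting over- and underpredictions give a near-zero bias while the gross error remains nonzero.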

  4. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of the clinical laboratory. Results of simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operating profit could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal year.
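    The profit-and-loss logic behind such a simulation is simple to sketch. The scenario percentages below are hypothetical, not the study's actual SWOT-derived inputs; the base figures merely echo the scale reported in the abstract:

```python
def operating_profit(revenue, expenses):
    """Operating profit as defined in the study: total revenue minus total expenses."""
    return revenue - expenses

def scenario(revenue, expenses, revenue_change_pct, expense_change_pct):
    """Apply percentage shocks to revenue and expenses, as in a
    sensitivity analysis of the profit and loss account."""
    return operating_profit(revenue * (1.0 + revenue_change_pct / 100.0),
                            expenses * (1.0 + expense_change_pct / 100.0))

# Illustrative base figures (EUR) on the order of those in the abstract:
base = operating_profit(3_000_000.0, 2_529_277.0)
# Hypothetical "threats" scenario: revenue down 10%, expenses up 5%:
worst = scenario(3_000_000.0, 2_529_277.0, -10.0, 5.0)
```

    Running such scenarios for each SWOT item quantifies how far the operating profit could move in either direction.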

  5. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    . The methodological approach is based on comparative tests of the analyzed models applied to two PV plants installed respectively in north of Denmark (Aalborg) and in the south of Italy (Agrigento). The different ambient, operating and installation conditions allow to understand how these factors impact the precision...... the performance of the studied PV plants with others, the efficiency of the systems has been estimated by both conventional Performance Ratio and Corrected Performance Ratio...

  6. Modeling impact of environmental factors on photovoltaic array performance

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie; Sun, Yize; Xu, Yang [College of Mechanical Engineering, Donghua University NO.2999, North Renmin Road, Shanghai (China)

    2013-07-01

    This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance, and the wind speed, on the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8 m/s, and for irradiance values between 200 and 1000 W/m². The root mean square error (RMSE) between modeled operating temperature and measured values is 1.19%, and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum-power outputs of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established, and a method defined as segmented iteration is adopted to solve the I-V curve expression and obtain the model I-V curves. The model I-V and P-V curves coincide well with the measured data points. The RMSE between numerically calculated maximum-power outputs and experimentally measured ones is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between numerically calculated and measured maximum power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
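    The RMSE and MBE figures reported as percentages can be computed as follows. This sketch uses one common normalized definition (errors relative to the measured values); the paper's exact normalization may differ:

```python
import math

def rmse_pct(modeled, measured):
    """Root mean square of the relative errors, in percent."""
    rel = [(p - m) / m for p, m in zip(modeled, measured)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))

def mbe_pct(modeled, measured):
    """Mean of the relative errors, in percent (sign shows systematic bias)."""
    rel = [(p - m) / m for p, m in zip(modeled, measured)]
    return 100.0 * sum(rel) / len(rel)

# Illustrative maximum-power values (W), not the paper's data:
r = rmse_pct([110.0, 90.0], [100.0, 100.0])
b = mbe_pct([110.0, 90.0], [100.0, 100.0])
```

    As with the ozone metrics above in this listing, a small MBE alongside a larger RMSE indicates scatter without systematic bias.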

  7. Plural Governance and Performance - a review of explanations and a model

    DEFF Research Database (Denmark)

    Mols, Niels Peter

    2007-01-01

    Plural governance is a strategy where a firm is making and buying the same good or service (Bradach & Eccles 1989; Heide 2003; Parmigiani 2007). This paper reviews and compares explanations of why firms both make and buy, and based on the review it develops a model suggesting how the combination of buying and making the same component or service affects performance. The model suggests that both making and buying the same product or service has several effects on performance: (1) it reduces the negative relationships between uncertainty and performance in both the market and the hierarchy, (2) it reduces the negative relationship between transaction specific assets and performance in the market, (3) it reduces the cost of determining transfer prices in the hierarchy, (4) it enhances the positive effect of strong internal and external capabilities, and (5) finally it adds cost because of increased...
  8. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques

    2016-10-01

    Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) by the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (errors as low as 6%), and accounting for caffeine improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
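    The multiplicative structure described, baseline performance scaled by a dose-dependent pharmacokinetic/pharmacodynamic factor, can be sketched as follows. All rate constants and gains here are placeholders, not the fitted UMP parameters, and the baseline is taken as a mean reaction time (lower is better):

```python
import math

def caffeine_concentration(t_hours, dose_mg, k_a=2.0, k_e=0.14):
    """One-compartment oral-absorption PK model (placeholder rate constants)."""
    if t_hours <= 0.0:
        return 0.0
    return dose_mg * k_a / (k_a - k_e) * (
        math.exp(-k_e * t_hours) - math.exp(-k_a * t_hours))

def caffeinated_performance(p0_reaction_time, t_hours, dose_mg, gain=0.002):
    """Multiplicative caffeine effect: the caffeine-free reaction-time
    estimate p0 is scaled down while concentration is elevated."""
    factor = 1.0 / (1.0 + gain * caffeine_concentration(t_hours, dose_mg))
    return p0_reaction_time * factor
```

    Because the factor multiplies whatever the sleep-loss model predicts, the same caffeine dose produces a larger absolute benefit when baseline impairment is larger, which is one appeal of the multiplicative formulation.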

  9. University Library Strategy Development: A Conceptual Model of Researcher Performance to Inform Service Delivery

    Science.gov (United States)

    Maddox, Alexia; Zhao, Linlin

    2017-01-01

    This case study presents a conceptual model of researcher performance developed by Deakin University Library, Australia. The model aims to organize research performance data into meaningful researcher profiles, referred to as researcher typologies, which support the demonstration of research impact and value. Three dimensions shaping researcher…

  10. Neural Networks Method in modeling of the financial company’s performance

    Directory of Open Access Journals (Sweden)

    I. P. Kurochkina

    2017-01-01

    Full Text Available The content of modern management accounting is being shaped by the rapid development of information technology and the use of complex algorithms of economic analysis. This makes practicable the idea of effective management by key performance indicators, which certainly include the indicators of financial performance of economic entities. An important place in this process belongs to the construction and calculation of factorial systems of economic indicators, and substantial theoretical and empirical experience has been accumulated in solving the problems that arise. The aim of this study is to develop a universal modern model for factor analysis of financial results that allows multivariate solutions of both a current and a prospective character, with monitoring in real time. This goal is achievable by using artificial neural networks (ANN) in the corresponding simulation; such networks are increasingly used in economics as a tool for supporting management decision-making. In comparison with classical deterministic and stochastic models, ANN bring an intellectual component to the modeling process: they are able to learn from accumulated experience and, as a result, make fewer and fewer mistakes. The article reveals the advantages of this alternative approach to factor analysis, based on the method of neural networks. The paper presents a phased algorithm for modeling complex cause-and-effect relationships, including the selection of factors for the studied result, the creation of the neural network architecture, and its training. The universality of such modeling lies in the fact that it can be used for any resulting indicator. The authors have proposed and described a mathematical model of factor analysis for financial indicators. It is important that the model includes factors of both direct and indirect action.

  11. Employees’ Perceptions of Corporate Social Responsibility and Job Performance: A Sequential Mediation Model

    Directory of Open Access Journals (Sweden)

    Inyong Shin

    2016-05-01

    Full Text Available In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance, sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.

  12. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.
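    In the authors' formulation the efficiency factor separates process effects from design effects; schematically, the fractional resistance change for a given stress then factors as below. The decomposition and the numerical values here are illustrative only, not the paper's model:

```python
def delta_r_over_r(efficiency_factor, geometry_factor, pi_max, stress_pa):
    """Schematic piezoresistor sensitivity.

    efficiency_factor (0..1): penalty from the diffused, nonuniform
      dopant profile (process-dependent; tabulated in the paper).
    geometry_factor (0..1): penalty from piezoresistor placement
      relative to the peak-stress region (design-dependent).
    pi_max: maximum piezoresistive coefficient, 1/Pa.
    stress_pa: applied stress, Pa.
    """
    return efficiency_factor * geometry_factor * pi_max * stress_pa
```

    Because the two factors multiply independently, a designer can tune the fabrication process (efficiency) and the cantilever layout (geometry) separately, which is the practical benefit the abstract highlights.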

  13. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up and away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
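    The database lookup reduces to linear interpolation in angle of attack and sideslip; a generic sketch is below. The table layout is hypothetical, not the actual F/A-18 database format:

```python
import bisect

def interp1(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the table edges."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    f = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + f * (ys[i + 1] - ys[i])

def aero_coefficient(alpha, beta, alphas, betas, table):
    """Bilinear table lookup; table[i][j] holds the coefficient at
    alphas[i] degrees angle of attack and betas[j] degrees sideslip."""
    column = [interp1(beta, betas, row) for row in table]
    return interp1(alpha, alphas, column)

# Tiny illustrative 2x2 table:
c = aero_coefficient(5.0, 10.0, [0.0, 10.0], [0.0, 20.0],
                     [[0.0, 1.0], [2.0, 3.0]])
```

    Clamping at the edges mirrors how a flight-simulation lookup typically behaves when queried just outside the tabulated envelope.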

  14. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-order polynomial fit to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are also analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2, and fitted NeQuickC, as well as the Galileo broadcast NeQuickG models, is evaluated over the continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate the ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is slightly better than the GPS broadcast Klobuchar model.
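    Mitigation percentages of this kind are computed by comparing the residual delay after model correction with the uncorrected delay; a sketch against reference TEC values is below (the paper's aggregation over stations and epochs may differ in detail):

```python
def mitigation_pct(reference_tec, model_tec):
    """Percent of the ionospheric delay removed by the model correction.

    reference_tec: TEC from a trusted reference (e.g. GIM or altimeter).
    model_tec: TEC predicted by the ionospheric model under test.
    """
    residual = sum(abs(r - m) for r, m in zip(reference_tec, model_tec))
    total = sum(abs(r) for r in reference_tec)
    return 100.0 * (1.0 - residual / total)

# Illustrative TEC values (TECU): the model recovers most of the delay.
p = mitigation_pct([10.0, 20.0], [9.0, 18.0])
```

    A perfect model yields 100%; a model that predicts zero TEC everywhere yields 0%.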

  15. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    Science.gov (United States)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate, describing the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, the open-loop performance can be improved significantly at medium-to-high frequency. Combined with the feedback controller, the composite controller provides enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller presents the same performance as previous methods; at 900 Hz, however, the tracking error is reduced to 30.7% of that of the decoupled approach.
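The inversion idea in this record can be sketched with a toy rate-dependent model. Here the residual dynamics are assumed, purely for illustration, to add a term proportional to the input's first difference (the "input variation rate"); the feedforward term is the exact algebraic inverse of that map, so it cancels the lag. The paper's actual model is identified from the physical piezoactuator and will differ.

```python
def rate_dependent_output(u, c=0.3):
    """Hypothetical rate-dependent residual dynamics: output depends on
    the input and its variation rate, y[k] = u[k] + c*(u[k] - u[k-1])."""
    y, u_prev = [], 0.0
    for uk in u:
        y.append(uk + c * (uk - u_prev))
        u_prev = uk
    return y

def inverse_feedforward(r, c=0.3):
    """Direct inversion used as the feedforward term: solve
    r[k] = u[k] + c*(u[k] - u[k-1]) for u[k]."""
    u, u_prev = [], 0.0
    for rk in r:
        uk = (rk + c * u_prev) / (1.0 + c)
        u.append(uk)
        u_prev = uk
    return u

ref = [0.0, 0.5, 1.0, 1.0, 0.5]
u_ff = inverse_feedforward(ref)
y = rate_dependent_output(u_ff)  # reproduces ref: the inverse cancels the lag
```

In practice the inverse is only approximate (the identified model never matches the plant exactly), which is why the feedback loop remains necessary.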

  16. Modeling Operator Performance in Low Task Load Supervisory Domains

    Science.gov (United States)

    2011-06-01

...important to model the best and worst performers separately. It is easy to see that the best performers were better multitaskers and more directed...the expected population this research will influence is expected to contain men and women between the ages of 18 and 50 with an interest in using...

  17. Operational Street Pollution Model (OSPM) - a review of performed validation studies, and future prospects

    DEFF Research Database (Denmark)

    Kakosimos K.E., Konstantinos E.; Hertel, Ole; Ketzel, Matthias

    2010-01-01

in this context is the fast and easy to apply Operational Street Pollution Model (OSPM). For almost 20 years, OSPM has been routinely used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying the efficiency of pollution abatement strategies, carrying out exposure assessments, and as a reference in comparisons to other models. OSPM is generally considered state-of-the-art in applied street pollution modelling. This paper outlines the most important findings of OSPM validation and application studies in the literature. At the end of the paper, future research needs are outlined for traffic air pollution modelling in general, but with outset in the research performed with OSPM.

  18. Out-of-sample Forecasting Performance of Won/Dollar Exchange Rate Return Volatility Model

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-06-01

Full Text Available We compare the out-of-sample forecasting performance of volatility models using daily exchange rates for the KRW/USD during the period from 1992 to 2008. For various forecasting horizons, historical volatility models with a long memory tend to make more accurate forecasts. In particular, we carefully observe the difference between the EWMA and the GARCH(1,1) model. Our empirical finding that the GARCH model puts too much weight on recent observations relative to those in the past is consistent with prior evidence showing that asset market volatility has a long memory, such as Ding and Granger (1996). The forecasting model with the lowest MSFE and VaR forecast error among the models we consider is the EWMA model, in which the forecast volatility for the coming period is a weighted average of recent squared returns with exponentially declining weights. In terms of forecast accuracy, it clearly dominates the widely accepted GARCH and rolling-window GARCH models. We also present a multiple comparison of the out-of-sample forecasting performance of volatility using the stationary bootstrap of Politis and Romano (1994). We find that White's reality check for the GARCH(1,1) expanding-window model and the FIGARCH(1,1) expanding-window model clearly rejects the null hypothesis, and there exists a better model than the two benchmark models. On the other hand, when the EWMA model is the benchmark, the White's reality check p-values for all forecasting horizons are very high, which indicates the null hypothesis may not be rejected. Hansen's test reports the same results. The GARCH(1,1) expanding-window model and the FIGARCH(1,1) expanding-window model are dominated by the best competing model in most of the forecasting horizons. In contrast, the RiskMetrics model seems to be the most preferred. We also consider combining the forecasts generated by averaging the six raw forecasts and a trimmed set of forecasts which calculates the mean of the four forecasts after disregarding the highest and
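The EWMA forecast this record favors is a one-line recursion: next period's variance is a weighted average of the current variance estimate and the latest squared return, with exponentially declining weights on the past. A minimal sketch, using the RiskMetrics convention of lambda = 0.94 for daily data (the paper's chosen decay factor may differ):

```python
def ewma_variance(returns, lam=0.94):
    """One-step-ahead EWMA variance forecast:
    sigma2[t+1] = lam * sigma2[t] + (1 - lam) * r[t]**2.
    Seeded with the first squared return; lam = 0.94 is the
    RiskMetrics convention for daily data."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return sigma2

# Hypothetical daily log returns.
daily_returns = [0.01, -0.02, 0.015]
print(ewma_variance(daily_returns))
```

Unlike GARCH(1,1), the recursion has no constant term and imposes no mean reversion, which is consistent with the abstract's point that EWMA down-weights recent observations less aggressively than a fitted GARCH model does.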

  19. Modelling innovation performance of European regions using multi-output neural networks.

    Science.gov (United States)

    Hajek, Petr; Henriques, Roberto

    2017-01-01

    Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.

  20. Modelling innovation performance of European regions using multi-output neural networks.

    Directory of Open Access Journals (Sweden)

    Petr Hajek

Full Text Available Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.
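The defining feature of the multi-output network in these two records is that all output attributes share one hidden representation, which is what lets the model capture interdependency among the outputs. A minimal forward-pass sketch with toy, hand-picked weights (the input and output names in the comments are hypothetical examples, not the paper's actual attributes):

```python
import math

def mlp_multi_output(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network with several outputs.
    The outputs share the tanh hidden layer, so a change in one input
    moves all interdependent outputs together."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Toy weights: 3 inputs (e.g. R&D intensity, entrepreneurship, education),
# 2 hidden units, 2 outputs (e.g. patent and trademark counts).
W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [[1.0, -0.4], [0.2, 0.9]]
b2 = [0.05, -0.1]
print(mlp_multi_output([0.2, 0.4, 0.6], W1, b1, W2, b2))
```

Training such a network minimizes a joint loss over all outputs at once, in contrast with fitting one separate regression per output attribute, which is the comparison the records draw against traditional regression models.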