WorldWideScience

Sample records for model perform significantly

  1. On the significance of the noise model for the performance of a linear MPC in closed-loop operation

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab

    2016-01-01

    This paper discusses the significance of the noise model for the performance of a Model Predictive Controller when operating in closed loop. The process model is parametrized as a continuous-time (CT) model and the relevant sampled-data filtering and control algorithms are developed. Using CT … models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation…

  2. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is…
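
The 'field' significance idea above — judging whether the collection of local grid-cell tests is jointly significant — is commonly implemented by controlling the false discovery rate across cells. The sketch below is a generic Benjamini-Hochberg step-up procedure over hypothetical local p-values; it is not the paper's exact resampling scheme (which additionally tunes the block length of the local tests), and all names and numbers are illustrative.

```python
import numpy as np

def field_significant(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure over local p-values.

    Returns a boolean mask of locally significant grid cells; the field
    is declared significant when at least one cell survives FDR control.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # largest k with p_(k) <= (k / m) * alpha
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[:k + 1]] = True
    return mask

# toy field: 3 strong local signals among 10 grid cells
local_p = [0.001, 0.002, 0.004, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
mask = field_significant(local_p, alpha=0.05)
```

Because the rejection threshold adapts to the number of local rejections, the procedure remains meaningful under many simultaneous tests, which is the property the abstract emphasises.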

  3. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

    Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter … and from the last aerobic bioreactor upstream to the SST (Garrett/hydraulic method). For model structure uncertainty, two one-dimensional secondary settling tank (1-D SST) models are assessed, including a first-order model (the widely used Takács model), in which the feasibility of using measured … uncertainty of settler models can therefore propagate and add to the uncertainties in the prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based …

  4. Field significance of performance measures in the context of regional climate model evaluation. Part 1: temperature

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Monthly temperature climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. In winter and in most regions in summer, the downscaled distributions are statistically indistinguishable from the observed ones. A systematic cold summer bias occurs in deep river valleys due to overestimated elevations, in coastal areas probably due to enhanced sea breeze circulation, and over large lakes due to the interpolation of water temperatures. Urban areas in concave topography forms have a warm summer bias due to the strong heat islands, not reflected in the observations. WRF-NOAH generates appropriate fine-scale features in the monthly temperature field over regions of complex topography, but over spatially homogeneous areas even small biases can lead to significant deteriorations relative to the driving reanalysis. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the…

  5. The Real World Significance of Performance Prediction

    Science.gov (United States)

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  6. Performance evaluation recommendations of nuclear power plants outdoor significant civil structures earthquake resistance. Performance evaluation examples

    International Nuclear Information System (INIS)

    2005-06-01

    The Japan Society of Civil Engineers has updated performance evaluation recommendations of nuclear power plants outdoor significant civil structures earthquake resistance in June 2005. Based on experimental and analytical considerations, analytical seismic models of soils for underground structures, effects of vertical motions on time-history dynamic analysis and shear fracture of reinforced concretes by cyclic loadings have been incorporated in new recommendations. This document shows outdoor civil structures earthquake resistance and endurance performance evaluation examples based on revised recommendations. (T. Tanaka)

  7. Work domain constraints for modelling surgical performance.

    Science.gov (United States)

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less well-known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of adjustment of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed significant agreement with the model, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among the anatomical structures that the surgeons dwelled on according to their level of expertise.
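
The paper's overall indicator — how closely a procedure's progression through anatomical structures follows a polynomial model — can be sketched as a coefficient of determination for a polynomial fit. The function name, the degree, and the toy data below are assumptions for illustration, not the authors' protocol.

```python
import numpy as np

def progression_fit_r2(event_times, structure_ranks, degree=3):
    """Fit a polynomial of the given degree to the sequence of anatomical
    structures visited over time and return R^2 as a global indicator of
    how well the procedure follows the modelled progression."""
    t = np.asarray(event_times, dtype=float)
    y = np.asarray(structure_ranks, dtype=float)
    coeffs = np.polyfit(t, y, degree)
    residuals = y - np.polyval(coeffs, t)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# hypothetical procedure: structures visited in near-optimal order
times = np.arange(12)
ranks = np.array([0, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 9])
r2 = progression_fit_r2(times, ranks)
```

A procedure that follows the modelled order closely yields an R^2 near 0.9 or above, matching the magnitude of the regression coefficients reported in the abstract.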

  8. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

    Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating that the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best-estimate models is examined. It is concluded that a conservative approach for repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision.

  9. ARMA modeling of stochastic processes in nuclear reactor with significant detection noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.

    1992-01-01

    The theoretical basis of ARMA modelling of stochastic processes in a nuclear reactor was presented in a previous paper, neglecting observational noise. The identification of real reactor data indicated that in some experiments the detection noise is significant. Thus a more rigorous theoretical modelling of stochastic processes in a nuclear reactor is performed. Starting from the fundamental stochastic differential equations of the Langevin type for the interaction of the detector with the neutron field, a new theoretical ARMA model is developed. Preliminary identification results confirm the theoretical expectations. (author)

  10. The European Academy laparoscopic “Suturing Training and Testing” (SUTT) significantly improves surgeons’ performance

    Science.gov (United States)

    Sleiman, Z.; Tanos, V.; Van Belle, Y.; Carvalho, J.L.; Campo, R.

    2015-01-01

    The efficiency of the suturing training and testing (SUTT) model by laparoscopy was evaluated, measuring the suturing skill acquisition of trainee gynecologists at the beginning and at the end of a teaching course. During a workshop organized by the European Academy of Gynecological Surgery (EAGS), 25 participants with three different experience levels in laparoscopy (minor, intermediate and major) performed the 4 exercises of the SUTT model (Ex 1: both hands stitching and continuous suturing; Ex 2: right hand stitching and intracorporeal knotting; Ex 3: left hand stitching and intracorporeal knotting; Ex 4: dominant hand stitching, tissue approximation and intracorporeal knotting). The time needed to perform the exercises was recorded for each trainee and group, and statistical analysis was used to note the differences. Overall, all trainees achieved significant improvement in suturing time (p …). Keywords: psychomotor skills, surgery, teaching, training suturing model. PMID:26977264

  11. Roles and significance of water conducting features for transport models in performance assessment

    International Nuclear Information System (INIS)

    Carrera, J.; Sanchez-Vila, X.; Medina, A.

    1999-01-01

    The term water conducting features (WCF) refers to zones of high hydraulic conductivity. In the context of waste disposal, it is further implied that they are narrow so that the chances of sampling them are low. Yet, they may carry significant amounts of water. Moreover, their relatively small volumetric water content causes solutes to travel fast through them. Water-conducting features are rather common in natural media. The fact that they have become a source of concern in recent years reflects more the increased level of testing and monitoring than any intrinsic property of low-permeability media. Accurate simulations of solute transport require a realistic accounting of water conducting features. Methods to do so are presented, and examples are shown to illustrate these methods. Since a detailed accounting of WCFs will not be possible in actual performance assessments, efforts should be directed towards typification, so as to identify the essential effects of WCFs on solute transport through different types of rocks. Field evidence suggests that, although individual WCFs may be difficult to characterize, their effects are quite predictable. (author)

  12. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
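
The Erlang-type queuing analysis described above can be illustrated with the classical Erlang-C formula for an M/M/c service stage. The numbers below (4 service desks, assumed arrival and service rates) are illustrative, not values from the paper.

```python
import math

def erlang_c(c, a):
    """Erlang-C: probability that an arriving request must wait in an
    M/M/c queue with c servers and offered load a = arrival_rate / service_rate."""
    if a >= c:
        raise ValueError("unstable queue: offered load must be below c")
    idle_terms = sum(a ** k / math.factorial(k) for k in range(c))
    wait_term = (a ** c / math.factorial(c)) * (c / (c - a))
    return wait_term / (idle_terms + wait_term)

def mean_wait(c, arrival_rate, service_rate):
    """Mean queuing delay of an M/M/c stage, in the time unit of the rates."""
    a = arrival_rate / service_rate
    return erlang_c(c, a) / (c * service_rate - arrival_rate)

# e.g. 4 service desks, 30 requests/s arriving, each desk serving 10 requests/s
delay = mean_wait(4, 30.0, 10.0)
```

Sweeping c — the service-desk resources given to each layer — and comparing the resulting delay is the essence of the resource-allocation search the paper describes.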

  14. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  15. Leukoaraiosis significantly worsens driving performance of ordinary older drivers.

    Directory of Open Access Journals (Sweden)

    Kimihiko Nakano

    BACKGROUND: Leukoaraiosis is defined as extracellular space caused mainly by atherosclerotic or demyelinated changes in the brain tissue and is commonly found in the brains of healthy older people. A significant association between leukoaraiosis and traffic crashes was reported in our previous study; however, the reason for this is still unclear. METHOD: This paper presents a comprehensive evaluation of driving performance in ordinary older drivers with leukoaraiosis. First, the degree of leukoaraiosis was examined in 33 participants, who underwent an actual-vehicle driving examination on a standard driving course, and a driver skill rating was also collected while the driver carried out a paced auditory serial addition test, which is a calculating task given verbally. At the same time, a steering entropy method was used to estimate steering operation performance. RESULTS: The experimental results indicated that a normal older driver with leukoaraiosis was readily affected by external disturbances, made more operational errors, and steered less smoothly during driving than one without leukoaraiosis; at the same time, their steering skill significantly deteriorated. CONCLUSIONS: Leukoaraiosis worsens the driving performance of older drivers because of their increased vulnerability to distraction.

  16. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.

  17. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
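
The semi-Markov distinction the authors draw — transitions governed by a probability matrix, but holding times from arbitrary, non-exponential distributions — can be sketched directly. The two-state chain and the Weibull holding times below are assumptions for illustration, not the measured IBM 3081 states.

```python
import random

def simulate_semi_markov(P, sample_holding, state0, t_end, seed=0):
    """Simulate a semi-Markov process: the next state follows transition
    matrix P, while the holding time in the current state is drawn from an
    arbitrary distribution via sample_holding(state, rng)."""
    rng = random.Random(seed)
    t, state = 0.0, state0
    path = [(0.0, state0)]
    while t < t_end:
        t += sample_holding(state, rng)
        # pick the next state from the current row of P
        u, cum = rng.random(), 0.0
        for nxt, prob in enumerate(P[state]):
            cum += prob
            if u < cum:
                state = nxt
                break
        path.append((t, state))
    return path

# Weibull holding times (shape 2): not memoryless, so not plain Markov
P = [[0.0, 1.0], [1.0, 0.0]]
holding = lambda s, rng: rng.weibullvariate(1.0, 2.0)
path = simulate_semi_markov(P, holding, state0=0, t_end=10.0)
```

Replacing `sample_holding` with `rng.expovariate` would collapse this back to an ordinary continuous-time Markov chain, which is exactly the simplification the measured holding times ruled out.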

  18. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on two high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved statistically at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model had more improvement in pass ratings on the second rotation but could not achieve statistical significance. Most of the realism domains were rated higher for the VR model than for the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of VR status.

  19. Performance evaluation recommendations of nuclear power plants outdoor significant civil structures earthquake resistance. Technical documentation

    International Nuclear Information System (INIS)

    2005-06-01

    The Japan Society of Civil Engineers has updated performance evaluation recommendations of nuclear power plants outdoor significant civil structures earthquake resistance in June 2005. Experimental and analytical considerations on the seismic effects evaluation criteria, such as analytical seismic models of soils for underground structures, effects of vertical motions on time-history dynamic analysis and shear fracture of reinforced concretes by cyclic loadings, were shown in this document and incorporated in new recommendations. (T. Tanaka)

  20. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
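
The Rate Monotonic Analysis mentioned above has a well-known closed-form sufficient test (Liu & Layland): n periodic tasks are schedulable under fixed-priority rate-monotonic scheduling if their total utilization does not exceed n(2^(1/n) - 1). A minimal sketch with made-up task sets:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for Rate Monotonic scheduling.

    tasks: list of (execution_time, period) pairs in the same time unit.
    Returns (passes_bound, utilization).  The bound is sufficient but not
    necessary, so a set failing it may still be schedulable (it then needs
    an exact response-time analysis).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound, utilization

# hypothetical periodic workload: (cost, period)
ok, u = rm_schedulable([(1.0, 4.0), (1.0, 5.0), (2.0, 10.0)])
```

The three-task bound is about 0.78, so the example workload at utilization 0.65 passes; a heavier set above the bound would be flagged for deeper (dynamic or exact) analysis, mirroring the static-versus-dynamic trade-off the paper discusses.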

  1. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
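
Of the alternatives the paper evaluates, the parametric bootstrap is the most generic: simulate data under the null model, refit, and count how often the simulated statistic is as extreme as the observed one. The sketch below applies that idea to the slope of a simple linear model — not lme4's mixed-model machinery; the helper name and the toy data are assumptions.

```python
import random

def parametric_bootstrap_p(y, x, n_boot=500, seed=1):
    """Parametric bootstrap p-value for the slope of y = a + b*x + noise.

    Fit the null model (b = 0), simulate responses from it, refit the
    slope each time, and count how often |simulated slope| reaches the
    observed |slope|.  The +1 correction keeps the p-value positive.
    """
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope_obs = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    sd_null = (sum((yi - my) ** 2 for yi in y) / (n - 1)) ** 0.5
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_boot):
        yb = [my + rng.gauss(0.0, sd_null) for _ in range(n)]
        myb = sum(yb) / n
        slope_b = sum((xi - mx) * (yi - myb) for xi, yi in zip(x, yb)) / sxx
        if abs(slope_b) >= abs(slope_obs):
            extreme += 1
    return (extreme + 1) / (n_boot + 1)

# a strong linear signal should yield a very small p-value
xs = list(range(20))
ys = [2.0 * xi for xi in xs]
p = parametric_bootstrap_p(ys, xs)
```

The same simulate-refit-count loop carries over to mixed models; the cost is refitting the model n_boot times, which is why the degrees-of-freedom approximations the paper recommends are attractive in practice.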

  2. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics.

  3. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that might also affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the PSFs considered most influential by consensus. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  4. Performance evaluation recommendations and manuals of nuclear power plants outdoor significant civil structures earthquake resistance

    International Nuclear Information System (INIS)

    2005-06-01

    Performance evaluation recommendations and manuals of nuclear power plants outdoor significant civil structures earthquake resistance have been updated in June 2005 by the Japan Society of Civil Engineers. Based on experimental and analytical considerations on the recommendations of May 2002, analytical seismic models of soils for underground structures, effects of vertical motions on time-history dynamic analysis and shear fracture of reinforced concretes by cyclic loadings have been evaluated and incorporated in new recommendations. (T. Tanaka)

  5. Modeling of environmentally significant interfaces: Two case studies

    International Nuclear Information System (INIS)

    Williford, R.E.

    2006-01-01

    When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of the various physical or chemical phenomena underlying the overall behavior, thereby enabling the scientist to control and manipulate these phenomena and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems.

  6. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    As port performance becomes increasingly correlated with national competitiveness, the issue of port performance evaluation has gained significance. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port’s performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.
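
A stochastic microsimulation of the kind described — berths as servers, ships arriving with sampled inter-arrival and service times — can be sketched in a few lines. The paper's model is built in MATLAB/Simulink; this Python version with exponential variates and illustrative rates is only an analogy.

```python
import heapq
import random

def simulate_port(n_berths, arrival_rate, service_rate, n_ships, seed=42):
    """Discrete-event microsimulation of berthing as an M/M/c queue.

    Each ship samples an exponential inter-arrival and service time; the
    heap tracks when each berth next becomes free.  Returns the average
    waiting-for-berth time over the run.
    """
    rng = random.Random(seed)
    t = 0.0
    berth_free = [0.0] * n_berths
    heapq.heapify(berth_free)
    total_wait = 0.0
    for _ in range(n_ships):
        t += rng.expovariate(arrival_rate)   # next ship arrives
        free_at = heapq.heappop(berth_free)  # earliest-available berth
        start = max(t, free_at)              # wait only if all berths busy
        total_wait += start - t
        heapq.heappush(berth_free, start + rng.expovariate(service_rate))
    return total_wait / n_ships

# ample capacity vs. a single congested berth (illustrative rates)
w_light = simulate_port(50, 1.0, 1.0, 2000)
w_heavy = simulate_port(1, 0.9, 1.0, 2000)
```

Replacing the exponential variates with distributions fitted to recorded arrivals and handling times is what turns this toy into the "derived from actual data" model the abstract describes.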

  7. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
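    The pole statistic described above can be sketched in a few lines. The signal below is a synthetic stand-in rather than real SEMG data; the 5th-order Yule-Walker fit and the mean magnitude of the AR poles follow the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an SEMG recording (the study used erector spinae SEMG)
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

def ar_pole_mean_magnitude(sig, order=5):
    """Fit an AR(order) model by Yule-Walker; return the mean magnitude of its poles."""
    sig = sig - sig.mean()
    m = len(sig)
    r = np.array([sig[:m - k] @ sig[k:] / m for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                   # AR coefficients a_1..a_p
    poles = np.roots(np.concatenate(([1.0], -a)))   # roots of z^p - a_1 z^(p-1) - ...
    return np.abs(poles).mean()

print(f"mean |pole|: {ar_pole_mean_magnitude(x):.3f}")
```

    A Yule-Walker fit always yields a stable model, so the statistic lies between 0 and 1; tracking it repetition by repetition is what allows the early prediction of failure described above.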

  8. Confirming the Value of Swimming-Performance Models for Adolescents.

    Science.gov (United States)

    Dormehl, Shilo J; Robertson, Samuel J; Barker, Alan R; Williams, Craig A

    2017-10-01

    To evaluate the efficacy of existing performance models in assessing the progression of male and female adolescent swimmers through a quantitative and qualitative mixed-methods approach. Fourteen published models were tested using retrospective data from an independent sample of Dutch junior national-level swimmers from when they were 12-18 y of age (n = 13). The degree of association by Pearson correlations was compared between the calculated differences from the models and quadratic functions derived from the Dutch junior national qualifying times. Swimmers were grouped based on their differences from the models and compared with their swimming histories, which were extracted from questionnaires and follow-up interviews. Correlations of the deviations from both the models and the quadratic functions derived from the Dutch qualifying times were all significant (P < .05) except for the 100-m breaststroke and butterfly and the 200-m freestyle for females. Motivation appeared to be synonymous with higher-level career performance. This mixed-methods approach helped confirm the validity of the models, which were found to be applicable to adolescent swimmers at all levels, allowing coaches to track performance and set goals. The value of the models in being able to account for the expected performance gains during adolescence enables quantification of peripheral factors that could affect performance.
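    The benchmark comparison described above can be sketched with a quadratic curve fitted to qualifying times and a swimmer's deviations from it. All numbers below are invented for illustration; they are not the Dutch data.

```python
import numpy as np

# Illustrative junior qualifying times (s) for ages 12-18 (hypothetical values)
age = np.array([12, 13, 14, 15, 16, 17, 18], dtype=float)
qt = np.array([62.0, 59.5, 57.6, 56.2, 55.3, 54.8, 54.6])

# Quadratic benchmark curve fitted to the qualifying times
coeffs = np.polyfit(age, qt, deg=2)
curve = np.polyval(coeffs, age)

# One swimmer's times and deviation from the benchmark at each age
swimmer = np.array([63.1, 60.0, 58.0, 56.0, 55.0, 54.5, 54.0])
deviation = swimmer - curve

r = np.corrcoef(swimmer, curve)[0, 1]
print(f"Pearson r vs benchmark: {r:.3f}")
```

    Grouping swimmers by the sign and size of such deviations is what allows them to be compared against their training histories.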

  9. A multilateral modelling of Youth Soccer Performance Index (YSPI)

    Science.gov (United States)

    Bisyri Husin Musawi Maliki, Ahmad; Razali Abdullah, Mohamad; Juahir, Hafizan; Abdullah, Farhana; Ain Shahirah Abdullah, Nurul; Muazu Musa, Rabiu; Musliha Mat-Rasid, Siti; Adnan, Aleesha; Azura Kosni, Norlaila; Muhamad, Wan Siti Amalina Wan; Afiqah Mohamad Nasir, Nur

    2018-04-01

    This study aims to identify the most dominant factors influencing the performance of soccer players and to predict group performance for soccer players. A total of 184 youth soccer players from a Malaysian sport school and six soccer academies served as respondents of the study. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were computed to identify the most dominant factors, reducing the initial 26 parameters using the recommended factor loading of >0.5. Soccer performance was then predicted by a regression model. CFA revealed that sit and reach, vertical jump, VO2max, age, weight, height, sitting height, calf circumference (cc), medial upper arm circumference (muac), maturation, bicep, triceps, subscapular, suprailiac, and 5M, 10M, and 20M speed were the most dominant factors. Further index analysis formed the Youth Soccer Performance Index (YSPI), categorizing players into three groups, namely high, moderate, and low. The regression model for this study was significant at p < 0.001 with R² = 0.8222, meaning the model explains 82% of the variation in the whole set of variables. The parameters contributing significantly to the prediction of YSPI are discussed. In conclusion, the precision of prediction models that integrate multilateral factors is promising for identifying potential soccer players and can hopefully help create more competitive soccer games.

  10. A Procurement Performance Model for Construction Frameworks

    Directory of Open Access Journals (Sweden)

    Terence Y M Lam

    2015-07-01

    Full Text Available Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer term relationships between clients and suppliers in order to improve project outcomes. Research undertaken into highways maintenance within a major county council has confirmed that such collaborative procurement methods can improve the time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers in the whole project delivery process, including pre- and post-contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central tendency statistics from the questionnaires, as well as content analysis of the interview transcripts, were conducted. It was confirmed that long term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively manage contractor performance in a collaborative manner through procurement measures, including the use of longer term contracts and KPIs, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering. It is suggested that building-based projects or other projects that share similar characteristics are grouped together and used for further research of the phenomena discovered.

  11. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the big gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realization of the collected zakat fund, which has reached only three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and the quality of zakat management organizations have to be improved. This means a performance evaluation model is needed as a tool to evaluate performance. The aim is to construct a performance evaluation model that can be implemented by zakat management organizations. Organizational performance evaluated with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO and TQM. This research uses an explanatory method and SEM/PLS as the data analysis tool. The data collecting techniques are questionnaires, interviews and documentation. The result of this research shows that PI, BO and TQM, simultaneously and partially, have a significant effect on organizational performance.

  12. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    Science.gov (United States)

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. To this end, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    Science.gov (United States)

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    The distributed delay model has been introduced to replace the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model with particular focus on deterministic identifiability of the model in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (−2LL) and running times were used as the major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (the WBC count). The ν estimate was 1.46 with a CV of 16.1%, compared to ν = 3 for the classic model. The difference of 6.78 in −2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance was only modestly better. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than the classic model's, at the cost of a significantly longer running time.
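    The stated equivalence for integer ν can be checked numerically: the impulse response of ν identical transit compartments in series with rate k is exactly the gamma (Erlang) delay kernel. The parameter values below are arbitrary.

```python
import numpy as np
from math import factorial
from scipy.stats import gamma

k = 2.0    # transit rate constant (1/h); value is arbitrary
nu = 3     # integer shape -> classic model with nu transit compartments
t = np.linspace(0.01, 10.0, 500)

# Impulse response of nu identical transit compartments in series
transit = k**nu * t**(nu - 1) * np.exp(-k * t) / factorial(nu - 1)

# Gamma delay kernel of the distributed delay model (shape nu, rate k)
g = gamma.pdf(t, a=nu, scale=1.0 / k)

print(bool(np.allclose(transit, g)))  # True: the models coincide for integer nu
```

    For non-integer ν, such as the 1.46 estimated here, only the gamma kernel is defined, which is precisely what the distributed delay model adds over the classic one.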

  14. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for the construction of a controller for balancing power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement in the performance of the MPC compared to the current......

  15. Human performance modeling for system of systems analytics: soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations) and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  16. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  17. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  18. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  19. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces the Bayesian Network (BN) for flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two modes of inference for the BN, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions in human error management in aviation safety.
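    The leaky noisy-OR gate, the two-state special case of the leaky noisy MAX model mentioned above, can be sketched as follows. The causes and link probabilities are invented for illustration; they are not the paper's elicited values.

```python
# Leaky noisy-OR: the binary special case of the leaky noisy MAX gate.
# Causes and link probabilities below are illustrative assumptions.
link = {"fatigue": 0.6, "poor_crm": 0.4, "bad_weather": 0.3}
leak = 0.05   # probability the error occurs with no modeled cause active

def p_effect(active_causes):
    """P(effect = 1 | given set of active parent causes)."""
    q = 1.0 - leak                    # start from the leak term
    for cause in active_causes:
        q *= 1.0 - link[cause]        # each active cause independently fails to inhibit
    return 1.0 - q

print(round(p_effect([]), 3))                          # 0.05 (leak only)
print(round(p_effect(["fatigue", "bad_weather"]), 3))  # 0.734
```

    The appeal of such canonical gates is that a node with n parents needs only n link probabilities plus a leak, instead of a full conditional probability table with 2^n entries.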

  20. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
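    GBLUP itself amounts to ridge-type prediction with a genomic relationship matrix. The sketch below uses simulated markers and, for simplicity, the simulated true variance components; it is not the rye data set, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n lines x m SNP markers coded 0/1/2 (illustrative, not the rye data)
n, m = 200, 500
M = rng.integers(0, 3, size=(n, m)).astype(float)
p = M.mean(axis=0) / 2.0                       # allele frequencies
Z = M - 2 * p                                  # centred marker matrix
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))        # VanRaden genomic relationship matrix

u_true = Z @ rng.normal(0, 0.05, size=m)       # simulated breeding values
y = u_true + rng.normal(0, 1.0, size=n)        # phenotypes

# GBLUP: u_hat = G (G + lambda I)^-1 (y - mean), lambda = sigma_e^2 / sigma_u^2
# (here lambda is taken from the simulated truth instead of being estimated)
lam = np.var(y - u_true) / np.var(u_true)
u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())

acc = np.corrcoef(u_hat, u_true)[0, 1]
print(f"prediction accuracy: {acc:.2f}")
```

    Training on aggregated cycles, as recommended above, corresponds to enlarging y and G with phenotyped ancestors so that new selection candidates are predicted through their genomic relationship to them.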

  1. Performing arts medicine: A research model for South Africa

    Directory of Open Access Journals (Sweden)

    Karendra Devroop

    2014-11-01

    Full Text Available Performing arts medicine has developed into a highly specialised field over the past three decades. The Performing Arts Medical Association (PAMA) has been the leading proponent of this unique and innovative field, with ground-breaking research studies, symposia, conferences and journals dedicated specifically to the medical problems of performing artists. Similar to sports medicine, performing arts medicine caters specifically for the medical problems of performing artists, including musicians and dancers. In South Africa there is a tremendous lack of knowledge of the field and, unlike our international counterparts, we do not have specialised clinical settings that cater for the medical problems of performing artists. There is also a tremendous lack of research on the performance-related medical problems of performing artists in South Africa. Accordingly, the purpose of this paper is to present an overview of the field of performing arts medicine, highlight some of the significant findings from recent research studies and present a model for conducting research in the field of performing arts medicine. It is hoped that this research model will lead to increased research on the medical problems of performing artists in South Africa.

  2. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    African Journals Online (AJOL)

    A Fuzzy Inference System for predicting the performance of a steam turbine. Journal of Applied Sciences and Environmental Management.

  3. The relationship between quality management practices and organisational performance: A structural equation modelling approach

    Science.gov (United States)

    Jamaluddin, Z.; Razali, A. M.; Mustafa, Z.

    2015-02-01

    The purpose of this paper is to examine the relationship between quality management practices (QMPs) and organisational performance in the manufacturing industry in Malaysia. In this study, a QMPs and organisational performance framework is developed according to a comprehensive literature review covering hard and soft quality factors in the manufacturing process environment. A total of 11 hypotheses have been put forward to test the relationships amongst the six constructs, which are management commitment, training, process management, quality tools, continuous improvement and organisational performance. The model is analysed using Structural Equation Modeling (SEM) with AMOS software version 18.0 using Maximum Likelihood (ML) estimation. A total of 480 questionnaires were distributed, and 210 questionnaires were valid for analysis. The fit statistics of the resulting QMPs and organisational performance model for the manufacturing industry are admissible. It was found that management commitment has a significant impact on training and process management. Similarly, training had a significant effect on quality tools, process management and continuous improvement. Furthermore, quality tools have a significant influence on process management and continuous improvement. Likewise, process management also has a significant impact on continuous improvement, and continuous improvement has a significant influence on organisational performance. However, the results also show no significant relationship between management commitment and quality tools, or between management commitment and continuous improvement. The results of the study can be used by managers to prioritize the implementation of QMPs. For instance, those practices that are found to have a positive impact on organisational performance can be recommended to

  4. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    Science.gov (United States)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products; the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation period. Still there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
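    The Latin Hypercube Sampling step used above for parameter uncertainty can be sketched with SciPy's `qmc` module. The parameter names and ranges below are hypothetical stand-ins, not PCR-GLOBWB's actual parameters.

```python
from scipy.stats import qmc

# Hypothetical parameter ranges for a hydrologic model (illustrative only)
bounds = {
    "soil_depth_scale": (0.5, 2.0),
    "ksat_scale":       (0.1, 10.0),
    "routing_coeff":    (0.2, 0.9),
}
names = list(bounds)
lo = [bounds[n][0] for n in names]
hi = [bounds[n][1] for n in names]

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit = sampler.random(n=100)          # 100 stratified samples in [0, 1)^3
samples = qmc.scale(unit, lo, hi)     # rescale to the parameter bounds

# Each row is one parameter set to run through the model
print(samples.shape)  # (100, 3)
```

    Unlike plain random sampling, LHS places exactly one sample in each of the 100 equal-probability strata per dimension, which is why it covers parameter space efficiently with few model runs.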

  5. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem elements as well as their system-level elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  6. Some concepts of model uncertainty for performance assessments of nuclear waste repositories

    International Nuclear Information System (INIS)

    Eisenberg, N.A.; Sagar, B.; Wittmeyer, G.W.

    1994-01-01

    Models of the performance of nuclear waste repositories will be central to making regulatory decisions regarding the safety of such facilities. The conceptual model of repository performance is represented by mathematical relationships, which are usually implemented as one or more computer codes. A geologic system may allow many conceptual models which are consistent with the observations. These conceptual models may or may not have the same mathematical representation. Experiences in modeling the performance of a waste repository (which is, in part, a geologic system) show that this non-uniqueness of conceptual models is a significant source of model uncertainty. At the same time, each conceptual model has its own set of parameters and, usually, it is not possible to completely separate model uncertainty from parameter uncertainty for the repository system. Issues related to the origin of model uncertainty, its relation to parameter uncertainty, and its incorporation in safety assessments are discussed from a broad regulatory perspective. An extended example in which these issues are explored numerically is also provided

  7. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  8. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  9. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model by using an example taken from a management study.
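    A hold-out validation of a logistic regression with standard discrimination and calibration measures can be sketched as follows. The data are synthetic, not the paper's management study, and the chosen metrics (AUC and Brier score) are common examples rather than the paper's specific measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(1)

# Synthetic binary-outcome data (illustrative only)
X = rng.normal(size=(1000, 4))
logit = X @ np.array([1.2, -0.8, 0.5, 0.0]) - 0.3
y = rng.random(1000) < 1 / (1 + np.exp(-logit))

# Hold-out validation: fit on one split, measure performance on the other
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

print(f"AUC (discrimination): {roc_auc_score(y_te, p):.2f}")
print(f"Brier score (calibration + accuracy): {brier_score_loss(y_te, p):.3f}")
```

    Evaluating on data the model never saw is the essential point of any validation procedure: in-sample performance measures are optimistically biased.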

  10. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.
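    The core idea of time-reversed UWB, prefiltering at the transmitter with the time-reversed channel impulse response so that the effective channel becomes the channel autocorrelation with a strong central peak, can be sketched as follows. The channel taps are synthetic, not drawn from the IEEE 802.15.3a model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical real-valued multipath channel: exponentially decaying random taps
L = 32
h = rng.normal(size=L) * np.exp(-0.15 * np.arange(L))

# Time-reversed prefilter (conjugate as well for a complex channel)
prefilter = h[::-1]

# Effective channel seen by the receiver = prefilter convolved with the channel,
# i.e. the autocorrelation of h: energy focuses at the centre tap
eff = np.convolve(prefilter, h)
center = L - 1

sidelobe = np.max(np.abs(np.delete(eff, center)))
print(f"focusing gain over largest sidelobe: {eff[center] / sidelobe:.1f}x")
```

    This temporal focusing is what lets a time-reversed receiver collapse to a simple one-tap detector, shifting the complexity of multipath combining to the transmitter.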

  11. Delay model and performance testing for FPGA carry chain TDC

    International Nuclear Information System (INIS)

Kang Xiaowen; Liu Yaqiang; Cui Junjian; Yang Zhangcan; Jin Yongjie

    2011-01-01

Time-of-flight (TOF) information would improve the performance of PET (positron emission tomography), and TDC design is a key technique for obtaining it. A delay model for a carry-chain TDC is proposed. By changing the significant delay parameters of the model, the paper compares the resulting differences in TDC performance, and finally realizes a Time-to-Digital Converter (TDC) based on the carry-chain method using an FPGA (EP2C20Q240C8N), with a 69 ps LSB and a maximum error below 2 LSB. This result meets the TOF demand. A coaxial-cable measuring method for TDC testing, requiring no high-precision test equipment, is also proposed. (authors)
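A minimal sketch of what the reported 69 ps LSB means for an ideal quantiser (this is only the quantisation bound; the paper's sub-2-LSB figure also covers carry-chain nonlinearity):

```python
LSB_PS = 69  # bin width of the carry-chain TDC, in picoseconds (from the record)

def tdc_to_time(code):
    """Convert a raw carry-chain tap count to a time estimate in ps.
    The half-LSB offset centres the estimate within its quantisation bin."""
    return (code + 0.5) * LSB_PS

def quantisation_error(true_ps):
    """Absolute error when a true interval is quantised by an ideal TDC."""
    code = int(true_ps // LSB_PS)
    return abs(tdc_to_time(code) - true_ps)

# An ideal quantiser never errs by more than LSB/2 (34.5 ps here).
worst = max(quantisation_error(t) for t in range(0, 10000))
print(worst <= LSB_PS / 2)  # True
```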

  12. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics, including throughput, latency, the stability condition, CPU utilization by interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
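A much-simplified receive-livelock sketch (invented parameters, and far cruder than the paper's Markov and queueing formulations) illustrates why throughput collapses once interrupt handling saturates the CPU:

```python
def throughput(lam, t_int=5e-6, t_proto=20e-6):
    """Illustrative interrupt-driven receive model: interrupts pre-empt
    everything, so the CPU fraction left for protocol processing is
    1 - lam * t_int, and deliverable throughput is capped by it.
    lam: packet arrival rate (pkt/s); t_int, t_proto: per-packet costs (s)."""
    cpu_for_interrupts = min(1.0, lam * t_int)
    cpu_left = 1.0 - cpu_for_interrupts
    return min(lam, cpu_left / t_proto)

# Throughput tracks offered load, peaks, then collapses toward zero as
# interrupt handling consumes the whole CPU (receive livelock).
for lam in (10_000, 40_000, 100_000, 200_000):
    print(lam, round(throughput(lam)))
```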

  13. Structural mode significance using INCA. [Interactive Controls Analysis computer program

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  14. Modeling the Performance of Water-Zeolite 13X Adsorption Heat Pump

    Science.gov (United States)

    Kowalska, Kinga; Ambrożek, Bogdan

    2017-12-01

The dynamic performance of a cylindrical double-tube adsorption heat pump is numerically analysed using a non-equilibrium model, which takes into account both heat and mass transfer processes. The model includes conservation equations for: heat transfer in heating/cooling fluids, heat transfer in the metal tube, and heat and mass transfer in the adsorbent. The mathematical model is numerically solved using the method of lines. Numerical simulations are performed for the system water-zeolite 13X, chosen as the working pair. The effect of the evaporator and condenser temperatures on the adsorption and desorption kinetics is examined. The results of the numerical investigation show that both of these parameters have a significant effect on the adsorption heat pump performance. Based on computer simulation results, the values of the coefficients of performance for heating and cooling are calculated. The results show that adsorption heat pumps have relatively low efficiency compared to other heat pumps. The value of the coefficient of performance for heating is higher than for cooling.
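The method of lines used above (discretise in space, then integrate the resulting ODE system in time) can be illustrated on a 1-D heat equation far simpler than the record's coupled heat-and-mass-transfer model; all values below are invented:

```python
def method_of_lines_heat(alpha=1e-4, L=0.1, n=21, dt=0.01, steps=20000):
    """Discretise dT/dt = alpha * d2T/dx2 on n grid points, then step the
    resulting ODE system with explicit Euler (method of lines).
    Boundary temperatures are held at 100 and 0 degrees C."""
    dx = L / (n - 1)
    T = [0.0] * n
    T[0] = 100.0  # hot end; T[-1] stays at 0.0
    for _ in range(steps):
        new = T[:]
        for i in range(1, n - 1):
            new[i] = T[i] + dt * alpha * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        T = new
    return T

T = method_of_lines_heat()
print(round(T[10], 1))  # 50.0 -- the midpoint of the linear steady state
```

The explicit step size satisfies the stability bound dt <= dx^2 / (2 * alpha); an implicit or adaptive integrator would be used for a stiffer system like the adsorption model.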

  15. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    emerging technology for various applications, there is a need for Li-S battery performance model; however, developing such models represents a challenging task due to batteries' complex ongoing chemical reactions. Therefore, the literature review was performed to summarize electrical circuit models (ECMs......) used for modeling the performance behavior of Li-S batteries. The studied Li-S pouch cell was tested in the laboratory in order to parametrize four basic ECM topologies. These topologies were compared by analyzing their voltage estimation accuracy values, which were obtained for different battery...... current profiles. Based on these results, the 3 R-C ECM was chosen and the Li-S battery cell discharging performance model with current dependent parameters was derived and validated....
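The equivalent-circuit idea can be sketched as follows. The record selects a 3 R-C network; a single R-C branch with invented parameters (OCV, resistances, capacitance) keeps the structure clear:

```python
import math

def simulate_ecm(i_load, dt=1.0, ocv=2.1, r0=0.05, r1=0.03, c1=500.0):
    """Single R-C Thevenin equivalent-circuit sketch. Terminal voltage:
        v = OCV - i * R0 - v1
    with the R-C branch voltage updated by the exact discrete solution
        v1' = a * v1 + R1 * (1 - a) * i,  a = exp(-dt / (R1 * C1))."""
    a = math.exp(-dt / (r1 * c1))
    v1 = 0.0
    out = []
    for i in i_load:
        v1 = a * v1 + r1 * (1.0 - a) * i
        out.append(ocv - i * r0 - v1)
    return out

v = simulate_ecm([1.0] * 60)  # 60 s constant 1 A discharge
print(round(v[0], 4), round(v[-1], 4))
```

A full performance model would additionally make OCV and the R-C parameters functions of depth of discharge and current, as the record describes.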

  16. The performance of FLake in the Met Office Unified Model

    Directory of Open Access Journals (Sweden)

    Gabriel Gerard Rooney

    2013-12-01

    Full Text Available We present results from the coupling of FLake to the Met Office Unified Model (MetUM. The coupling and initialisation are first described, and the results of testing the coupled model in local and global model configurations are presented. These show that FLake has a small statistical impact on screen temperature, but has the potential to modify the weather in the vicinity of areas of significant inland water. Examination of FLake lake ice has revealed that the behaviour of lakes in the coupled model is unrealistic in some areas of significant sub-grid orography. Tests of various modifications to ameliorate this behaviour are presented. The results indicate which of the possible model changes best improve the annual cycle of lake ice. As FLake has been developed and tuned entirely outside the Unified Model system, these results can be interpreted as a useful objective measure of the performance of the Unified Model in terms of its near-surface characteristics.

  17. Ion thruster performance model

    International Nuclear Information System (INIS)

    Brophy, J.R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr, and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature
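The central algebraic relation of a model formulated this way fits in one line: the energy cost per beam ion is the plasma ion production cost divided by the fraction of plasma ions extracted into the beam. The values below are illustrative, not from the record:

```python
def beam_ion_energy_cost(eps_plasma, f_beam):
    """Beam ion energy cost (eV/ion) from the average plasma ion
    production cost eps_plasma and the extracted fraction f_beam."""
    return eps_plasma / f_beam

# Halving the extracted fraction doubles the cost per beam ion.
print(beam_ion_energy_cost(150.0, 0.5))   # 300.0
print(beam_ion_energy_cost(150.0, 0.25))  # 600.0
```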

  18. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  19. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  20. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  1. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort including 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score, as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and reduction of the risk of over-detection of insignificant prostate cancer at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.

  2. Developing Performance Management in State Government: An Exploratory Model for Danish State Institutions

    DEFF Research Database (Denmark)

    Nielsen, Steen; Rikhardsson, Pall M.

    . The question remains how and if accounting departments in central government can deal with these challenges. This exploratory study proposes and tests a model depicting different areas, elements and characteristics within a government accounting departments and their association with a perceived performance...... management model. The findings are built on a questionnaire study of 45 high level accounting officers in central governmental institutions. Our statistical model consists of five explored constructs: improvements; initiatives and reforms, incentives and contracts, the use of management accounting practices......, and cost allocations and their relations to performance management. Findings based on structural equation modelling and partial least squares regression (PLS) indicates a positive effect on the latent depending variable, called performance management results. The models/theories explain a significant...

  3. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  4. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  5. Significance of size dependent and material structure coupling on the characteristics and performance of nanocrystalline micro/nano gyroscopes

    Science.gov (United States)

    Larkin, K.; Ghommem, M.; Abdelkefi, A.

    2018-05-01

Capacitive-based sensing microelectromechanical (MEMS) and nanoelectromechanical (NEMS) gyroscopes have significant advantages over conventional gyroscopes, such as low power consumption, batch fabrication, and possible integration with electronic circuits. However, inadequacies in the modeling of these inertial sensors have presented issues of reliability and functionality of micro-/nano-scale gyroscopes. In this work, a micromechanical model is developed to represent the unique microstructure of nanocrystalline materials and simulate the response of a micro-/nano-gyroscope comprising an electrostatically-actuated cantilever beam with a tip mass at the free end. Couple stress and surface elasticity theories are integrated into the classical Euler-Bernoulli beam model in order to derive a size-dependent model. This model is then used to investigate the influence of size-dependent effects on the static pull-in instability, the natural frequencies and the performance output of gyroscopes as the scale decreases from micro- to nano-scale. The simulation results show significant changes in the static pull-in voltage and the natural frequency as the scale of the system is decreased. However, the differential frequency between the two vibration modes of the gyroscope is observed to drastically decrease as the size of the gyroscope is reduced. As such, the frequency-based operation mode may not be an efficient strategy for nano-gyroscopes. The results show that a strong coupling between the surface elasticity and material structure takes place when smaller grain sizes and higher void percentages are considered.
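The classical Euler-Bernoulli baseline that the size-dependent model extends can be sketched as follows. Silicon properties and dimensions are illustrative, and the couple-stress and surface-elasticity corrections central to the record are omitted:

```python
import math

def cantilever_f1(E, rho, L, w, t):
    """Fundamental natural frequency (Hz) of a classical Euler-Bernoulli
    cantilever with rectangular cross-section w x t, length L,
    Young's modulus E and density rho."""
    I = w * t ** 3 / 12.0   # second moment of area
    A = w * t               # cross-sectional area
    lam1 = 1.8751           # first-mode eigenvalue of a cantilever
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Shrinking every dimension by 10x raises the natural frequency 10x,
# one reason micro- and nano-gyroscope dynamics differ so strongly.
micro = cantilever_f1(169e9, 2330.0, 200e-6, 20e-6, 2e-6)   # silicon microbeam
nano = cantilever_f1(169e9, 2330.0, 20e-6, 2e-6, 0.2e-6)
print(round(nano / micro, 3))
```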

  6. Modeling the Performance of Water-Zeolite 13X Adsorption Heat Pump

    Directory of Open Access Journals (Sweden)

    Kowalska Kinga

    2017-12-01

Full Text Available The dynamic performance of a cylindrical double-tube adsorption heat pump is numerically analysed using a non-equilibrium model, which takes into account both heat and mass transfer processes. The model includes conservation equations for: heat transfer in heating/cooling fluids, heat transfer in the metal tube, and heat and mass transfer in the adsorbent. The mathematical model is numerically solved using the method of lines. Numerical simulations are performed for the system water-zeolite 13X, chosen as the working pair. The effect of the evaporator and condenser temperatures on the adsorption and desorption kinetics is examined. The results of the numerical investigation show that both of these parameters have a significant effect on the adsorption heat pump performance. Based on computer simulation results, the values of the coefficients of performance for heating and cooling are calculated. The results show that adsorption heat pumps have relatively low efficiency compared to other heat pumps. The value of the coefficient of performance for heating is higher than for cooling.

  7. Influence of Wind Model Performance on Wave Forecasts of the Naval Oceanographic Office

    Science.gov (United States)

    Gay, P. S.; Edwards, K. L.

    2017-12-01

    Significant discrepancies between the Naval Oceanographic Office's significant wave height (SWH) predictions and observations have been noted in some model domains. The goal of this study is to evaluate these discrepancies and identify to what extent inaccuracies in the wind predictions may explain inaccuracies in SWH predictions. A one-year time series of data is evaluated at various locations in Southern California and eastern Florida. Correlations are generally quite good, ranging from 73% at Pendleton to 88% at both Santa Barbara, California, and Cape Canaveral, Florida. Correlations for month-long periods off Southern California drop off significantly in late spring through early autumn - less so off eastern Florida - likely due to weaker local wind seas and generally smaller SWH in addition to the influence of remotely-generated swell, which may not propagate accurately into and through the wave models. The results of this study suggest that it is likely that a change in meteorological and/or oceanographic conditions explains the change in model performance, partially as a result of a seasonal reduction in wind model performance in the summer months.

  8. Modeling of high-density U-MO dispersion fuel plate performance

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Rest, J.; Snelgrove, J.L.

    2002-01-01

    Results from postirradiation examinations (PIE) of highly loaded U-Mo/Al dispersion fuel plates over the past several years have shown that the interaction between the metallic fuel particles and the matrix aluminum can be extensive, reducing the volume of the high-conductivity matrix phase and producing a significant volume of low-conductivity reaction-product phase. This phenomenon results in a significant decrease in fuel meat thermal conductivity during irradiation. PIE has further shown that the fuel-matrix interaction rate is a sensitive function of irradiation temperature. The interplay between fuel temperature and fuel-matrix interaction makes the development of a simple empirical correlation between the two difficult. For this reason a comprehensive thermal model has been developed to calculate temperatures throughout the fuel plate over its lifetime, taking into account the changing volume fractions of fuel, matrix and reaction-product phases within the fuel meat owing to fuel-matrix interaction; this thermal model has been incorporated into the dispersion fuel performance code designated PLATE. Other phenomena important to fuel thermal performance that are also treated in PLATE include: gas generation and swelling in the fuel and reaction-product phases, incorporation of matrix aluminum into solid solution with the unreacted metallic fuel particles, matrix extrusion resulting from fuel swelling, and cladding corrosion. The phenomena modeled also make possible a prediction of fuel plate swelling. This paper presents a description of the models and empirical correlations employed within PLATE as well as validation of code predictions against fuel performance data for U-Mo experimental fuel plates from the RERTR-3 irradiation test. (author)

  9. Performance of different radiotherapy workload models

    International Nuclear Information System (INIS)

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models with a mean |observed - predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would have uses for patient scheduling, and describing workloads and case complexity.
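The accuracy statistics used to rank the models (mean absolute deviation |observed - predicted| and the share of treatment times predicted within 5 min) are easy to sketch; the times below are invented:

```python
def accuracy_stats(observed, predicted, tol=5.0):
    """Mean absolute error and the fraction of treatments whose predicted
    time falls within `tol` minutes of the observed time."""
    errors = [abs(o - p) for o, p in zip(observed, predicted)]
    mae = sum(errors) / len(errors)
    within = sum(e <= tol for e in errors) / len(errors)
    return mae, within

obs = [12.0, 15.5, 9.0, 22.0, 11.0]    # observed treatment times, minutes
pred = [10.5, 14.0, 12.0, 30.0, 11.5]  # a model's predictions, minutes
mae, within = accuracy_stats(obs, pred)
print(round(mae, 2), within)  # 2.9 0.8
```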

  10. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).
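Two standard goodness-of-fit statistics for this kind of time-series model evaluation, RMSE and Nash-Sutcliffe efficiency, can be sketched as follows (data invented):

```python
def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the observed mean (a standard skill score
    for hydrologic models such as HSPF and SWMM)."""
    mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den

obs = [3.0, 5.0, 9.0, 4.0, 2.0]  # observed flows, invented
sim = [2.5, 5.5, 8.0, 4.5, 2.5]  # simulated flows, invented
print(round(rmse(obs, sim), 3))
print(round(nash_sutcliffe(obs, sim), 3))
```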

  11. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit, that is, if they show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted controlling variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated in the same way, with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
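Steps 2-3 of the model (monitor variation, then check stability against control limits) can be sketched with a Shewhart-style individuals chart; the infection rates below are invented, not the King Fahd Hospital data:

```python
import statistics

def control_limits(baseline):
    """Shewhart-style limits from a baseline period: centre line at the
    mean, upper/lower limits three standard deviations away."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - 3 * sd, mean, mean + 3 * sd

def special_cause(points, lcl, ucl):
    """Indices of points outside the limits, i.e. candidate special-cause
    variation; a stable (in-control) series is ready for benchmarking."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

# A stable 2013 baseline, then a 2014 series with one out-of-control month.
baseline_2013 = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 4.3, 3.7, 4.0, 4.2, 3.9]
series_2014 = [3.8, 4.0, 6.5, 4.1]
lcl, centre, ucl = control_limits(baseline_2013)
print(special_cause(series_2014, lcl, ucl))  # [2]
```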

  12. 40 CFR 141.723 - Requirements to respond to significant deficiencies identified in sanitary surveys performed by EPA.

    Science.gov (United States)

    2010-07-01

    ... deficiencies identified in sanitary surveys performed by EPA. 141.723 Section 141.723 Protection of Environment... performed by EPA, systems must respond in writing to significant deficiencies identified in sanitary survey... will address significant deficiencies noted in the survey. (d) Systems must correct significant...

  13. Identifying significant uncertainties in thermally dependent processes for repository performance analysis

    International Nuclear Information System (INIS)

    Gansemer, J.D.; Lamont, A.

    1994-01-01

    In order to study the performance of the potential Yucca Mountain Nuclear Waste Repository, scientific investigations are being conducted to reduce the uncertainty about process models and system parameters. This paper is intended to demonstrate a method for determining a strategy for the cost effective management of these investigations. It is not meant to be a complete study of all processes and interactions, but does outline a method which can be applied to more in-depth investigations

  14. The baseline serum value of α-amylase is a significant predictor of distance running performance.

    Science.gov (United States)

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Tarperi, Cantor; La Torre, Antonio; Guidi, Gian Cesare; Schena, Federico

    2015-02-01

    This study was planned to investigate whether serum α-amylase concentration may be associated with running performance, physiological characteristics and other clinical chemistry analytes in a large sample of recreational athletes undergoing distance running. Forty-three amateur runners successfully concluded a 21.1 km half-marathon at 75%-85% of their maximal oxygen uptake (VO2max). Blood was drawn during warm up and 15 min after conclusion of the run. After correction for body weight change, significant post-run increases were observed for serum values of alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, bilirubin, creatine kinase (CK), iron, lactate dehydrogenase (LDH), triglycerides, urea and uric acid, whereas the values of body weight, glomerular filtration rate, total and low density lipoprotein-cholesterol were significantly decreased. The concentration of serum α-amylase was unchanged. In univariate analysis, significant associations with running performance were found for gender, VO2max, training regimen and pre-run serum values of α-amylase, CK, glucose, high density lipoprotein-cholesterol, LDH, urea and uric acid. In multivariate analysis, only VO2max (p=0.042) and baseline α-amylase (p=0.021) remained significant predictors of running performance. The combination of these two variables predicted 71% of variance in running performance. The baseline concentration of serum α-amylase was positively correlated with variation of serum glucose during the trial (r=0.345; p=0.025) and negatively with capillary blood lactate at the end of the run (r=-0.352; p=0.021). We showed that the baseline serum α-amylase concentration significantly and independently predicts distance running performance in recreational runners.
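The univariate association step can be sketched with a plain Pearson correlation; the data below are invented, and the study's multivariate analysis is not reproduced:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, the statistic behind the reported
    associations (e.g. r = 0.345 between baseline alpha-amylase and the
    glucose change during the run)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

amylase = [40.0, 55.0, 62.0, 48.0, 70.0]   # U/L, hypothetical
time_min = [95.0, 92.0, 88.0, 96.0, 85.0]  # half-marathon times, hypothetical
print(round(pearson_r(amylase, time_min), 2))  # -0.94
```

The negative sign in the invented example mirrors the study's direction of effect: higher baseline amylase goes with faster (smaller) finishing times.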

  15. Teacher characteristics and student performance: An analysis using hierarchical linear modelling

    Directory of Open Access Journals (Sweden)

    Paula Armstrong

    2015-12-01

Full Text Available This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002), and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts, nor to control for teachers’ level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.
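The motivation for hierarchical modelling, that variance in student scores lies partly between classrooms rather than within them, can be sketched by partitioning the variance; the scores are invented and this is not the SACMEQ analysis:

```python
def between_group_share(groups):
    """Proportion of total variance lying between groups (classrooms)
    rather than within them. A large share is what motivates hierarchical
    linear models over ordinary single-level regression."""
    all_scores = [s for g in groups for s in g]
    grand = sum(all_scores) / len(all_scores)
    between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    within = sum((s - sum(g) / len(g)) ** 2 for g in groups for s in g)
    return between / (between + within)

# Three invented classrooms with very different mean scores.
classes = [[60.0, 62.0, 58.0], [75.0, 73.0, 77.0], [50.0, 49.0, 51.0]]
print(round(between_group_share(classes), 3))  # 0.981
```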

  16. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST – 1 show details for our two models (Model A and Model B).Performance of Model A after adding of 32 negative dataset of MiRTif on our testing set(MiRecords) ...

  17. Six Significant Questions About Performance and Performance Courses in the Major.

    Science.gov (United States)

    Lawson, Hal A.; Pugh, D. Lionel

    James Bryant Conant's book, "The Education of America" (1963), triggered a major change in physical education curricula. Formerly a (sports) skills and applied techniques oriented discipline, physical education has expanded to areas such as kinesiology and sports sociology. However, performance and performance courses are still an important aspect…

  18. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 
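The limits of point kinetics described above are easiest to appreciate from the source-driven point-kinetics equations themselves. The following is a minimal one-delayed-group sketch, not the SIMMER or KIN3D implementation, and all parameter values are illustrative:

```python
# Source-driven point kinetics for a subcritical (ADS-like) system with
# one delayed-neutron group, integrated by forward Euler. Parameters
# (rho, beta, Lam, lam, S) are illustrative, not from any ADS design.

def integrate_point_kinetics(rho, beta, Lam, lam, S, dt, t_end):
    """Integrate n' = ((rho - beta)/Lam) n + lam c + S,
                 c' = (beta/Lam) n - lam c,  from n = c = 0."""
    n, c = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * c + S
        dc = (beta / Lam) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n

# In a subcritical system the flux equilibrates at n = -S * Lam / rho;
# the amplitude tracks the source, which a flux-shape-free point model
# captures, but spatial shape changes it cannot.
n_eq = integrate_point_kinetics(rho=-0.01, beta=0.0065, Lam=1e-5,
                                lam=0.08, S=1e5, dt=1e-4, t_end=200.0)
```

The equilibrium level scales with the criticality level (the factor 1/(-rho)), which is why large swings in subcriticality or beam trips drive the flux-shape variations that the point model misses.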

  19. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    Science.gov (United States)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to the testbed's high-order wavefront correction performance for the shaped pupil coronagraph. Agreement between model predictions and testbed results at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  20. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
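The central abstraction above, an APM as a mapping from explicit quantized OCs to report distributions, can be shown concretely. The OC bins, target classes, and probabilities below are invented for illustration, not from any fielded APM:

```python
# Sketch of an ATR performance model (APM) as a look-up table from
# quantized operating conditions (OCs) to a distribution over ATR
# reports. All OC bins and probabilities are hypothetical.

APM = {
    # (target_type, depression_angle_bin, clutter_level) -> P(report)
    ("tank",  "low",  "light"): {"tank": 0.85, "truck": 0.10, "clutter": 0.05},
    ("tank",  "low",  "heavy"): {"tank": 0.60, "truck": 0.15, "clutter": 0.25},
    ("truck", "high", "light"): {"tank": 0.08, "truck": 0.87, "clutter": 0.05},
}

def predict_report_distribution(oc):
    """Consume an explicit quantized OC, produce a report distribution."""
    return APM[oc]

dist = predict_report_distribution(("tank", "low", "heavy"))
```

A simulation-role APM would key this table on the true scene state, as above; an online APM would instead key on observable quantities such as match scores.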

  1. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  2. Biological variability in biomechanical engineering research: Significance and meta-analysis of current modeling practices.

    Science.gov (United States)

    Cook, Douglas; Julias, Margaret; Nauman, Eric

    2014-04-11

    Biological systems are characterized by high levels of variability, which can affect the results of biomechanical analyses. As a review of this topic, we first surveyed levels of variation in materials relevant to biomechanics, and compared these values to standard engineered materials. As expected, we found significantly higher levels of variation in biological materials. A meta-analysis was then performed based on thorough reviews of 60 research studies from the field of biomechanics to assess the methods and manner in which biological variation is currently handled in our field. The results of our meta-analysis revealed interesting trends in modeling practices, and suggest a need for more biomechanical studies that fully incorporate biological variation in biomechanical models and analyses. Finally, we provide some case study examples of how biological variability may provide valuable insights or lead to surprising results. The purpose of this study is to promote the advancement of biomechanics research by encouraging broader treatment of biological variability in biomechanical modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
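A standard way to compare variation across materials of very different stiffness, as in the survey above, is the coefficient of variation. The sample values below are invented stand-ins, not data from the surveyed studies:

```python
# Sketch comparing material variability via the coefficient of
# variation (sample std. dev. / mean). All values are hypothetical.

def coefficient_of_variation(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return (var ** 0.5) / mean

steel_modulus = [199.0, 201.0, 200.0, 200.5, 199.5]   # engineered material (GPa)
bone_modulus = [14.0, 21.0, 9.0, 17.0, 12.0]          # biological material (GPa)

cv_steel = coefficient_of_variation(steel_modulus)
cv_bone = coefficient_of_variation(bone_modulus)
```

Because the CV is dimensionless, it lets a study report "bone modulus varies by tens of percent, steel by well under one percent" on a common scale.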

  3. Millisecond photo-thermal process on significant improvement of supercapacitor’s performance

    International Nuclear Information System (INIS)

    Wang, Kui; Wang, Jixiao; Wu, Ying; Zhao, Song; Wang, Zhi; Wang, Shichang

    2016-01-01

    Graphical abstract: A highway for charge transfer is created by a millisecond photo-thermal process which decreases contact resistance among nanomaterials and improves the electrochemical performance. - Highlights: • Improved conductivity among nanomaterials with a millisecond photo-thermal process. • The specific capacitance can increase by about 25% with a photo-thermal process. • The cycle stability and rate capability can be improved by above 10% with the photo-thermal process. • Provides a new way to create electron paths to improve electrochemical performance. - Abstract: Supercapacitors fabricated with nanomaterials usually have high specific capacitance and excellent performance. However, the small size of nanomaterials renders a considerable limitation of the contact area among nanomaterials, which is harmful to charge carrier transfer. This fact may hinder the development and application of nanomaterials in electrochemical storage systems. Here, a millisecond photo-thermal process was introduced to create a charge carrier transfer path to decrease the contact resistance among nanomaterials, and enhance the electrochemical performance of supercapacitors. Polyaniline (PANI) nanowire, as a model nanomaterial, was used to modify electrodes under different photo-thermal process conditions. The modified electrodes were characterized by scanning electron microscopy (SEM), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), and the results were analysed by equivalent circuit simulation. These results demonstrate that the photo-thermal process can alter the morphology of PANI nanowires, lower the charge transfer resistance and thus improve the performance of electrodes. The specific capacitance increase of the modified electrodes is about 25%. The improvements in cycle stability and rate capability are above 10%. To the best of our knowledge, this is the first attempt to research the effect of photo-thermal process on the conductivity
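The equivalent-circuit analysis mentioned above typically fits an EIS spectrum to a small circuit model. A minimal sketch, using a simplified Randles cell with illustrative component values (not the paper's fitted parameters, and omitting the Warburg element), shows how a lower charge-transfer resistance reshapes the impedance:

```python
# Simplified Randles cell: series resistance Rs, plus charge-transfer
# resistance Rct in parallel with double-layer capacitance Cdl.
# Component values are illustrative only.

def randles_impedance(omega, Rs, Rct, Cdl):
    """Complex impedance Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl)."""
    return Rs + Rct / (1 + 1j * omega * Rct * Cdl)

Rs, Rct, Cdl = 1.0, 20.0, 1e-3          # ohms, ohms, farads (hypothetical)
z_low = randles_impedance(1e-3, Rs, Rct, Cdl)   # near-DC limit: Rs + Rct
z_high = randles_impedance(1e6, Rs, Rct, Cdl)   # high-frequency limit: ~Rs
```

In this picture, the photo-thermal treatment's effect corresponds to a smaller fitted Rct, shrinking the semicircle in the Nyquist plot.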

  4. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  5. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While much empirical work has centred on market orientation, the generalisability of its impact on the performance of Food and Beverages organisations in the Nigerian context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approach). Data were collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the use of the Statistical Package for Social Sciences (SPSS) version 10. Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of

  6. A conceptual model of nurses' goal orientation, service behavior, and service performance.

    Science.gov (United States)

    Chien, Chun-Cheng; Chou, Hsin-Kai; Hung, Shuo-Tsung

    2008-01-01

    Based on the conceptual framework known as the "service triangle," the authors constructed a model of nurses' goal orientation, service behavior, and service performance to investigate the antecedents and consequences of the medical service behavior provided by nurses. This cross-sectional study collected data from 127 nurses in six hospitals using a mail-in questionnaire. Analysis of the model revealed that the customer-oriented behavior of nurses had a positive influence on organizational citizenship behavior; and both of these behaviors had a significant positive influence on service performance. The results also indicate that a higher learning goal orientation among nurses was associated with the performance of both observable customer-oriented behavior and organizational-citizenship behavior.

  7. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  8. Teaching physical activities to students with significant disabilities using video modeling.

    Science.gov (United States)

    Cannella-Malone, Helen I; Mizrachi, Sharona V; Sabielny, Linsey M; Jimenez, Eliseo D

    2013-06-01

    The objective of this study was to examine the effectiveness of video modeling on teaching physical activities to three adolescents with significant disabilities. The study implemented a multiple baseline across six physical activities (three per student): jumping rope, scooter board with cones, ladder drill (i.e., feet going in and out), ladder design (i.e., multiple steps), shuttle run, and disc ride. Additional prompt procedures (i.e., verbal, gestural, visual cues, and modeling) were implemented within the study. After the students mastered the physical activities, we tested to see if they would link the skills together (i.e., complete an obstacle course). All three students made progress learning the physical activities, but only one learned them with video modeling alone (i.e., without error correction). Video modeling can be an effective tool for teaching students with significant disabilities various physical activities, though additional prompting procedures may be needed.

  9. Impacts of government subsidies on pricing and performance level choice in Energy Performance Contracting: A two-step optimal decision model

    International Nuclear Information System (INIS)

    Lu, Zhijian; Shao, Shuai

    2016-01-01

    Highlights: • An ESCO optimal decision model considering governmental subsidies is proposed. • Optimal price and performance level are deduced via a two-stage model. • Demand, profit, and performance level increase with increasing subsidies. • ESCO’s market strategy should firstly focus on high energy consumption industries. • Governmental subsidies standard in different industries should be differentiated. - Abstract: Government subsidies generally play a crucial role in pricing and the choice of performance levels in Energy Performance Contracting (EPC). However, the existing studies pay little attention to how the Energy Service Company (ESCO) prices and chooses performance levels for EPC with government subsidies. To fill such a gap, we propose a joint optimal decision model of pricing and performance level in EPC considering government subsidies. The optimization of the model is achieved via a two-stage process. At the first stage, given a performance level, ESCOs choose the best price; and at the second stage, ESCOs choose the optimal performance level for the optimal price. Furthermore, we carry out a numerical analysis to illuminate such an optimal decision mechanism. The results show that both price sensitivity and performance level sensitivity have significant effects on the choice of performance levels with government subsidies. Government subsidies can induce higher performance levels of EPC, the demand for EPC, and the profit of ESCO. We suggest that ESCO’s market strategy should firstly focus on high energy consumption industries with government subsidies and that government subsidies standard adopted in different industries should be differentiated according to the market characteristics and energy efficiency levels of various industries.
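The two-stage structure described above, choose the best price for each performance level, then choose the level whose optimum is best, can be sketched by backward induction over a grid. The demand, cost, and subsidy functions below are hypothetical stand-ins, not the paper's model:

```python
# Two-stage ESCO decision sketch: stage 1 prices for a given performance
# level; stage 2 picks the level with the best stage-1 optimum. All
# functional forms and coefficients are invented for illustration.

def esco_profit(price, level, subsidy_rate):
    demand = max(0.0, 100.0 - 8.0 * price + 15.0 * level)  # price/level sensitive
    cost = 2.0 + 3.0 * level ** 2                          # convex cost of quality
    return (price - cost) * demand + subsidy_rate * level * demand

def two_stage_optimum(subsidy_rate, prices, levels):
    best = None
    for level in levels:  # stage 2: performance level
        p_star = max(prices, key=lambda p: esco_profit(p, level, subsidy_rate))
        cand = (esco_profit(p_star, level, subsidy_rate), p_star, level)
        if best is None or cand > best:
            best = cand
    return best  # (profit, optimal price, optimal level)

prices = [p / 10.0 for p in range(10, 301)]
levels = [l / 10.0 for l in range(0, 31)]
profit_low, _, level_low = two_stage_optimum(0.0, prices, levels)
profit_high, _, level_high = two_stage_optimum(2.0, prices, levels)
```

With these hypothetical sensitivities, a positive subsidy raises both the chosen performance level and the ESCO's profit, mirroring the qualitative finding of the paper.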

  10. Isotopic modelling using the ENIGMA-B fuel performance code

    International Nuclear Information System (INIS)

    Rossiter, G.D.; Cook, P.M.A.; Weston, R.

    2001-01-01

    A number of experimental programmes by BNFL and other MOX fabricators have now shown that the in-pile performance of MOX fuel is generally similar to that of conventional UO2 fuel. Models based on UO2 fuel experience form a good basis for a description of MOX fuel behaviour. However, an area where the performance of MOX fuel is sufficiently different from that of UO2 to warrant model changes is in the radial power and burnup profile. The differences in radial power and burnup profile arise from the presence of significant concentrations of plutonium in MOX fuel, at beginning of life, and their subsequent evolution with burnup. Amongst other effects, plutonium has a greater neutron absorption cross-section than uranium. This paper focuses on the development of a new model for the radial power and burnup profile within a UO2 or MOX fuel rod, in which the underlying fissile isotope concentration distributions are tracked during irradiation. The new model has been incorporated into the ENIGMA-B fuel performance code and has been extended to track the isotopic concentrations of the fission gases, xenon and krypton. The calculated distributions have been validated against results from rod puncture measurements and electron probe micro-analysis (EPMA) linescans, performed during the M501 post irradiation examination (PIE) programme. The predicted gas inventory of the fuel/clad gap is compared with the isotopic composition measured during rod puncture and the measured radial distributions of burnup (from neodymium measurements) and plutonium in the fuel are compared with the calculated distributions. It is shown that there is good agreement between the code predictions and the measurements. (author)
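Tracking fissile isotope concentrations during irradiation, as the new model does, amounts to integrating a depletion chain under flux. A heavily simplified single-node sketch with two isotopes follows; the cross-sections and flux are round illustrative numbers, not ENIGMA-B data, and the U-238 inventory is held constant for simplicity:

```python
import math

# Explicit depletion sketch: U-235 burns out, Pu-239 builds in from
# U-238 capture and burns out by absorption. Single spatial node,
# constant flux; all values are illustrative.

def burn(n_u235, n_pu239, n_u238, sigma_a235, sigma_a239, sigma_c238,
         phi, dt, steps):
    for _ in range(steps):
        n_u235 += -sigma_a235 * phi * n_u235 * dt
        n_pu239 += (sigma_c238 * phi * n_u238
                    - sigma_a239 * phi * n_pu239) * dt
    return n_u235, n_pu239

N0 = 1.0e21  # initial U-235 number density (1/cm^3), hypothetical
u235, pu239 = burn(N0, 0.0, 4.0e22,
                   sigma_a235=680e-24, sigma_a239=1010e-24,
                   sigma_c238=2.7e-24, phi=1e14, dt=1e4, steps=10000)

# Analytic check for the decoupled U-235 equation over t = 1e8 s:
u235_analytic = N0 * math.exp(-680e-24 * 1e14 * 1e8)
```

The real model does this per radial annulus with self-shielded cross-sections, which is what produces the characteristic rim build-up of plutonium.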

  11. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  12. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  13. On the performance of satellite precipitation products in riverine flood modeling: A review

    Science.gov (United States)

    Maggioni, Viviana; Massari, Christian

    2018-03-01

    This work is meant to summarize lessons learned on using satellite precipitation products for riverine flood modeling and to propose future directions in this field of research. Firstly, the most common satellite precipitation products (SPPs) during the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Mission (GPM) eras are reviewed. Secondly, we discuss the main errors and uncertainty sources in these datasets that have the potential to affect streamflow and runoff model simulations. Thirdly, past studies that focused on using SPPs for predicting streamflow and runoff are analyzed. As the impact of floods depends not only on the characteristics of the flood itself, but also on the characteristics of the region (population density, land use, geophysical and climatic factors), a regional analysis is required to assess the performance of hydrologic models in monitoring and predicting floods. The performance of SPP-forced hydrological models was shown to largely depend on several factors, including precipitation type, seasonality, hydrological model formulation, and topography. Across several basins around the world, the bias in SPPs was recognized as a major issue, and bias correction methods of different complexity were shown to significantly reduce streamflow errors. Model re-calibration was also raised as a viable option to improve SPP-forced streamflow simulations, but caution is necessary when recalibrating models with SPPs, which may result in unrealistic parameter values. From a general standpoint, there is significant potential for using satellite observations in flood forecasting, but the performance of SPPs in hydrological modeling is still inadequate for operational purposes.
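The simplest of the bias correction methods mentioned above rescales the satellite series so its total matches the gauge total over a calibration period. A minimal sketch with invented daily values:

```python
# Multiplicative bias correction sketch: scale satellite precipitation
# by the gauge/satellite total ratio. All values are illustrative.

def multiplicative_bias_correction(satellite, gauge):
    """Scale satellite estimates so their total matches the gauge total."""
    factor = sum(gauge) / sum(satellite)
    return [factor * p for p in satellite]

satellite_mm = [2.0, 0.0, 5.0, 1.0, 12.0]   # daily SPP estimates (mm)
gauge_mm = [3.0, 0.5, 6.0, 0.5, 15.0]       # co-located gauge observations (mm)

corrected = multiplicative_bias_correction(satellite_mm, gauge_mm)
```

More elaborate schemes (quantile mapping, space-time varying factors) follow the same pattern but condition the correction on intensity, season, or location.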

  14. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
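A classic instance of the equation-based performance modeling the book teaches is the M/M/1 queue, where a handful of formulas describe a server's behavior. A small sketch (a generic textbook example, not taken from the book itself):

```python
# M/M/1 queue sketch: closed-form performance metrics for a single
# server with Poisson arrivals (rate lam) and exponential service
# (rate mu). The workload numbers are illustrative.

def mm1_metrics(lam, mu):
    """Return utilisation, mean number in system, mean response time."""
    assert lam < mu, "queue must be stable (lam < mu)"
    rho = lam / mu
    L = rho / (1 - rho)      # mean number of requests in the system
    W = 1.0 / (mu - lam)     # mean response time
    return rho, L, W

# e.g. 8 requests/s arriving at a server that handles 10 requests/s
rho, L, W = mm1_metrics(lam=8.0, mu=10.0)
```

Little's law (L = lam * W) ties the two output metrics together, a typical sanity check when constructing such models.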

  15. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
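The scoring step above, evaluating a gesture sequence against an expert HMM, reduces to computing a log-likelihood with the forward algorithm. The toy two-state model and symbol streams below are stand-ins for kinematic feature sequences, not the authors' trained models:

```python
import math

# Forward-algorithm sketch: an "expert model" HMM assigns a
# log-likelihood to an observed symbol sequence. Model parameters and
# sequences are invented for illustration.

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | model) for a discrete-emission HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha)))
                 * emit[j][o] for j in range(len(start))]
    return math.log(sum(alpha))

start = [0.9, 0.1]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [[0.9, 0.1], [0.2, 0.8]]     # state 0 mostly emits symbol 0

expert_like = forward_log_likelihood([0, 0, 0, 0], start, trans, emit)
novice_like = forward_log_likelihood([1, 1, 1, 1], start, trans, emit)
```

A sequence resembling the expert gestures scores higher under the expert model, which is the basis of the discrimination metric.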

  16. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  17. Performance modelling of plasma microthruster nozzles in vacuum

    Science.gov (United States)

    Ho, Teck Seng; Charles, Christine; Boswell, Rod

    2018-05-01

    Computational fluid dynamics and plasma simulations of three geometrical variations of the Pocket Rocket radiofrequency plasma electrothermal microthruster are conducted, comparing pulsed plasma to steady state cold gas operation. While numerical limitations prevent plasma modelling in a vacuum environment, results may be obtained by extrapolating from plasma simulations performed in a pressurised environment, using the performance delta from cold gas simulations performed in both environments. Slip regime boundary layer effects are significant at these operating conditions. The present investigation targets a power budget of ~10 W for applications on CubeSats. During plasma operation, the thrust force increases by ~30% with a power efficiency of ~30 μN/W. These performance metrics represent instantaneous or pulsed operation and will increase over time as the discharge chamber attains thermal equilibrium with the heated propellant. Additionally, the sculpted nozzle geometry achieves plasma confinement facilitated by the formation of a plasma sheath at the nozzle throat, and fast recombination ensures a neutral exhaust plume that avoids the contamination of solar panels and interference with externally mounted instruments.

  18. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up about a 5 × 10⁻⁴ fraction of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  19. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Sperna Weiland, F.; Vrugt, J.A.; Beek, van P.H.; Weerts, A.H.; Bierkens, M.F.P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  20. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Weiland, Frederiek C. Sperna; Vrugt, Jasper A.; van Beek, Rens (L. ) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  1. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC&A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC sponsored LLL assessment procedure for MC&A systems which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance
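The detection/false-alarm trade-off at the heart of such a model can be sketched for the simplest case: a threshold test on a single pixel with additive Gaussian noise. This is a generic illustration, not the report's actual noise model, and the threshold, signal, and noise values are invented:

```python
import math

# Threshold-detector sketch: P(false alarm) and P(detection) for a
# grey-level change (signal) in Gaussian noise. Values are illustrative.

def q_function(x):
    """Standard-normal tail probability, Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detector_performance(threshold, signal, sigma):
    """Return (P_fa, P_d) for a simple exceedance test."""
    p_fa = q_function(threshold / sigma)            # no intruder present
    p_d = q_function((threshold - signal) / sigma)  # intruder-induced change
    return p_fa, p_d

# Threshold at 3 sigma; intruder induces a 4-sigma grey-level change.
p_fa, p_d = detector_performance(threshold=3.0, signal=4.0, sigma=1.0)
```

Raising the threshold drives the false alarm rate down at the cost of detection probability, which is exactly the curve the design parameters (responsivity, F-number, illumination) shift.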

  2. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML). The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony

  3. Performance modeling of Beamlet

    International Nuclear Information System (INIS)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-01-01

    Detailed modeling of beam propagation in Beamlet has been made to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions

  4. A hybrid condenser model for real-time applications in performance monitoring, control and optimization

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun; Zhang Guiqing

    2009-01-01

    In this paper, a simple, yet accurate hybrid modeling technique for condensers is presented. The method starts from fundamental physical principles but captures only a few key operational characteristic parameters to predict system performance. The advantage of the method is that linear or non-linear least-squares methods can be used directly to determine the no more than four key operational characteristic parameters in the model, which significantly reduces the computational burden. The developed model is verified with experimental data taken from a pilot system. The testing results confirm that the proposed model can accurately predict the performance of the real-time operating condenser with a maximum error of less than ±10%. The modeling technique proposed will have wide applications not only in condenser operating optimization, but also in performance assessment and fault detection and diagnosis.
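The parameter-identification step, fitting a handful of characteristic parameters by least squares, can be sketched with the smallest possible case: a two-parameter linear model solved in closed form. The "heat rejection vs. coolant flow" relation below is a hypothetical stand-in, not the paper's condenser equations:

```python
# Closed-form linear least squares for a two-parameter model
# y ~ a + b*x, illustrating the identification step. The synthetic
# condenser data (heat rejection vs. flow) are invented.

def fit_line(x, y):
    """Return least-squares intercept a and slope b for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Noise-free synthetic data generated from Q = 5 + 2*m
flow = [1.0, 2.0, 3.0, 4.0, 5.0]
heat = [5.0 + 2.0 * m for m in flow]
a, b = fit_line(flow, heat)
```

With at most four parameters, the real method needs only a slightly larger normal-equation system (or a standard non-linear solver), which is why it stays cheap enough for real-time use.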

  5. Tree-based flood damage modeling of companies: Damage processes and model performance

    Science.gov (United States)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain from using an increasing amount of training data and from a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing amount of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and consideration of sector-specific company characteristics in future flood damage assessments are expected to improve model performance more than a mere increase in data.
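As a rough illustration of the approach (not the study's actual survey data or variable set), a random forest can be trained on synthetic damage data and its variable importances inspected; the use of scikit-learn and all variable names here are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for survey data: water depth drives damage strongly,
# a second variable weakly, and a third not at all (names illustrative).
rng = np.random.default_rng(42)
n = 400
water_depth = rng.uniform(0, 3, n)        # inundation depth (m)
warning_time = rng.uniform(0, 48, n)      # lead time (h)
irrelevant = rng.normal(size=n)           # a variable with no effect
damage = 10_000 * water_depth + 50 * warning_time + rng.normal(0, 500, n)

X = np.column_stack([water_depth, warning_time, irrelevant])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, damage)

# Variable importance ranks the damage-influencing variables,
# analogous to the importance analysis in the study.
importance = rf.feature_importances_
```

Splitting such a data set by sector and refitting per subset mirrors the sector-specific evaluation the authors found more beneficial than simply adding observations.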

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are constantly faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate conditions. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper therefore suggests that the way forward for performance prediction models is to combine the advantages of the different models to obtain better accuracy.
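A minimal sketch of the Markov chain (MC) approach, assuming a hypothetical five-state condition scale and an illustrative transition matrix of the kind estimated from visual inspections:

```python
import numpy as np

# Illustrative Markov-chain deterioration model: five condition states
# (1 = best, 5 = worst). The transition probabilities are assumptions,
# standing in for values estimated from inspection records.
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
])
state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # new pavement, state 1

# Propagate the condition distribution over a 10-year horizon.
for _ in range(10):
    state = state @ P

# Expected condition index after 10 years.
expected_condition = state @ np.arange(1, 6)
```

This also makes the paper's criticism concrete: the matrix encodes inspection outcomes only, with no explicit link to physical parameters such as traffic loading or climate.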

  7. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.

  8. Performance assessment of the RANS turbulence models in nuclear fuel rod bundles

    International Nuclear Information System (INIS)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan

    2005-02-01

    The three experiments for turbulent flow in a rod bundle geometry were simulated in this CFD analysis using various RANS models. The CFD predictions were compared with the experimental and DNS results. The RANS models used here are the nonlinear quadratic/cubic κ-ε models and the second-order closure models (SSG, LRR, RSM-ω). The anisotropic models predicted the secondary flow and showed significantly improved agreement with the measurements compared with the standard κ-ε model. In particular, the SSG model performed best, showing the closest agreement with the experimental results. However, the RANS models could not predict the very high anisotropy observed in a rod bundle with a small pitch-to-diameter ratio.

  9. Acoustic/seismic signal propagation and sensor performance modeling

    Science.gov (United States)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  10. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  11. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    Science.gov (United States)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes

  12. How Often Is the Misfit of Item Response Theory Models Practically Significant?

    Science.gov (United States)

    Sinharay, Sandip; Haberman, Shelby J.

    2014-01-01

    Standard 3.9 of the Standards for Educational and Psychological Testing (1999) demands evidence of model fit when item response theory (IRT) models are fitted to data from tests. Hambleton and Han (2005) and Sinharay (2005) recommended the assessment of the practical significance of misfit of IRT models, but…

  13. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This raises a multiple testing problem and requires correction of the significance level. Methods For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. The methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
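The resampling idea can be sketched as a permutation estimate of the corrected significance level; the codings and the use of a simple correlation statistic below are simplifying assumptions, not the paper's score-test procedure:

```python
import numpy as np

# Several codings of one explanatory variable are tested; the null
# distribution of the *maximum* statistic across codings (equivalently
# the minimum p-value) is estimated by permuting the response.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = rng.normal(size=n)              # null case: no true effect

# Illustrative codings of x: raw, dichotomized, absolute value.
codings = [x, x > 0, np.abs(x)]

def max_abs_corr(yv):
    # Maximum association statistic over all candidate codings.
    return max(abs(np.corrcoef(c.astype(float), yv)[0, 1]) for c in codings)

observed = max_abs_corr(y)
null = np.array([max_abs_corr(rng.permutation(y)) for _ in range(500)])

# Permutation p-value already accounts for trying multiple codings.
p_corrected = (1 + np.sum(null >= observed)) / (1 + len(null))
```

Because the maximum is taken inside each permutation replicate, the selection over codings is built into the null distribution, which is exactly why the naive single-test significance level needs no separate correction here.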

  14. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

    Assessment of future drought characteristics is difficult, as climate models usually have bias in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected according to the specific purpose of the application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity at nine representative locations in different climatic regions across the United States, using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias corrections show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
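A minimal sketch of quantile-mapping bias correction on synthetic data; the empirical-quantile mapping below is a generic illustration of the idea behind EQM, not the study's NARCCAP/NARR processing:

```python
import numpy as np

# Synthetic stand-ins: a reference (reanalysis-like) precipitation record
# and a positively biased model record with the same distributional family.
rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 2.0, 5000)          # reference precipitation
sim = rng.gamma(2.0, 3.0, 5000)          # biased model precipitation

def quantile_map(values, model_ref, obs_ref):
    # Empirical quantile of each value within the model climatology ...
    q = np.searchsorted(np.sort(model_ref), values) / len(model_ref)
    q = np.clip(q, 0.0, 1.0 - 1e-9)
    # ... mapped to the value at the same quantile of the observations.
    return np.quantile(obs_ref, q)

corrected = quantile_map(sim, sim, obs)
```

Mapping by quantile rather than by a single mean offset is what lets this family of methods preserve both the mean and the spread of the reference record, which the study identifies as essential for drought frequency analysis.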

  15. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab

  16. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times-mutually dependent in successive steps in the chain-cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.

  17. CFD modelling of hydrogen stratification in enclosures: Model validation and application to PAR performance

    Energy Technology Data Exchange (ETDEWEB)

    Hoyes, J.R., E-mail: james.hoyes@hsl.gsi.gov.uk; Ivings, M.J.

    2016-12-15

    Highlights: • The ability of CFD to predict hydrogen stratification phenomena is investigated. • Contrary to expectation, simulations on tetrahedral meshes under-predict mixing. • Simulations on structured meshes give good agreement with experimental data. • CFD model used to investigate the effects of stratification on PAR performance. • Results show stratification can have a significant effect on PAR performance. - Abstract: Computational Fluid Dynamics (CFD) models are maturing into useful tools for supporting safety analyses. This paper investigates the capabilities of CFD models for predicting hydrogen stratification in a containment vessel using data from the NEA/OECD SETH2 MISTRA experiments. Further simulations are then carried out to illustrate the qualitative effects of hydrogen stratification on the performance of Passive Autocatalytic Recombiner (PAR) units. The MISTRA experiments have well-defined initial and boundary conditions which makes them well suited for use in a validation study. Results are presented for the sensitivity to mesh resolution and mesh type. Whilst the predictions are shown to be largely insensitive to the mesh resolution they are surprisingly sensitive to the mesh type. In particular, tetrahedral meshes are found to induce small unphysical convection currents that result in molecular diffusion and turbulent mixing being under-predicted. This behaviour is not unique to the CFD model used here (ANSYS CFX) and furthermore, it may affect simulations run on other non-aligned meshes (meshes that are not aligned perpendicular to gravity), including non-aligned structured meshes. Following existing best practice guidelines can help to identify potential unphysical predictions, but as an additional precaution consideration should be given to using gravity-aligned meshes for modelling stratified flows. CFD simulations of hydrogen recombination in the Becker Technologies THAI facility are presented with high and low PAR positions

  18. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.

  19. An Integrated Model to Explain How Corporate Social Responsibility Affects Corporate Financial Performance

    Directory of Open Access Journals (Sweden)

    Chin-Shien Lin

    2015-06-01

    Full Text Available The effect of corporate social responsibility (CSR) on financial performance has important implications for enterprises, communities, and countries, and the significance of this issue cannot be ignored. Therefore, this paper proposes an integrated model to explain the influence of CSR on financial performance, with intellectual capital as a mediator and industry type as a moderator. Empirical results indicate that intellectual capital mediates the relationship between CSR and financial performance, and industry type moderates the direct influence of CSR on financial performance. Such results have critical implications for both academia and practice.

  20. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    Science.gov (United States)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

    Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate, describing the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, open-loop performance can be improved significantly at medium-to-high frequencies. Combined with the feedback controller, the composite controller provides enhanced closed-loop performance from low to high frequencies. At a frequency of 1 Hz, the proposed controller performs the same as previous methods; at 900 Hz, however, the tracking error is reduced to 30.7% of that of the decoupled approach.

  1. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce the basic ideas of Markov chain modeling: the Markov property, the discrete-time Markov chain (DTMC), and the continuous-time Markov chain (CTMC). We also discuss how to find the steady-state distributions from these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique, limiting probab
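The balance-equation technique mentioned in the description can be illustrated for a toy two-state chain: solve pi P = pi together with the normalization sum(pi) = 1 by replacing one redundant balance equation (the chain below is an assumption for illustration, not an example from the book):

```python
import numpy as np

# Toy two-state DTMC, e.g. a link alternating between "idle" and "busy".
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Balance equations pi (P - I) = 0 are rank-deficient; drop one equation
# and replace it with the normalization constraint sum(pi) = 1.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
```

The resulting stationary distribution can then be used to compute performance metrics such as the long-run fraction of time the system spends in each state.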

  2. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  3. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  4. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  5. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
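A sketch of the skewness-based direction-of-dependence idea on synthetic data; the simple moment estimator below stands in for the formal one-sample tests discussed in the paper:

```python
import numpy as np

# True data-generating direction: x -> y, with a non-normal predictor.
rng = np.random.default_rng(3)
n = 5000
x = rng.exponential(size=n)          # non-normally distributed variable
y = 0.7 * x + rng.normal(size=n)     # response with normal noise

def skewness(r):
    # Standardized third central moment.
    r = r - r.mean()
    return np.mean(r**3) / np.mean(r**2) ** 1.5

def residuals(resp, pred):
    # Residuals of a simple linear regression of resp on pred.
    slope, intercept = np.polyfit(pred, resp, 1)
    return resp - (slope * pred + intercept)

# Residuals of the correctly specified model (y on x) should be more
# symmetric than residuals of the reversed model (x on y).
skew_correct = abs(skewness(residuals(y, x)))
skew_reversed = abs(skewness(residuals(x, y)))
```

Comparing the two skewness magnitudes points at the likely causal direction, which is the intuition behind the fifth test proposed in the paper.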

  6. Lack of significant associations with early career performance suggest no link between the DMRT3 "Gait Keeper" mutation and precocity in Coldblooded trotters.

    Directory of Open Access Journals (Sweden)

    Kim Jäderkvist Fegraeus

    Full Text Available The Swedish-Norwegian Coldblooded trotter (CBT is a local breed in Sweden and Norway mainly used for harness racing. Previous studies have shown that a mutation from cytosine (C to adenine (A in the doublesex and mab-3 related transcription factor 3 (DMRT3 gene has a major impact on harness racing performance of different breeds. An association of the DMRT3 mutation with early career performance has also been suggested. The aim of the current study was to investigate this proposed association in a randomly selected group of CBTs. 769 CBTs (485 raced, 284 unraced were genotyped for the DMRT3 mutation. The association with racing performance was investigated for 13 performance traits and three different age intervals: 3 years, 3 to 6 years, and 7 to 10 years of age, using the statistical software R. Each performance trait was analyzed for association with DMRT3 using linear models. The results suggest no association of the DMRT3 mutation with precocity (i.e. performance at 3 years of age. Only two traits (race time and number of disqualifications were significantly different between the genotypes, with AA horses having the fastest times and CC horses having the highest number of disqualifications at 3 years of age. The frequency of the AA genotype was significantly lower in the raced CBT sample compared with the unraced sample and less than 50% of the AA horses participated in a race. For the age intervals 3 to 6 and 7 to 10 years the AA horses also failed to demonstrate significantly better performance than the other genotypes. Although suggested as the most favorable genotype for racing performance in Standardbreds and Finnhorses across all ages, the AA genotype does not appear to be associated with superior performance, early or late, in the racing career of CBTs.

  7. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    FRAPCON fuel performance code is being modified to be able to model performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort for verification of the FRAPCON thermal model. It was found that, with minor modifications, FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, code input, and calculation results.

  8. Wave and Wind Model Performance Metrics Tools

    Science.gov (United States)

    Choi, J. K.; Wang, D. W.

    2016-02-01

Continual improvements and upgrades of Navy ocean wave and wind models are essential to assuring the predictability of ocean surface wave and surf conditions in the battlespace environment in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. As observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials such as the planned experiment, the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of accumulated altimeter wind and wave data over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment detailed the performance of the wind and wave models using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index, to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with the netCDF standards applicable to all model output, makes possible the fusion of these data and direct model verification. Also, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a database.
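The cell-averaged collocation statistics mentioned in this abstract (bias, slope, correlation, scatter index) can be sketched as follows. This is an illustrative NumPy sketch, not the NAVOCEANO/ArcMAP tooling itself; the function name and the origin-forced slope convention are assumptions:

```python
import numpy as np

def verification_stats(model, obs):
    """Collocation statistics commonly used to verify wave/wind model
    output against altimeter or buoy observations."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    # Scatter index: RMSE normalized by the mean observed value.
    si = rmse / np.mean(obs)
    # Pearson correlation between modelled and observed values.
    corr = np.corrcoef(model, obs)[0, 1]
    # Slope of a least-squares fit through the origin (model vs. obs).
    slope = np.sum(model * obs) / np.sum(obs ** 2)
    return {"bias": bias, "rmse": rmse, "si": si,
            "corr": corr, "slope": slope}
```

Applied per grid cell to the accumulated model/observation match-ups, values like these would populate the maps of cell-averaged statistics described above.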

  9. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
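The combination described here (a genetic algorithm searching SVR hyperparameters against a data-driven fitness) might be sketched roughly as below with scikit-learn. The population encoding, the operators (selection plus log-normal mutation only, no crossover), and the parameter ranges are illustrative assumptions, not the authors' actual design:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def fitness(params, X, y):
    C, gamma, epsilon = params
    model = SVR(C=C, gamma=gamma, epsilon=epsilon)
    # Cross-validated negative MSE: higher is better.
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def ga_tune_svr(X, y, pop_size=10, generations=5):
    # Individuals are (C, gamma, epsilon) triples sampled log-uniformly.
    pop = 10.0 ** rng.uniform([-1, -3, -3], [2, 0, -1], size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(p, X, y) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # selection
        kids = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        kids = kids * 10.0 ** rng.normal(0.0, 0.1, kids.shape)  # mutation
        pop = np.vstack([elite, kids])
    scores = np.array([fitness(p, X, y) for p in pop])
    return pop[int(np.argmax(scores))]
```

A production version would add crossover, early stopping, and a held-out test set so the tuned model is evaluated on data the search never saw.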

  10. Using Performance Assessment Model in Physics Laboratory to Increase Students’ Critical Thinking Disposition

    Science.gov (United States)

    Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.

    2017-09-01

A Performance Assessment Model (PAM) has been developed to represent physics concepts that can be divided into five experiments: 1) acceleration due to gravity; 2) Hooke's law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of the PAM in the physics laboratory to increasing students' Critical Thinking Disposition (CTD) at senior high school. The subjects were 32 11th-grade students of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD, and analyzed quantitatively using the N-gain value. This study concluded that the performance assessment model effectively increases the N-gain, which falls in the medium category. This means that students' critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.
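The N-gain analysis referred to above is usually Hake's normalized gain; a minimal sketch, assuming the conventional thresholds for the low/medium/high categories:

```python
def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: g = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    # Conventional Hake thresholds: g >= 0.7 high, 0.3 <= g < 0.7 medium.
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"
```

For example, a class moving from a mean pretest score of 40 to a mean posttest score of 70 (out of 100) has g = 0.5, which falls in the medium category reported in the study.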

  11. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. 
droughts).

  12. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development, and particularly the verification, of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die and laboratory experiments performed at Ohio State.

  13. Molecular modeling for the design of novel performance chemicals and materials

    CERN Document Server

    Rai, Beena

    2012-01-01

Molecular modeling (MM) tools offer significant benefits in the design of industrial chemical plants and material processing operations. While the role of MM in biological fields is well established, in most cases MM works as an accessory in novel products/materials development rather than as a tool for direct innovation. As a result, MM engineers and practitioners are often seized with the question: "How do I leverage these tools to develop novel materials or chemicals in my industry?" Molecular Modeling for the Design of Novel Performance Chemicals and Materials answers this important question.

  14. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers.

    Science.gov (United States)

    Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui

    2018-01-01

Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers ( N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual building and then the 10-member group. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal investigation.

  15. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  16. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

This paper presents a new model to simulate the energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided, but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperatures in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode, and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with different specifications of each indoor unit, so modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system, in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in the Fresno climate, followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. Actual performance of the VRF systems for real

  17. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

Full Text Available Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. The integration of the package in NewAge-JGrass uses other components, such as geographic information system tools, to manage input–output processes, and automatic calibration algorithms to estimate model parameters. The system was applied for a case study in Calabria (Italy) along the Salerno–Reggio Calabria highway, between Cosenza and Altilia. The area is extensively subject to rainfall-induced shallow landslides, mainly because of its complex geology and climatology. The analysis was carried out considering all the combinations of the eight optimized indices and the three models. Parameter calibration, verification, and model performance assessment were performed by a comparison with a detailed landslide
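Pixel-by-pixel comparison of a binary susceptibility map against mapped landslides reduces to a confusion matrix, from which goodness-of-fit indices like those mentioned can be computed. A sketch under the assumption of binary (stable/unstable) rasters; the four indices shown are common choices, not necessarily the paper's exact eight:

```python
import numpy as np

def confusion_counts(pred, obs):
    """Counts of true/false positives/negatives over all pixels."""
    pred = np.asarray(pred, dtype=bool).ravel()
    obs = np.asarray(obs, dtype=bool).ravel()
    tp = np.sum(pred & obs)
    fp = np.sum(pred & ~obs)
    fn = np.sum(~pred & obs)
    tn = np.sum(~pred & ~obs)
    return tp, fp, fn, tn

def gof_indices(pred, obs):
    """Example goodness-of-fit indices from the confusion matrix."""
    tp, fp, fn, tn = confusion_counts(pred, obs)
    n = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / n,
        "true_positive_rate": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "critical_success_index": tp / (tp + fp + fn),
    }
```

An automatic calibrator of the kind described would adjust the physical model's parameters to maximize (or minimize) one of these indices.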

  18. Shock circle model for ejector performance evaluation

    International Nuclear Information System (INIS)

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for the prediction of ejector performance at the critical mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant area chamber, a 2D exponential expression for velocity distribution is adopted to approximate the viscosity flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and can predict the performance of critical mode operation ejectors much more accurately. The effectiveness of the method is validated by two experimental results reported earlier. The proposed modeling method using two coefficients is shown to produce entrainment ratio, efficiency and coefficient of performance (COP) accurately and much closer to experimental results than those of 1D analysis methods

  19. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing is used judiciously for improving the accuracy of predictions.

  20. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  1. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  2. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  3. Knowledge-fused differential dependency network models for detecting significant rewiring in biological networks.

    Science.gov (United States)

    Tian, Ye; Zhang, Bai; Hoffman, Eric P; Clarke, Robert; Zhang, Zhen; Shih, Ie-Ming; Xuan, Jianhua; Herrington, David M; Wang, Yue

    2014-07-24

    Modeling biological networks serves as both a major goal and an effective tool of systems biology in studying mechanisms that orchestrate the activities of gene products in cells. Biological networks are context-specific and dynamic in nature. To systematically characterize the selectively activated regulatory components and mechanisms, modeling tools must be able to effectively distinguish significant rewiring from random background fluctuations. While differential networks cannot be constructed by existing knowledge alone, novel incorporation of prior knowledge into data-driven approaches can improve the robustness and biological relevance of network inference. However, the major unresolved roadblocks include: big solution space but a small sample size; highly complex networks; imperfect prior knowledge; missing significance assessment; and heuristic structural parameter learning. To address these challenges, we formulated the inference of differential dependency networks that incorporate both conditional data and prior knowledge as a convex optimization problem, and developed an efficient learning algorithm to jointly infer the conserved biological network and the significant rewiring across different conditions. We used a novel sampling scheme to estimate the expected error rate due to "random" knowledge. Based on that scheme, we developed a strategy that fully exploits the benefit of this data-knowledge integrated approach. We demonstrated and validated the principle and performance of our method using synthetic datasets. We then applied our method to yeast cell line and breast cancer microarray data and obtained biologically plausible results. The open-source R software package and the experimental data are freely available at http://www.cbil.ece.vt.edu/software.htm. 
Experiments on both synthetic and real data demonstrate the effectiveness of the knowledge-fused differential dependency network in revealing the statistically significant rewiring in biological

  4. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    Science.gov (United States)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA

  5. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity of the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on Latin hypercube sampling. Ten performance criteria, including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
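Two of the criteria named above have compact standard definitions. A minimal sketch of NSE, and of KGE with its three components (r, alpha, beta), following the usual formulations:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 matches the
    observed mean used as a benchmark."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency and its three components (r, alpha, beta)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    ed = np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
    return 1.0 - ed, (r, alpha, beta)
```

Decomposing KGE this way is what makes the component-wise analysis described above possible: each of r, alpha and beta can serve as its own target variable in a regression tree.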

  6. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-09-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error associated with advanced control room equipment and configurations, (ii) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (iii) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms.

  7. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    International Nuclear Information System (INIS)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-01-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (1) the estimation of human error associated with advanced control room equipment and configurations, (2) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (3) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms

  8. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D., E-mail: bdwirth@utk.edu [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Nuclear Science and Engineering Directorate, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hammond, K.D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Krasheninnikov, S.I. [University of California, San Diego, La Jolla, CA (United States); Maroudas, D. [University of Massachusetts, Amherst, Amherst, MA 01003 (United States)

    2015-08-15

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification.

  9. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    International Nuclear Information System (INIS)

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification

  10. Assessing The Performance of Hydrological Models

    Science.gov (United States)

    van der Knijff, Johan

The performance of hydrological models is often characterized using the coefficient of efficiency, E. The sensitivity of E to extreme streamflow values, and the difficulty of deciding what value of E should be used as a threshold to identify 'good' models or model parameterizations, have proven to be serious shortcomings of this index. This paper reviews some alternative performance indices that have appeared in the literature. Legates and McCabe (1999) suggested a more generalized form of E, E'(j,B). Here, j is a parameter that controls how much emphasis is put on extreme streamflow values, and B defines a benchmark or 'null hypothesis' against which the results of the model are tested. E'(j,B) was used to evaluate a large number of parameterizations of a conceptual rainfall-runoff model, using 6 different combinations of j and B. First, the effect of j and B is explained. Second, it is demonstrated how the index can be used to explicitly test hypotheses about the model and the data. This approach appears to be particularly attractive if the index is used as a likelihood measure within a GLUE-type analysis.
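The generalized index described above has the form E'(j,B) = 1 - sum|O - P|^j / sum|O - B|^j. A minimal sketch, defaulting the benchmark B to the observed mean, which with j = 2 recovers the classical coefficient of efficiency E:

```python
import numpy as np

def generalized_efficiency(obs, pred, j=2.0, benchmark=None):
    """E'(j, B) = 1 - sum|O - P|^j / sum|O - B|^j (Legates & McCabe, 1999).

    With j=2 and the observed mean as benchmark this reduces to the
    classical coefficient of efficiency E; j=1 de-emphasizes extremes."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    if benchmark is None:
        # Default 'null hypothesis': a model that always predicts the mean.
        benchmark = np.full_like(obs, obs.mean())
    return 1.0 - (np.sum(np.abs(obs - pred) ** j)
                  / np.sum(np.abs(obs - benchmark) ** j))
```

Passing, say, a seasonal climatology as `benchmark` turns the index into an explicit test of whether the model beats that null hypothesis, which is the usage the abstract describes.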

  11. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.

  12. Performance evaluation of selected crop yield-water use models for wheat crop

    Directory of Open Access Journals (Sweden)

    H. E. Igbadun

    2001-10-01

    Full Text Available Crop yield-water use models that provide useful information about the exact form of crop response to different amounts of water used by the crop throughout its growth stages, and that provide adequate information for decisions on the optimal use of water in the farm, were evaluated. Three crop yield models were studied: the Jensen (1968), Minhas et al. (1974) and Bras and Cordova (1981) additive-type models. Wheat (Triticum aestivum) was planted at the Institute for Agricultural Research Farm during the 1995/96 and 1996/97 irrigation seasons of November to March. The data collected from the field experiments during the 1995/96 planting season were used to calibrate the models, and their stress sensitivity factors were estimated for four selected growth stages of the wheat crop. The ability of the models to predict the grain yield of wheat with the estimated stress sensitivity factors was evaluated by comparing the grain yields predicted by each model with those obtained in the field during the 1996/97 season. The three models performed fairly well in predicting grain yields, as the predicted results were not significantly different from the field-measured grain yields at the 5% level of significance.
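    The record does not give the models' exact equations; a generic additive yield-water use form of the kind evaluated can be sketched as follows, where relative yield is reduced by stage-wise evapotranspiration deficits weighted by stress sensitivity factors. The stage ratios and sensitivity factors below are hypothetical, not the calibrated values from the study.

```python
def relative_yield_additive(et_ratio, sensitivity):
    """Generic additive crop yield-water use model:
    Ya/Ym = 1 - sum_i lambda_i * (1 - ETa_i/ETm_i),
    where ETa/ETm is the relative evapotranspiration in growth
    stage i and lambda_i is that stage's stress sensitivity factor."""
    deficit = sum(lam * (1.0 - r) for lam, r in zip(sensitivity, et_ratio))
    return max(0.0, 1.0 - deficit)

# Four growth stages (e.g. establishment, vegetative, flowering, ripening);
# hypothetical ETa/ETm ratios and sensitivity factors for illustration
et_ratio = [0.95, 0.80, 0.70, 0.90]
sensitivity = [0.10, 0.25, 0.45, 0.15]
print(round(relative_yield_additive(et_ratio, sensitivity), 3))  # → 0.795
```

    With a factor structure like this, calibration amounts to fitting the lambda values against one season's yields and validating against the next, which is the procedure the abstract describes.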

  13. Development of an analytical model to assess fuel property effects on combustor performance

    Science.gov (United States)

    Sutton, R. D.; Troth, D. L.; Miles, G. A.; Riddlebaugh, S. M.

    1987-01-01

    A generalized first-order computer model has been developed to analytically evaluate the potential effects of alternative fuels on gas turbine combustors. The model assesses the size, configuration, combustion reliability, and durability of the combustors required to meet performance and emission standards while operating on a broad range of fuels. Predictions predicated on combustor flow-field determinations by the model indicate that fuel chemistry, as defined by hydrogen content, exerts a significant influence on flame radiation, liner wall temperature, and smoke emission.

  14. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers

    Directory of Open Access Journals (Sweden)

    Mi Tian

    2018-01-01

    Full Text Available Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children’s block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual building and then the 10-member group. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture of the model. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal design.

  15. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominant processes between reality and model, or to better understanding when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high sensitivities during precipitation events in combination with high soil water contents. The dominant parameters indicate the controlling processes during a given period for the hydrological catchment. The second step comprised the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. 
These finger prints were clustered into
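    The record breaks off before naming the clustering method; as an illustration only, per-time-step finger prints (vectors of performance measures) could be grouped with a minimal k-means, sketched here with hypothetical two-measure finger prints.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means for grouping per-time-step 'finger
    prints' (vectors of performance measures). Sketch only; a real
    analysis would use a library implementation."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each finger print to its nearest centroid
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, clusters

# Two hypothetical performance regimes, two finger prints each
prints = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
_, clusters = kmeans(prints, k=2)
print(sorted(len(c) for c in clusters))  # → [2, 2]
```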

  16. Does the amount of tagged stool and fluid significantly affect the radiation exposure in low-dose CT colonography performed with an automatic exposure control?

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hyun Kyong; Lee, Kyoung Ho; Kim, So Yeon; Kim, Young Hoon [Seoul National University Bundang Hospital, Department of Radiology, Seongnam-si, Gyeonggi-do (Korea, Republic of); Seoul National University College of Medicine, Seoul National University Medical Research Center, Institute of Radiation Medicine, Bundang (Korea, Republic of); Kim, Kil Joong [Seoul National University College of Medicine, Department of Radiation Applied Life Science, Seoul (Korea, Republic of); Kim, Bohyoung; Lee, Hyunna [Seoul National University, School of Computer Science and Engineering, Seoul (Korea, Republic of); Park, Seong Ho [University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Yanof, Jeffrey H. [Philips Healthcare, CT Clinical Science, Cleveland, OH (United States); Hwang, Seung-sik [Inha University School of Medicine, Department of Social and Preventive Medicine, Incheon (Korea, Republic of)

    2011-02-15

    To determine whether the amount of tagged stool and fluid significantly affects the radiation exposure in low-dose screening CT colonography performed with an automatic tube-current modulation technique. The study included 311 patients. The tagging agent was barium (n = 271) or iodine (n = 40). Correlation was measured between mean volume CT dose index (CTDIvol) and the estimated x-ray attenuation of the tagged stool and fluid (ATT). Multiple linear regression analyses were performed to determine the effect of ATT on CTDIvol and the effect of ATT on image noise while adjusting for other variables including abdominal circumference. CTDIvol varied from 0.88 to 2.54 mGy. There was no significant correlation between CTDIvol and ATT (p = 0.61). ATT did not significantly affect CTDIvol (p = 0.93), while abdominal circumference was the only factor significantly affecting CTDIvol (p < 0.001). Image noise ranged from 59.5 to 64.1 HU. The p value for the regression model explaining the noise was 0.38. The amount of stool and fluid tagging does not significantly affect radiation exposure. (orig.)

  17. Does the amount of tagged stool and fluid significantly affect the radiation exposure in low-dose CT colonography performed with an automatic exposure control?

    International Nuclear Information System (INIS)

    Lim, Hyun Kyong; Lee, Kyoung Ho; Kim, So Yeon; Kim, Young Hoon; Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Park, Seong Ho; Yanof, Jeffrey H.; Hwang, Seung-sik

    2011-01-01

    To determine whether the amount of tagged stool and fluid significantly affects the radiation exposure in low-dose screening CT colonography performed with an automatic tube-current modulation technique. The study included 311 patients. The tagging agent was barium (n = 271) or iodine (n = 40). Correlation was measured between mean volume CT dose index (CTDIvol) and the estimated x-ray attenuation of the tagged stool and fluid (ATT). Multiple linear regression analyses were performed to determine the effect of ATT on CTDIvol and the effect of ATT on image noise while adjusting for other variables including abdominal circumference. CTDIvol varied from 0.88 to 2.54 mGy. There was no significant correlation between CTDIvol and ATT (p = 0.61). ATT did not significantly affect CTDIvol (p = 0.93), while abdominal circumference was the only factor significantly affecting CTDIvol (p < 0.001). Image noise ranged from 59.5 to 64.1 HU. The p value for the regression model explaining the noise was 0.38. The amount of stool and fluid tagging does not significantly affect radiation exposure. (orig.)

  18. The performance indicators of model projects. A special evaluation

    International Nuclear Information System (INIS)

    1995-11-01

    As a result of the acknowledgment of the key role of the Model Project concept in the Agency's Technical Co-operation Programme, the present review of the objectives of the model projects which are now in operation, was undertaken, as recommended by the Board of Governors, to determine at an early stage: the extent to which the present objectives have been defined in a measurable way; whether objectively verifiable performance indicators and success criteria had been identified for each project; whether mechanisms to obtain feedback on the achievements had been foreseen. The overall budget for the 23 model projects, as approved from 1994 to 1998, amounts to $32,557,560, of which 45% is funded by Technical Co-operation Fund. This represents an average investment of about $8 million per year, that is over 15% of the annual TC budget. The conceptual importance of the Model Project initiative, as well as the significant funds allocated to them, led the Secretariat to plan the methods to be used to determine their socio-economic impact. 1 tab

  19. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    Science.gov (United States)

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software PVsyst for each array, and their predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
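    The monthly performance ratio used above has a standard definition (IEC 61724): delivered energy normalized by rated power (final yield), divided by plane-of-array insolation normalized by reference irradiance (reference yield). A sketch with hypothetical numbers, not the NIST measurements:

```python
def performance_ratio(energy_kwh, p_rated_kw, insolation_kwh_m2, g_stc_kw_m2=1.0):
    """Performance ratio per IEC 61724:
    PR = final yield Yf / reference yield Yr
       = (E / P_rated) / (H_POA / G_STC)."""
    yf = energy_kwh / p_rated_kw              # final yield (kWh/kW)
    yr = insolation_kwh_m2 / g_stc_kw_m2      # reference yield (h)
    return yf / yr

# Hypothetical month for a 271 kW array: 42,000 kWh delivered,
# 190 kWh/m^2 plane-of-array insolation
print(round(performance_ratio(42000, 271, 190), 3))  # → 0.816
```

    Because PR normalizes out both array size and available sunlight, it is the metric that lets arrays with different tilts and orientations be compared directly, leaving temperature and loss effects as the residual differences.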

  20. The Significance of Model Schools in the Pluralistic Society of the Three Southern Border Provinces of Thailand

    Directory of Open Access Journals (Sweden)

    Haji-Awang Faisol

    2016-01-01

    The results of the study show that the significant traits of the model schools in this multi-cultural society are not merely that they perform well in administrative procedure and in the teaching and learning process; these schools are also able to bring real social norms and religious beliefs into the communities’ practical life as a truly “Malay-Muslim” society. This means that the schools are able to run integrated programs under the philosophy of Islamic education, in parallel with the national education aims, ensuring that the programs serve both sides: national education on the one hand, and the Malay-Muslim communities’ satisfaction on the other.

  1. Driver Performance Model: 1. Conceptual Framework

    National Research Council Canada - National Science Library

    Heimerl, Joseph

    2001-01-01

    ...'. At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  2. Modeling and experimental verification of proof mass effects on vibration energy harvester performance

    International Nuclear Information System (INIS)

    Kim, Miso; Hoegen, Mathias; Dugundji, John; Wardle, Brian L

    2010-01-01

    An electromechanically coupled model for a cantilevered piezoelectric energy harvester with a proof mass is presented. Proof masses are essential in microscale devices to move device resonances towards optimal frequency points for harvesting. Such devices with proof masses have not been rigorously modeled previously; instead, lumped mass or concentrated point masses at arbitrary points on the beam have been used. Thus, this work focuses on the exact vibration analysis of cantilevered energy harvester devices including a tip proof mass. The model is based not only on a detailed modal analysis, but also on a thorough investigation of damping ratios that can significantly affect device performance. A model with multiple degrees of freedom is developed and then reduced to a single-mode model, yielding convenient closed-form normalized predictions of device performance. In order to verify the analytical model, experimental tests are undertaken on a macroscale, symmetric, bimorph, piezoelectric energy harvester with proof masses of different geometries. The model accurately captures all aspects of the measured response, including the location of peak-power operating points at resonance and anti-resonance, and trends such as the dependence of the maximal power harvested on the frequency. It is observed that even a small change in proof mass geometry results in a substantial change of device performance due not only to the frequency shift, but also to the effect on the strain distribution along the device length. Future work will include the optimal design of devices for various applications, and quantification of the importance of nonlinearities (structural and piezoelectric coupling) for device performance.
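    The downward frequency shift caused by a proof mass can be illustrated with the textbook single-mode Rayleigh approximation for a tip-loaded cantilever. This is a generic sketch with hypothetical dimensions, not the authors' coupled electromechanical model, which also resolves damping and strain distribution effects.

```python
import math

def cantilever_resonance_hz(E, I, L, m_beam, m_tip):
    """First resonance of a cantilever with a tip proof mass, via the
    standard Rayleigh single-mode approximation:
    f = (1/2pi) * sqrt(3*E*I / L^3 / (m_tip + (33/140)*m_beam))."""
    k = 3.0 * E * I / L ** 3                  # tip stiffness (N/m)
    m_eff = m_tip + (33.0 / 140.0) * m_beam   # effective vibrating mass (kg)
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Hypothetical beam: aluminum-like modulus, 50 mm long, 2 g beam mass
f_bare = cantilever_resonance_hz(E=70e9, I=1e-13, L=0.05, m_beam=2e-3, m_tip=0.0)
f_proof = cantilever_resonance_hz(E=70e9, I=1e-13, L=0.05, m_beam=2e-3, m_tip=5e-3)
print(f_proof < f_bare)  # → True: the proof mass lowers the resonance
```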

  3. Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model

    Directory of Open Access Journals (Sweden)

    Andreja Križman

    2010-05-01

    Full Text Available The purpose of this paper is to present the results of a study conducted in the Slovene logistics market on conflicts and opportunism as disturbing factors, examining their impact on cooperation in logistics outsourcing performance. Relationship variables are proposed that directly or indirectly affect logistics performance, and hypotheses are conceptualized based on causal linkages among the constructs. On the basis of the extant literature and new argumentation derived from in-depth interviews with logistics experts, including providers and customers, the measurement and structural models are empirically analyzed. Existing measurement scales for the constructs are slightly modified for this analysis. Purification testing and measurement of validity and reliability are performed. Multivariate statistical methods are utilized and the hypotheses are tested. The results show that conflicts have a significantly negative impact on cooperation between customers and logistics service providers (LSPs), while opportunism does not play an important role in these relationships. The observed antecedents of logistics outsourcing performance in the model account for 58.4% of the variance in goal achievement and 36.5% of the variance in goal exceedance. KEYWORDS: logistics outsourcing performance; logistics customer–provider relationships; conflicts and cooperation in logistics outsourcing; PLS path modelling

  4. MRI and morphological observation in C6 glioma model rats and significance

    International Nuclear Information System (INIS)

    Zhou Ying; Yuan Bo; Wang Hao; Lu Jin; Yuan Changji; Ma Yue; Tong Dan; Zhang Kun; Gao Feng; Wu Xiaogang

    2013-01-01

    Objective: To establish a stable and reliable rat C6 glioma model, to perform MRI dynamic observation and pathomorphological observation of the model animal brain, and to provide an experimental basis for pharmaceutical research on anti-glioma drugs. Methods: C6 glioma cells were cultured, and 20 μL of culture fluid containing 1×10^6 C6 cells was stereotactically implanted into the left caudate nucleus of each of 10 male Wistar rats. The changes in the behavior of the rats after implantation were observed and recorded. MRI dynamic scanning was performed in the 10 rats 2, 3 and 4 weeks after implantation, and the brain tissues were taken for gross and pathological examination when the rats died naturally. The survival period of the tumor-bearing rats was calculated. Results: Two weeks after implantation the rats showed decreased activity and food intake, lackluster fur, conjunctival congestion and so on; 3 weeks later, some rats developed neurological symptoms such as body twitch, hemiplegia, body distortion and rotation. All 10 rats died within 8-30 d. The median survival period of the tumor-bearing rats was 18 d, and the average survival period was (18.3±7.3) d. The pathological examination showed that the tumor cells were arranged irregularly and closely, and karyokinesis was readily seen; tumor vascular tissue proliferation and tumor invasive growth into surrounding normal tissues were found. The expression of glial fibrillary acidic protein (GFAP) was positive in the tumors. Conclusion: A stable animal model of intracranial glioma was successfully established by stereotactic implantation of C6 cells into the rat caudate nucleus. The results of MRI dynamic observation and pathohistological observation of the model animal brain tissue can provide an experimental basis for selecting the appropriate time window for pharmaceutical research on anti-glioma drugs. (authors)

  5. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  6. Electric Circuit Model for the Aerodynamic Performance Analysis of a Three-Blade Darrieus-Type Vertical Axis Wind Turbine: The Tchakoua Model

    Directory of Open Access Journals (Sweden)

    Pierre Tchakoua

    2016-10-01

    Full Text Available The complex and unsteady aerodynamics of vertical axis wind turbines (VAWTs pose significant challenges for simulation tools. Recently, significant research efforts have focused on the development of new methods for analysing and optimising the aerodynamic performance of VAWTs. This paper presents an electric circuit model for Darrieus-type vertical axis wind turbine (DT-VAWT rotors. The novel Tchakoua model is based on the mechanical description given by the Paraschivoiu double-multiple streamtube model using a mechanical‑electrical analogy. Model simulations were conducted using MATLAB for a three-bladed rotor architecture, characterized by a NACA0012 profile, an average Reynolds number of 40,000 for the blade and a tip speed ratio of 5. The results obtained show strong agreement with findings from both aerodynamic and computational fluid dynamics (CFD models in the literature.

  7. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  8. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to faults modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  9. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  10. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  11. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    Full Text Available The reliable simulation of gross primary productivity (GPP at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn, a linear two-leaf model (TL-LUE, and a big-leaf light use efficiency model (MOD17 to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001 than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
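    The big-leaf LUE structure that MOD17 represents can be sketched as GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR, with linear ramp attenuation scalars for cold and dryness limitation. The ramp endpoints and eps_max below are illustrative stand-ins, not the biome-specific MOD17 parameters, and the two-leaf models refine this by treating sunlit and shaded leaves separately.

```python
def ramp(x, lo, hi):
    """Linear ramp from 0 (at lo) to 1 (at hi), clamped."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_big_leaf(sw_rad, fpar, tmin_c, vpd_pa, eps_max=0.001):
    """Big-leaf MOD17-style GPP (gC m-2 d-1), sketched:
    GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR,
    with PAR ~= 0.45 * incident shortwave radiation (MJ m-2 d-1).
    eps_max (kgC/MJ) and the ramp endpoints are illustrative."""
    par = 0.45 * sw_rad                        # MJ m-2 d-1
    f_t = ramp(tmin_c, -8.0, 12.0)             # cold limitation scalar
    f_vpd = 1.0 - ramp(vpd_pa, 650.0, 4600.0)  # dryness limitation scalar
    return eps_max * f_t * f_vpd * fpar * par * 1000.0  # kgC -> gC

# Hypothetical mild summer day
print(round(gpp_big_leaf(sw_rad=25.0, fpar=0.8, tmin_c=10.0, vpd_pa=1000.0), 2))  # → 7.38
```

    The underestimation/overestimation corrected by the two-leaf models arises because this single-ε formulation ignores how diffuse light under cloudy skies reaches shaded leaves more efficiently than direct light does.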

  12. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

    Full Text Available Building on prior research related to (1) the impact of information communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) the performance of MSEs. To achieve this focus, the research investigated evaluation models for understanding the value of ICT ORM in MSEs. Multiple regression, Repeated-Measures Analysis of Variance (RM-ANOVA) and Repeated-Measures Multivariate Analysis of Variance (RM-MANOVA) were performed. The findings revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the Payback method (β = 0.410, p < .000). It may thus be inferred that the Payback method is the prominent variable, explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions, (1) the degree of variability explained and (2) the predictors, the results revealed that the variable contributed approximately 88.4% of the variation in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.
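    The Payback method singled out by the regression is the simple undiscounted payback period: the time until cumulative cash inflows recover the initial ICT investment. A sketch with illustrative figures:

```python
def payback_period_years(investment, annual_cash_flows):
    """Simple (undiscounted) payback period: years until cumulative
    cash inflows recover the initial investment, interpolating
    linearly within the recovery year."""
    cumulative = 0.0
    for year, cf in enumerate(annual_cash_flows, start=1):
        if cumulative + cf >= investment:
            return year - 1 + (investment - cumulative) / cf
        cumulative += cf
    return None  # not recovered within the horizon

# Hypothetical ICT project: 10,000 invested, rising annual savings
print(payback_period_years(10000, [3000, 4000, 4000, 4000]))  # → 2.75
```

    Its popularity with small enterprises is usually attributed to exactly this simplicity: it needs only cash-flow estimates, not a discount rate.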

  13. The quest for significance model of radicalization: implications for the management of terrorist detainees.

    Science.gov (United States)

    Dugas, Michelle; Kruglanski, Arie W

    2014-01-01

    Radicalization and its culmination in terrorism represent a grave threat to the security and stability of the world. A related challenge is the effective management of extremists who are detained in prison facilities. The major aim of this article is to review the significance quest model of radicalization and its implications for the management of terrorist detainees. First, we review the significance quest model, which elaborates on the roles of motivation, ideology, and social processes in radicalization. Second, we explore the implications of the model in relation to the risks of prison radicalization. Finally, we analyze the model's implications for deradicalization strategies and review preliminary evidence for the effectiveness of a rehabilitation program targeting components of the significance quest. Based on this evidence, we argue that the psychology of radicalization provides compelling reason for the inclusion of deradicalization efforts as an essential component of the management of terrorist detainees. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    The paper deals with changes in the performance level introduced by a change of business model. The selected case is a small family business undergoing substantial changes in response to structural changes in its markets. The authors used the concept of the business model to describe value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the change they demonstrate the role of the business model as a performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for an explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by a financial analysis of the business conducted over the period 2006–2012, which shows ROA, ROE and ROS at their lowest levels before the change of business model was introduced and growing after its introduction, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  15. Moderated Mediation Model of Interrelations between Workplace Romance, Wellbeing, and Employee Performance.

    Science.gov (United States)

    Khan, Muhammad Aamir Shafique; Jianguo, Du; Usman, Muhammad; Ahmad, Malik I

    2017-01-01

    In this study, we first examined the effect of workplace romance on employee job performance and the mediating role of psychological wellbeing in the relationship between workplace romance and employee performance. We then tested the moderating effects of gender and workplace romance type (lateral or hierarchical) on the indirect effect of workplace romance on employee performance. Based on a survey of 311 doctors from five government teaching hospitals in Pakistan, we used structural equation modeling and bootstrapping to test these relationships. This study reveals that psychological wellbeing significantly and fully mediates the positive relationship between workplace romance and job performance. Moreover, multi-group analysis shows that gender moderates the indirect effect of workplace romance on employee performance, with the indirect effect being stronger for male participants. This study carries important implications, particularly for policy makers and managers of healthcare sector organizations.

  16. Focused R&D for Electrochromic Smart Windows: Significant Performance and Yield Enhancements

    Energy Technology Data Exchange (ETDEWEB)

    Mark Burdis; Neil Sbar

    2003-01-31

    There is a need to improve the energy efficiency of building envelopes as they are the primary factor governing the heating, cooling, lighting and ventilation requirements of buildings, influencing 53% of building energy use. In particular, windows contribute significantly to the overall energy performance of building envelopes, so there is a need to develop advanced energy-efficient window and glazing systems. Electrochromic (EC) windows represent the next generation of advanced glazing technology that will (1) reduce the energy consumed in buildings, (2) improve the overall comfort of the building occupants, and (3) improve the thermal performance of the building envelope. "Switchable" EC windows provide, on demand, dynamic control of visible light, solar heat gain, and glare without blocking the view. As exterior light levels change, the window's performance can be electronically adjusted to suit conditions. A schematic illustrating how SageGlass® electrochromic windows work is shown in Figure I.1. SageGlass® EC glazings offer the potential to save cooling and lighting costs, with the added benefit of improving thermal and visual comfort. Control over solar heat gain will also result in the use of smaller HVAC equipment. If a step change in the energy efficiency and performance of buildings is to be achieved, there is a clear need to bring EC technology to the marketplace. This project addresses accelerating the widespread introduction of EC windows in buildings and thus maximizing total energy savings in the U.S. and worldwide. We report on R&D activities to improve the optical performance needed to broadly penetrate the full range of architectural markets. Also, processing enhancements have been implemented to reduce manufacturing costs. Finally, tests are being conducted to demonstrate the durability of the EC device and the dual pane insulating glass unit (IGU) to be at least equal to that of conventional

  17. Theoretical performance model for single image depth from defocus.

    Science.gov (United States)

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
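
    The bound referenced above has a standard scalar form (a generic statement of the Cramér-Rao bound, not the paper's specific SIDFD expression): for an unbiased estimator of the depth z from image data y with likelihood p(y; z),

```latex
\operatorname{Var}(\hat{z}) \;\ge\; \mathrm{CRB}(z) \;=\; I(z)^{-1},
\qquad
I(z) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \ln p(y;z)}{\partial z}\right)^{\!2}\right]
```

    where I(z) is the Fisher information. Intuitively, the attainable depth accuracy degrades wherever the defocus blur varies slowly with depth, which is why performance is analyzed away from the in-focus plane.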

  18. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  19. Performance of GeantV EM Physics Models

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  20. Performance of GeantV EM Physics Models

    CERN Document Server

    Amadio, G; Apostolakis, J; Aurora, A; Bandieramonte, M; Bhattacharyya, A; Bianchini, C; Brun, R; Canal, P; Carminati, F; Cosmo, G; Duhem, L; Elvira, D; Folger, G; Gheata, A; Gheata, M; Goulas, I; Iope, R; Jun, S Y; Lima, G; Mohanty, A; Nikitina, T; Novak, M; Pokorski, W; Ribon, A; Seghal, R; Shadura, O; Vallecorsa, S; Wenzel, S; Zhang, Y

    2017-01-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  1. Cluster Cooperation in Wireless-Powered Sensor Networks: Modeling and Performance Analysis.

    Science.gov (United States)

    Zhang, Chao; Zhang, Pengcheng; Zhang, Weizhan

    2017-09-27

    A wireless-powered sensor network (WPSN) consisting of one hybrid access point (HAP), a near cluster and the corresponding far cluster is investigated in this paper. The sensors are wireless-powered and transmit information by consuming the energy harvested from the signal emitted by the HAP. Sensors are able to harvest energy as well as store the harvested energy. We propose that if sensors in the near cluster do not have their own information to transmit, they can act as relays and help the sensors in the far cluster forward information to the HAP in an amplify-and-forward (AF) manner. We use a finite Markov chain to model the dynamic variation of the relay battery, and give a general analytical model for a WPSN with cluster cooperation. Through the model, we deduce a closed-form expression for the outage probability as the metric of this network. Finally, simulation results validate the design rationale of this paper and the correctness of the theoretical analysis, and show how the parameters affect system performance. Moreover, the outage probability of sensors in the far cluster can be drastically reduced without sacrificing the performance of sensors in the near cluster if the transmit power of the HAP is fairly high. Furthermore, in terms of the outage performance of the far cluster, the proposed scheme significantly outperforms the direct transmission scheme without cooperation.
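
    The battery dynamics described above can be illustrated with a minimal finite Markov chain. This is a hedged sketch, not the paper's model: the state is a discrete battery level, `p_charge` and `p_relay` are assumed per-slot harvest and relaying probabilities, and the outage probability is read off the stationary probability of an empty battery.

```python
import numpy as np

def battery_chain(levels, p_charge, p_relay):
    """Transition matrix for a relay battery with `levels` discrete energy
    states: gain one unit w.p. p_charge, spend one unit relaying w.p.
    p_relay (illustrative birth-death dynamics, not the paper's chain)."""
    P = np.zeros((levels, levels))
    for i in range(levels):
        up = p_charge * (1 - p_relay) if i < levels - 1 else 0.0
        down = p_relay * (1 - p_charge) if i > 0 else 0.0
        P[i, min(i + 1, levels - 1)] += up
        P[i, max(i - 1, 0)] += down
        P[i, i] += 1.0 - up - down        # stay put otherwise
    return P

def stationary(P):
    """Stationary distribution: eigenvector of P^T for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

pi = stationary(battery_chain(levels=5, p_charge=0.6, p_relay=0.4))
outage = pi[0]          # the relay cannot forward when its battery is empty
print(round(float(outage), 4))
```

    With a stronger HAP signal (higher `p_charge`), the stationary mass shifts toward full-battery states and the empty-battery outage term shrinks, mirroring the abstract's observation about high HAP transmit power.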

  2. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

    The serious issue of energy consumption in high performance computing systems has attracted much attention. Performance and energy saving have become important measures of a computing system. In the cloud computing environment, the system usually allocates various resources (such as CPU, memory, storage, etc.) to multiple virtual machines (VMs) for executing tasks. Therefore, the problem of resource allocation for running VMs has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed. Moreover, the hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. Then, the tradeoff between energy saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are illustrated to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.
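
    The utilization tradeoff described above can be sketched numerically. The cost model below is an illustrative assumption (cubic dynamic power plus fixed idle power), not the paper's correlated model: among processor utilizations that meet a completion-time deadline, it picks the one consuming the least energy.

```python
# Illustrative sketch: a task of W cycles runs on a VM with processor
# share u in (0, 1]; all constants are assumed for demonstration.
W, SPEED, DEADLINE = 2e9, 2e9, 4.0      # cycles, cycles/s at u = 1, seconds
P_IDLE, K = 10.0, 40.0                  # watts (assumed power-model constants)

def completion_time(u):
    return W / (SPEED * u)              # slower share -> longer runtime

def energy(u):
    # energy = power * time; dynamic power assumed to grow cubically with u
    return (P_IDLE + K * u ** 3) * completion_time(u)

# tradeoff: among utilizations meeting the deadline, pick the least energy
candidates = [u / 100 for u in range(1, 101) if completion_time(u / 100) <= DEADLINE]
best = min(candidates, key=energy)
print(best, round(energy(best), 2))
```

    Running faster than necessary wastes dynamic power, while running at the deadline limit stretches out the idle-power cost, so the minimum sits strictly between the two extremes.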

  3. Modelling Client Satisfaction Levels: The Impact of Contractor Performance

    Directory of Open Access Journals (Sweden)

    Robby Soetanto

    2012-11-01

    The performance of contractors is known to be a key determinant of client satisfaction. Here, using factor analysis, clients' satisfaction is defined in several dimensions. Based on clients' assessment of contractor performance, a number of satisfaction models developed using the multiple regression (MR) technique are presented. The models identify a range of variables encompassing contractor performance, project performance and respondent (i.e. client) attributes as useful predictors of satisfaction levels. Contractor performance attributes were found to be of utmost importance, indicating that client satisfaction levels are mainly dependent on the performance of the contractor. Furthermore, findings suggest that subjectivity is to some extent prevalent in clients' performance assessment. The models demonstrate accurate and reliable predictive power, as confirmed by validation tests. Contractors could use the models to help improve their performance, leading to more satisfied clients. This would also promote the development of harmonious working relationships within the construction project coalition.

  4. Significant determinants of academic performance by new students enrolled in the higher distance education system of Ecuador. The case of the Universidad Técnica Particular de Loja

    Directory of Open Access Journals (Sweden)

    Luis F. Moncada Mora

    2011-12-01

    In this article we present the significant determinants of academic performance of new students enrolled in the higher distance education system of Ecuador. Descriptive and correlation analyses of the variables were undertaken to formalize a probabilistic model that confirms their positive, negative, individual and global effects.

  5. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural-network-based adaptive neuro-fuzzy inference systems for hydrogen flow rate, electrolyzer system efficiency and stack efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias to either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  6. Charge-coupled-device X-ray detector performance model

    Science.gov (United States)

    Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.

    1987-01-01

    A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.

  7. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and in the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance at seven Mediterranean oak woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET at these Mediterranean systems. To improve the performance of a model it is critical first to identify where and when it fails. Only then can we improve model performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.

  8. A Designer’s Guide to Human Performance Modelling (La Modelisation des Performances Humaines: Manuel du Concepteur).

    Science.gov (United States)

    1998-12-01

    [The abstract for this record is not recoverable; the extracted text consists of fragments of the report's table of contents and body, including the headings "Integration into the Systems Engineering Process", "Validation of HPMs", "Commercialisation of human performance modelling software" and "Include human performance in system test", together with notes that inappropriate models/tools should not be offered and that the working group agreed designers could also be educated in the use of models.]

  9. Parameter definition using vibration prediction software leads to significant drilling performance improvements

    Energy Technology Data Exchange (ETDEWEB)

    Amorim, Dalmo; Hanley, Chris Hanley; Fonseca, Isaac; Santos, Juliana [National Oilwell Varco, Houston TX (United States); Leite, Daltro J.; Borella, Augusto; Gozzi, Danilo [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The understanding and mitigation of downhole vibration has been a heavily researched subject in the oil industry, as vibration results in more expensive drilling operations: it significantly diminishes the amount of effective drilling energy available to the bit and generates forces that can push the bit or the Bottom Hole Assembly (BHA) off its concentric axis of rotation, producing high magnitude impacts with the borehole wall. In order to drill ahead, a sufficient amount of energy must be supplied by the rig to overcome the resistance of the drilling system, including the reactive torque of the system, drag forces, fluid pressure losses and energy dissipated by downhole vibrations, thus providing the bit with the energy required to fail the rock. If the drill string enters resonant modes of vibration, it not only decreases the amount of available energy to drill, but also increases the potential for catastrophic downhole equipment and drilling bit failures. In this sense, the mitigation of downhole vibrations will result in faster, smoother, and cheaper drilling operations. A software tool using Finite Element Analysis (FEA) has been developed to provide a better understanding of downhole vibration phenomena in drilling environments. The software tool calculates the response of the drilling system at various input conditions, based on the design of the wellbore along with the geometry of the Bottom Hole Assembly (BHA) and the drill string. It identifies where undesired levels of resonant vibration will be driven by certain combinations of specific drilling parameters, and also which combinations of drilling parameters will result in lower levels of vibration, so that the fewest shocks, the highest penetration rate and the lowest cost per foot can be achieved. With the growing performance of personal computers, complex software systems modeling drilling vibrations using FEA have become accessible to a wider audience of field users, further complementing with real time

  10. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV-generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. Also discussed are models that were not developed by or adopted from SNL research but may benefit researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections describing the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description that includes where to find the model, whether it is currently maintained, and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results, and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  11. Parallel performance of TORT on the CRAY J90: Model and measurement

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code.
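
    The kind of overhead accounting described above can be sketched with a toy timing model. All cost terms below are assumed for illustration (they are not TORT's measured timings): per-task work shrinks with the number of tasks, while synchronization and master-task accumulation overhead grow with it, capping the achievable speedup.

```python
# Toy overhead model: parallel time = divided work + per-task overhead.
def parallel_time(p, work=100.0, sync=0.5, accumulate=0.2):
    """Assumed cost model: synchronization and result accumulation
    each add a cost proportional to the number of parallel tasks p."""
    return work / p + (sync + accumulate) * p

def speedup(p):
    return parallel_time(1) / parallel_time(p)

for p in (1, 2, 4, 8, 16, 32):
    print(p, round(speedup(p), 2))
```

    The printed speedups rise and then fall: past the point where overhead growth outweighs the shrinking per-task work, adding tasks makes the run slower, which is exactly why reducing synchronization and accumulation time was the goal of the TORT updates.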

  12. The Relationship between Shared Mental Models and Task Performance in an Online Team- Based Learning Environment

    Science.gov (United States)

    Johnson, Tristan E.; Lee, Youngmin

    2008-01-01

    In an effort to better understand learning teams, this study examines the effects of shared mental models on team and individual performance. The results indicate that each team's shared mental model changed significantly over the time that subjects participated in team-based learning activities. The results also showed that the shared mental…

  13. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958...... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic....... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic...

  14. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
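
    One common way to split a goodness-of-fit measure into magnitude-only and sequence-only parts is to decompose the mean squared error into a bias term and an anomaly-mismatch term. The split below is an illustrative example of such a decomposition, not necessarily MPESA's exact metrics.

```python
import numpy as np

def error_components(obs, sim):
    """Split MSE into a magnitude part (squared bias of the means) and a
    sequence part (mismatch of the mean-removed anomalies). The identity
    mse == magnitude + sequence holds exactly."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mse = np.mean((sim - obs) ** 2)
    magnitude = (sim.mean() - obs.mean()) ** 2
    sequence = np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2)
    return mse, magnitude, sequence

obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.5, 2.5, 3.5, 4.5]          # correct sequence, shifted magnitude
mse, mag, seq = error_components(obs, sim)
print(mse, mag, seq)
```

    Here the simulated series tracks the observed sequence perfectly but is uniformly offset, so the entire error shows up in the magnitude term and the sequence term is zero.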

  15. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, degrading the positioning accuracy. Recent development in tropospheric modeling relies on the implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyze the performance of the NOAATrop model and examine its effect on the ionosphere-free-based precise point positioning (PPP) solution. We generated three-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model, using the IGS final tropospheric product as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the convergence and accuracy of the ionosphere-free-based PPP solution. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively.

  16. Modelling of green roofs' hydrologic performance using EPA's SWMM.

    Science.gov (United States)

    Burszta-Adamiak, E; Mrowiec, M

    2013-01-01

    Green roofs significantly increase water retention and thus improve the management of rain water in urban areas. In Poland, as in many other European countries, excess rainwater resulting from snowmelt and heavy rainfall contributes to local flooding in urban areas. The opportunity to reduce surface runoff and flood risk is among the reasons why green roofs are increasingly likely to be used in this country as well. However, there are relatively few data on their in situ performance. In this study the storm water performance of experimental green roof plots was simulated using the Storm Water Management Model (SWMM) with the Low Impact Development (LID) Controls module (version 5.0.022). The model contains many parameters for the particular layers of a green roof, but the simulation results were unsatisfactory with respect to the hydrologic response of the green roofs. For the majority of the tested rain events, the Nash coefficient had negative values, indicating a weak fit between simulated and observed flow rates. The complexity of the LID module therefore does not translate into greater accuracy. Further research at a technical scale is needed to determine the role of the green roof slope, vegetation cover and the drying process during inter-event periods.
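
    The Nash coefficient mentioned above is the Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation is no better than predicting the observed mean, and negative values (as reported for most rain events here) mean it is worse than the mean. A minimal sketch with hypothetical runoff values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SS_residual / SS_total."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

runoff_obs = [0.2, 1.4, 3.1, 1.0, 0.3]   # hypothetical observed runoff
runoff_sim = [0.1, 0.9, 2.5, 1.6, 0.8]   # hypothetical simulated runoff
print(round(float(nash_sutcliffe(runoff_obs, runoff_sim)), 3))
```

    A simulation that merely outputs the observed mean at every step scores exactly 0, which is why negative values are read as a weak model fit.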

  17. Influence of the management strategy model on estimating water system performance under climate change

    Science.gov (United States)

    Francois, Baptiste; Hingray, Benoit; Creutin, Jean-Dominique; Hendrickx, Frederic

    2015-04-01

    The performance of water systems used worldwide for the management of water resources is expected to be influenced by future changes in regional climates and water uses. Anticipating possible performance changes of a given system requires a modeling chain simulating its management. Operational management is usually not trivial, especially when several conflicting objectives have to be accounted for. Management models are therefore often a crude representation of the real system and only approximate its performance. Estimated performance changes are expected to depend on the management model used, but this is often not assessed. This communication analyzes the influence of the management strategy representation on the performance of an Alpine reservoir (Serre-Ponçon, South-East of France) for which irrigation supply, hydropower generation and recreational activities are the main objectives. We consider three ways to construct the strategy, named clear-, short- and far-sighted management. They are based on different degrees of forecastability of seasonal inflows into the reservoir. The strategies are optimized using a dynamic programming algorithm (deterministic for clear-sighted and implicit stochastic for short- and far-sighted). System performance is estimated for an ensemble of future hydro-meteorological projections obtained in the RIWER2030 research project (http://www.lthe.fr/RIWER2030/) from a suite of climate experiments from the EU ENSEMBLES research project. Our results show that changes in system performance are much more influenced by changes in hydro-meteorological variables than by the choice of strategy modeling. They also show that a simple strategy representation (i.e. clear-sighted management) leads to estimates of performance modifications similar to those obtained with a representation supposedly closer to the real world (i.e. far-sighted management). The short-sighted management approach led to significantly different results, especially

  18. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  19. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. The operating profit, as a measure of profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made by varying the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Likewise, the operating profit could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. The operating profit of the clinical laboratory could thus be significantly reduced if all threats became a reality, and could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal…
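The profit-and-loss arithmetic described above can be sketched in a few lines. The baseline revenue and expense figures below are invented placeholders (only the definition of operating profit and the scenario logic follow the abstract):

```python
# Minimal sketch of the profit-and-loss model used in the abstract. The
# baseline revenue and expense figures are hypothetical placeholders;
# only the definition (operating profit = total revenue - total expenses)
# and the SWOT-scenario logic mirror the study.

def operating_profit(revenue, expenses):
    """Operating profit as total revenue minus total expenses."""
    return revenue - expenses

def apply_scenario(revenue, expenses, revenue_factor, expense_factor):
    """Re-evaluate profit after scaling revenues/expenses for a SWOT scenario."""
    return operating_profit(revenue * revenue_factor, expenses * expense_factor)

baseline_revenue = 1_000_000.0   # hypothetical
baseline_expenses = 529_277.0    # hypothetical, chosen so the base matches

base = operating_profit(baseline_revenue, baseline_expenses)               # 470723.0
threats = apply_scenario(baseline_revenue, baseline_expenses, 0.90, 1.15)  # threats realized
strengths = apply_scenario(baseline_revenue, baseline_expenses, 1.05, 0.98)  # strengths used
```

Re-running the same profit function under different revenue/expense factors is the whole sensitivity analysis; each scenario is just a different pair of scaling factors.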

  20. Performance modeling of neighbor discovery in proactive routing protocols

    Directory of Open Access Journals (Sweden)

    Andres Medina

    2011-07-01

Full Text Available It is well known that neighbor discovery is a critical component of proactive routing protocols in wireless ad hoc networks. However, there is no formal study of the performance of proposed neighbor discovery mechanisms. This paper provides a detailed model of key performance metrics of neighbor discovery algorithms, such as node degree and the distribution of the distance to symmetric neighbors. The model accounts for the dynamics of neighbor discovery as well as node density, mobility, radio and interference. The paper demonstrates a method for applying these models to the evaluation of global network metrics. In particular, it describes a model of network connectivity. Validation of the models shows that the degree estimate agrees, within 5% error, with simulations for the considered scenarios. The work presented in this paper serves as a basis for the performance evaluation of the remaining performance metrics of routing protocols, vital for large-scale deployment of ad hoc networks.
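As a rough illustration of one metric modeled above, the expected node degree of a node in a uniformly random (Poisson-like) ad hoc network with radio range r is density × πr²; the sketch below checks this analytical value against a simulation. All network parameters are invented, and edge nodes are excluded to avoid border effects:

```python
import numpy as np

# Hedged sketch: expected node degree in a random ad hoc network.
# Analytical value: density * pi * r^2; parameters are illustrative.

rng = np.random.default_rng(11)
side, n_nodes, r = 10.0, 2000, 0.5
density = n_nodes / side**2
expected_degree = density * np.pi * r**2

pts = rng.uniform(0, side, size=(n_nodes, 2))
# Keep only nodes at least 1.0 from the border so every disk of radius r fits.
inner = (pts[:, 0] > 1) & (pts[:, 0] < 9) & (pts[:, 1] > 1) & (pts[:, 1] < 9)
center = pts[inner]

d = np.linalg.norm(center[:, None, :] - pts[None, :, :], axis=2)
simulated = ((d < r).sum(axis=1) - 1).mean()  # subtract each node's self-count
```

The agreement is of the same order as the 5% validation error reported in the abstract.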

  1. Indonesian Private University Lecturer Performance Improvement Model to Improve a Sustainable Organization Performance

    Science.gov (United States)

    Suryaman

    2018-01-01

Lecturer performance affects the quality and carrying capacity of the sustainability of an organization, in this case the university. Many models have been developed to measure the performance of teachers, but few discuss the influence of lecturer performance itself on the sustainability of an organization. This study was conducted in…

  2. Behavioral Change and Building Performance: Strategies for Significant, Persistent, and Measurable Institutional Change

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Amy K.; Malone, Elizabeth L.; Heerwagen, Judith H.; Dion, Jerome P.

    2014-04-01

The people who use Federal buildings — Federal employees, operations and maintenance staff, and the general public — can significantly impact a building’s environmental performance and the consumption of energy, water, and materials. Many factors influence building occupants’ use of resources (use behaviors), including work process requirements; the ability to fulfill agency missions; new and possibly unfamiliar high-efficiency/high-performance building technologies; a lack of understanding, education, and training; inaccessible information or ineffective feedback mechanisms; and cultural norms and institutional rules and requirements, among others. While many strategies have been used to introduce new occupant use behaviors that promote sustainability and reduced resource consumption, few have been verified in the scientific literature or have properly documented case study results. This paper documents validated strategies that have been shown to encourage new use behaviors that can result in significant, persistent, and measurable reductions in resource consumption. From the peer-reviewed literature, the paper identifies relevant strategies for Federal facilities and commercial buildings that focus on the individual, groups of individuals (e.g., work groups), and institutions — their policies, requirements, and culture. The paper documents methods with evidence of success in changing use behaviors and enabling occupants to effectively interact with new technologies/designs. It also provides a case study of the strategies used at a Federal facility — Fort Carson, Colorado. The paper documents gaps in the current literature and approaches, and provides topics for future research.

  3. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
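The form-free dependence measure mentioned above can be illustrated with a histogram-based mutual information estimate. Note the paper itself estimates the underlying distribution with kernel density estimation; equal-width binning is a simplification used here only to keep the sketch short:

```python
import numpy as np

# Hedged sketch: histogram-based mutual information, the form-free
# dependence measure used in the causal structure learning above.
# (The paper uses kernel density estimation; binning is a simplification.)

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                    # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=20_000)
dependent = mutual_information(x, x + 0.1 * rng.normal(size=20_000))
independent = mutual_information(x, rng.normal(size=20_000))
```

A strongly dependent pair yields a large estimate while an independent pair stays near zero, which is what lets the learning algorithm rank candidate relations.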

  4. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
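The parameter-adaptation idea can be sketched with a scalar extended Kalman filter: one model parameter is treated as the state and updated from each new noisy measurement. The toy performance model h(theta, t) = theta·t below is an assumption for illustration, not the actual UMP:

```python
import numpy as np

# Hedged sketch of real-time model individualization with an extended
# Kalman filter. A single parameter theta of a toy performance model is
# the state; each noisy measurement refines it. The real UMP has several
# parameters and a nonlinear structure.

def h(theta, t):
    return theta * t                          # toy performance model

def dh_dtheta(theta, t):
    return t                                  # Jacobian of h w.r.t. theta

def ekf_update(theta, P, y, t, q=1e-4, r=0.25):
    """One EKF step: predict (random-walk parameter), then correct."""
    P = P + q                                 # process noise keeps the filter adaptive
    H = dh_dtheta(theta, t)
    K = P * H / (H * P * H + r)               # scalar Kalman gain
    theta = theta + K * (y - h(theta, t))     # innovation update
    P = (1.0 - K * H) * P
    return theta, P

rng = np.random.default_rng(1)
true_theta, theta, P = 2.0, 0.0, 1.0          # start far from the true value
for i in range(100):                          # measurements arrive one at a time
    t = i % 10 + 1
    y = h(true_theta, t) + rng.normal(scale=0.5)
    theta, P = ekf_update(theta, P, y, t)
```

Because each update is a handful of scalar operations, this style of filter is cheap enough for the mobile platforms targeted by the abstract.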

  5. Moderated Mediation Model of Interrelations between Workplace Romance, Wellbeing, and Employee Performance

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Shafique Khan

    2017-12-01

Full Text Available In this study, first we examined the effect of workplace romance on employee job performance, and the mediating role of psychological wellbeing in the relationship between workplace romance and employee performance. Then we tested the moderating effects of gender and workplace romance type – lateral or hierarchical – on the indirect effect of workplace romance on employee performance. Based on a survey of 311 doctors from five government teaching hospitals in Pakistan, we used structural equation modeling and bootstrapping to test these relationships. This study reveals that psychological wellbeing significantly and fully mediates the positive relationship between workplace romance and job performance. Moreover, multi-group analysis shows that gender moderates the indirect effect of workplace romance on employee performance, with the indirect effect being stronger for male participants. This study carries important implications, particularly for the policy makers and managers of healthcare sector organizations.

  6. The Hysteresis Performance and Restoring Force Model for Corroded Reinforced Concrete Frame Columns

    Directory of Open Access Journals (Sweden)

    Guifeng Zhao

    2016-01-01

Full Text Available A numerical simulation of the hysteresis performance of corroded reinforced concrete (RC) frame columns was conducted, and the results obtained were compared with experimental data. On this basis, a degenerated three-linearity (D-TRI) restoring force model that can reflect the hysteresis performance of corroded RC frame columns was established through theoretical analysis and data fitting. Results indicated that the hysteretic bearing capacity of the frame columns decreased significantly due to corrosion of the rebar. Regarding the characteristics of the hysteresis curve, the plumpness of the hysteresis loop of the frame columns decreased and its shrinkage increased with increasing rebar corrosion, illustrating that the seismic energy dissipation capacity of the frame columns was reduced while their brittleness increased. As for the features of the skeleton curve, the trends for corroded and noncorroded members were basically consistent and roughly corresponded to the features of a trilinear equivalent model. Thereby, the existing Clough hysteresis rule can be used to establish a restoring force model applicable to corroded RC frame columns based on that of noncorroded RC members. The skeleton curve and hysteresis curve of corroded RC frame columns calculated using the D-TRI model are closer to the experimental results.

  7. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  8. Modelling saline intrusion for repository performance assessment

    International Nuclear Information System (INIS)

    Jackson, C.P.

    1989-04-01

UK Nirex Ltd are currently considering the possibility of disposal of radioactive waste by burial in deep underground repositories. The natural pathway for radionuclides from such a repository to return to Man's immediate environment (the biosphere) is via groundwater. Thus analyses of the groundwater flow in the neighbourhood of a possible repository, and of the consequent radionuclide transport, form an important part of a performance assessment for a repository. Some of the areas in the UK that might be considered as possible locations for a repository are near the coast. If a repository is located in a coastal region, seawater may intrude into the groundwater flow system. As seawater is denser than fresh water, buoyancy forces acting on the intruding saline water may have significant effects on the groundwater flow system, and consequently on the time for radionuclides to return to the biosphere. Further, the chemistry of the repository near-field may be strongly influenced by the salinity of the groundwater. It is therefore important for Nirex to have a capability for reliably modelling saline intrusion to an appropriate degree of accuracy in order to make performance assessments for a repository in a coastal region. This report describes work undertaken in the Nirex Research programme to provide such a capability. (author)

  9. Job stress models, depressive disorders and work performance of engineers in microelectronics industry.

    Science.gov (United States)

    Chen, Sung-Wei; Wang, Po-Chuan; Hsin, Ping-Lung; Oates, Anthony; Sun, I-Wen; Liu, Shen-Ing

    2011-01-01

    Microelectronic engineers are considered valuable human capital contributing significantly toward economic development, but they may encounter stressful work conditions in the context of a globalized industry. The study aims at identifying risk factors of depressive disorders primarily based on job stress models, the Demand-Control-Support and Effort-Reward Imbalance models, and at evaluating whether depressive disorders impair work performance in microelectronics engineers in Taiwan. The case-control study was conducted among 678 microelectronics engineers, 452 controls and 226 cases with depressive disorders which were defined by a score 17 or more on the Beck Depression Inventory and a psychiatrist's diagnosis. The self-administered questionnaires included the Job Content Questionnaire, Effort-Reward Imbalance Questionnaire, demography, psychosocial factors, health behaviors and work performance. Hierarchical logistic regression was applied to identify risk factors of depressive disorders. Multivariate linear regressions were used to determine factors affecting work performance. By hierarchical logistic regression, risk factors of depressive disorders are high demands, low work social support, high effort/reward ratio and low frequency of physical exercise. Combining the two job stress models may have better predictive power for depressive disorders than adopting either model alone. Three multivariate linear regressions provide similar results indicating that depressive disorders are associated with impaired work performance in terms of absence, role limitation and social functioning limitation. The results may provide insight into the applicability of job stress models in a globalized high-tech industry considerably focused in non-Western countries, and the design of workplace preventive strategies for depressive disorders in Asian electronics engineering population.

  10. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance stages. The...

  11. NREL Research Yields Significant Thermoelectric Performance | News | NREL

    Science.gov (United States)

Chemical and Materials Science and Technology center, said the introduction of SWCNT into fabrics could … from an exemplary SWCNT thin film improved thermoelectric properties. The newest paper revealed that the same SWCNT thin film achieved identical performance when doped with either positive or…

  12. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    Science.gov (United States)

    Tauxe, J.

    2004-12-01

A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making under uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of the potential future risk to human receptors from the disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and by propagating these through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that supports their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties that were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious: it does not represent any particular site and is meant to be a generic example.
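A toy version of such a probabilistic PA calculation might look as follows. The release/dilution/dose model, the distributions, and the 25 mrem/yr objective are entirely illustrative assumptions; only the Monte Carlo structure (sample inputs, propagate, summarize against an objective) mirrors the abstract:

```python
import numpy as np

# Hedged sketch of a probabilistic PA run: draw input parameters from
# distributions, propagate them through a purely illustrative
# release-and-dose model, and summarize the output distribution against
# a performance objective. Nothing here represents a real site.

rng = np.random.default_rng(42)
n = 10_000
release_rate = rng.lognormal(mean=-2.0, sigma=0.5, size=n)  # Ci/yr, assumed
dilution = rng.uniform(1e3, 1e4, size=n)                    # m^3/yr, assumed
dose_factor = 50.0                                          # mrem per Ci/m^3, assumed

# Each realization is one possible combination of input parameter values.
dose = dose_factor * release_rate / dilution                # mrem/yr

mean_dose = dose.mean()          # expected performance (central tendency)
p95 = np.percentile(dose, 95)    # upper tail captures overall uncertainty
objective = 25.0                 # performance objective, assumed
```

Comparing the 95th percentile, rather than only the mean, to the objective is what makes the probabilistic analysis more informative than a single deterministic run.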

  13. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell Non-Nstec Authors: G. Pyles and Jon Carilli

    2007-01-01

Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
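The Latin hypercube sampling used for the uncertainty propagation can be sketched generically (this is a textbook implementation, not the study's code): each input is split into n equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import numpy as np

# Generic Latin hypercube sampler on the unit cube. Each of d inputs is
# split into n equal-probability strata, sampled once per stratum, and
# the strata are shuffled independently per dimension to decorrelate
# the inputs.

def latin_hypercube(n, d, rng):
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n  # one point per stratum
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]                # shuffle strata per input
    return u

rng = np.random.default_rng(7)
samples = latin_hypercube(100, 3, rng)

# Stratification check: every one of the 100 strata of every input is hit once.
strata = np.sort((samples * 100).astype(int), axis=0)
```

In practice each unit-cube column is then mapped through the inverse CDF of the corresponding input distribution before being fed to the flux density model, which is what gives LHS better coverage than plain random sampling at the same sample count.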

  14. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association.
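The Monte Carlo error-propagation idea can be sketched with a generic two-factor linear DOE model: both the input settings and the response measurements are perturbed on each realization, the model is refit, and the spread of the fitted coefficients estimates their uncertainty. The model form, noise levels, and coefficients below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of Monte Carlo error propagation for DOE model
# coefficients. A two-factor linear model on a 3x3 factorial design is
# refit on each noisy realization; the coefficient spread across
# realizations estimates the coefficient uncertainty.

rng = np.random.default_rng(3)
X_nominal = np.array([[x1, x2] for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)])
true_beta = np.array([2.0, -1.0])

coefs = []
for _ in range(2000):
    X = X_nominal + rng.normal(scale=0.02, size=X_nominal.shape)            # input variation
    y = X_nominal @ true_beta + rng.normal(scale=0.1, size=len(X_nominal))  # response noise
    coefs.append(np.linalg.lstsq(X, y, rcond=None)[0])                      # refit the DOE model
coefs = np.array(coefs)

coef_mean = coefs.mean(axis=0)  # should recover true_beta
coef_std = coefs.std(axis=0)    # Monte Carlo estimate of coefficient uncertainty
```

Comparing `coef_std` with the standard errors reported by a single regression fit is the kind of check the abstract describes.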

  15. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.

  16. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

The lifetime cost and performance (LCP) model assists in the assessment of design options for photovoltaic systems. LCP is a simulation of the performance, cost, and revenue streams associated with photovoltaic power systems connected to an electric-utility grid. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the application.

  17. Significance of matrix diagonalization in modelling inelastic electron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Z. [University of Ulm, Ulm 89081 (Germany); Hambach, R. [University of Ulm, Ulm 89081 (Germany); University of Jena, Jena 07743 (Germany); Kaiser, U.; Rose, H. [University of Ulm, Ulm 89081 (Germany)

    2017-04-15

Electron scattering is routinely applied to investigate nanostructures, and ongoing hardware development continues to broaden the prospects of this technique. For example, imaging nanostructures with inelastically scattered electrons may allow component-sensitive images with atomic resolution. Modelling inelastic electron scattering is therefore essential for interpreting these images. The main obstacle to studying the inelastic scattering problem is its complexity: during inelastic scattering, incident electrons become entangled with the object, and the description of this process involves a multidimensional array. Since the simulation usually involves four-dimensional Fourier transforms, the computation is highly inefficient. In this work we offer one solution to handle the multidimensional problem. By transforming a high-dimensional array into a two-dimensional array, we are able to perform matrix diagonalization and approximate the original multidimensional array with its two-dimensional eigenvectors. Our procedure reduces the complicated multidimensional problem to a two-dimensional problem and, in addition, minimizes the number of two-dimensional problems. This method is very useful for studying multiple inelastic scattering. - Highlights: • 4D problems are involved in modelling inelastic electron scattering. • By means of matrix diagonalization, the 4D problems can be simplified to 2D problems. • The number of 2D problems is minimized by using this approach.
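The dimensional-reduction idea can be sketched numerically: reshape a four-dimensional array into a matrix, diagonalize it, and keep only the dominant eigenpairs. The array below is a random symmetric matrix, not a physical scattering kernel:

```python
import numpy as np

# Hedged sketch of the reduction: a four-dimensional array A[i,j,k,l]
# is reshaped into an (n^2 x n^2) matrix, diagonalized, and
# approximated by its dominant eigenpairs, so one 4-D problem becomes a
# small number of 2-D eigenvector problems.

n = 6
rng = np.random.default_rng(0)
M = rng.normal(size=(n * n, n * n))
M = M + M.T                          # symmetric, so eigh applies
A = M.reshape(n, n, n, n)            # the "four-dimensional" object

w, v = np.linalg.eigh(A.reshape(n * n, n * n))

# Keeping all eigenpairs reconstructs the array exactly...
full = (v * w) @ v.T

# ...while truncating to the k dominant eigenpairs gives a compact
# low-rank approximation of the 4-D problem.
order = np.argsort(np.abs(w))[::-1]
k = 8
approx = (v[:, order[:k]] * w[order[:k]]) @ v[:, order[:k]].T
rel_err = np.linalg.norm(M - approx) / np.linalg.norm(M)
```

Each retained eigenvector reshapes back to an n×n (two-dimensional) object, which is the sense in which the 4-D problem is replaced by a minimized set of 2-D problems.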

  18. Planetary Suit Hip Bearing Model for Predicting Design vs. Performance

    Science.gov (United States)

    Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar

    2011-01-01

Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. In order to verify that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit's hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that using solid models of planetary suit hard segments as a performance design tool is feasible. From a general trend perspective…

  19. Cluster Cooperation in Wireless-Powered Sensor Networks: Modeling and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Chao Zhang

    2017-09-01

Full Text Available A wireless-powered sensor network (WPSN) consisting of one hybrid access point (HAP), a near cluster and the corresponding far cluster is investigated in this paper. The sensors are wireless-powered and transmit information by consuming energy harvested from signals emitted by the HAP. Sensors are able both to harvest energy and to store it. We propose that if sensors in the near cluster do not have their own information to transmit, they can act as relays and help the sensors in the far cluster forward information to the HAP in an amplify-and-forward (AF) manner. We use a finite Markov chain to model the dynamic variation of the relay battery, and provide a general analytical model for a WPSN with cluster cooperation. Through this model, we deduce a closed-form expression for the outage probability as the metric of this network. Finally, simulation results validate the premise of this paper, confirm the correctness of the theoretical analysis, and show how the parameters affect system performance. Moreover, the outage probability of sensors in the far cluster can be drastically reduced without sacrificing the performance of sensors in the near cluster if the transmit power of the HAP is fairly high. Furthermore, in terms of the outage performance of the far cluster, the proposed scheme significantly outperforms the direct transmission scheme without cooperation.
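The finite-Markov-chain battery model can be illustrated with a toy birth-death chain; the states, transition probabilities, and outage proxy below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Toy version of the finite Markov chain battery model: the relay
# battery holds 0..N energy units; in each slot it harvests one unit
# with probability p, or discharges one unit (if any) with probability
# 1 - p. The stationary probability of an empty battery serves as a
# crude outage proxy.

def stationary(P):
    """Stationary distribution: eigenvector of P.T at eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

N, p = 5, 0.6
P = np.zeros((N + 1, N + 1))
for s in range(N + 1):
    P[s, min(s + 1, N)] += p        # harvest one unit (battery saturates at N)
    P[s, max(s - 1, 0)] += 1 - p    # discharge one unit (stays empty at 0)

pi = stationary(P)
outage = pi[0]                      # probability the relay battery is empty
```

For this birth-death chain the stationary distribution is also available in closed form (pi grows geometrically with ratio p/(1-p)), which is the kind of closed-form outage expression the abstract refers to.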

  20. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model so as to safeguard food safety, logistics efficiency, price stability and so on. Practical implications: An efficient and effective service supply chain model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper offers a new definition of the service-oriented catering supply chain, and a model to evaluate the performance of this catering supply chain.
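The AHP step can be sketched through its crisp core, deriving criteria weights from a pairwise comparison matrix (the fuzzy extension replaces the crisp judgments with fuzzy numbers); the three criteria and the judgment values below are hypothetical:

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria:
# food safety vs logistics efficiency vs price stability.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Saaty's principal-eigenvector weights.
w, v = np.linalg.eig(A)
k = np.argmax(w.real)
weights = v[:, k].real
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI (RI = 0.58 for n = 3).
n = A.shape[0]
CI = (w[k].real - n) / (n - 1)
CR = CI / 0.58
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent enough to use the resulting weights in the evaluation.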

  1. Rigid-body-spring model numerical analysis of joint performance of engineered cementitious composites and concrete

    Science.gov (United States)

    Khmurovska, Y.; Štemberk, P.; Křístek, V.

    2017-09-01

    This paper presents a numerical investigation of the effectiveness of using engineered cementitious composites with polyvinyl alcohol fibers for concrete cover layer repair. A numerical model of a monolithic concave L-shaped concrete structural detail, strengthened with an engineered cementitious composite layer with polyvinyl alcohol fibers, is created and loaded with a bending moment. The numerical analysis employs a nonlinear 3-D Rigid-Body-Spring Model. The proposed material model shows reliable results and can be used in further studies. The engineered cementitious composite shows extremely good performance in tension due to the strain-hardening effect. Since the durability of the bond can be decreased significantly by degradation due to thermal loading, this effect should also be taken into account in future work, as should the experimental investigation required to validate the proposed numerical model.

  2. An evaluation of the performance of chemistry transport models by comparison with research aircraft observations. Part 1: Concepts and overall model performance

    Directory of Open Access Journals (Sweden)

    D. Brunner

    2003-01-01

    Full Text Available A rigorous evaluation of five global Chemistry-Transport Models and two Chemistry-Climate Models, operated by several different groups in Europe, was performed. The models were compared with trace gas observations from a number of research aircraft measurement campaigns during the four-year period 1995-1998. Whenever possible the models were run over the same four-year period, and at each simulation time step the instantaneous tracer fields were interpolated to all coinciding observation points. This approach allows for a very close comparison with observations and fully accounts for the specific meteorological conditions during the measurement flights. This is important considering the often limited availability and representativeness of such trace gas measurements. A new extensive database including all major research and commercial aircraft measurements between 1995 and 1998, as well as ozone soundings, was established specifically to support this type of direct comparison. Quantitative methods were applied to judge model performance, including the calculation of average concentration biases and the visualization of correlations and RMS errors in the form of so-called Taylor diagrams. We present the general concepts applied, the structure and content of the database, and an overall analysis of model skill over four distinct regions. These regions were selected to represent various atmospheric conditions and to cover large geographical domains, such that sufficient observations are available for comparison. The comparison of model results with the observations revealed specific problems for each individual model. This study suggests the further improvements needed and serves as a benchmark for re-evaluations of such improvements. In general all models show deficiencies with respect to both mean concentrations and vertical gradients of important trace gases, including ozone, CO and NOx, at the tropopause. 
Too strong two-way mixing across the
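The Taylor-diagram statistics used for such comparisons (bias, correlation, centered RMS error) can be sketched for a single model/observation pairing; the series below are synthetic, purely for illustration:

```python
import numpy as np

# Synthetic "observations" and an imperfect "model" series.
rng = np.random.default_rng(0)
obs = rng.normal(size=500)
model = 0.8 * obs + 0.6 * rng.normal(size=500)

bias = model.mean() - obs.mean()
corr = np.corrcoef(model, obs)[0, 1]
std_m, std_o = model.std(), obs.std()

# Centered RMS error. It satisfies the law-of-cosines identity
# E'^2 = s_m^2 + s_o^2 - 2 s_m s_o R that underlies the Taylor diagram.
crmse = np.sqrt((((model - model.mean()) - (obs - obs.mean())) ** 2).mean())
```

Because of the identity, a single point at polar coordinates (std_m, arccos(corr)) on the diagram encodes standard deviation, correlation, and centered RMS error simultaneously; the bias has to be reported separately.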

  3. Characterization uncertainty and its effects on models and performance

    International Nuclear Information System (INIS)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization

  4. Performance measurement, modeling, and evaluation of integrated concurrency control and recovery algorithms in distributed data base systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenq, B.C.

    1986-01-01

    The performance evaluation of integrated concurrency-control and recovery mechanisms for distributed data base systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two-phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two-phase locking scheme with distributed deadlock detection, a distributed version of the optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.
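A single M/M/1 station of the kind composed into such queueing-network models can be sketched as follows; this is an illustrative building block, not the CARAT model itself:

```python
# M/M/1 station: Poisson arrivals (rate lam), exponential service (rate mu).
def mm1(lam, mu):
    """Return (utilization, mean number in system, mean response time)."""
    rho = lam / mu
    assert rho < 1, "queue must be stable (lam < mu)"
    L = rho / (1 - rho)      # mean jobs in system
    W = 1 / (mu - lam)       # mean response time
    return rho, L, W

# e.g. transactions arriving at 8/s served at 10/s at a lock-manager station
rho, L, W = mm1(lam=8.0, mu=10.0)
```

Little's law (L = lam * W) ties the two outputs together, and chaining such stations with visit ratios is the usual way a queueing-network solver aggregates per-resource delays into transaction response time.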

  5. A model for evaluating the social performance of construction waste management

    International Nuclear Information System (INIS)

    Yuan Hongping

    2012-01-01

    Highlights: ► Scant attention is paid to social performance of construction waste management (CWM). ► We develop a model for assessing the social performance of CWM. ► With the model, the social performance of CWM can be quantitatively simulated. - Abstract: It has been determined by existing literature that a lot of research efforts have been made to the economic performance of construction waste management (CWM), but less attention is paid to investigation of the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with feedback relationships underlying these variables. The developed system is then converted into a SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable to reflect the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM of the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating effects of management measures on improving the social performance of CWM of construction projects.
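A minimal stock-and-flow sketch in the spirit of such a system dynamics model is shown below; the variables, coefficients, and feedback structure are invented for illustration, and the iThink model in the paper is far richer:

```python
# Euler-integrated stock-and-flow sketch: standing construction waste (stock)
# drives a social-performance proxy (public concern) through a feedback loop.
dt, T = 0.25, 40.0
waste_stock = 100.0          # accumulated construction waste on site
public_concern = 0.2         # dimensionless social-performance proxy
history = []
t = 0.0
while t < T:
    generation = 5.0                     # waste inflow per unit time
    removal = 0.06 * waste_stock         # recycling/disposal outflow
    waste_stock += dt * (generation - removal)
    # concern rises with standing waste and decays as waste is managed
    public_concern += dt * (0.001 * waste_stock - 0.1 * public_concern)
    history.append((t, waste_stock, public_concern))
    t += dt
```

Policy experiments then amount to changing a rate constant (say, the removal coefficient) and re-running the simulation, which is exactly the "experimental platform" role the abstract describes.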

  6. Enhancing pavement performance prediction models for the Illinois Tollway System

    Directory of Open Access Journals (Sweden)

    Laxmikanth Premkumar

    2016-01-01

    Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and the inclusion of newer pavement types, there was a need to calibrate the existing pavement performance models, as well as to develop models for the newer pavement types. This study presents the results of calibrating the existing models and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating their remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management

  7. Switching performance of OBS network model under prefetched real traffic

    Science.gov (United States)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to evaluate the performance of the OBS network model precisely. The performance of the OBS network model varies under different conditions, but the most important question is how it works under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and through this the performance of the OBS network is evaluated. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. So it is easy to conclude that this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate in evaluating the performance of the OBS network system and that its results are closer to the actual situation.

  8. Modeling the performance of low concentration photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    Reis, F. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Brito, M.C. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); Corregidor, V.; Wemans, J. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Sorasio, G. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Centro Richerche ISCAT, VS Pellico, 12037, Saluzzo (Italy)

    2010-07-15

    A theoretical model has been developed to describe the response of V-trough systems in terms of module temperature, power output and energy yield, using the atmospheric conditions as inputs. The model was adjusted to DoubleSun® concentration technology, which integrates a dual-axis tracker and conventional mono-crystalline Si modules. The good agreement between model predictions and the results obtained at the WS Energia laboratory, Portugal, validated the model. It is shown that DoubleSun® technology increases the yearly energy yield of conventional modules by up to 86% relative to a fixed flat-plate system. The model was also used to perform a sensitivity analysis, in order to highlight the relevance of the leading working parameters (such as irradiance) to system performance (energy yield and module temperature). Model results show that the module operating temperature is always below the maximum working temperature defined by the module manufacturers. (author)
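Generic NOCT-style relations for module temperature and temperature-derated power give a feel for this kind of model; the coefficients below are typical textbook values and the geometric concentration factor is assumed, none of it is WS Energia's actual formulation:

```python
# Back-of-envelope module temperature and DC power for a V-trough setup.
def module_temp(G, T_amb, noct=45.0):
    """Cell temperature (C) from irradiance G (W/m2) and ambient temp (C)."""
    return T_amb + (noct - 20.0) / 800.0 * G

def power_out(G, T_cell, p_stc=250.0, gamma=-0.004):
    """DC power with linear temperature derating from STC (25 C, 1000 W/m2)."""
    return p_stc * (G / 1000.0) * (1.0 + gamma * (T_cell - 25.0))

# A V-trough raises the effective irradiance by a geometric factor (~1.8 here).
G_eff = 1.8 * 900.0
T_cell = module_temp(G_eff, T_amb=25.0)
P = power_out(G_eff, T_cell)
```

The two relations already show the central trade-off the abstract analyzes: concentration boosts the irradiance term linearly while the higher cell temperature claws part of the gain back through the derating coefficient.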

  9. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
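The deterministic Taylor-series approach can be sketched as first-order variance propagation with finite-difference derivatives; the model f below is a stand-in for a performance-assessment code, and all numbers are illustrative:

```python
import numpy as np

# First-order (Taylor-series) uncertainty propagation for y = f(x):
#   Var(y) ~ sum_i (df/dx_i)^2 Var(x_i), derivatives by central differences.
def f(x):
    return x[0] ** 2 + 3.0 * x[1]   # stand-in for a performance model

mean = np.array([2.0, 1.0])         # input means
var = np.array([0.04, 0.09])        # input variances
h = 1e-6
grad = np.array([
    (f(mean + h * np.eye(2)[i]) - f(mean - h * np.eye(2)[i])) / (2 * h)
    for i in range(2)
])
var_y = np.sum(grad ** 2 * var)     # output variance estimate
```

The squared-gradient terms double as a deterministic sensitivity ranking, which is why the same machinery serves both the sensitivity and the uncertainty halves of the analysis.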

  10. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  11. An integrated environmental and health performance quantification model for pre-occupancy phase of buildings in China

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaodong, E-mail: eastdawn@tsinghua.edu.cn; Su, Shu, E-mail: sushuqh@163.com; Zhang, Zhihui, E-mail: zhzhg@tsinghua.edu.cn; Kong, Xiangqin, E-mail: kxlwq@126.com

    2017-03-15

    To comprehensively pre-evaluate the damages to both the environment and human health due to construction activities in China, this paper presents an integrated building environmental and health performance (EHP) assessment model based on the Building Environmental Performance Analysis System (BEPAS) and the Building Health Impact Analysis System (BHIAS) models and offers a new inventory data estimation method. The new model follows the life cycle assessment (LCA) framework and the inventory analysis step involves bill of quantity (BOQ) data collection, consumption data formation, and environmental profile transformation. The consumption data are derived from engineering drawings and quotas to conduct the assessment before construction for pre-evaluation. The new model classifies building impacts into three safeguard areas: ecosystems, natural resources and human health. Thus, this model considers environmental impacts as well as damage to human wellbeing. The monetization approach, distance-to-target method and panel method are considered as optional weighting approaches. Finally, nine residential buildings of different structural types are taken as case studies to test the operability of the integrated model through application. The results indicate that the new model can effectively pre-evaluate building EHP and the structure type significantly affects the performance of residential buildings.

  12. An integrated environmental and health performance quantification model for pre-occupancy phase of buildings in China

    International Nuclear Information System (INIS)

    Li, Xiaodong; Su, Shu; Zhang, Zhihui; Kong, Xiangqin

    2017-01-01

    To comprehensively pre-evaluate the damages to both the environment and human health due to construction activities in China, this paper presents an integrated building environmental and health performance (EHP) assessment model based on the Building Environmental Performance Analysis System (BEPAS) and the Building Health Impact Analysis System (BHIAS) models and offers a new inventory data estimation method. The new model follows the life cycle assessment (LCA) framework and the inventory analysis step involves bill of quantity (BOQ) data collection, consumption data formation, and environmental profile transformation. The consumption data are derived from engineering drawings and quotas to conduct the assessment before construction for pre-evaluation. The new model classifies building impacts into three safeguard areas: ecosystems, natural resources and human health. Thus, this model considers environmental impacts as well as damage to human wellbeing. The monetization approach, distance-to-target method and panel method are considered as optional weighting approaches. Finally, nine residential buildings of different structural types are taken as case studies to test the operability of the integrated model through application. The results indicate that the new model can effectively pre-evaluate building EHP and the structure type significantly affects the performance of residential buildings.

  13. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    Science.gov (United States)

    Harvey, David Benjamin Paul

    A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
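A stochastic-input evaluation of this kind can be sketched as Monte Carlo sampling of component properties through a toy performance map; the response function, distributions, and all numbers below are hypothetical stand-ins, not the MEA model itself:

```python
import random

# Sample material properties from manufacturing-variation distributions and
# propagate them through a toy cell-voltage response.
random.seed(1)

def cell_voltage(ionomer_frac, thickness_um):
    # Toy response with an optimum ionomer fraction near 0.3.
    return 0.7 - 2.0 * (ionomer_frac - 0.3) ** 2 - 0.0005 * thickness_um

samples = []
for _ in range(2000):
    ionomer = random.gauss(0.30, 0.03)     # catalyst-layer ionomer fraction
    thickness = random.gauss(12.0, 1.0)    # layer thickness (um)
    samples.append(cell_voltage(ionomer, thickness))

mean_v = sum(samples) / len(samples)
spread = max(samples) - min(samples)
```

Comparing the simulated spread against statistically varying experimental performance data, rather than a single nominal curve, is the validation idea the abstract describes.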

  14. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For an OPC model of excellent quality, each part should be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be done iteratively during the correction until the cost function we want to minimize converges. Normally the optic part in an OPC model is described with the sum of coherent systems (SOCS[1]) method. Thanks to this method we can calculate the aerial image very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so it is normally expressed in a simple way, such as an approximation of the first principles, or linear combinations of factors that are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well we train the model by fitting it to empirical data. The most popular way of constructing the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, this method causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as finite difference time domain (FDTD[2]) and rigorous coupled-wave analysis (RCWA[3]). But these methods are too time-consuming to be used as part of the OPC model. Until now many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them we focused on the boundary layer model (BLM) in this paper. 
We mainly investigated the way of optimization of the parameters for the

  15. Ensenanzas en un gimnasio: an investigation of modeling and verbal rehearsal on the motor performance of Hispanic limited English proficient children.

    Science.gov (United States)

    Meaney, K S; Edwards, R

    1996-03-01

    This study investigated the effects of modeling and verbal rehearsal on the motor performance of English-speaking and limited English proficient (LEP) children. Children (N = 64) in 4th-grade classes were randomly assigned to conditions in a 2 x 2 x 2 x 2 (Gender x Primary Language x Model Type x Rehearsal) factorial design. Boys and girls whose primary language was English or Spanish were assigned to either a verbal model or no-model condition as well as to a verbal rehearsal or no-rehearsal condition of the motor skills required to be performed. Analysis of variance revealed a significant Model Type x Primary Language interaction as well as a significant Rehearsal x Primary Language interaction. Follow-up analyses revealed that English-speaking children provided with a verbal rehearsal strategy recalled significantly more skills than English-speaking children in the no-rehearsal condition; for LEP children, there were no differences due to rehearsal. Moreover, LEP children presented with a verbal model recalled significantly more skills than LEP children in the no-model condition; for English-speaking children, there were no differences attributed to model type. These results indicate that effective modeling conditions that are provided with verbal cues in English are related to children's primary language.
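The reported crossover pattern can be illustrated numerically: the interaction contrast is the difference of the simple effects across languages. The cell means below are invented for illustration only, not the study's data:

```python
# Cell means for a Model Type x Primary Language design:
# (language, condition) -> mean number of skills recalled (hypothetical).
means = {
    ("english", "model"): 6.1, ("english", "no_model"): 6.0,
    ("lep", "model"): 6.2,     ("lep", "no_model"): 4.1,
}

def simple_effect(lang):
    """Benefit of the verbal model within one language group."""
    return means[(lang, "model")] - means[(lang, "no_model")]

# A 2x2 interaction is the difference of the two simple effects:
# large model benefit for LEP children, negligible for English speakers.
interaction = simple_effect("lep") - simple_effect("english")
```

A nonzero interaction contrast of this kind is what the significant Model Type x Primary Language F-test detects, and the follow-up simple-effect analyses localize it to the LEP group.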

  16. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  17. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10 mn time step is often necessary. Such information is not always available. Rainfall disaggregation models thus have to be applied to produce that short-time-resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10 mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10 mn rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10 mn time step) observed at the Pully meteorological station, Switzerland. 
The models were also evaluated when applied to different rainfall classes depending on the season first and on the
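The baseline constant disaggregation model, plus a toy mass-conserving ramp as a contrast, can be sketched as follows (the ramp is an illustrative variant, not the Hydram linear model's actual formulation):

```python
# Constant model: split each rainy hour into six equal 10-min amounts.
def constant_disaggregate(hourly_mm):
    return [hourly_mm / 6.0] * 6

# Toy "ramp" variant: weights increase within the hour while conserving mass.
def ramp_disaggregate(hourly_mm):
    weights = [1, 2, 3, 4, 5, 6]
    s = sum(weights)
    return [hourly_mm * w / s for w in weights]
```

Both satisfy the one constraint every disaggregation model must respect, mass conservation; they differ in how well they reproduce the variance and extremes of observed 10 mn amounts, which is exactly what the evaluation criteria above measure.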

  18. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior (non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches) that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  19. A new global and comprehensive model for ICU ventilator performances evaluation.

    Science.gov (United States)

    Marjanovic, Nicolas S; De Simone, Agathe; Jegou, Guillaume; L'Her, Erwan

    2017-12-01

    This study aimed to provide a new global and comprehensive evaluation of recent ICU ventilators taking into account both technical performance and ergonomics. Six recent ICU ventilators were evaluated. Technical performance was assessed under two FiO2 levels (100%, 50%), three respiratory mechanics combinations (Normal: compliance [C] = 70 mL/cmH2O, resistance [R] = 5 cmH2O/L/s; Restrictive: C = 30, R = 10; Obstructive: C = 120, R = 20), four exponential levels of leaks (from 0 to 12.5 L/min) and three levels of inspiratory effort (P0.1 = 2, 4 and 8 cmH2O), using an automated test lung. Ergonomics was evaluated by 20 ICU physicians using a global and comprehensive model involving physiological responses to stress (heart rate, respiratory rate, tidal volume variability and eye tracking), psycho-cognitive scales (SUS and NASA-TLX) and objective task completion. Few differences in technical performance were observed between devices. Non-invasive ventilation modes had a strong influence on the occurrence of asynchrony. Using our global model, objective task completion, psycho-cognitive scales and/or physiological measurements were each able to reveal significant differences in the devices' usability. The level of failure observed with some devices reflected a lack of adaptation of the devices' development to end users' needs. Despite similar technical performance, some ICU ventilators exhibit low ergonomic performance and a high risk of misuse.

  20. Models for Automated Tube Performance Calculations

    International Nuclear Information System (INIS)

    Brunkhorst, C.

    2002-01-01

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance
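The modeling method described, returning tube currents for a given set of element voltages from operating-curve data digitized once, might look like simple table interpolation. The numbers below are toy values and the table is 1-D for brevity; real tube curves are families parameterized by plate voltage as well:

```python
import bisect

# Digitized operating-curve points: plate current vs control-grid voltage.
grid_v = [-100.0, -50.0, 0.0, 50.0]   # control-grid voltage (V)
plate_i = [0.0, 0.4, 1.6, 3.4]        # plate current (A)

def tube_current(vg):
    """Piecewise-linear interpolation of plate current, clamped at the ends."""
    if vg <= grid_v[0]:
        return plate_i[0]
    if vg >= grid_v[-1]:
        return plate_i[-1]
    j = bisect.bisect_right(grid_v, vg)
    t = (vg - grid_v[j - 1]) / (grid_v[j] - grid_v[j - 1])
    return plate_i[j - 1] + t * (plate_i[j] - plate_i[j - 1])
```

Such a function slots directly into an iterative amplifier operating-point solver as the subroutine the abstract describes, replacing repeated manual reads off the graphical curves.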

  1. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  2. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
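
    The validation metric described above can be illustrated with a small sketch. The synthetic parameter data and the bounding-box interval model below are assumptions for illustration, not the paper's actual formulation: principal directions of parameter change come from an SVD of the identified parameter sets, and the coordinate distance measures how far a validation system falls outside the identified interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated identifications of four structural parameters
# (e.g. modal frequencies in Hz); the scatter represents parameter uncertainty.
nominal = np.array([10.0, 25.0, 60.0, 110.0])
samples = nominal + rng.normal(scale=[0.5, 1.0, 2.0, 3.0], size=(30, 4))

# Principal directions of parameter change via SVD of the mean-centred set
mean = samples.mean(axis=0)
_, _, Vt = np.linalg.svd(samples - mean, full_matrices=False)
coords = (samples - mean) @ Vt.T            # coordinates of the identified set

# Interval model: per-direction bounds of the identified coordinates
lo, hi = coords.min(axis=0), coords.max(axis=0)

def coordinate_distance(system):
    """Distance from a validation system's coordinate vector to the interval."""
    c = (system - mean) @ Vt.T
    return float(np.linalg.norm(c - np.clip(c, lo, hi)))

inside = nominal + rng.normal(scale=0.2, size=4)        # consistent system
outside = nominal + np.array([5.0, -8.0, 10.0, 15.0])   # out-of-family system
```

    A validation system whose coordinates fall inside the interval scores zero; the metric grows as the system leaves the family spanned by the identified parameter variations.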

  3. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks, with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importance …

  4. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

    The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and the more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. In-depth interviews, participant observations, and a direct cognitive structural analysis identified the 50 health care attributes included in a mailed survey to parents (n = 1,030); the tailored design method guided survey development and data collection. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children. Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable

  5. The effect of the number of seed variables on the performance of Cooke's classical model

    International Nuclear Information System (INIS)

    Eggstaff, Justin W.; Mazzuchi, Thomas A.; Sarkani, Shahram

    2014-01-01

    In risk analysis, Cooke's classical model for aggregating expert judgment has been widely used for over 20 years. However, the validity of this model has been the subject of much debate. Critics assert that this model's scoring rule may unintentionally reward experts who manipulate their quantile estimates in order to receive a greater weight. In addition, the question of the number of seed variables required to ensure adequate performance of Cooke's classical model remains unanswered. In this study, we conduct a comprehensive examination of the model through an iterative, cross validation test to perform an out-of-sample comparison between Cooke's classical model and the equal-weight linear opinion pool method on almost all of the expert judgment studies compiled by Cooke and colleagues to date. Our results indicate that Cooke's classical model significantly outperforms equally weighting expert judgment, regardless of the number of seed variables used; however, there may, in fact, be a maximum number of seed variables beyond which Cooke's model cannot outperform an equally-weighted panel. - Highlights: • We examine Cooke's classical model through an iterative, cross validation test. • The performance-based and equally weighted decision makers are compared. • Results strengthen Cooke's argument for a two-fold cross-validation approach. • Accuracy test results show strong support in favor of Cooke's classical method. • There may be a maximum number of seed variables that ensures model performance.
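
    The calibration component of Cooke's classical model can be sketched as follows. An expert's 5%, 50% and 95% quantiles split each seed variable's range into four bins with expected probabilities (0.05, 0.45, 0.45, 0.05); calibration is the p-value of a chi-square statistic on the observed hit counts. The full model also multiplies calibration by an information score before normalizing weights across experts; the hit counts below are purely illustrative:

```python
import math

# Expected probability mass between an expert's 5/50/95 % quantiles
P = (0.05, 0.45, 0.45, 0.05)

def chi2_3_sf(x):
    """Survival function of a chi-square variable with 3 degrees of freedom."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

def calibration(hits):
    """Cooke-style calibration score from per-bin seed-variable hit counts:
    the p-value of 2*N*KL(observed || expected) against chi-square(3)."""
    n = sum(hits)
    kl = sum((h / n) * math.log((h / n) / p) for h, p in zip(hits, P) if h > 0)
    return chi2_3_sf(2 * n * kl)

well = calibration([1, 9, 9, 1])   # hits match the expected proportions
poor = calibration([8, 2, 2, 8])   # overconfident expert: too many tail hits
```

    A well-calibrated expert scores near 1, while systematic overconfidence drives the score, and hence the expert's weight, towards zero.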

  6. Effects of attitude, social influence, and self-efficacy model factors on regular mammography performance in life-transition aged women in Korea.

    Science.gov (United States)

    Lee, Chang Hyun; Kim, Young Im

    2015-01-01

    This study analyzed predictors of regular mammography performance in Korea. In addition, we determined factors affecting regular mammography performance in life-transition aged women by applying an attitude, social influence, and self-efficacy (ASE) model. Data were collected from women aged over 40 years residing in province J in Korea. The 178 enrolled subjects provided informed voluntary consent prior to completing a structured questionnaire. The overall regular mammography performance rate of the subjects was 41.6%. Older age, city residency, high income and a part-time job were associated with high regular mammography performance. Among women who had undergone more breast self-examinations (BSE) or more doctors' physical examinations (PE), there were higher regular mammography performance rates. All three ASE model factors were significantly associated with regular mammography performance. Women with a high level of positive ASE values had a significantly high regular mammography performance rate. Within the ASE model, self-efficacy and social influence were particularly important. Logistic regression analysis explained 34.7% of regular mammography performance, and PE experience (β=4.645, p=.003), part-time job (β=4.010, p=.050), self-efficacy (β=1.820, p=.026) and social influence (β=1.509, p=.038) were significant factors. Promotional strategies that could improve self-efficacy, reinforce social influence and reduce geographical, time and financial barriers are needed to increase the regular mammography performance rate in life-transition aged women.
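
    The kind of logistic regression reported above can be reproduced on synthetic data. The predictor effects below are invented for illustration (they are not the study's estimates); the fit uses Newton-Raphson iteration, the standard algorithm behind logistic regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical standardized ASE factor scores, invented for illustration
attitude = rng.normal(size=n)
social = rng.normal(size=n)
efficacy = rng.normal(size=n)
X = np.column_stack([np.ones(n), attitude, social, efficacy])
true_beta = np.array([-0.3, 0.4, 0.45, 0.6])   # self-efficacy weighted highest
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

# Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    gradient = X.T @ (y - mu)
    hessian = (X * (mu * (1 - mu))[:, None]).T @ X
    beta += np.linalg.solve(hessian, gradient)
```

    With a few hundred observations the fitted coefficients recover the simulated effect sizes to within sampling error, which is the same mechanism the study relies on to rank the ASE factors.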

  7. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  8. ExaSAT: An exascale co-design tool for performance modeling

    International Nuclear Information System (INIS)

    Unat, Didem; Chan, Cy; Zhang, Weiqun; Williams, Samuel; Bachan, John

    2015-01-01

    One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework’s ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
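
    The style of analytic model such frameworks generate can be illustrated with a minimal roofline-like sketch: execution time is bounded either by compute throughput or by data movement, and comparing hardware design points reduces to evaluating that bound. The stencil operation counts and hardware numbers below are hypothetical:

```python
def predicted_time(flops, bytes_moved, peak_gflops, bw_gbs):
    """Execution-time bound: a kernel is limited by compute or by data movement."""
    return max(flops / (peak_gflops * 1e9), bytes_moved / (bw_gbs * 1e9))

# Hypothetical 7-point stencil sweep over a 512^3 double-precision grid
n = 512 ** 3
flops = 8 * n              # illustrative operation count per grid point
bytes_moved = 2 * 8 * n    # one 8-byte read and one write per point (ideal reuse)

# Two candidate design points differing only in memory bandwidth
design_a = predicted_time(flops, bytes_moved, peak_gflops=1000, bw_gbs=100)
design_b = predicted_time(flops, bytes_moved, peak_gflops=1000, bw_gbs=400)
```

    Because the sketch kernel is memory-bound at both design points, quadrupling bandwidth cuts the predicted time by 4x while extra peak flops would change nothing, exactly the kind of trade-off insight a data-movement-centric co-design model is meant to expose.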

  9. Magnetic resonance imaging-detected extramural venous invasion in rectal cancer before and after preoperative chemoradiotherapy. Diagnostic performance and prognostic significance

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Sun [Chung-Ang University Hospital, Department of Radiology, Seoul (Korea, Republic of); Chung-Ang University, College of Medicine and Graduate School of Medicine, Seoul (Korea, Republic of); National Cancer Centre, Department of Radiology, Goyang-si, Gyeonggi-do (Korea, Republic of); Kim, Min Ju; Hur, Bo Yun [National Cancer Centre, Department of Radiology, Goyang-si, Gyeonggi-do (Korea, Republic of); Park, Sung Chan; Hyun, Jong Hee; Chang, Hee Jin; Baek, Ji Yeon; Kim, Dae Yong; Oh, Jae Hwan [National Cancer Centre, Centre for Colorectal Cancer, Goyang, Gyeonggi-do (Korea, Republic of); Kim, Sun Young [National Cancer Centre, Centre for Colorectal Cancer, Goyang, Gyeonggi-do (Korea, Republic of); University of Ulsan College of Medicine, Department of Oncology, Asan Medical Centre, Seoul (Korea, Republic of)

    2018-02-15

    We evaluated the diagnostic performance of magnetic resonance imaging (MRI) in terms of identifying extramural venous invasion (EMVI) in rectal cancer patients with preoperative chemoradiotherapy (CRT) and its prognostic significance. During 2008-2010, 200 patients underwent surgery following preoperative CRT for rectal cancer. Two radiologists independently reviewed all pre- and post-CRT MRI retrospectively. We investigated diagnostic performance of pre-CRT MR-EMVI (MR-EMVI) and post-CRT MR-EMVI (yMR-EMVI), based on pathological EMVI as the standard of reference. We assessed correlation between MRI findings and patients' prognosis, such as disease-free survival (DFS) and overall survival (OS). Additionally, subgroup analysis in MR- or yMR-EMVI-positive patients was performed to confirm the significance of the severity of EMVI in MRI on patient's prognosis. The sensitivity and specificity of yMR-EMVI were 76.19% and 79.75% (area under the curve: 0.830), respectively. In univariate analysis, yMR-EMVI was the only significant MRI factor in DFS (P = 0.027). The mean DFS for yMR-EMVI (+) patients was significantly less than for yMR-EMVI (-) patients: 57.56 months versus 72.46 months. yMR-EMVI demonstrated good diagnostic performance. yMR-EMVI was the only significant EMVI-related MRI factor that correlated with patients' DFS in univariate analysis; however, it was not significant in multivariate analysis. (orig.)
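
    The headline diagnostic figures follow from elementary 2x2-table arithmetic. The counts below are one hypothetical split of the 200 patients that reproduces the reported yMR-EMVI sensitivity and specificity; they are not taken from the paper:

```python
def diagnostic_performance(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table against the reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split: 42 pathology-positive and 158 pathology-negative patients
sens, spec = diagnostic_performance(tp=32, fn=10, tn=126, fp=32)
```

    With 32 of 42 pathology-positive patients detected and 126 of 158 negatives correctly cleared, sensitivity and specificity come out at 76.19% and 79.75%, matching the reported values.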

  10. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second are statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  11. Modeling Friction Performance of Drill String Torsional Oscillation Using Dynamic Friction Model

    Directory of Open Access Journals (Sweden)

    Xingming Wang

    2017-01-01

    Full Text Available Drill string torsional and longitudinal oscillation can significantly reduce axial drag in horizontal drilling. An improved theoretical model for the analysis of the frictional force was proposed based on microscopic contact deformation theory and a bristle model. The established model, an improved dynamic friction model for drill strings in a wellbore, was used to determine the relationship between friction force changes and drill string torsional vibration. The model results were in good agreement with the experimental data, verifying the accuracy of the established model. The analysis of the influence of drilling mud properties indicated that there is an approximately linear relationship between the axial friction force and dynamic shear and viscosity. The influence of drill string torsional oscillation on the axial friction force is discussed. The results indicated that the drill string transverse velocity is a prerequisite for reducing axial friction, and that even a low amplitude of torsional vibration speed can significantly reduce axial friction; increasing the amplitude of the transverse vibration speed further, however, does not yield a comparable additional reduction. In addition, by incorporating typical field drilling parameters, this model can accurately describe the friction behavior and quantitatively predict the frictional resistance in horizontal drilling.
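
    The core mechanism, that transverse (torsional) motion diverts the friction vector away from the axial direction, can be shown with plain Coulomb friction resolved along the sliding-velocity direction. This is a deliberate simplification of the bristle-type dynamic model in the paper, and all parameter values are hypothetical:

```python
import math

def axial_friction(mu, normal_force, v_axial, v_transverse):
    """Coulomb friction resolved axially: the friction vector opposes the
    resultant sliding velocity, so transverse motion diverts part of it."""
    v = math.hypot(v_axial, v_transverse)
    if v == 0.0:
        return mu * normal_force       # worst case: friction fully opposes axial pull
    return mu * normal_force * v_axial / v

# Hypothetical contact: mu = 0.3, 1 kN normal force, 0.2 m/s axial speed
f_static = axial_friction(0.3, 1000.0, 0.2, 0.0)    # no oscillation
f_low_osc = axial_friction(0.3, 1000.0, 0.2, 1.0)   # modest transverse speed
f_high_osc = axial_friction(0.3, 1000.0, 0.2, 3.0)  # large transverse speed
```

    Most of the axial friction reduction already comes at the modest transverse speed, echoing the abstract's conclusion that increases beyond a low oscillation amplitude bring diminishing returns.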

  12. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear...... two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...

  13. Azimuth cut-off model for significant wave height investigation along coastal water of Kuala Terengganu, Malaysia

    Science.gov (United States)

    Marghany, Maged; Ibrahim, Zelina; Van Genderen, Johan

    2002-11-01

    The present work operationalizes the azimuth cut-off concept in the study of significant wave height. Three ERS-1 images acquired along the coastal waters of Terengganu, Malaysia were used. The quasi-linear transform was applied to map the SAR wave spectra into real ocean wave spectra. The azimuth cut-off was then used to model the significant wave height. The results show that the azimuth cut-off varied between the ERS-1 images from different periods, because the azimuth cut-off is a function of wind speed and significant wave height. It is of interest that the significant wave height modeled from the azimuth cut-off agrees well with ground wave conditions. It can be concluded that ERS-1 can be used as a monitoring tool for detecting significant wave height variation, and that the azimuth cut-off can be used to model the significant wave height. This means that the quasi-linear transform could be a good basis for studying significant wave height variation across different seasons.
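
    An empirical cut-off-to-wave-height model of this kind can be sketched as a least-squares linear fit to collocated SAR/buoy pairs. The data points below are invented for illustration and do not come from the ERS-1 scenes in the study:

```python
import numpy as np

# Invented collocated pairs: SAR azimuth cut-off wavelength [m] vs buoy Hs [m]
cutoff = np.array([110.0, 140.0, 170.0, 200.0, 230.0, 260.0])
hs = np.array([0.9, 1.3, 1.8, 2.1, 2.6, 3.0])

# Empirical linear model Hs = a * cut-off + b, fitted by least squares
a, b = np.polyfit(cutoff, hs, 1)

def hs_from_cutoff(lam_c):
    """Estimate significant wave height from an azimuth cut-off wavelength."""
    return a * lam_c + b
```

    Once calibrated against ground truth, such a regression lets each new SAR scene's azimuth cut-off be converted directly into a significant wave height estimate.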

  14. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    Science.gov (United States)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Performance engineering in the community atmosphere model

    International Nuclear Information System (INIS)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-01-01

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years

  16. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  17. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
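
    The uncertainty-propagation machinery, Latin hypercube sampling plus a correlation-based sensitivity measure, can be sketched as follows. The input ranges and the flux surrogate are illustrative stand-ins, not the Regulatory Guide 3.64 model or the site-specific parameter distributions:

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    """One stratified uniform sample per interval [i/n, (i+1)/n), shuffled per column."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

n = 1000
u = latin_hypercube(n, 3)

# Hypothetical uniform input ranges for three of the model inputs
emanation = 0.1 + 0.3 * u[:, 0]       # emanation coefficient [-]
inventory = 100.0 + 900.0 * u[:, 1]   # radionuclide inventory (arbitrary units)
diff_coef = 1e-7 + 9e-7 * u[:, 2]     # effective diffusion coefficient [m^2/s]

# Simplified radon flux density surrogate (illustrative only)
flux = emanation * inventory * np.sqrt(diff_coef)

# Sample-based sensitivity: |correlation| of each input with the output
sens = {name: abs(np.corrcoef(x, flux)[0, 1])
        for name, x in (("emanation", emanation),
                        ("inventory", inventory),
                        ("diffusion", diff_coef))}
```

    The same sampled inputs can feed variance-based measures (e.g. Sobol' indices) for a more rigorous ranking; the correlation coefficients above are the cheapest member of that family.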

  18. A more robust model of the biodiesel reaction, allowing identification of process conditions for significantly enhanced rate and water tolerance.

    Science.gov (United States)

    Eze, Valentine C; Phan, Anh N; Harvey, Adam P

    2014-03-01

    A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally, and validated numerically in a model implemented using MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiment and modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2min. Given the right set of conditions, the transesterification can reach over 95% conversion, before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor. Copyright © 2014 Elsevier Ltd. All rights reserved.
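
    The central finding, that conversion can exceed 95% before saponification losses become significant, can be reproduced qualitatively with a lumped two-reaction ODE model integrated by explicit Euler. The rate constants below are hypothetical round numbers, not the paper's fitted values, and the three-step TG→DG→MG sequence is lumped into one ester-forming reaction:

```python
# Hypothetical lumped rate constants (arbitrary consistent units, time in minutes)
K_TRANS = 10.0     # catalysed ester formation (lumped over TG -> DG -> MG)
K_SOAP = 0.004     # saponification of ester, which also consumes catalyst

def simulate(tg0=0.9, meoh0=5.4, cat0=0.05, dt=1e-3, t_end=2.0):
    """Explicit-Euler integration; concentrations in mol/L."""
    tg, meoh, ester, soap, cat = tg0, meoh0, 0.0, 0.0, cat0
    t, history = 0.0, []
    while t < t_end:
        r1 = K_TRANS * tg * meoh * cat    # ester formation
        r2 = K_SOAP * ester * cat         # soap formation (deactivates catalyst)
        tg -= r1 * dt
        meoh -= 3 * r1 * dt               # 3 mol methanol per mol triglyceride
        ester += (3 * r1 - r2) * dt
        soap += r2 * dt
        cat -= r2 * dt
        t += dt
        history.append((t, 1 - tg / tg0, soap))
    return history

hist = simulate()
conversion, soap = hist[-1][1], hist[-1][2]
```

    With these illustrative constants the triglyceride conversion passes 95% within the two-minute window while the soap concentration remains negligible, mirroring the regime the abstract describes; slowing the quench or the mixing would let the saponification term erode both ester and catalyst.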

  19. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING (translated): There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment, as well as effective assets at optimum life-cycle cost. A challenge in the manufacturing industry is the development of a performance model for assets that is integrated with business processes and strategies. The authors developed the APM2 model to meet this need. The model has a generic reference structure, supported by operational instructions that promote operations management. It facilitates performance management, business integration and continuous improvement, while also exposing industry to the latest developments in asset performance management.

  20. Modelling and assessment of dependent performance shaping factors through Analytic Network Process

    International Nuclear Information System (INIS)

    De Ambroggi, Massimiliano; Trucco, Paolo

    2011-01-01

    Despite continuous progress in research and applications, one of the major weaknesses of current HRA methods lies in their limited capability of modelling the mutual influences between performance shaping factors (PSFs). Indeed, at least two types of dependencies between PSFs can be defined: (i) dependency between the states of the PSFs; (ii) dependency between the influences (impacts) of the PSFs on the human performance. This paper introduces a method, based on the Analytic Network Process (ANP), for the quantification of the latter, where the overall contribution of each PSF (weight) to the human error probability (HEP) is eventually returned. The core of the method is the modelling process, articulated into two steps: firstly, a qualitative network of dependencies between PSFs is identified; then, the importance of each PSF is quantitatively assessed using ANP. The model allows two components of the PSF influence to be distinguished: the direct influence, which the considered PSF is able to exert by itself, notwithstanding the presence of other PSFs, and the indirect influence, which is the incremental influence of the considered PSF through its influence on other PSFs. A case study in Air Traffic Control is presented where the proposed approach is integrated into the cognitive simulator PROCOS. The results demonstrated a significant modification of the influence of PSFs on operator performance when dependencies are taken into account, underlining the importance of considering not only the possible correlation between the states of PSFs but also their mutual dependency in affecting human performance in complex systems.
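
    The ANP quantification step can be sketched directly: given a column-stochastic supermatrix of pairwise influences among PSFs, the limit of its powers yields overall weights that fold the indirect influences into the direct ones. The 3x3 influence matrix below is hypothetical:

```python
import numpy as np

# Hypothetical column-stochastic supermatrix: entry [i, j] is the relative
# influence of PSF i on PSF j (each column sums to one, as ANP requires)
W = np.array([
    [0.2, 0.5, 0.3],
    [0.5, 0.2, 0.4],
    [0.3, 0.3, 0.3],
])

# Raising the supermatrix to a high power gives the limit priorities:
# the stationary weights combining direct and indirect influence
M = np.linalg.matrix_power(W, 50)
weights = M[:, 0]          # all columns of the limit matrix coincide
```

    The limit weights generally differ from the direct influences in any single column of W, which is precisely the gap between modelling PSFs independently and accounting for their mutual dependencies.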

  1. Celecoxib does not significantly delay bone healing in a rat femoral osteotomy model: a bone histomorphometry study

    Directory of Open Access Journals (Sweden)

    Iwamoto J

    2011-12-01

    Full Text Available Jun Iwamoto1, Azusa Seki2, Yoshihiro Sato3, Hideo Matsumoto1. 1Institute for Integrated Sports Medicine, Keio University School of Medicine, Tokyo, Japan; 2Hamri Co, Ltd, Tokyo, Japan; 3Department of Neurology, Mitate Hospital, Fukuoka, Japan. Background and objective: The objective of the present study was to determine whether celecoxib, a cyclo-oxygenase-2 inhibitor, would delay bone healing in a rat femoral osteotomy model by examining bone histomorphometry parameters. Methods: Twenty-one 6-week-old female Sprague-Dawley rats underwent a unilateral osteotomy of the femoral diaphysis followed by intramedullary wire fixation; the rats were then divided into three groups: the vehicle administration group (control, n = 8), the vitamin K2 administration group (menatetrenone 30 mg/kg orally, five times a week; positive control, n = 5), and the celecoxib administration group (4 mg/kg orally, five times a week; n = 8). After 6 weeks of treatment, the wires were removed, and a bone histomorphometric analysis was performed on the bone tissue inside the callus. Results: The lamellar area relative to the bone area was significantly higher and the total area and woven area relative to the bone area were significantly lower in the vitamin K2 group than in the vehicle group. However, none of the structural parameters, such as the callus and bone area relative to the total area, lamellar and woven areas relative to the bone area, or the formative and resorptive parameters such as osteoclast surface, number of osteoclasts, osteoblast surface, osteoid surface, eroded surface, and bone formation rate per bone surface differed significantly between the vehicle and celecoxib groups. Conclusion: The present study implies that celecoxib may not significantly delay bone healing in a rat femoral osteotomy model based on the results of a bone histomorphometric analysis. Keywords: femoral osteotomy, bone healing, callus, rat, celecoxib

  2. Performance of a pavement solar energy collector: Model development and validation

    International Nuclear Information System (INIS)

    Guldentops, Gert; Nejad, Alireza Mahdavi; Vuye, Cedric; Van den bergh, Wim; Rahbar, Nima

    2016-01-01

    Highlights: • A novel numerical model is developed that predicts the thermal behavior of a pavement solar collector. • A parametric study is conducted on the sensitivity of the system to changes in design parameters. • A new methodology is developed to perform a long-term performance analysis of the system. - Abstract: Current aims regarding environmental protection, like reduction of fossil fuel consumption and greenhouse gas emissions, require the development of new technologies. These new technologies enable the production of renewable energy, which is both cleaner and more abundant in comparison to using fossil fuels for energy production. This necessity encourages researchers to develop new ways to capture solar energy, and if possible, store it for later use. In this paper, the Pavement Solar Collector (PSC), and its use to extract low temperature thermal energy, is studied. Such a system, which harvests energy by flowing water through a heat exchanger embedded in the pavement structure, could have a significant energy output since pavement materials tend to absorb large amounts of solar radiation. The main objective of this paper is to develop a modeling framework for the PSC system and validate it with a self-instructed experiment. Such a model will allow for a detailed parametric study of the system to optimize the design, as well as an investigation on the effect of aging (e.g. decreasing solar absorptivity) on the performance of the system. A long-term energy output of the system that is currently lacking is calculated based on results of the study on weather parameters. This newly acquired data could be the start of a comprehensive data set on the performance of a PSC, which leads to a comprehensive feasibility study of the system.

  3. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim: To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods: The impact of possible threats to and weaknesses of the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit, as a measure of profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results: The simulation models showed that the operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. Conclusion: The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
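
    The scenario arithmetic in the abstract can be laid out explicitly. The profit figures are taken from the abstract; decomposing them into deltas against the baseline is our illustration, not the paper's presentation:

```python
# Figures quoted in the abstract [EUR]
baseline = 470_723               # 2008 operating profit
threats_profit = 21_542          # all threats realised, weaknesses unchanged
opportunities_profit = 535_804   # strengths and opportunities utilised
combined_drop = 384_465          # reported profit drop when both occur

threats_delta = threats_profit - baseline               # effect of threats alone
opportunities_delta = opportunities_profit - baseline   # effect of opportunities alone
combined_profit = baseline - combined_drop              # profit under both scenarios
```

    Note that the reported combined drop (€384 465) is close to, but not exactly, the sum of the two separate deltas (€449 181 − €65 081 = €384 100), so the underlying profit-and-loss model is nearly, though not perfectly, additive across these scenarios.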

  4. Trickle bed reactor model to simulate the performance of commercial diesel hydrotreating unit

    Energy Technology Data Exchange (ETDEWEB)

    C. Murali; R.K. Voolapalli; N. Ravichander; D.T. Gokak; N.V. Choudary [Bharat Petroleum Corporation Ltd., Udyog Kendra (India). Corporate R&D Centre]

    2007-05-15

    A two-phase mathematical model was developed to simulate the performance of bench-scale and commercial hydrotreating reactors. The major hydrotreating reactions, namely hydrodesulphurization, hydrodearomatization and olefin saturation, were modeled. Experiments were carried out in a fixed bed reactor to study the effect of different process variables, and these results were used for estimating kinetic parameters. A significant amount of feed vaporization (20-50%) was estimated under normal operating conditions of DHDS, suggesting the importance of considering feed vaporization in DHDS modeling. The model was validated with plant operating data at close to ultra-low sulphur levels by correctly accounting for feed vaporization in the heat balance relations and appropriate use of hydrodynamic correlations. The model could adequately predict the product quality, reactor bed temperature profiles and chemical hydrogen consumption in the commercial plant. 14 refs., 7 figs., 6 tabs.

  5. A human capital predictive model for agent performance in contact centres

    Directory of Open Access Journals (Sweden)

    Chris Jacobs

    2011-10-01

    Research purpose: The primary focus of this article was to develop a theoretically derived human capital predictive model for agent performance in contact centres and Business Process Outsourcing (BPO) environments, based on a review of current empirical research literature. Motivation for the study: The study was motivated by the need for a human capital predictive model that can predict agent and overall business performance. Research design: A nonempirical (theoretical) research paradigm was adopted for this study, and more specifically a theory- or model-building approach was followed. A systematic review of published empirical research articles (for the period 2000–2009) in scholarly search portals was performed. Main findings: Eight building blocks of the human capital predictive model for agent performance in contact centres were identified. Forty-two of the human capital contact-centre-related articles are detailed in this study. Key empirical findings suggest that person–environment fit, job demands-resources, human resources management practices, engagement, agent well-being, agent competence, turnover intention, and agent performance are related to contact centre performance. Practical/managerial implications: The human capital predictive model serves as an operational management model that has performance implications for agents and ultimately influences the contact centre's overall business performance. Contribution/value-add: This research can contribute to the fields of human resource management (HRM), human capital, and performance management within the contact centre and BPO environment.

  6. Data Model Performance in Data Warehousing

    Science.gov (United States)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have become increasingly important in organizations that hold large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in those organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs interfaces that remain stable and consistent over a longer period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, which has three main tasks: data collection and organization, data analysis, and data interpretation. The results, examined with a statistical analysis method, show that there is no statistically significant difference among the data models used in data warehousing. Organizations can therefore utilize any of the four proposed data models when designing and developing a data warehouse.

  7. Modeling and evaluation of hand-eye coordination of surgical robotic system on task performance.

    Science.gov (United States)

    Gao, Yuanqian; Wang, Shuxin; Li, Jianmin; Li, Aimin; Liu, Hongbin; Xing, Yuan

    2017-12-01

    Robotic-assisted minimally invasive surgery changes the direct hand and eye coordination of traditional surgery to indirect instrument and camera coordination, which affects the ergonomics, operation performance, and safety. A camera, two instruments, and a target, as the descriptors, are used to construct the workspace correspondence and geometrical relationships in a surgical operation. A parametric model with a set of parameters is proposed to describe the hand-eye coordination of the surgical robot. Optimal values and acceptable ranges of these parameters are identified from two tasks: a 90° viewing angle had the longest completion time; a 60° instrument elevation angle and a 0° deflection angle had better performance; and there was no significant difference among manipulation angles and observing distances on task performance. This hand-eye coordination model provides evidence for robotic design, surgeon training, and robotic initialization to achieve dexterous and safe manipulation in surgery. Copyright © 2017 John Wiley & Sons, Ltd.

  8. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  9. An application of seasonal ARIMA models on group commodities to forecast Philippine merchandise exports performance

    Science.gov (United States)

    Natividad, Gina May R.; Cawiding, Olive R.; Addawe, Rizavel C.

    2017-11-01

    The increase in the merchandise exports of the country offers information about the Philippines' trading role within the global economy. Merchandise export statistics are used to monitor the country's overall production that is consumed overseas. This paper investigates the comparison between two models obtained by (a) clustering the commodity groups into two based on their proportional contribution to the total exports, and (b) treating only the total exports. Different seasonal autoregressive integrated moving average (SARIMA) models were then developed for the clustered commodities and for the total exports based on the monthly merchandise exports of the Philippines from 2011 to 2016. The data set used in this study was retrieved from the Philippine Statistics Authority (PSA), the central statistical authority in the country responsible for primary data collection. A test for significance of the difference between means at the 0.05 level was then performed on the forecasts produced. The result indicates that there is a significant difference between the means of the forecasts of the two models. Moreover, upon comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the models, it was found that the models used for the clustered groups outperform the model for the total exports.
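The model comparison step in this record rests on RMSE and MAE of competing forecasts. A minimal sketch of that accuracy comparison follows; the actual and forecast values are invented placeholders, not PSA data, and fitting the SARIMA models themselves would require a dedicated time-series library:

```python
import math

def rmse(actual, forecast):
    """Root mean square error between observed and forecast series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error between observed and forecast series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative monthly export values (billion USD) -- not PSA data.
actual = [5.2, 5.5, 5.1, 5.8, 6.0, 5.7]
clustered_model = [5.3, 5.4, 5.2, 5.7, 5.9, 5.8]   # sum of per-cluster forecasts
total_model = [5.0, 5.8, 4.9, 6.1, 5.6, 6.1]       # single model on total exports

for name, f in [("clustered", clustered_model), ("total", total_model)]:
    print(f"{name}: RMSE={rmse(actual, f):.3f} MAE={mae(actual, f):.3f}")
```

With these made-up numbers the clustered forecasts score lower on both criteria, mirroring the direction of the record's finding.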

  10. Shoulder Arthroscopy Simulator Training Improves Shoulder Arthroscopy Performance in a Cadaver Model

    Science.gov (United States)

    Henn, R. Frank; Shah, Neel; Warner, Jon J.P.; Gomoll, Andreas H.

    2013-01-01

    Purpose The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaver model of shoulder arthroscopy. Methods Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and nine of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The two groups were compared with Student's t tests, and change over time within groups was analyzed with paired t tests. Results There were no observed differences between the two groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance. Conclusions Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaver model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. Clinical Relevance There may be a role for simulator training in shoulder arthroscopy education. PMID:23591380

  11. Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model.

    Science.gov (United States)

    Henn, R Frank; Shah, Neel; Warner, Jon J P; Gomoll, Andreas H

    2013-06-01

    The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaveric model of shoulder arthroscopy. Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and 9 of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The 2 groups were compared by use of Student t tests, and change over time within groups was analyzed with paired t tests. There were no observed differences between the 2 groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance. Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaveric model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. There may be a role for simulator training in shoulder arthroscopy education. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  12. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals.

  13. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    Probabilistic Risk Assessment (PRA) and Human Reliability Analysis (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are (1) distinct from individual human errors, and (2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA and, in turn, the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  14. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    The impact of reactive settler models on simulated whole-plant performance is evaluated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  15. Forecasting Performance of Asymmetric GARCH Stock Market Volatility Models

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-12-01

    Full Text Available We investigate the asymmetry between positive and negative returns in their effect on the conditional variance of the stock market index and incorporate these characteristics to form an out-of-sample volatility forecast. Contrary to prior evidence, however, the results in this paper suggest that no asymmetric GARCH model is superior to the basic GARCH(1,1) model. It is our prior knowledge that, for equity returns, it is unlikely that positive and negative shocks have the same impact on volatility. In order to reflect this intuition, we implement three diagnostic tests for volatility models, the Sign Bias Test, the Negative Size Bias Test, and the Positive Size Bias Test, as well as the tests against the alternatives of QGARCH and GJR-GARCH. The asymmetry test results indicate that the sign and the size of the unexpected return shock do not influence current volatility differently, which contradicts our presumption that there are asymmetric effects in stock market volatility. This result is in line with various diagnostic tests which are designed to determine whether the GARCH(1,1) volatility estimates adequately represent the data. The diagnostic tests in section 2 indicate that the GARCH(1,1) model for weekly KOSPI returns is robust to the misspecification test. We also investigate two representative asymmetric GARCH models, the QGARCH and GJR-GARCH models, for out-of-sample forecasting performance. The out-of-sample forecasting ability test reveals that no single model clearly outperforms the others. The GJR-GARCH and QGARCH models give mixed results in forecasting ability on all four criteria across all forecast horizons considered. Also, the predictive accuracy test of Diebold and Mariano, based on both absolute and squared prediction errors, suggests that the forecasts from the linear and asymmetric GARCH models need not be significantly different from each other.
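The distinction the record tests, between the symmetric GARCH(1,1) variance recursion and the GJR-GARCH leverage term that loads only on negative shocks, can be illustrated directly. A minimal sketch with made-up parameter values and simulated returns; a real study would estimate omega, alpha, beta, and gamma by maximum likelihood rather than fix them:

```python
import random

def garch_variance(returns, omega, alpha, beta, gamma=0.0):
    """Conditional variance recursion.

    gamma = 0 gives plain GARCH(1,1); gamma > 0 adds the GJR-GARCH
    leverage term that applies only to negative return shocks.
    """
    var0 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    sigma2 = [var0]
    for r in returns[:-1]:
        leverage = gamma * r * r if r < 0 else 0.0
        sigma2.append(omega + alpha * r * r + leverage + beta * sigma2[-1])
    return sigma2

# Simulated i.i.d. returns stand in for the weekly KOSPI series.
random.seed(0)
rets = [random.gauss(0, 0.02) for _ in range(500)]
symmetric = garch_variance(rets, omega=1e-6, alpha=0.05, beta=0.90)
asymmetric = garch_variance(rets, omega=1e-6, alpha=0.05, beta=0.90, gamma=0.10)
print(f"mean conditional variance, GARCH(1,1): {sum(symmetric)/len(symmetric):.6f}")
print(f"mean conditional variance, GJR-GARCH:  {sum(asymmetric)/len(asymmetric):.6f}")
```

Because the leverage term is non-negative, the GJR path sits at or above the symmetric one whenever negative shocks occur, which is exactly the asymmetry the paper's bias tests probe for.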

  16. Modeling carbon dioxide sequestration in saline aquifers: Significance of elevated pressures and salinities

    International Nuclear Information System (INIS)

    Allen, D.E.; Strazisar, B.R.; Soong, Y.; Hedges, S.W.

    2005-01-01

    The ultimate capacity of saline formations to sequester carbon dioxide by solubility and mineral trapping must be determined by simulating sequestration with geochemical models. These models, however, are only as reliable as the data and reaction schemes on which they are based. Several models have been used to make estimates of carbon dioxide solubility and mineral formation as a function of pressure and fluid composition. Intercomparison of modeling results indicates that failure to adjust all equilibrium constants to account for elevated carbon dioxide pressures results in significant errors in both solubility and mineral formation estimates. The absence of experimental data at high carbon dioxide pressures and high salinities makes verification of model results difficult. Results indicate that standalone solubility models that do not take mineral reactions into account will underestimate the total capacity of aquifers to sequester carbon dioxide in the long term through enhanced solubility and mineral trapping mechanisms. Overall, it is difficult to confidently predict the ultimate sequestration capacity of deep saline aquifers using geochemical models. (author)

  17. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings due to its low number of parameters and high ... Modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%.
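The PVWatts model referenced here is attractive for monitoring precisely because its DC power estimate has so few parameters. A simplified sketch of its core equation, irradiance scaling plus a linear temperature-coefficient correction about the 25 °C reference; the module rating and operating conditions below are illustrative, not from the study:

```python
def pvwatts_dc_power(g_poa, t_cell, p_dc0, gamma_pdc=-0.0035):
    """Simplified PVWatts-style DC power estimate.

    g_poa     -- plane-of-array irradiance in W/m^2
    t_cell    -- cell temperature in degC
    p_dc0     -- nameplate DC power at standard test conditions (W)
    gamma_pdc -- power temperature coefficient per degC (typical ~ -0.35%/degC)
    """
    return (g_poa / 1000.0) * p_dc0 * (1.0 + gamma_pdc * (t_cell - 25.0))

# Illustrative string: 5 kW at STC, high irradiance, warm cell.
print(f"{pvwatts_dc_power(g_poa=950, t_cell=45, p_dc0=5000):.0f} W")
```

The handful of parameters (nameplate power and a temperature coefficient) is what makes per-string fitting from a few hundred high-irradiance samples feasible.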

  18. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  19. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    Science.gov (United States)

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine-based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and input/output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked against the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or achieve much better strong and weak scalability than traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
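The divide-and-conquer sorted merging that the schemas implement across Hadoop, HBase, and Spark reduces, on a single machine, to a k-way merge of pre-sorted streams keyed by genomic location. A minimal stdlib sketch; the tuple layout is an assumed simplification of VCF records, not the paper's actual schema:

```python
import heapq

def sorted_merge(*sorted_streams):
    """k-way merge of pre-sorted variant streams by genomic location.

    Each stream yields (chromosome_index, position, payload) tuples that
    are already sorted -- the same invariant a sorted VCF shard provides.
    """
    return heapq.merge(*sorted_streams, key=lambda rec: (rec[0], rec[1]))

shard_a = [(1, 100, "A"), (1, 500, "C"), (2, 42, "G")]
shard_b = [(1, 250, "T"), (2, 7, "A")]
merged = list(sorted_merge(shard_a, shard_b))
print(merged)
```

`heapq.merge` streams its output lazily, which is the same property the distributed schemas exploit to keep each merge stage bottleneck-free rather than materializing all inputs at once.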

  20. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers: the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD...

  1. Off gas condenser performance modelling

    International Nuclear Information System (INIS)

    Cains, P.W.; Hills, K.M.; Waring, S.; Pratchett, A.G.

    1989-12-01

    A suite of three programmes has been developed to model the ruthenium decontamination performance of a vitrification plant off-gas condenser. The stages of the model are: condensation of water vapour, NOx absorption in the condensate, and RuO4 absorption in the condensate. Juxtaposition of these stages gives a package that may be run on an IBM-compatible desktop PC. Experimental work indicates that the criterion [HNO2] > 10[RuO4], used to determine RuO4 destruction in solution, is probably realistic under condenser conditions. Vapour pressures of RuO4 over aqueous solutions at 70-90 °C are slightly lower than the values given by extrapolating the ln Kp vs. 1/T relation derived from lower-temperature data. (author)

  2. The Social Responsibility Performance Outcomes Model: Building Socially Responsible Companies through Performance Improvement Outcomes.

    Science.gov (United States)

    Hatcher, Tim

    2000-01-01

    Considers the role of performance improvement professionals and human resources development professionals in helping organizations realize the ethical and financial power of corporate social responsibility. Explains the social responsibility performance outcomes model, which incorporates the concepts of societal needs and outcomes. (LRW)

  3. A model for evaluating the social performance of construction waste management.

    Science.gov (United States)

    Yuan, Hongping

    2012-06-01

    Existing literature shows that considerable research effort has been devoted to the economic performance of construction waste management (CWM), while less attention has been paid to investigating the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable in reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
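The system dynamics approach used here boils down to numerically integrating stocks and flows over time. A minimal sketch of that mechanism; the waste-backlog stock, rates, and coefficients are invented for illustration, and the study's iThink model involves many more interacting variables:

```python
def simulate_stock(initial, inflow, outflow, steps, dt=1.0):
    """Euler integration of a single stock-and-flow structure:
    stock(t+dt) = stock(t) + dt * (inflow(t) - outflow(stock)),
    which is the numerical core of SD tools such as iThink."""
    stock, history = initial, [initial]
    for t in range(steps):
        stock += dt * (inflow(t) - outflow(stock))
        history.append(stock)
    return history

# Illustrative CWM stock: on-site accumulated waste (tonnes), with a
# constant generation rate and removal proportional to the backlog.
waste = simulate_stock(initial=0.0, inflow=lambda t: 10.0,
                       outflow=lambda s: 0.2 * s, steps=30)
print(f"final backlog: {waste[-1]:.1f} t (steady state = 10 / 0.2 = 50 t)")
```

Changing the inflow or outflow functions here plays the same role as testing management measures on the SD model's experimental platform.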

  4. Modeling Driving Performance Using In-Vehicle Speech Data From a Naturalistic Driving Study.

    Science.gov (United States)

    Kuo, Jonny; Charlton, Judith L; Koppel, Sjaan; Rudin-Brown, Christina M; Cross, Suzanne

    2016-09-01

    We aimed to (a) describe the development and application of an automated approach for processing in-vehicle speech data from a naturalistic driving study (NDS), (b) examine the influence of child passenger presence on driving performance, and (c) model this relationship using in-vehicle speech data. Parent drivers frequently engage in child-related secondary behaviors, but the impact on driving performance is unknown. Applying automated speech-processing techniques to NDS audio data would facilitate the analysis of in-vehicle driver-child interactions and their influence on driving performance. Speech activity detection and speaker diarization algorithms were applied to audio data from a Melbourne-based NDS involving 42 families. Multilevel models were developed to evaluate the effect of speech activity and the presence of child passengers on driving performance. Speech activity was significantly associated with velocity and steering angle variability. Child passenger presence alone was not associated with changes in driving performance. However, speech activity in the presence of two child passengers was associated with the most variability in driving performance. The effects of in-vehicle speech on driving performance in the presence of child passengers appear to be heterogeneous, and multiple factors may need to be considered in evaluating their impact. This goal can potentially be achieved within large-scale NDS through the automated processing of observational data, including speech. Speech-processing algorithms enable new perspectives on driving performance to be gained from existing NDS data, and variables that were once labor-intensive to process can be readily utilized in future research. © 2016, Human Factors and Ergonomics Society.
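The speech activity detection step described above can be illustrated, in its very simplest form, as short-time energy thresholding over audio frames. This toy sketch (synthetic frames and an arbitrary threshold) stands in for the far more sophisticated detection and diarization algorithms the study applied to the NDS audio:

```python
def speech_activity(frames, threshold):
    """Flag frames whose short-time energy exceeds a threshold -- the
    simplest possible form of speech activity detection."""
    energies = [sum(s * s for s in frame) / len(frame) for frame in frames]
    return [e > threshold for e in energies]

# Three synthetic 160-sample frames: silence, speech-like, silence.
frames = [[0.01] * 160, [0.3] * 160, [0.02] * 160]
flags = speech_activity(frames, threshold=0.01)
print(flags)
```

Frame-level flags like these are what get aggregated into the speech-activity variable that the multilevel models then relate to velocity and steering variability.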

  5. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: (1) an incremental test to exhaustion to determine VO2max and PTV; (2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and (3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations were found for the remaining predictors, with coefficients above 0.84 and statistical power above 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
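The single-predictor (PTV-only) model reported here, together with its standard error of the estimate, can be sketched with ordinary least squares. The runner data below are fabricated for illustration and do not reproduce the study's coefficients or its 1.9 min SEE:

```python
import math

def fit_simple_ols(x, y):
    """Least-squares fit of y = a + b*x (one predictor, as in the
    unadjusted PTV-only model)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def see(x, y, a, b):
    """Standard error of the estimate for the fitted line."""
    resid_ss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return math.sqrt(resid_ss / (len(x) - 2))

# Illustrative data: peak treadmill velocity (km/h) vs 10 km time (min).
ptv = [17.0, 18.5, 19.0, 20.0, 21.5, 22.0]
time_10k = [44.0, 41.5, 40.8, 38.9, 36.1, 35.4]
a, b = fit_simple_ols(ptv, time_10k)
print(f"time = {a:.1f} {b:+.2f} * PTV, SEE = {see(ptv, time_10k, a, b):.2f} min")
```

The negative slope reflects the expected relationship: a higher peak treadmill velocity predicts a shorter 10 km race time. Adding allometrically scaled predictors (e.g. RE^0.60) extends this to the multiple-regression case.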

  6. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
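    The AUC used to compare these models has a useful probabilistic reading: it is the probability that a randomly chosen faller receives a higher predicted risk than a randomly chosen non-faller. A stdlib sketch of that rank-based (Mann-Whitney) computation, on hypothetical risk scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg); ties count half (Mann-Whitney form)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted fall risks for fallers vs. non-fallers (illustrative)
fallers = [0.8, 0.6, 0.55, 0.7]
non_fallers = [0.3, 0.5, 0.2, 0.6, 0.4]
print(f"AUC = {auc(fallers, non_fallers):.3f}")
```

    An AUC of 0.5 is chance-level discrimination; the abstract's 0.69 vs. 0.77 difference reflects that recurrent falls were easier to predict than any fall.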

  7. How motivation affects academic performance: a structural equation modelling analysis.

    Science.gov (United States)

    Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G

    2013-03-01

    Few studies in medical education have examined the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. Structural Equation Modelling was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well: Chi square = 1.095, df = 3, p = 0.778, RMSEA = 0.000. The model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups, as expected. In conclusion, RAM positively correlated with academic performance through a deep study strategy and higher study effort. This model seems valid in medical education in subgroups such as males, females, and students selected by qualitative and weighted lottery selection.
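    The reported fit statistics are consistent: with chi-square below its degrees of freedom, the standard RMSEA formula returns exactly zero. A small sketch of that formula (the values plugged in are the ones from the abstract):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a chi-square fit statistic.
    Standard formula: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract: chi2 = 1.095, df = 3, N = 383 students
print(rmsea(1.095, 3, 383))  # chi2 < df, so RMSEA = 0.0
```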

  8. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Full Text Available Focusing on identifying the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was confirmed, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy have considerable influence and play a key role in determining performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.
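    A first step in any such multilevel analysis is asking how much performance variance sits between industries versus within them, summarized by the intraclass correlation (ICC). A stdlib sketch on hypothetical firm-level ROA grouped by industry (all names and numbers are illustrative, not the study's data):

```python
from statistics import mean

# Hypothetical ROA values for firms grouped by industry (illustrative only)
groups = {
    "industry_a": [0.10, 0.12, 0.11, 0.13],
    "industry_b": [0.05, 0.06, 0.04, 0.05],
    "industry_c": [0.09, 0.08, 0.10, 0.09],
}

grand = mean(v for vals in groups.values() for v in vals)

# Between-group variance: spread of industry means around the grand mean
between = mean((mean(vals) - grand) ** 2 for vals in groups.values())
# Within-group variance: spread of firms around their own industry mean
within = mean((v - mean(vals)) ** 2 for vals in groups.values() for v in vals)

icc = between / (between + within)
print(f"ICC = {icc:.2f}  (share of performance variance at the industry level)")
```

    A high ICC would justify the multilevel specification; the paper's finding is that the firm-level (corporate strategy) component dominates.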

  9. Physical factors underlying the association between lower walking performance and falls in older people: a structural equation model.

    Science.gov (United States)

    Shimada, Hiroyuki; Tiedemann, Anne; Lord, Stephen R; Suzukawa, Megumi; Makizako, Hyuma; Kobayashi, Kumiko; Suzuki, Takao

    2011-01-01

    The purpose of this study was to determine the interrelationships between lower limb muscle performance, balance, gait and falls in older people using structural equation modeling. Study participants were 213 people aged 65 years and older (mean age, 80.0 ± 7.1 years) who used day-care services in Japan. The outcome measures were the history of falls over the preceding three months and physical risk factors for falling, including performance in the chair stand test (CST), one-leg standing test (OLS), tandem walk test, 6 m walking time, and the timed up-and-go (TUG) test. Thirty-nine (18.3%) of the 213 participants had fallen at least once during the preceding 3 months. The fall group had significantly slower 6 m walking speed and took significantly longer to complete the TUG test than the non-fall group. In a structural equation model, performance in the CST contributed significantly to gait function, and low gait function was significantly and directly associated with falls in older people. This suggests that task-specific strength exercise as well as general mobility retraining should be important components of exercise programs designed to reduce falls in older people. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  10. There are lots of big fish in this pond: The role of peer overqualification on task significance, perceived fit, and performance for overqualified employees.

    Science.gov (United States)

    Hu, Jia; Erdogan, Berrin; Bauer, Talya N; Jiang, Kaifeng; Liu, Songbo; Li, Yuhui

    2015-07-01

    Research has uncovered mixed results regarding the influence of overqualification on employee performance outcomes, suggesting the existence of boundary conditions for such an influence. Using relative deprivation theory (Crosby, 1976) as the primary theoretical basis, in the current research, we examine the moderating role of peer overqualification and provide insights to the questions regarding whether, when, and how overqualification relates to employee performance. We tested the theoretical model with data gathered across three phases over 6 months from 351 individuals and their supervisors in 72 groups. Results showed that when working with peers whose average overqualification level was high, as opposed to low, employees who felt overqualified for their jobs perceived greater task significance and person-group fit, and demonstrated higher levels of in-role and extra-role performance. We discuss theoretical and managerial implications for overqualification at the individual level and within the larger group context. (c) 2015 APA, all rights reserved.

  11. Pavement Pre- and Post-Treatment Performance Models Using LTPP Data

    OpenAIRE

    Lu, Pan; Tolliver, Denver

    2012-01-01

    This paper determines that pavement performance, measured by the International Roughness Index (IRI), is affected by exogenous interventions such as pavement age, precipitation level, freeze-thaw level, and lower-level preservation maintenance strategies. An exponential function of pavement age was used to represent pavement IRI performance curves. Moreover, this paper demonstrates a method which calculates short-term post-treatment pavement performance models from maintenance effect models and pre-treatment performance models.
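    An exponential performance curve IRI(t) = IRI0 · e^(b·t) can be fitted by taking logarithms and running ordinary least squares on ln(IRI) versus age. A stdlib sketch with hypothetical roughness readings (the exponential form is from the abstract; all data are illustrative):

```python
import math

# Hypothetical IRI observations (m/km) at pavement ages in years (illustrative)
age = [1, 3, 5, 8, 12]
iri = [1.05, 1.18, 1.33, 1.60, 2.05]

# Fit ln(IRI) = ln(IRI0) + b*age by ordinary least squares
x, y = age, [math.log(v) for v in iri]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
iri0 = math.exp(my - b * mx)

print(f"IRI(t) = {iri0:.2f} * exp({b:.3f} * t)")
predicted_15 = iri0 * math.exp(b * 15)  # extrapolated roughness at 15 years
```

    In a pre-/post-treatment setting, one such curve would be fitted to each period, with a maintenance event resetting or shifting the curve.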

  12. A Meta-analysis for the Diagnostic Performance of Transient Elastography for Clinically Significant Portal Hypertension.

    Science.gov (United States)

    You, Myung-Won; Kim, Kyung Won; Pyo, Junhee; Huh, Jimi; Kim, Hyoung Jung; Lee, So Jung; Park, Seong Ho

    2017-01-01

    We aimed to evaluate the correlation between liver stiffness measurement using transient elastography (TE-LSM) and hepatic venous pressure gradient and the diagnostic performance of TE-LSM in assessing clinically significant portal hypertension through meta-analysis. Eleven studies were included from thorough literature research and selection processes. The summary correlation coefficient was 0.783 (95% confidence interval [CI], 0.737-0.823). Summary sensitivity, specificity and area under the hierarchical summary receiver operating characteristic curve (AUC) were 87.5% (95% CI, 75.8-93.9%), 85.3% (95% CI, 76.9-90.9%) and 0.9, respectively. The subgroup with low cut-off values of 13.6-18 kPa had better summary estimates (sensitivity 91.2%, specificity 81.3% and partial AUC 0.921) than the subgroup with high cut-off values of 21-25 kPa (sensitivity 71.2%, specificity 90.9% and partial AUC 0.769). In summary, TE-LSM correlated well with hepatic venous pressure gradient and represented good diagnostic performance in diagnosing clinically significant portal hypertension. For use as a sensitive screening tool, we propose using low cut-off values of 13.6-18 kPa in TE-LSM. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
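    A summary correlation coefficient like the 0.783 above is conventionally obtained by Fisher z-transforming each study's r, averaging with inverse-variance weights (n − 3 for a correlation), and back-transforming. A stdlib sketch of that fixed-effect pooling, using made-up (r, n) pairs in place of the 11 included studies:

```python
import math

def pooled_r(studies):
    """Fixed-effect pooled correlation via Fisher z, weights n - 3."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z-transform of r
        w = n - 3                # inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z to r

# Hypothetical (r, n) pairs standing in for the included studies
studies = [(0.81, 60), (0.75, 45), (0.79, 90), (0.70, 30)]
print(f"pooled r = {pooled_r(studies):.3f}")
```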

  13. Predicting detection performance with model observers: Fourier domain or spatial domain?

    Science.gov (United States)

    Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia

    2016-02-27

    The use of Fourier domain model observers is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have a noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated against human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated: a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images.
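    Human and model observer performance in a 2AFC test are commonly compared on the detectability-index scale, using the standard relation PC = Φ(d′/√2), where Φ is the standard normal CDF. A small stdlib sketch of that conversion:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pc_2afc(d_prime):
    """Proportion correct in a 2-alternative forced choice task: PC = Phi(d'/sqrt(2))."""
    return phi(d_prime / math.sqrt(2.0))

print(f"d'=0 -> PC={pc_2afc(0.0):.2f}")  # chance performance, PC = 0.50
print(f"d'=2 -> PC={pc_2afc(2.0):.2f}")
```

    Inverting this relation turns measured 2AFC percent-correct values into d′, the quantity the model observers predict.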

  14. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores temporal topological relationships together with the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of the temporal query on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.
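    The core idea, storing an explicit link between successive versions of a feature so that history queries walk links instead of comparing time intervals, can be sketched as a small data structure. This is a hypothetical illustration of the concept, not the paper's actual schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeatureVersion:
    """One state of a feature, valid over [start, end); end=None means current.
    Hypothetical structure illustrating the idea, not the paper's schema."""
    feature_id: str
    attrs: dict
    start: int                                # e.g. year the version became valid
    end: Optional[int] = None
    prev: Optional["FeatureVersion"] = None   # explicit 'succeeds' relationship

class FeatureHistory:
    def __init__(self):
        self.current = {}

    def update(self, feature_id: str, attrs: dict, when: int) -> None:
        old = self.current.get(feature_id)
        if old is not None:
            old.end = when                    # close the superseded version
        self.current[feature_id] = FeatureVersion(feature_id, attrs, when, prev=old)

    def history(self, feature_id: str) -> List[dict]:
        """Walk explicit links instead of comparing time intervals."""
        out, v = [], self.current.get(feature_id)
        while v is not None:
            out.append(v.attrs)
            v = v.prev
        return out

parcels = FeatureHistory()
parcels.update("parcel-42", {"owner": "A"}, 1995)
parcels.update("parcel-42", {"owner": "B"}, 2003)
print(parcels.history("parcel-42"))  # newest version first
```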

  15. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people......, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equation modelling to assess our measurement models...... and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all...

  16. A Bibliometric Analysis and Review on Performance Modeling Literature

    Directory of Open Access Journals (Sweden)

    Barbara Livieri

    2015-04-01

    Full Text Available In management practice, performance indicators are considered a prerequisite to making informed decisions in line with the organization's goals. On the other hand, indicators summarize compound phenomena in a few digits, which can lead to inadequate decisions, biased by information loss and conflicting values. Model-driven approaches in enterprise engineering can be very effective at avoiding these pitfalls, or at keeping them under control. For that reason, “performance modeling” is well positioned to play a primary role in the “model driven enterprise” scenario, together with process, information and other enterprise-related aspects. In this perspective, we propose a systematic review of the literature on performance modeling in order to retrieve, classify, and summarize existing research, identify the core authors, and define areas and opportunities for future research.

  17. A Practical Model to Perform Comprehensive Cybersecurity Audits

    Directory of Open Access Journals (Sweden)

    Regner Sabillon

    2018-03-01

    Full Text Available These days organizations continually face being targets of cyberattacks and cyberthreats; the sophistication and complexity of modern cyberattacks and the modus operandi of cybercriminals, including Techniques, Tactics and Procedures (TTP), keep growing at unprecedented rates. Cybercriminals are always adopting new strategies to plan and launch cyberattacks based on existing cybersecurity vulnerabilities and exploiting end users by using social engineering techniques. Cybersecurity audits are extremely important to verify that information security controls are in place and to detect weaknesses such as nonexistent or obsolete cybersecurity controls. This article presents an innovative and comprehensive cybersecurity audit model. The CyberSecurity Audit Model (CSAM) can be implemented to perform internal or external cybersecurity audits. This model can be used to perform single cybersecurity audits or can be part of any corporate audit program to improve cybersecurity controls. An information security or cybersecurity audit team has the option either to perform a full audit of all cybersecurity domains or to select specific domains to audit areas that need control verification and hardening. The CSAM has 18 domains; Domain 1 is specific to Nation States, and Domains 2-18 can be implemented at any organization. The organization can be any small, medium or large enterprise, and the model is also applicable to any Non-Profit Organization (NPO).

  18. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  19. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two-dimensional (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with the measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady-state operation. Overall the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► The transient furnace response to changing furnace throughput rates was examined. ► No significant differences were found between the predictions of the 2D and 3D models.

  20. Development of a Generic Performance Measurement Model in an Emergency Department

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Lundager Forberg, Jakob

    , and the use of triage. All of the mentioned initiatives are new and not well validated to date. It would be desirable to enable measurement of each initiative's effects. The goal of this PhD project was to develop a performance measurement model for EDs. The new model comprises only the most important...... performance measures that provide an estimate of overall ED performance levels. Furthermore, a thorough analysis of the interdependencies between the included performance measures was conducted in order to gain deeper knowledge of the ED as a system. The model enables monitoring of how well the ED performs...... over time, including how performance is impacted by the various initiatives. In the end, the developed model will be an important management tool to meet the management's vision of providing the best possible care for the acute patient while achieving the highest possible utilisation of resources....

  1. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks. (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
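    The objective functions named above, the Nash-Sutcliffe Efficiency (NSE) and the Kling-Gupta Efficiency (KGE), have standard closed forms. A stdlib Python sketch with hypothetical runoff series (not visCOS's actual implementation):

```python
import math
from statistics import mean, pstdev

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; below 0 is worse than the mean."""
    mo = mean(obs)
    return 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
                 sum((o - mo) ** 2 for o in obs)

def kge(obs, sim):
    """Kling-Gupta Efficiency (2009 form): 1 minus the Euclidean distance from
    ideal correlation (r=1), variability ratio (alpha=1), and bias ratio (beta=1)."""
    mo, ms = mean(obs), mean(sim)
    so, ss = pstdev(obs), pstdev(sim)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [2.1, 3.4, 5.0, 4.2, 3.1]   # observed runoff (hypothetical)
sim = [2.0, 3.6, 4.8, 4.5, 3.0]   # simulated runoff (hypothetical)
print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```

    Evaluating these over sub-periods (hydrological years, seasons), as the package does, is just a matter of slicing the series before calling the functions.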

  2. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  3. Modelling Flat Spring performance using FEA

    International Nuclear Information System (INIS)

    Fatola, B O; Keogh, P; Hicks, B

    2009-01-01

    This paper reports how the stiffness of a Flat Spring can be predicted using nonlinear Finite Element Analysis (FEA). The analysis of a Flat Spring is a nonlinear problem involving contact mechanics, geometric nonlinearity and material property nonlinearity. Research has been focused on improving the accuracy of the model by identifying and exploring the significant assumptions contributing to errors. This paper presents results from some of the models developed using FEA software. The validation process is shown to identify where improvements can be made to the model assumptions to increase the accuracy of prediction. The goal is to achieve an accuracy level of ±10 % as the intention is to replace practical testing with FEA modelling, thereby reducing the product development time and cost. Results from the FEA models are compared with experimental results to validate the accuracy.

  4. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-01-01

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  5. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  6. Balancing Model Performance and Simplicity to Predict Postoperative Primary Care Blood Pressure Elevation.

    Science.gov (United States)

    Schonberger, Robert B; Dai, Feng; Brandt, Cynthia A; Burg, Matthew M

    2015-09-01

    Because of uncertainty regarding the reliability of perioperative blood pressures and traditional notions downplaying the role of anesthesiologists in longitudinal patient care, there is no consensus for anesthesiologists to recommend postoperative primary care blood pressure follow-up for patients presenting for surgery with an increased blood pressure. The decision of whom to refer should ideally be based on a predictive model that balances performance with ease-of-use. If an acceptable decision rule was developed, a new practice paradigm integrating the surgical encounter into broader public health efforts could be tested, with the goal of reducing long-term morbidity from hypertension among surgical patients. Using national data from US veterans receiving surgical care, we determined the prevalence of poorly controlled outpatient clinic blood pressures ≥140/90 mm Hg, based on the mean of up to 4 readings in the year after surgery. Four increasingly complex logistic regression models were assessed to predict this outcome. The first included the mean of 2 preoperative blood pressure readings; other models progressively added a broad array of demographic and clinical data. After internal validation, the C-statistics and the Net Reclassification Index between the simplest and most complex models were assessed. The performance characteristics of several simple blood pressure referral thresholds were then calculated. Among 215,621 patients, poorly controlled outpatient clinic blood pressure was present postoperatively in 25.7% (95% confidence interval [CI], 25.5%-25.9%) including 14.2% (95% CI, 13.9%-14.6%) of patients lacking a hypertension history. The most complex prediction model demonstrated statistically significant, but clinically marginal, improvement in discrimination over a model based on preoperative blood pressure alone (C-statistic, 0.736 [95% CI, 0.734-0.739] vs 0.721 [95% CI, 0.718-0.723]; P for difference 1 of 4 patients (95% CI, 25
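    The "performance characteristics of several simple blood pressure referral thresholds" amount to the sensitivity, specificity, and positive predictive value of a cut-off rule. A stdlib sketch on hypothetical patients (the 140 mm Hg cut-off mirrors the outcome definition in the abstract; all data are invented):

```python
def threshold_performance(preop_sbp, poorly_controlled, cutoff):
    """Sensitivity/specificity/PPV of the rule 'refer if mean preop SBP >= cutoff'
    against the outcome 'poorly controlled BP in the year after surgery'."""
    tp = fp = fn = tn = 0
    for sbp, outcome in zip(preop_sbp, poorly_controlled):
        referred = sbp >= cutoff
        if referred and outcome:
            tp += 1
        elif referred and not outcome:
            fp += 1
        elif not referred and outcome:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Hypothetical patients: mean preoperative SBP and postoperative outcome
sbp = [118, 152, 136, 161, 128, 144, 149, 122]
outcome = [False, True, False, True, False, True, False, False]
sens, spec, ppv = threshold_performance(sbp, outcome, cutoff=140)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```

    Sweeping `cutoff` over a range of values is what generates the trade-off curve a referral threshold would be chosen from.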

  7. Modeling within-word and cross-word pronunciation variation to improve the performance of a Dutch CSR

    OpenAIRE

    Kessens, J.M.; Wester, M.; Strik, H.

    1999-01-01

    This paper describes how the performance of a continuous speech recognizer for Dutch has been improved by modeling within-word and cross-word pronunciation variation. Within-word variants were automatically generated by applying five phonological rules to the words in the lexicon. For the within-word method, a significant improvement is found compared to the baseline. Cross-word pronunciation variation was modeled using two different methods: 1) adding cross-word variants directly to the lexicon...

  8. Rising to the challenge : A model of contest performance

    OpenAIRE

    DesAutels, Philip; Berthon, Pierre; Salehi-Sangari, Esmail

    2011-01-01

    Contests are a ubiquitous form of promotion widely adopted by financial services advertisers, yet, paradoxically, academic research on them is conspicuous in its absence. This work addresses this gap by developing a model of contest engagement and performance. Using motivation theory, factors that drive participant engagement are modeled, and engagement's effect on the experience and marketing success of the contest is specified. Measures of contest performance, in-contest engagement and post-contest...

  9. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals in South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method used to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).

  10. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  11. Modelling fuel cell performance using artificial intelligence

    Science.gov (United States)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

    Over the last few years, fuel cell technology has been steadily increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. At the same time, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network can be described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs reflect the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effects on the quality of the performance modelling, are discussed. Results from the analysis as well as the limitations of the approach are presented and discussed.
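    As a hedged sketch of the approach (not the authors' network, data, or training setup), a small one-hidden-layer network can be trained by plain gradient descent to fit a synthetic polarization curve relating current density to cell voltage:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic polarization curve: cell voltage vs. current density (A/cm^2);
# coefficients are illustrative, not taken from the paper
i = np.linspace(0.05, 1.0, 200)[:, None]
v = 1.0 - 0.05 * np.log(i / 0.01) - 0.2 * i - 0.03 * np.exp(2.0 * i)
v += rng.normal(0.0, 0.005, v.shape)

# one hidden layer of tanh units, trained by plain batch gradient descent
W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(np.mean((forward(i)[1] - v) ** 2))   # error before training

for _ in range(5000):
    h, pred = forward(i)
    err = pred - v                         # d(MSE)/d(pred), up to a factor
    gW2, gb2 = h.T @ err / len(i), err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1, gb1 = i.T @ dh / len(i), dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse1 = float(np.mean((forward(i)[1] - v) ** 2))   # error after training
```

    The network design choices the paper discusses (size, training algorithm, activation function) correspond here to the hidden width, the plain gradient-descent loop, and the tanh nonlinearity.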

  12. Modelling fuel cell performance using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Ogaji, S.O.T.; Singh, R.; Pilidis, P.; Diacakis, M. [Power Propulsion and Aerospace Engineering Department, Centre for Diagnostics and Life Cycle Costs, Cranfield University (United Kingdom)

    2006-03-09

    Over the last few years, fuel cell technology has been steadily increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. At the same time, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network can be described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs reflect the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effects on the quality of the performance modelling, are discussed. Results from the analysis as well as the limitations of the approach are presented and discussed. (author)

  13. Computational Modeling of Human Multiple-Task Performance

    National Research Council Canada - National Science Library

    Kieras, David E; Meyer, David

    2005-01-01

    This is the final report for a project that was a continuation of an earlier, long-term project on the development and validation of the EPIC cognitive architecture for modeling human cognition and performance...

  14. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  15. Modeling and Performance Analysis of Manufacturing Systems in ...

    African Journals Online (AJOL)

    Modeling and Performance Analysis of Manufacturing Systems in Footwear Industry. ... researcher to experiment with different variables and controls the manufacturing process ... In this study Arena simulation software is employed to model and measure ...

  16. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  17. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for, or weighting with, the propensity scores. The methods were compared on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance over full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
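    The comparison can be illustrated on synthetic data (invented coefficients, scikit-learn used for brevity; this is not the study's data or code): a full-covariate logistic model versus a model adjusting only for treatment and the estimated propensity score, scored by concordance (AUC) and Brier score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(42)
n = 4000
X = rng.normal(size=(n, 5))

# treatment depends on X0, X1; outcome depends on treatment plus X0, X2, X3,
# so a scalar propensity score cannot carry all the prognostic information
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
logit_y = 0.5 * t - 0.5 * X[:, 0] + X[:, 2] + 0.8 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_y)))

tr = np.arange(n) < n // 2          # first half trains, second half tests
te = ~tr

# full model: all covariates plus treatment
Xt = np.column_stack([X, t])
full = LogisticRegression(max_iter=1000).fit(Xt[tr], y[tr])
p_full = full.predict_proba(Xt[te])[:, 1]

# propensity-score model: estimate PS from X, then predict y from (t, PS) only
ps = LogisticRegression(max_iter=1000).fit(X[tr], t[tr]).predict_proba(X)[:, 1]
Tp = np.column_stack([t, ps])
ps_only = LogisticRegression(max_iter=1000).fit(Tp[tr], y[tr])
p_ps = ps_only.predict_proba(Tp[te])[:, 1]

auc_full, auc_ps = roc_auc_score(y[te], p_full), roc_auc_score(y[te], p_ps)
brier_full = brier_score_loss(y[te], p_full)
brier_ps = brier_score_loss(y[te], p_ps)
```

    Because the outcome depends on covariates that do not enter the propensity score, the full model discriminates and calibrates better, mirroring the study's conclusion.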

  18. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  19. Analyzing Financial Performance of Commercial Banks in India: Application of CAMEL Model

    Directory of Open Access Journals (Sweden)

    Prof. Dr. Mohi-ud-Din Sangmi

    Sound financial health of a bank is a guarantee not only to its depositors but is equally significant for its shareholders, employees and the economy as a whole. In line with this maxim, efforts have been made from time to time to measure the financial position of each bank and manage it efficiently and effectively. In this paper, an effort has been made to evaluate the financial performance of two major banks operating in northern India. This evaluation has been done using the CAMEL parameters, the latest model of financial analysis. Through this model, it is highlighted that the position of the banks under study is sound and satisfactory as far as their capital adequacy, asset quality, management capability and liquidity are concerned.

  20. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  1. Data modelling and performance of data base systems

    International Nuclear Information System (INIS)

    Rossiter, B.N.

    1984-01-01

    The three main methods of data modelling (hierarchical, network, and relational) are described together with their advantages and disadvantages. The hierarchical model has strictly limited applicability, but the other two are of general use, although the network model in many respects defines a storage structure whilst the relational model defines a logical structure. Because of this, network systems are more difficult to use than relational systems but are easier to tune to obtain efficient performance. More advanced models have been developed to capture more semantic detail, and two of these, RM/T and the role model, are discussed. (orig.)

  2. Teamwork skills, shared mental models, and performance in simulated trauma teams: an independent group design

    Directory of Open Access Journals (Sweden)

    Westli Heidi

    2010-08-01

    Background. Non-technical skills are seen as an important contributor to reducing adverse events and improving medical management in healthcare teams. Previous research on the effectiveness of teams has suggested that shared mental models facilitate coordination and team performance. The purpose of the study was to investigate whether demonstrated teamwork skills and behaviour indicating shared mental models would be associated with observed improved medical management in trauma team simulations. Methods. Revised versions of the 'Anesthetists' Non-Technical Skills Behavioural marker system' and 'Anti-Air Teamwork Observation Measure' were field tested in moment-to-moment observation of 27 trauma team simulations in Norwegian hospitals. Independent subject matter experts rated medical management in the teams. An independent group design was used to explore differences in teamwork skills between higher-performing and lower-performing teams. Results. Specific teamwork skills and behavioural markers were associated with indicators of good team performance. Higher- and lower-performing teams differed in information exchange, supporting behaviour and communication, with higher-performing teams showing more effective information exchange and communication, and fewer supporting behaviours. Behavioural markers of shared mental models predicted effective medical management better than teamwork skills. Conclusions. The present study replicates and extends previous research by providing new empirical evidence of the significance of specific teamwork skills and a shared mental model for the effective medical management of trauma teams. In addition, the study underlines the generic nature of teamwork skills by demonstrating their transferability from different clinical simulations like the anaesthesia environment to trauma care, as well as the potential usefulness of behavioural frequency analysis in future research on non-technical skills.

  3. LCP- LIFETIME COST AND PERFORMANCE MODEL FOR DISTRIBUTED PHOTOVOLTAIC SYSTEMS

    Science.gov (United States)

    Borden, C. S.

    1994-01-01

    The Lifetime Cost and Performance (LCP) Model was developed to assist in the assessment of Photovoltaic (PV) system design options. LCP is a simulation of the performance, cost, and revenue streams associated with distributed PV power systems. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the PV application. User-specified input parameters are available to describe PV system characteristics, site climatic conditions, utility purchase and sellback rate structures, discount and escalation rates, construction timing, and lifetime of the system. Such details as PV array orientation and tilt angle, PV module and balance-of-system performance attributes, and the mode of utility interconnection are user-specified. LCP assumes that the distributed PV system is utility grid interactive without dedicated electrical storage. In combination with a suitable economic model, LCP can provide an estimate of the expected net present worth of a PV system to the owner, as compared to electricity purchased from a utility grid. Similarly, LCP might be used to perform sensitivity analyses to identify those PV system parameters having significant impact on net worth. The user describes the PV system configuration to LCP via the basic electrical components. The module is the smallest entity in the PV system which is modeled. A PV module is defined in the simulation by its short circuit current, which varies over the system lifetime due to degradation and failure. Modules are wired in series to form a branch circuit. Bypass diodes are allowed between modules in the branch circuits. Branch circuits are then connected in parallel to form a bus. A collection of buses is connected in parallel to form an increment to capacity of the system. By choosing the appropriate series-parallel wiring design, the user can specify the current, voltage, and reliability characteristics of the system. LCP simulation of system performance is site
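    The net-present-worth idea behind LCP can be reduced to a minimal sketch (discounted energy revenue with degradation minus initial cost; LCP's module-level failure modeling and utility rate structures are omitted, and all figures below are illustrative):

```python
def pv_net_present_worth(capex, annual_kwh, price_per_kwh,
                         degradation, discount, years):
    """Net present worth of a grid-interactive PV system: discounted energy
    revenue minus initial cost. A deliberately simplified stand-in for LCP's
    detailed degradation/failure and rate-structure modeling."""
    npw = -capex
    for t in range(1, years + 1):
        # output shrinks each year as modules degrade
        energy = annual_kwh * (1.0 - degradation) ** (t - 1)
        # revenue in year t, discounted back to present value
        npw += energy * price_per_kwh / (1.0 + discount) ** t
    return npw

# illustrative figures only
npw = pv_net_present_worth(capex=10_000, annual_kwh=8_000, price_per_kwh=0.12,
                           degradation=0.01, discount=0.05, years=20)
```

    Running the same function over a grid of parameter values gives the kind of sensitivity analysis the abstract describes for identifying which parameters most affect net worth.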

  4. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve the performance of their logistics operations. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as the multi-product concept. The problem is modeled as a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
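    A minimal single-product, single-period version of such a network flow model can be set up as a transportation LP (the paper's full model is mixed-integer and multi-period; the data here are invented):

```python
import numpy as np
from scipy.optimize import linprog

# invented single-product network: 2 plants ship to 3 customers
supply = np.array([60.0, 50.0])
demand = np.array([30.0, 40.0, 40.0])
cost = np.array([[4.0, 6.0, 9.0],     # unit cost plant 0 -> customers
                 [5.0, 4.0, 7.0]])    # unit cost plant 1 -> customers
n_p, n_c = cost.shape

c = cost.ravel()                      # decision variables x_ij, row-major

A_ub = np.zeros((n_p, n_p * n_c))     # sum_j x_ij <= supply_i
for i in range(n_p):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1.0

A_eq = np.zeros((n_c, n_p * n_c))     # sum_i x_ij = demand_j
for j in range(n_c):
    A_eq[j, j::n_c] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (n_p * n_c), method="highs")
total_cost = res.fun                  # minimum total shipping cost
```

    Adding binary facility-open variables and a period index turns this LP into a mixed integer multi-period model of the kind the paper develops.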

  5. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Drawing on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  6. A keyword spotting model using perceptually significant energy features

    Science.gov (United States)

    Umakanthan, Padmalochini

    The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the keywords in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done against the widely used cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.

  7. Evaluating Performance of Safety Management and Occupational Health Using Total Quality Safety Management Model (TQSM)

    Directory of Open Access Journals (Sweden)

    E Mohammadfam

    2015-11-01

    Introduction: All organizations, whether public or private, require performance evaluation systems to support growth, stability, and development in competitive fields. One of the existing models for performance evaluation of occupational health and safety management is the Total Quality Safety Management (TQSM) model. The present study therefore aimed to evaluate the performance of safety and occupational health management using the TQSM model. Methods: In this descriptive-analytic study, the population consisted of 16 individuals, including managers, supervisors, and members of the technical protection and work health committee. The participants were asked to respond to the TQSM questionnaire before and after the implementation of Occupational Health & Safety Assessment Series 18001 (OHSAS 18001). Ultimately, the level of each program as well as the TQSM status were determined before and after the implementation of OHSAS 18001. Results: The study results showed that the score obtained by the company before OHSAS 18001 implementation was 43.7 out of 312. After implementing OHSAS 18001 and receiving the related certificate, the total safety program score the company obtained was 127.12 out of 312, a rise of 83.42 points (26.8%). The paired t-test revealed that the mean difference in TQSM scores before and after OHSAS 18001 implementation was significant (p < 0.05). Conclusion: The study findings demonstrated that TQSM can be regarded as an appropriate model for monitoring the performance of the safety management and occupational health system, since it possesses the ability to quantitatively evaluate system performance.

  8. Infant speech-sound discrimination testing: effects of stimulus intensity and procedural model on measures of performance.

    Science.gov (United States)

    Nozza, R J

    1987-06-01

    Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated either as though from a single-interval (yes-no) procedure, as is commonly done, or as though from a sustained attention (vigilance) task. Discrimination performance changed significantly with increases in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only a single stimulus intensity level within this range is used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is best represented by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
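    Under the single-interval (yes-no) treatment mentioned in the abstract, discrimination performance is conventionally summarized by the sensitivity index d' = z(hit rate) − z(false-alarm rate). A stdlib-only sketch with illustrative rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index for a single-interval (yes-no) task: z(H) - z(F)."""
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# illustrative hit/false-alarm pairs at three stimulus levels
ds = [round(d_prime(h, f), 2)
      for h, f in [(0.70, 0.30), (0.80, 0.25), (0.90, 0.20)]]
# ds == [1.05, 1.52, 2.12]: sensitivity rising with stimulus intensity
```

    A performance-intensity relationship like the one the study reports would appear as d' increasing across the three levels.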

  9. ERUPTION TO DOSE: COUPLING A TEPHRA DISPERSAL MODEL WITHIN A PERFORMANCE ASSESSMENT FRAMEWORK

    International Nuclear Information System (INIS)

    G. N. Keating, J. Pelletier

    2005-01-01

    The tephra dispersal model used by the Yucca Mountain Project (YMP) to evaluate the potential consequences of a volcanic eruption through the waste repository must incorporate simplifications in order to function within a large Monte-Carlo style performance assessment framework. That is, the explicit physics of the conduit, vent, and eruption column processes are abstracted to a 2-D, steady-state advection-dispersion model (ASHPLUME) that can be run quickly over thousands of realizations of the overall system model. Given the continuous development of tephra dispersal modeling techniques in the last few years, we evaluated the adequacy of this simplified model for its intended purpose within the YMP total system performance assessment (TSPA) model. We evaluated uncertainties inherent in the model simplifications, including (1) the instantaneous, steady-state vs. unsteady eruption, which affects column height, (2) constant wind conditions, and (3) the power-law distribution of the tephra blanket; comparisons were made to other models and published ash distributions. Spatial statistics are useful for evaluating differences in these model outputs vs. results using more complex wind, column height, and tephra deposition patterns. However, in order to assess the adequacy of the model for its intended use in TSPA, we evaluated the propagation of these uncertainties through FAR, the YMP ash redistribution model, which utilizes ASHPLUME tephra deposition results to calculate the concentration of nuclear waste-contaminated tephra at a dose-receptor population as a result of sedimentary transport and mixing processes on the landscape. Questions we sought to answer include: (1) What conditions of unsteadiness, wind variability, or departure from the simplified tephra distribution result in significant effects on waste concentration (related to the dose calculated for the receptor population)? (2) What criteria can be established for the adequacy of a tephra dispersal model within the TSPA

  10. Performance modeling of direct contact membrane distillation (DCMD) seawater desalination process using a commercial composite membrane

    KAUST Repository

    Lee, Junggil

    2015-01-10

    This paper presents the development of a rigorous theoretical model to predict the transmembrane flux of a flat sheet hydrophobic composite membrane, comprising both an active layer of polytetrafluoroethylene and a scrim-backing support layer of polypropylene, in the direct contact membrane distillation (DCMD) process. An integrated model includes the mass, momentum, species and energy balances for both retentate and permeate flows, coupled with the mass transfer of water vapor through the composite membrane and the heat transfer across the membrane and through the boundary layers adjacent to the membrane surfaces. Experimental results and model predictions for permeate flux and performance ratio are compared and shown to be in good agreement. The permeate flux through the composite layer can be ignored in the consideration of mass transfer pathways at the composite membrane. The effect of the surface porosity and the thickness of the active and support layers on the process performance of the composite membrane has also been studied. Among these parameters, surface porosity is identified as the main factor significantly influencing the permeate flux and performance ratio, while the relative influence of the surface porosity on the performance ratio is smaller than that on the flux.

  11. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  12. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essential to properly evaluate its restoring force characteristics under dynamic loading conditions and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain basic information on the dynamic restoring force characteristics and damping performance of shear walls, a dynamic test using a large shaking table, a static displacement control test and a pseudo-dynamic test on models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models, mainly due to the effect of loading rate. (K.I.)

  13. Significantly enhanced robustness and electrochemical performance of flexible carbon nanotube-based supercapacitors by electrodepositing polypyrrole

    Science.gov (United States)

    Chen, Yanli; Du, Lianhuan; Yang, Peihua; Sun, Peng; Yu, Xiang; Mai, Wenjie

    2015-08-01

    Here, we report robust, flexible CNT-based supercapacitor (SC) electrodes fabricated by electrodepositing polypyrrole (PPy) on freestanding vacuum-filtered CNT film. These electrodes demonstrate significantly improved mechanical properties (an ultimate tensile strength of 16 MPa) and greatly enhanced electrochemical performance (5.6 times larger areal capacitance). The major drawback of conductive polymer electrodes is fast capacitance decay caused by structural breakdown, which degrades cycling stability; no such decay is observed in our case. All-solid-state SCs assembled with the robust CNT/PPy electrodes exhibit excellent flexibility, long lifetime (95% capacitance retention after 10,000 cycles) and high electrochemical performance (a total device volumetric capacitance of 4.9 F/cm3). Moreover, a flexible SC pack is demonstrated to light up 53 LEDs or drive a digital watch, indicating the broad potential of our SCs for portable/wearable electronics.

  14. Predictive validity of a three-dimensional model of performance anxiety in the context of tae-kwon-do.

    Science.gov (United States)

    Cheng, Wen-Nuan Kara; Hardy, Lew; Woodman, Tim

    2011-02-01

    We tested the predictive validity of the recently validated three-dimensional model of performance anxiety (Chang, Hardy, & Markland, 2009) with elite tae-kwon-do competitors (N = 99). This conceptual framework emphasized the adaptive potential of anxiety by including a regulatory dimension (reflected by perceived control) along with the intensity-oriented dimensions of cognitive and physiological anxiety. Anxiety was assessed 30 min before a competitive contest using the Three-Factor Anxiety Inventory. Competitors rated their performance on a tae-kwon-do-specific performance scale within 30 min after completion of their contest. Moderated hierarchical regression analyses revealed initial support for the predictive validity of the three-dimensional performance anxiety model. The regulatory dimension of anxiety (perceived control) revealed significant main and interactive effects on performance. This dimension appeared to be adaptive, as performance was better under high than low perceived control, and best vs. worst performance was associated with highest vs. lowest perceived control, respectively. Results are discussed in terms of the importance of the regulatory dimension of anxiety.

  15. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
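The decision procedure described, choosing the array shape that minimizes modeled execution time, can be sketched with a generic cost model combining per-processor computation with communication startup and transfer costs. The model below is an illustration of the approach, not the paper's blocked factored Jacobi model:

```python
# Choose the processor array shape (P x Q) that minimizes a modeled
# execution time for a distributed matrix algorithm. The cost model is a
# generic illustration (compute/p + latency + bandwidth terms).

def exec_time(n, p, q, t_flop=1e-9, t_start=1e-5, t_word=1e-8):
    """Modeled time: balanced compute plus row/column communication."""
    procs = p * q
    compute = (n ** 3) * t_flop / procs           # ideal work split
    messages = p + q                              # sweeps along rows and columns
    comm = messages * (t_start + (n * n / procs) * t_word)
    return compute + comm

def best_shape(n, max_procs):
    """Enumerate all P x Q factorizations up to max_procs, pick the fastest."""
    shapes = [(p, total // p) for total in range(1, max_procs + 1)
              for p in range(1, total + 1) if total % p == 0]
    return min(shapes, key=lambda pq: exec_time(n, *pq))

p, q = best_shape(n=200, max_procs=512)
print(p, q, exec_time(200, p, q))
```

Note that the minimizer need not use all 512 processors: beyond some count the communication terms outweigh the compute savings, which mirrors the tradeoffs the paper reports.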

  16. The effect of various parameters of large scale radio propagation models on improving the performance of mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping a smooth transition on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the inclusion and removal of Base Stations from the active set are determined by initiation triggers. One of the initiation triggers is based on received signal strength. In this paper we observed the influence of the parameters of large-scale radio propagation models on improving the performance of mobile communications. The observation parameters for characterizing the performance of the specified mobile system are Drop Call, Radio Link Degradation Rate and Average Size of Active Set (AS). The simulated results show that increasing the altitude of the Base Station (BS) antenna and the Mobile Station (MS) antenna improves the received signal power level, which improves Radio Link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata’s propagation model contributed significantly to improvements in system performance parameters compared to Okumura’s propagation model and Lee’s propagation model.
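The standard Okumura–Hata urban path-loss formula (valid for 150–1500 MHz) makes the antenna-height effect concrete: raising either the base-station or the mobile antenna lowers the predicted loss. This is the textbook formula, not the paper's specific simulation setup:

```python
import math

def hata_urban_loss(f_mhz, h_bs_m, h_ms_m, d_km):
    """Okumura-Hata median path loss (dB) for urban areas; valid roughly
    for 150-1500 MHz, base height 30-200 m, mobile height 1-10 m, 1-20 km."""
    # Mobile antenna height correction for a small/medium city.
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_ms_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_bs_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_bs_m)) * math.log10(d_km))

# Raising either antenna lowers the loss, improving the received level.
print(hata_urban_loss(900, 30, 1.5, 5))   # baseline
print(hata_urban_loss(900, 50, 1.5, 5))   # taller base station
print(hata_urban_loss(900, 30, 3.0, 5))   # taller mobile station
```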

  17. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction given the quite small differences among elite-level performances.
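The two error metrics the comparison rests on, mean absolute percentage error and mean absolute error, can be sketched in a few lines. The start times below are illustrative, not the study's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error, in the units of the measurement (here seconds)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative 5 m start times (s): observed vs. two hypothetical models.
observed = [1.70, 1.75, 1.68, 1.80]
model_a  = [1.71, 1.74, 1.69, 1.79]   # e.g. a neural network
model_b  = [1.74, 1.70, 1.72, 1.85]   # e.g. a linear model
print(mape(observed, model_a), mape(observed, model_b))
print(mae(observed, model_a), mae(observed, model_b))
```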

  18. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

    Full Text Available This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet-domain by fitting the estimated wavelet-power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time-domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet-domain. The results show that for the real-world case study of precipitation – runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time-domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
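The idea of calibrating against frequency content rather than the raw time series can be illustrated with a simplified stand-in: a naive discrete Fourier power spectrum (instead of a wavelet-power spectrum) as the objective for a one-parameter linear-reservoir model. Everything below, the reservoir model, the rain series and the grid search, is illustrative, not the paper's method:

```python
import math

def power_spectrum(x):
    """Naive DFT power spectrum (O(n^2); fine for short series)."""
    n = len(x)
    spec = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(re * re + im * im)
    return spec

def reservoir(rain, k):
    """Single linear reservoir: discharge = k * storage at each step."""
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return q

def spectral_objective(obs_q, sim_q):
    """Squared distance between the two power spectra."""
    po, ps = power_spectrum(obs_q), power_spectrum(sim_q)
    return sum((a - b) ** 2 for a, b in zip(po, ps))

# Synthetic "observed" discharge generated with k = 0.3, then recovered
# by a grid search over k using the frequency-domain objective.
rain = [1.0 if t % 8 == 0 else 0.0 for t in range(64)]
obs = reservoir(rain, 0.3)
best_k = min((k / 100 for k in range(5, 96)),
             key=lambda k: spectral_objective(obs, reservoir(rain, k)))
print(best_k)  # ~0.3
```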

  19. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecasts for the increase of polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 yr of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic but also anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer poorly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  20. Performance evaluation of public hospital information systems by the information system success model.

    Science.gov (United States)

    Cho, Kyoung Won; Bae, Sung-Kwon; Ryu, Ji-Hye; Kim, Kyeong Na; An, Chang-Ho; Chae, Young Moon

    2015-01-01

    This study aimed to evaluate the performance of the newly developed information system (IS) implemented on July 1, 2014 at three public hospitals in Korea. User satisfaction scores of twelve key performance indicators of six IS success factors based on the DeLone and McLean IS Success Model were utilized to evaluate IS performance before and after the newly developed system was introduced. All scores increased after system introduction except for the completeness of medical records and impact on the clinical environment. The relationships among six IS factors were also analyzed to identify the important factors influencing three IS success factors (Intention to Use, User Satisfaction, and Net Benefits). All relationships were significant except for the relationships among Service Quality, Intention to Use, and Net Benefits. The results suggest that hospitals should not only focus on systems and information quality; rather, they should also continuously improve service quality to improve user satisfaction and eventually realize the full potential of IS performance.

  1. A measurement-based performability model for a multiprocessor system

    Science.gov (United States)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error-data collected on a multiprocessor system is described. Model development, from the raw error-data to the estimation of cumulative reward, is presented. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states do not follow simple exponential distributions and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
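The semi-Markov reward formulation can be sketched: the long-run reward rate weights each state's reward by the embedded chain's stationary probabilities and mean holding times, Σᵢ πᵢhᵢrᵢ / Σᵢ πᵢhᵢ. A three-state toy example (normal / degraded / recovery); the transition probabilities, holding times and reward rates are illustrative, not the measured model:

```python
def stationary(P, iters=2000):
    """Stationary distribution of the embedded chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def reward_rate(P, hold, reward):
    """Long-run reward rate of a semi-Markov reward process:
    sum_i pi_i*h_i*r_i / sum_i pi_i*h_i."""
    pi = stationary(P)
    num = sum(p * h * r for p, h, r in zip(pi, hold, reward))
    den = sum(p * h for p, h in zip(pi, hold))
    return num / den

# States: 0 = normal, 1 = degraded, 2 = recovery (illustrative numbers).
P = [[0.0, 0.9, 0.1],
     [0.7, 0.0, 0.3],
     [1.0, 0.0, 0.0]]
hold = [100.0, 10.0, 2.0]     # mean holding times (need not be exponential)
reward = [1.0, 0.5, 0.0]      # relative service rate delivered in each state
print(reward_rate(P, hold, reward))
```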

  2. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes

  3. Modelling and measurement of a moving magnet linear compressor performance

    International Nuclear Information System (INIS)

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified
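The P–V loop comparison above rests on the indicated work per cycle, W = ∮ P dV, the area enclosed by the loop. A numerical sketch with a synthetic elliptical loop (illustrative, not the measured compressor data):

```python
import math

def loop_work(p, v):
    """Indicated work per cycle, the closed-contour integral of P dV,
    via the trapezoidal rule (points ordered along the cycle)."""
    n = len(p)
    return sum(0.5 * (p[i] + p[(i + 1) % n]) * (v[(i + 1) % n] - v[i])
               for i in range(n))

# Synthetic elliptical P-V loop: pressure leads volume by 90 degrees,
# so the enclosed area is pi * a * b.
a, b, n = 2.0e5, 5.0e-6, 3600          # Pa amplitude, m^3 amplitude, samples
v = [1e-5 + b * math.cos(2 * math.pi * i / n) for i in range(n)]
p = [3e5 + a * math.sin(2 * math.pi * i / n) for i in range(n)]
print(abs(loop_work(p, v)))  # ~ pi*a*b = 3.14 J per cycle
```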

  4. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematic methods through artificial neural networks (multi-layer perceptrons and to assess the neural network models precision to predict the performance. A sample of 138 young swimmers (65 males and 73 females of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility, swimming functional evaluation (hydrodynamics, hydrostatic and bioenergetics characteristics and swimming technique evaluation. To establish a profile of the young swimmer non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters font crawl events were developed. For this purpose a feed forward neural network was used (Multilayer Perceptron with three neurons in a single hidden layer. The prognosis precision of the model (error lower than 0.8% between true and estimated performances is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and the talent identification in swimming and, possibly, in a wide variety of sports

  5. A cost-performance model for ground-based optical communications receiving telescopes

    Science.gov (United States)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.

  6. Significance of categorization and the modeling of age related factors for radiation protection

    International Nuclear Information System (INIS)

    Matsuoka, Osamu

    1987-01-01

    It is proposed that the categorization and modelling are necessary with regard to age related factors of radionuclide metabolism for the radiation protection of the public. In order to utilize the age related information as a model for life time risk estimate of public, it is necessary to generalize and simplify it according to the categorized model patterns. Since the patterns of age related changes in various parameters of radionuclide metabolism seem to be rather simple, it is possible to categorize them into eleven types of model patterns. Among these models, five are selected as positively significant models to be considered. Examples are shown as to the fitting of representative parameters of both physiological and metabolic parameter of radionuclides into the proposed model. The range of deviation from adult standard value is also analyzed for each model. The fitting of each parameter to categorized models, and its comparative consideration provide the effective information as to the physiological basis of radionuclide metabolism. Discussions are made on the problems encountered in the application of available age related information to radiation protection of the public, i.e. distribution of categorized parameter, period of life covered, range of deviation from adult value, implication to other dosimetric and pathological models and to the final estimation. 5 refs.; 3 figs.; 4 tabs

  7. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis and hence, various methods have been proposed to compute the spatial significance of entities based on spatial distances with other entities within the cluster. This paper is aimed at studying the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the value of spatial significance index SSI of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest value of SSI is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the area of the mountain objects reduce with increasing scale, the distances between the mountain objects increase, resulting in increasing values of SSI. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to other mountain objects and thus, its spatial significance
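The SSI computation, the minimum number of dilation iterations for one object to reach all the others, can be sketched on a small grid with a hand-rolled 8-neighbour binary dilation (the grid and objects are illustrative, not DEM data):

```python
def dilate(cells, shape):
    """One binary dilation step with a 3x3 (8-neighbour) structuring element."""
    rows, cols = shape
    out = set(cells)
    for r, c in cells:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    out.add((nr, nc))
    return out

def ssi(obj, others, shape, max_iter=100):
    """Minimum dilation iterations for `obj` to cover every other object."""
    target = set().union(*others)
    grown = set(obj)
    for i in range(max_iter + 1):
        if target <= grown:
            return i
        grown = dilate(grown, shape)
    return None

shape = (12, 12)
a = {(1, 1)}      # corner object
b = {(1, 10)}     # opposite corner
c = {(6, 6)}      # central object
objs = [a, b, c]
scores = [ssi(o, [x for x in objs if x is not o], shape) for o in objs]
print(scores)  # the central object has the lowest SSI -> most significant
```

The central object wins despite all three objects having equal size, matching the abstract's point that strategic location matters more than extent.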

  8. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance. Copyright © 2013 Elsevier Ltd. All rights reserved.
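The conceptual flux model described, a moisture store with moisture-dependent evapotranspiration and runoff only when capacity is exceeded, can be sketched as a simple bucket model. All parameters and rainfall series below are illustrative, not the paper's calibrated values:

```python
def simulate(rain, s_max=20.0, et_pot=0.12):
    """Hourly bucket model (mm): ET scales with relative moisture content;
    runoff is generated only when storage exceeds the capacity s_max."""
    s, runoff = 0.5 * s_max, 0.0
    for r in rain:
        s += r
        if s > s_max:                 # excess leaves as runoff
            runoff += s - s_max
            s = s_max
        s -= et_pot * (s / s_max)     # ET restores retention capacity
    return runoff

def volumetric_retention(rain):
    return 1.0 - simulate(rain) / sum(rain)

# Two illustrative 30-day hourly series: frequent light rain vs rare storms.
drizzle = [0.2 if h % 6 == 0 else 0.0 for h in range(720)]
storms = [25.0 if h % 240 == 0 else 0.0 for h in range(720)]
print(volumetric_retention(drizzle), volumetric_retention(storms))
```

Consistent with the abstract, retention collapses when rainfall arrives in large events that exceed the available storage, while frequent small events are retained almost entirely.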

  9. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both the agency cost and road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model is described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and to identify the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index.

  10. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
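The qualitative finding, speedup that saturates on message-passing machines as communication overhead grows with processor count, can be reproduced with a generic fixed-size performance model. The parameters below are illustrative, not the paper's iPSC/860 or Sequent calibration:

```python
import math

def run_time(p, work=1.0, t_start=5e-4, t_word=1e-6, words=1000):
    """Fixed-size model: perfectly divisible compute plus communication
    whose message count grows (here logarithmically) with processor count."""
    compute = work / p
    comm = math.log2(p) * (t_start + words * t_word) if p > 1 else 0.0
    return compute + comm

def speedup(p):
    return run_time(1) / run_time(p)

def efficiency(p):
    return speedup(p) / p

for p in (1, 4, 16, 64, 256):
    print(p, round(speedup(p), 2), round(efficiency(p), 3))
```

Speedup keeps rising but efficiency deteriorates rapidly, the behaviour the abstract reports for the message-passing architecture.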

  11. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    Energy Technology Data Exchange (ETDEWEB)

    No, Young Gyu; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2015-10-15

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as the Aux. feedwater pump, high pressure safety injection (HPSI) pump, etc. using smart sensing models based on AI techniques. In this work, the smart sensing model is developed first to predict the performance of the Aux. feedwater pump by estimating flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to the development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with the simulation data of the MARS code for station blackout (SBO) events. In this study, the smart sensing model for predicting the performance of the Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm finds a function that can well express a dependent variable in terms of independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of Aux.

  12. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    International Nuclear Information System (INIS)

    No, Young Gyu; Seong, Poong Hyun

    2015-01-01

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as the Aux. feedwater pump, high pressure safety injection (HPSI) pump, etc. using smart sensing models based on AI techniques. In this work, the smart sensing model is developed first to predict the performance of the Aux. feedwater pump by estimating flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to the development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with the simulation data of the MARS code for station blackout (SBO) events. In this study, the smart sensing model for predicting the performance of the Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm finds a function that can well express a dependent variable in terms of independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of Aux
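GMDH builds candidate models from pairs of inputs, fits each by least squares, and keeps the candidates that generalize best on a validation split. A minimal one-layer sketch using a reduced Ivakhnenko polynomial (illustrative, not the authors' implementation; the synthetic data are not pump measurements):

```python
import itertools, random

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_pair(xi, xj, y):
    """Least squares for the candidate neuron y ~ a0 + a1*xi + a2*xj + a3*xi*xj."""
    rows = [[1.0, u, v, u * v] for u, v in zip(xi, xj)]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    xty = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(4)]
    return solve(xtx, xty)

def val_mse(coef, xi, xj, y):
    pred = [coef[0] + coef[1] * u + coef[2] * v + coef[3] * u * v
            for u, v in zip(xi, xj)]
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

random.seed(7)
X = [[random.uniform(-1, 1) for _ in range(60)] for _ in range(3)]
y = [2.0 + X[0][k] * X[1][k] for k in range(60)]   # depends on x0, x1 only
tr, va = slice(0, 40), slice(40, 60)

scores = {}
for i, j in itertools.combinations(range(3), 2):
    coef = fit_pair(X[i][tr], X[j][tr], y[tr])
    scores[(i, j)] = val_mse(coef, X[i][va], X[j][va], y[va])
best = min(scores, key=scores.get)
print(best, scores[best])  # the (x0, x1) pair wins on validation error
```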

  13. Performance of the general circulation models in simulating temperature and precipitation over Iran

    Science.gov (United States)

    Abbasian, Mohammadsadegh; Moghim, Sanaz; Abrishamchi, Ahmad

    2018-03-01

    General Circulation Models (GCMs) are advanced tools for impact assessment and climate change studies. Previous studies show that the performance of the GCMs in simulating climate variables varies significantly over different regions. This study intends to evaluate the performance of the Coupled Model Intercomparison Project phase 5 (CMIP5) GCMs in simulating temperature and precipitation over Iran. Simulations from 37 GCMs and observations from the Climatic Research Unit (CRU) were obtained for the period of 1901-2005. Six statistical measures, namely mean bias, root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), linear correlation coefficient (r), Kolmogorov-Smirnov statistic (KS), and Sen's slope estimator, together with the Taylor diagram, are used for the evaluation. GCMs are ranked based on each statistic at seasonal and annual time scales. Results show that most GCMs perform reasonably well in simulating the annual and seasonal temperature over Iran. The majority of the GCMs have poor skill in simulating precipitation, particularly at the seasonal scale. Based on the results, the best GCMs to represent temperature and precipitation simulations over Iran are the CMCC-CMS (Euro-Mediterranean Center on Climate Change) and the MRI-CGCM3 (Meteorological Research Institute), respectively. The results are valuable for climate and hydrometeorological studies and can help water resources planners and managers to choose the proper GCM based on their criteria.
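The evaluation statistics named above can each be sketched in a few lines (Sen's slope as the median of pairwise slopes; KS as the maximum gap between empirical CDFs). The observed/simulated series are illustrative, not CRU or CMIP5 data:

```python
import itertools, statistics

def bias(obs, sim):
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    return (sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    m = statistics.mean(obs)
    return 1 - (sum((s - o) ** 2 for o, s in zip(obs, sim))
                / sum((o - m) ** 2 for o in obs))

def pearson_r(obs, sim):
    mo, ms = statistics.mean(obs), statistics.mean(sim)
    num = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    den = (sum((o - mo) ** 2 for o in obs)
           * sum((s - ms) ** 2 for s in sim)) ** 0.5
    return num / den

def ks_stat(obs, sim):
    """Maximum distance between the two empirical CDFs."""
    def ecdf(xs, v):
        return sum(1 for x in xs if x <= v) / len(xs)
    return max(abs(ecdf(obs, v) - ecdf(sim, v))
               for v in sorted(set(obs) | set(sim)))

def sens_slope(series):
    """Median of all pairwise slopes; a robust trend estimate."""
    return statistics.median((series[j] - series[i]) / (j - i)
                             for i, j in itertools.combinations(range(len(series)), 2))

obs = [10.0, 12.0, 11.0, 13.0, 14.0, 13.5]
sim = [9.5, 12.5, 10.0, 13.5, 13.0, 14.0]
print(bias(obs, sim), rmse(obs, sim), nse(obs, sim), pearson_r(obs, sim))
print(ks_stat(obs, sim), sens_slope(obs))
```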

  14. A unified model of combined energy systems with different cycle modes and its optimum performance characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yue [Department of Physics and Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen 361005 (China); College of Information Science and Engineering, Huaqiao University, Quanzhou 362021 (China); Hu, Weiqiang [Department of Physics and Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen 361005 (China); Ou Congjie [College of Information Science and Engineering, Huaqiao University, Quanzhou 362021 (China); Chen Jincan [Department of Physics and Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen 361005 (China)], E-mail: jcchen@xmu.edu.cn

    2009-06-15

    A unified model is presented for a class of combined energy systems, in which the systems mainly consist of a heat engine, a combustor and a counter-flow heat exchanger and the heat engine in the systems may have different thermodynamic cycle modes such as the Brayton cycle, Carnot cycle, Stirling cycle, Ericsson cycle, and so on. Not only the irreversibilities of the heat leak and finite-rate heat transfer but also the different cycle modes of the heat engine are considered in the model. On the basis of Newton's law, expressions for the overall efficiency and power output of the combined energy system with an irreversible Brayton cycle are derived. The maximum overall efficiency and power output and other relevant parameters are calculated. The general characteristic curves of the system are presented for some given parameters. Several interesting cases are discussed in detail. The results obtained here are very general and significant and can be used to discuss the optimal performance characteristics of a class of combined energy systems with different cycle modes. Moreover, it is significant to point out that not only the important conclusions obtained in Bejan's first combustor model and Peterson's general combustion driven model but also the optimal performance of a class of solar-driven heat engine systems can be directly derived from the present paper under some limit conditions.

  15. A unified model of combined energy systems with different cycle modes and its optimum performance characteristics

    International Nuclear Information System (INIS)

    Zhang Yue; Hu, Weiqiang; Ou Congjie; Chen Jincan

    2009-01-01

A unified model is presented for a class of combined energy systems, in which the systems consist mainly of a heat engine, a combustor, and a counter-flow heat exchanger, and the heat engine may follow different thermodynamic cycle modes such as the Brayton, Carnot, Stirling, and Ericsson cycles. Not only the irreversibilities of heat leak and finite-rate heat transfer but also the different cycle modes of the heat engine are considered in the model. On the basis of Newton's law, expressions for the overall efficiency and power output of the combined energy system with an irreversible Brayton cycle are derived. The maximum overall efficiency, the maximum power output, and other relevant parameters are calculated. The general characteristic curves of the system are presented for some given parameters. Several interesting cases are discussed in detail. The results obtained here are very general and can be used to discuss the optimal performance characteristics of a class of combined energy systems with different cycle modes. Moreover, it is worth pointing out that not only the important conclusions of Bejan's first combustor model and Peterson's general combustion-driven model but also the optimal performance of a class of solar-driven heat engine systems can be derived directly from the present model under certain limiting conditions.

  16. Statistical modelling of networked human-automation performance using working memory capacity.

    Science.gov (United States)

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
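As a rough illustration of the first of the three approaches above, the snippet below fits a linear model with a task-load × WM-capacity interaction by ordinary least squares. All variable names, ranges, and coefficients are invented for the sketch; this is not the study's data or model.

```python
import numpy as np

# Hypothetical illustration: linear regression of operator performance on
# task load, working-memory capacity, and their interaction. Synthetic data.
rng = np.random.default_rng(0)
n = 200
task_load = rng.uniform(1, 5, n)        # e.g. number of UAVs supervised
wm_capacity = rng.uniform(2, 7, n)      # e.g. an operation-span score
# Synthetic ground truth: load hurts performance less for high-WM operators.
perf = 80 - 6 * task_load + 1.5 * wm_capacity + 0.8 * task_load * wm_capacity \
       + rng.normal(0, 2, n)

# Ordinary least squares with an explicit interaction column.
X = np.column_stack([np.ones(n), task_load, wm_capacity, task_load * wm_capacity])
coef, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(coef)  # roughly [80, -6, 1.5, 0.8]
```

A significant interaction coefficient is what "effects modulated by individual differences in WM capacity" corresponds to in regression terms.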

  17. Performance Modeling and Optimization of a High Energy CollidingBeam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems average errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.
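The node-adapter contention finding can be sketched with a standard latency-bandwidth (alpha-beta) cost model for AAPC; the constants and the serialization assumption below are illustrative placeholders, not the paper's fitted parameters.

```python
# Minimal alpha-beta sketch of all-to-all personalized communication (AAPC)
# cost with node-adapter contention, the dominant constraint on SMP systems.
# All constants are made-up placeholders, not measured values from the paper.
def aapc_time(p, cores_per_node, msg_bytes, alpha=2e-6, beta=1 / 1e9):
    """Estimated AAPC time: each rank sends a distinct message to p-1 peers.

    On an SMP node, all local cores share one network adapter, so off-node
    traffic is serialized through it (the node-adapter contention term).
    """
    peers_off_node = p - cores_per_node          # messages leaving the node
    latency = (p - 1) * alpha                    # per-message startup costs
    # The adapter serializes the off-node bytes of every core on the node:
    adapter_bytes = cores_per_node * peers_off_node * msg_bytes
    return latency + adapter_bytes * beta

print(f"{aapc_time(p=512, cores_per_node=8, msg_bytes=64 * 1024):.4f} s")
```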

  18. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems average errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  19. DMFC performance and methanol cross-over: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-10-15

    A combined experimental and modelling approach is proposed to analyze methanol cross-over and its effect on DMFC performance. The experimental analysis is performed in order to allow an accurate investigation of methanol cross-over influence on DMFC performance, hence measurements were characterized in terms of uncertainty and reproducibility. The findings suggest that methanol cross-over is mainly determined by diffusion transport and affects cell performance partly via methanol electro-oxidation at the cathode. The modelling analysis is carried out to further investigate methanol cross-over phenomenon. A simple model evaluates the effectiveness of two proposed interpretations regarding methanol cross-over and its effects. The model is validated using the experimental data gathered. Both the experimental analysis and the proposed and validated model allow a substantial step forward in the understanding of the main phenomena associated with methanol cross-over. The findings confirm the possibility to reduce methanol cross-over by optimizing anode feeding. (author)

  20. Model development and optimization of operating conditions to maximize PEMFC performance by response surface methodology

    International Nuclear Information System (INIS)

    Kanani, Homayoon; Shams, Mehrzad; Hasheminasab, Mohammadreza; Bozorgnezhad, Ali

    2015-01-01

Highlights: • The optimization of the operating parameters in a serpentine PEMFC is done using RSM. • The RSM model can predict the cell power over a wide range of operating conditions. • St-An, St-Ca and RH-Ca have an optimum value to obtain the best performance. • The interactions of the operating conditions affect the output power significantly. • The cathode and anode stoichiometry are the most effective parameters on the power. - Abstract: Optimization of operating conditions to obtain maximum power in PEMFCs could play a significant role in reducing the costs of this emerging technology. In the present experimental study, a single serpentine PEMFC is used to investigate the effects of operating conditions on the electrical power production of the cell. Four significant parameters, including cathode stoichiometry, anode stoichiometry, gas inlet temperature, and cathode relative humidity, are studied using Design of Experiments (DOE) to obtain an optimal power. Central composite second-order Response Surface Methodology (RSM) is used to model the relationship between the goal function (power) and the input parameters (operating conditions). This statistical-mathematical method yields a second-order equation for the cell power. The model accounts for interactions and quadratic effects of the different operating conditions and predicts the maximum or minimum power production over the entire working range of the parameters. In this range, high cathode stoichiometry combined with low anode stoichiometry results in the minimum cell power; conversely, the medium range of fuel and oxidant stoichiometry leads to the maximum power. Results show that there is an optimum value of the anode stoichiometry, cathode stoichiometry, and relative humidity to reach the best performance. The predictions of the model are evaluated by experimental tests, and they are in good agreement over the different ranges of the parameters.
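A minimal sketch of the kind of model RSM produces here: a second-order (quadratic) response surface with linear, squared, and pairwise-interaction terms, estimated by ordinary least squares. The four coded factors mirror the abstract, but all data and coefficients below are synthetic inventions.

```python
import numpy as np
from itertools import combinations

# Sketch: fit a second-order response surface for "cell power" vs. four
# coded operating factors in [-1, 1]. Synthetic placeholder data only.
rng = np.random.default_rng(1)
n = 60
# Factors: anode stoich., cathode stoich., gas inlet temperature, cathode RH
X = rng.uniform(-1, 1, size=(n, 4))

def quadratic_design(X):
    """Design matrix columns: 1, linear terms, squared terms, interactions."""
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, j] ** 2 for j in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Synthetic "power": interior optimum in two factors plus one interaction
y = 10 - 3 * X[:, 0] ** 2 - 2 * X[:, 1] ** 2 + 1.5 * X[:, 0] * X[:, 1] \
    + rng.normal(0, 0.1, n)
beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
print(np.round(beta, 2))  # intercept ~10, x1^2 ~ -3, x2^2 ~ -2, x1*x2 ~ 1.5
```

Negative squared-term coefficients are what make the fitted surface predict an interior maximum, i.e. the "optimum value" of stoichiometry and humidity the abstract reports.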

  1. REVIEW OF MECHANISTIC UNDERSTANDING AND MODELING AND UNCERTAINTY ANALYSIS METHODS FOR PREDICTING CEMENTITIOUS BARRIER PERFORMANCE

    Energy Technology Data Exchange (ETDEWEB)

    Langton, C.; Kosson, D.

    2009-11-30

Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production, and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials; (2) methodologies for modeling the performance of these mechanisms and processes; and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for the identification, research, development, and demonstration of improvements in conceptual understanding, measurements, and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.

  2. Review Of Mechanistic Understanding And Modeling And Uncertainty Analysis Methods For Predicting Cementitious Barrier Performance

    International Nuclear Information System (INIS)

    Langton, C.; Kosson, D.

    2009-01-01

Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production, and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials; (2) methodologies for modeling the performance of these mechanisms and processes; and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for the identification, research, development, and demonstration of improvements in conceptual understanding, measurements, and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.

  3. The Link between Corporate Social Performance and Financial Performance: Evidence from Indonesian Companies

    Directory of Open Access Journals (Sweden)

    Hasan Fauzi

    2007-06-01

This study examines the relationship of corporate social performance (CSP) to corporate financial performance (CFP) to determine whether CSP is related to firm performance. Additionally, it examines whether firm size or industry affects the relationship between CSP and CFP. This study advances the literature as it examines this relationship for companies in a developing country, Indonesia, along with examining the impact of moderating variables on this relationship. Two models were developed: the first was derived using slack resource theory and the second using good management theory. Through the examination of 383 firms, the study failed to find a significant relationship between CSP and CFP in either model. Further analysis, using slack resource theory, did find that company size had a significant positive moderating effect on the relationship between CSP and CFP.
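The moderating-effect analysis described above amounts to a regression with an interaction term: size moderates the CSP-CFP link if the CSP × size coefficient is significant. A hedged sketch on synthetic data (not the study's 383 firms) follows.

```python
import numpy as np

# Sketch of a moderated regression: CFP on CSP, firm size, and CSP x size.
# A significant interaction coefficient is the "moderating effect".
# All data below are synthetic stand-ins, not the study's sample.
rng = np.random.default_rng(7)
n = 383
csp = rng.normal(0, 1, n)               # corporate social performance (scaled)
size = rng.normal(0, 1, n)              # e.g. log total assets (scaled)
cfp = 0.05 * csp + 0.3 * size + 0.25 * csp * size + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), csp, size, csp * size])
beta, res, *_ = np.linalg.lstsq(X, cfp, rcond=None)
# t statistic for the interaction coefficient
dof = n - X.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_interaction = beta[3] / se[3]
print(round(float(t_interaction), 2))   # |t| > ~2 suggests moderation
```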

  4. Intriguing model significantly reduces boarding of psychiatric patients, need for inpatient hospitalization.

    Science.gov (United States)

    2015-01-01

As new approaches to the care of psychiatric emergencies emerge, one solution is gaining particular traction. Under the Alameda model, which has been put into practice in Alameda County, CA, patients who are brought to regional EDs with emergency psychiatric issues are quickly transferred to a designated emergency psychiatric facility as soon as they are medically stabilized. This alleviates boarding problems in area EDs while also quickly connecting patients with specialized care. With data in hand on the model's effectiveness, developers believe the approach could alleviate boarding problems in other communities as well. The model is funded through a billing code established by California's Medicaid program for crisis stabilization services. Currently, only 22% of the patients brought to the emergency psychiatric facility ultimately need to be hospitalized; the other 78% are able to go home or to an alternative situation. In a 30-day study of the model, involving five community hospitals in Alameda County, CA, researchers found that ED boarding times were as much as 80% lower than comparable ED averages, and that patients were stabilized at least 75% of the time, significantly reducing the need for inpatient hospitalization.

  5. Development and validation of the ENIGMA code for MOX fuel performance modelling

    International Nuclear Information System (INIS)

    Palmer, I.; Rossiter, G.; White, R.J.

    2000-01-01

The ENIGMA fuel performance code has been under development in the UK since the mid-1980s with contributions made by both the fuel vendor (BNFL) and the utility (British Energy). In recent years it has become the principal code for UO2 fuel licensing for both PWR and AGR reactor systems in the UK and has also been used by BNFL in support of overseas UO2 and MOX fuel business. A significant new programme of work has recently been initiated by BNFL to further develop the code specifically for MOX fuel application. Model development is proceeding hand in hand with a major programme of MOX fuel testing and PIE studies, with the objective of producing a fuel modelling code suitable for mechanistic analysis, as well as for licensing applications. This paper gives an overview of the model developments being undertaken and of the experimental data being used to underpin and to validate the code. The paper provides a summary of the code development programme together with specific examples of new models produced. (author)

  6. Subjective Significance Shapes Arousal Effects on Modified Stroop Task Performance: A Duality of Activation Mechanisms Account.

    Science.gov (United States)

    Imbir, Kamil K

    2016-01-01

    Activation mechanisms such as arousal are known to be responsible for slowdown observed in the Emotional Stroop and modified Stroop tasks. Using the duality of mind perspective, we may conclude that both ways of processing information (automatic or controlled) should have their own mechanisms of activation, namely, arousal for an experiential mind, and subjective significance for a rational mind. To investigate the consequences of both, factorial manipulation was prepared. Other factors that influence Stroop task processing such as valence, concreteness, frequency, and word length were controlled. Subjective significance was expected to influence arousal effects. In the first study, the task was to name the color of font for activation charged words. In the second study, activation charged words were, at the same time, combined with an incongruent condition of the classical Stroop task around a fixation point. The task was to indicate the font color for color-meaning words. In both studies, subjective significance was found to shape the arousal impact on performance in terms of the slowdown reduction for words charged with subjective significance.

  7. Subjective Significance Shapes Arousal Effects on Modified Stroop Task Performance: a Duality of Activation Mechanisms Account

    Directory of Open Access Journals (Sweden)

    Kamil Konrad Imbir

    2016-02-01

Activation mechanisms such as arousal are known to be responsible for the slowdown observed in the Emotional Stroop (EST) and modified Stroop tasks. Using the duality of mind perspective, we may conclude that both ways of processing information (automatic or controlled) should have their own mechanisms of activation, namely, arousal for an experiential mind and subjective significance for a rational mind. To investigate the consequences of both, a factorial manipulation was prepared. Other factors that influence Stroop task processing, such as valence, concreteness, frequency, and word length, were controlled. Subjective significance was expected to influence arousal effects. In the first study, the task was to name the color of font for activation-charged words. In the second study, activation-charged words were, at the same time, combined with an incongruent condition of the classical Stroop task around a fixation point. The task was to indicate the font color for color-meaning words. In both studies, subjective significance was found to shape the arousal impact on performance in terms of the slowdown reduction for words charged with subjective significance.

  8. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and
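The AAFE metric used to score the three absorption models above has a simple closed form: the antilog of the mean absolute log10 fold error between prediction and observation. A small sketch, with toy numbers in place of the study's AUC data:

```python
import numpy as np

# Absolute average fold error (AAFE): 1.0 means perfect agreement, 2.0 means
# predictions are off by 2-fold on average, in either direction.
def aafe(predicted, observed):
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

# Toy AUC values (arbitrary units), not the study's data:
observed = [100.0, 50.0, 20.0]
predicted = [50.0, 100.0, 20.0]   # one 2x under-, one 2x over-prediction
print(round(aafe(predicted, observed), 3))  # → 1.587 (= 2**(2/3))
```

Because fold errors are averaged in log space, a 2x over-prediction and a 2x under-prediction count equally rather than cancelling, which is why AAFE is preferred over a plain mean ratio for this kind of evaluation.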

  9. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    Science.gov (United States)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

Wind turbine wakes can significantly disrupt the performance of turbines farther downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally tractable layout optimization strategy, is therefore key to addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian-profile velocity deficit [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
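A sketch of the Gaussian-profile wake model of reference [1] that serves as the cost-function kernel: a self-similar Gaussian velocity deficit whose width grows linearly downstream. The thrust coefficient, rotor diameter, and wake growth rate below are illustrative assumptions, not values from the study.

```python
import math

# Gaussian wake model in the spirit of Bastankhah & Porte-Agel (2014).
# ct (thrust coefficient), d (rotor diameter, m) and k_star (wake growth
# rate) are illustrative assumptions for this sketch.
def gaussian_wake_deficit(x, r, d=80.0, ct=0.8, k_star=0.035):
    """Fractional velocity deficit at downstream distance x, radial offset r."""
    beta = 0.5 * (1 + math.sqrt(1 - ct)) / math.sqrt(1 - ct)
    sigma = (k_star * x / d + 0.2 * math.sqrt(beta)) * d   # wake width
    c = 1 - math.sqrt(1 - ct * d ** 2 / (8 * sigma ** 2))  # centerline deficit
    return c * math.exp(-r ** 2 / (2 * sigma ** 2))

# Deficit on the wake centerline 7 rotor diameters downstream:
print(round(gaussian_wake_deficit(x=7 * 80.0, r=0.0), 3))
```

In a layout optimizer, deficits like this are accumulated over all upstream turbines to estimate each turbine's inflow speed, and hence the farm power that the evolutionary algorithm maximizes.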

  10. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small-spatial-scale features that tend to be averaged out over longer periods. These small-scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs.)

  11. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  12. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  13. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
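A minimal postal-model sketch of the kind of communication cost model described: per-level neighbor exchanges of multipole expansions under an alpha-beta (latency-bandwidth) assumption. The constants, the neighbor count, and the octree-depth formula are all illustrative assumptions, not the paper's fitted model.

```python
import math

# Alpha-beta sketch of FMM internode communication: at each octree level,
# a process exchanges multipole expansions with a bounded set of neighbors.
# All constants and the 26-neighbor count are illustrative assumptions.
def fmm_comm_time(p, terms=10, bytes_per_term=16,
                  alpha=1e-6, beta=1 / 5e9, neighbors=26):
    # Octree depth spanning p ranks; assumes p is a power of 8 for this sketch.
    levels = max(1, round(math.log(p, 8)))
    msg = terms ** 2 * bytes_per_term            # one multipole expansion
    # neighbor exchanges at every level, serialized per process:
    return levels * neighbors * (alpha + msg * beta)

for p in (64, 512, 4096):
    print(p, f"{fmm_comm_time(p):.2e}")
```

The logarithmic growth in levels is what makes the communication volume per process grow only slowly with p, one reason FMM is considered promising at exascale.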

  14. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
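A sketch of fitting a Weibull saccharification curve of the form y(t) = y_max(1 − exp(−(t/λ)^n)) by linearization: taking logs twice turns the curve into a straight line in (ln t, ln(−ln(1 − y/y_max))). The data below are synthetic, and y_max is assumed known (e.g. the theoretical glucose yield); this is an illustration of the functional form, not the study's fitting procedure.

```python
import numpy as np

# Synthetic saccharification time course following a Weibull curve
#   y(t) = y_max * (1 - exp(-(t / lam)**n)),
# with lam the characteristic time and n the shape parameter.
rng = np.random.default_rng(3)
y_max, lam_true, n_true = 100.0, 24.0, 0.8     # lam in hours (invented)
t = np.array([2.0, 4, 8, 12, 24, 48, 72, 96])
y = y_max * (1 - np.exp(-((t / lam_true) ** n_true))) * rng.normal(1, 0.01, t.size)

# Linearize: ln(-ln(1 - y/y_max)) = n*ln(t) - n*ln(lam), then least squares.
u = np.log(t)
v = np.log(-np.log(1 - y / y_max))
slope, intercept = np.polyfit(u, v, 1)
n_hat, lam_hat = slope, np.exp(-intercept / slope)
print(round(float(n_hat), 2), round(float(lam_hat), 1))  # ≈ 0.8 and ≈ 24 h
```

Note how λ directly summarizes the system: it is the time at which the yield reaches (1 − e⁻¹) ≈ 63% of y_max, which is why it can serve as a single performance index.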

  15. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  16. Structural Model for the Effects of Environmental Elements on the Psychological Characteristics and Performance of the Employees of Manufacturing Systems.

    Science.gov (United States)

    Realyvásquez, Arturo; Maldonado-Macías, Aidé Aracely; García-Alcaraz, Jorge; Cortés-Robles, Guillermo; Blanco-Fernández, Julio

    2016-01-05

    This paper analyzes the effects of environmental elements on the psychological characteristics and performance of employees in manufacturing systems using structural equation modeling. Increasing the comprehension of these effects may help optimize manufacturing systems regarding their employees' psychological characteristics and performance from a macroergonomic perspective. As the method, a new macroergonomic compatibility questionnaire (MCQ) was developed and statistically validated, and 158 respondents at four manufacturing companies were considered. Noise, lighting and temperature, humidity and air quality (THAQ) were used as independent variables and psychological characteristics and employees' performance as dependent variables. To propose and test the hypothetical causal model of significant relationships among the variables, a data analysis was performed. Results showed that the macroergonomic compatibility of environmental elements presents significant direct effects on employees' psychological characteristics and either direct or indirect effects on the employees' performance. THAQ had the highest direct and total effects on psychological characteristics. Regarding the direct and total effects on employees' performance, the psychological characteristics presented the highest effects, followed by THAQ conditions. These results may help measure and optimize manufacturing systems' performance by enhancing their macroergonomic compatibility and the employees' quality of life at work.

  17. Performance of the Sellick maneuver significantly improves when residents and trained nurses use a visually interactive guidance device in simulation

    International Nuclear Information System (INIS)

    Connor, Christopher W; Saffary, Roya; Feliz, Eddy

    2013-01-01

    We examined the proper performance of the Sellick maneuver, a maneuver used to reduce the risk of aspiration of stomach contents during induction of general anesthesia, using a novel device that measures and visualizes the force applied to the cricoid cartilage using thin-film force sensitive resistors in a form suitable for in vivo use. Performance was tested in three stages with twenty anaesthesiology residents and twenty trained operating room nurses. Firstly, subjects applied force to the cricoid cartilage as was customary to them. Secondly, subjects used the device to guide the application of that force. Thirdly, subjects were again asked to perform the manoeuvre without visual guidance. Each test lasted 1 min and the amount of force applied was measured throughout. Overall, the Sellick maneuver was often not applied properly, with large variance between individual subjects. Performance and inter-subject consistency improved to a very highly significant degree when subjects were able to use the device as a visual guide (p < 0.001). Subsequent significant improvements in performances during the last, unguided test demonstrated that the device initiated learning. (paper)

  18. Performance of the Sellick maneuver significantly improves when residents and trained nurses use a visually interactive guidance device in simulation

    Energy Technology Data Exchange (ETDEWEB)

    Connor, Christopher W; Saffary, Roya; Feliz, Eddy [Department of Anesthesiology Boston Medical Center, Boston, MA (United States)

    2013-12-15

    We examined the proper performance of the Sellick maneuver, a maneuver used to reduce the risk of aspiration of stomach contents during induction of general anesthesia, using a novel device that measures and visualizes the force applied to the cricoid cartilage using thin-film force sensitive resistors in a form suitable for in vivo use. Performance was tested in three stages with twenty anaesthesiology residents and twenty trained operating room nurses. Firstly, subjects applied force to the cricoid cartilage as was customary to them. Secondly, subjects used the device to guide the application of that force. Thirdly, subjects were again asked to perform the manoeuvre without visual guidance. Each test lasted 1 min and the amount of force applied was measured throughout. Overall, the Sellick maneuver was often not applied properly, with large variance between individual subjects. Performance and inter-subject consistency improved to a very highly significant degree when subjects were able to use the device as a visual guide (p < 0.001). Subsequent significant improvements in performances during the last, unguided test demonstrated that the device initiated learning. (paper)

  19. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.

  20. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  1. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

    Full Text Available This paper presents a neural network method to model nanometer FinFET performance. The principle of this method is firstly introduced and its application in modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that this method does not need a parameter extraction routine, while its prediction of the transistor performance has a small relative error, within 1% of measured data; thus, this new method is as accurate as the physics-based surface potential model.

  2. Investigation Into Informational Compatibility Of Building Information Modelling And Building Performance Analysis Software Solutions

    OpenAIRE

    Hyun, S.; Marjanovic-Halburd, L.; Raslan, R.

    2015-01-01

    There are significant opportunities for Building Information Modelling (BIM) to address issues related to sustainable and energy efficient building design. While the potential benefits associated with the integration of BIM and BPA (Building Performance Analysis) have been recognised, its specifications and formats remain in their early infancy and often fail to live up to the promise of seamless interoperability at various stages of design process. This paper conducts a case study to investi...

  3. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs

  4. Thermodynamic simulation model for predicting the performance of spark ignition engines using biogas as fuel

    International Nuclear Information System (INIS)

    Nunes de Faria, Mário M.; Vargas Machuca Bueno, Juan P.; Ayad, Sami M.M. Elmassalami; Belchior, Carlos R. Pereira

    2017-01-01

    Highlights: • A 0-D model for performance prediction of SI ICE fueled with biogas is proposed. • Relative difference between simulated and experimental values was under 5%. • Can be adapted for different biogas compositions and operating ranges. • Could be a valuable tool for predicting trends and guiding experimentation. • Is suitable for use with biogas supplies in developing regions. - Abstract: Biogas has found its way from developing countries and is now an alternative to fossil fuels in internal combustion engines, with the advantage of lower greenhouse gas emissions. However, its use in gas engines requires engine modifications or adaptations that may be costly. This paper reports the results of experimental performance and emissions tests of an engine-generator unit fueled with biogas produced in a sewage plant in Brazil, operating under different loads, and with suitable engine modifications. These emissions and performance results were in agreement with the literature, and it was confirmed that the penalties to engine performance were more significant than the emission reduction in the operating range tested. Furthermore, a zero-dimensional simulation model was employed to predict performance characteristics. Moreover, a differential thermodynamic equation system was solved, obtaining the pressure inside the cylinder as a function of the crank angle for different engine conditions. Mean effective pressure and indicated power were also obtained. The results of simulation and experimental tests of the engine in similar conditions were compared and the model validated. Although several simplifying assumptions were adopted and empirical correlations were used for the Wiebe function, the model was adequate in predicting engine performance, as the relative difference between simulated and experimental values was lower than 5%. The model can be adapted for use with different raw or enriched biogas compositions and could prove to be a valuable tool to guide
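
    The Wiebe function mentioned in the abstract gives the cumulative mass fraction burned as a function of crank angle in such 0-D models. A sketch with common textbook efficiency and form factors, not the values fitted for the biogas engine:

```python
import math

def wiebe_burn_fraction(theta, theta0, dtheta, a=5.0, m=2.0):
    """Cumulative mass fraction burned at crank angle theta [deg CA].

    theta0: start of combustion [deg CA], dtheta: combustion duration
    [deg CA], a, m: empirical Wiebe efficiency and form factors
    (a=5, m=2 are generic textbook values, not fitted parameters).
    """
    if theta <= theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1))
```

    Differentiating this cumulative curve gives the heat-release rate dQ/dtheta = Q_total * dx/dtheta that drives the in-cylinder pressure equation in the thermodynamic system.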

  5. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines on the performance criteria for managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for the virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  6. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture an abstract performance model of streaming data real-time embedded systems (RTESs). Unified Modeling Language version 2 (UML2) is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN) model and it is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and the platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Real-Time and Embedded (MARTE) systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates on task response times, processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines transformation between UML activity diagrams and streaming data application workload metamodels and successfully adopts it for RTES performance evaluation.

  7. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems, when there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  8. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

    Full Text Available In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding on mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. On comparison with binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by using a predictive scheme and this was dependent on the composition of the solutes present. For more atmospherically representative higher order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus

  9. Proposal for a Method for Business Model Performance Assessment: Toward an Experimentation Tool for Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Antonio Batocchio

    2017-04-01

    Full Text Available The representation of business models has recently become widespread, especially in the pursuit of innovation. However, defining a company’s business model is sometimes limited to discussion and debates. This study observes the need for performance measurement so that business models can be data-driven. To meet this goal, the work proposed, as a hypothesis, a method that combines the practices of the Balanced Scorecard with a method of business model representation – the Business Model Canvas. Such a combination was based on a study of conceptual adaptation, resulting in an application roadmap. A case study application was performed to check the functionality of the proposition, focusing on startup organizations. It was concluded that, based on the performance assessment of the business model, it is possible to propose the search for change through experimentation, a path that can lead to business model innovation.

  10. The Performance Blueprint: An Integrated Logic Model Developed To Enhance Performance Measurement Literacy: The Case of Performance-Based Contract Management.

    Science.gov (United States)

    Longo, Paul J.

    This study explored the mechanics of using an enhanced, comprehensive multipurpose logic model, the Performance Blueprint, as a means of building evaluation capacity, referred to in this paper as performance measurement literacy, to facilitate the attainment of both service-delivery oriented and community-oriented outcomes. The application of this…

  11. A critical review of the use and performance of different function types for modeling temperature-dependent development of arthropod larvae.

    Science.gov (United States)

    Quinn, Brady K

    2017-01-01

    Temperature-dependent development influences production rates of arthropods, including crustaceans important to fisheries and agricultural pests. Numerous candidate equation types (development functions) exist to describe the effect of temperature on development time, yet most studies use only a single type of equation, and there is no consensus as to which, if any, model predicts development rates better than the others, nor what the consequences of selecting a potentially incorrect model equation are on predicted development times. In this study, a literature search was performed of studies fitting development functions to development data of arthropod larvae (99 species). The published data of most (79) of these species were then fit with 33 commonly-used development functions. Overall performance of each function type and the consequences of using a function other than the best one to model data were assessed. Performance was also related to taxonomy and the range of temperatures examined. The majority (91.1%) of studies were found not to use the best function out of those tested. Using the incorrect model led to significantly less accurate (e.g., mean difference±SE 85.9±27.4%, range: -1.7 to 1725.5%) predictions of development times than the best function. Overall, more complex functions performed poorly relative to simpler ones. However, performance of some complex functions improved when wide temperature ranges were tested, which tended to be confined to studies of insects or arachnids compared with those of crustaceans. Results indicate the biological significance of choosing the best-fitting model to describe temperature-dependent development time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
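
    Choosing "the best function out of those tested" is a model-selection exercise: fit each candidate and compare an information criterion such as AIC. A self-contained sketch with hypothetical development-rate data and two simple candidates (the temperatures, rates, and models below are illustrative, not from the review's 79-species dataset):

```python
import math

def ols_fit(xs, ys):
    """Closed-form simple linear regression: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def aic(rss, n, k):
    """Akaike information criterion from a residual sum of squares."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical data: development rate (1/days) vs. temperature [C]
temps = [8, 12, 16, 20, 24]
rates = [0.010, 0.022, 0.031, 0.042, 0.050]

# Candidate 1: linear degree-day model, rate = a + b*T (2 params + sigma)
a, b = ols_fit(temps, rates)
rss1 = sum((r - (a + b * t)) ** 2 for t, r in zip(temps, rates))

# Candidate 2: constant mean rate (1 param + sigma)
mean_r = sum(rates) / len(rates)
rss2 = sum((r - mean_r) ** 2 for r in rates)

better_is_linear = aic(rss1, len(rates), 3) < aic(rss2, len(rates), 2)
```

    Because AIC penalizes parameter count, it captures the review's observation that a more complex function must fit substantially better than a simpler one to be worth selecting.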

  12. New model performance index for engineering design of control systems

    Science.gov (United States)

    1970-01-01

    Performance index includes a model representing linear control-system design specifications. Based on a geometric criterion for approximation of the model by the actual system, the index can be interpreted directly in terms of the desired system response model without actually having the model's time response.

  13. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
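
    As a rough illustration of turning constraints into a distribution: when a parameter is bounded and only its mean and variance are known, a beta distribution can be pinned down by moment matching. Note this is a simplified stand-in for the formalism's actual entropy maximization, which uses Lagrange multipliers and, in the example above, four constraints.

```python
def beta_from_moments(mean, var):
    """Beta(a, b) on [0, 1] matching a given mean and variance
    (method of moments; an illustrative stand-in for the
    maximum-entropy derivation described in the abstract).
    """
    if not 0.0 < var < mean * (1.0 - mean):
        raise ValueError("variance infeasible for a beta on [0, 1]")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# Example: mean 0.5, variance 0.05 gives the symmetric Beta(2, 2).
a, b = beta_from_moments(0.5, 0.05)
```

    With only a bounded range known and no moment constraints, the maximum-entropy choice degenerates to the uniform distribution, which is the Beta(1, 1) special case.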

  14. Data harmonization and model performance

    Science.gov (United States)

    The Joint Committee on Urban Storm Drainage of the International Association for Hydraulic Research (IAHR) and International Association on Water Pollution Research and Control (IAWPRC) was formed in 1982. The current committee members are (no more than two from a country): B. C. Yen, Chairman (USA); P. Harremoes, Vice Chairman (Denmark); R. K. Price, Secretary (UK); P. J. Colyer (UK), M. Desbordes (France), W. C. Huber (USA), K. Krauth (FRG), A. Sjoberg (Sweden), and T. Sueishi (Japan).The IAHR/IAWPRC Joint Committee is forming a Task Group on Data Harmonization and Model Performance. One objective is to promote international urban drainage data harmonization for easy data and information exchange. Another objective is to publicize available models and data internationally. Comments and suggestions concerning the formation and charge of the Task Group are welcome and should be sent to: B. C. Yen, Dept. of Civil Engineering, Univ. of Illinois, 208 N. Romine St., Urbana, IL 61801.

  15. The performance of fine-grained and coarse-grained elastic network models and its dependence on various factors.

    Science.gov (United States)

    Na, Hyuntae; Song, Guang

    2015-07-01

    In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified the two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models, but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations, while the ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.
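
    The difference between ANM and the ANMr2 variant mentioned above comes down to the spring-constant rule in the elastic network. A minimal sketch of building the connectivity (Kirchhoff) matrix under either rule, with toy coordinates (real models use C-alpha positions and cutoff distances of roughly 7 to 15 angstroms):

```python
def kirchhoff(coords, cutoff=7.0, weight=lambda r2: 1.0):
    """Build an elastic-network connectivity (Kirchhoff) matrix.

    weight(r2) -> spring constant for a pair at squared distance r2:
    a constant 1.0 mimics uniform ANM-style springs, while 1.0/r2
    mimics the ANMr2 variant's inverse-square-distance springs.
    Coordinates and the cutoff here are toy values.
    """
    n = len(coords)
    k = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if r2 <= cutoff ** 2:
                g = weight(r2)
                k[i][j] = k[j][i] = -g
                k[i][i] += g
                k[j][j] += g
    return k

# Toy example: three sites on a line; only the first pair is in contact.
coords = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
k_uniform = kirchhoff(coords)                        # ANM-like springs
k_r2 = kirchhoff(coords, weight=lambda r2: 1.0 / r2)  # ANMr2-like springs
```

    Mean-square fluctuations then come from the pseudo-inverse of this matrix (for GNM) or of the corresponding 3N x 3N Hessian (for ANM), which is where the models' predictions are compared against NMA.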

  16. A personality trait-based interactionist model of job performance.

    Science.gov (United States)

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  17. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  18. Modeling, Analysis, and Control of a Hypersonic Vehicle with Significant Aero-Thermo-Elastic-Propulsion Interactions: Elastic, Thermal and Mass Uncertainty

    Science.gov (United States)

    Khatri, Jaidev

    This thesis examines the modeling, analysis, and control system design issues for scramjet-powered hypersonic vehicles. A nonlinear three-degrees-of-freedom longitudinal model which includes aero-propulsion-elasticity effects was used for all analyses. This model is based upon classical compressible flow and Euler-Bernoulli structural concepts. Higher-fidelity computational fluid dynamics and finite element methods are needed for more precise intermediate and final evaluations. The methods presented within this thesis were shown to be useful for guiding initial control-relevant design. The model was used to examine the vehicle's static and dynamic characteristics over the vehicle's trimmable region. The vehicle has significant longitudinal coupling between the fuel equivalency ratio (FER) and the flight path angle (FPA). For control system design, a two-input two-output plant (FER - elevator to speed-FPA) with 11 states (including 3 flexible modes) was used. Velocity, FPA, and pitch were assumed to be available for feedback. Aerodynamic heat modeling and design for the assumed TPS were incorporated into Bolender's original model to study the change in static and dynamic properties. Decentralized control stability, feasibility, and limitation issues were examined with respect to changes in TPS elasticity, mass, and physical dimensions. The impact of elasticity due to TPS mass, TPS physical dimension as well as prolonged heating was also analyzed to understand performance limitations of the decentralized control designed for the nominal model.

  19. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from the use of data in two models are presented.

  20. The Significance of the Bystander Effect: Modeling, Experiments, and More Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, David J.

    2009-07-22

    Non-targeted (bystander) effects of ionizing radiation are caused by intercellular signaling; they include production of DNA damage and alterations in cell fate (i.e. apoptosis, differentiation, senescence or proliferation). Biophysical models capable of quantifying these effects may improve cancer risk estimation at radiation doses below the epidemiological detection threshold. Understanding the spatial patterns of bystander responses is important, because it provides estimates of how many bystander cells are affected per irradiated cell. In a first approach to modeling of bystander spatial effects in a three-dimensional artificial tissue, we assumed the following: (1) The bystander phenomenon results from signaling molecules (S) that rapidly propagate from irradiated cells and decrease in concentration (exponentially in the case of planar symmetry) as distance increases. (2) These signals can convert cells to a long-lived epigenetically activated state, e.g. a state of oxidative stress; cells in this state are more prone to DNA damage and behavior alterations than normal and therefore exhibit an increased response (R) for many end points (e.g. apoptosis, differentiation, micronucleation). These assumptions were implemented by a mathematical formalism and computational algorithms. The model adequately described data on bystander responses in the 3D system using a small number of adjustable parameters. Mathematical models of radiation carcinogenesis are important for understanding mechanisms and for interpreting or extrapolating risk. There are two classes of such models: (1) long-term formalisms that track pre-malignant cell numbers throughout an entire lifetime but treat initial radiation dose-response simplistically and (2) short-term formalisms that provide a detailed initial dose-response even for complicated radiation protocols, but address its modulation during the subsequent cancer latency period only indirectly. We argue that integrating short- and long
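    Assumption (1) above lends itself to a compact numerical sketch. The following is a toy illustration, not the authors' formalism: it assumes planar-symmetry exponential decay of the signal concentration and a simple activation threshold, with the decay length, threshold, and cell distances invented purely for illustration.

```python
import math

def signal_concentration(distance_um, s0=1.0, decay_length_um=50.0):
    """Signal concentration at a given distance from an irradiated cell,
    assuming exponential decay under planar symmetry (model assumption 1)."""
    return s0 * math.exp(-distance_um / decay_length_um)

def bystander_fraction(distances_um, threshold=0.1):
    """Fraction of cells whose local signal exceeds the activation threshold
    and that therefore switch to the long-lived activated state."""
    activated = [d for d in distances_um if signal_concentration(d) >= threshold]
    return len(activated) / len(distances_um)

# distances (in micrometres) of bystander cells from the irradiated cell
cells = [10, 25, 50, 100, 150, 200, 300]
print(bystander_fraction(cells))
```

    With these illustrative parameters, cells within roughly 50·ln(10) ≈ 115 µm of the irradiated cell are activated, which is the kind of "bystanders affected per irradiated cell" estimate the spatial model provides.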

  1. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit tests and receiver operating characteristic analyses to examine the robustness of the predictive power of these factors.
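    The logistic-regression approach the study describes can be sketched in a few lines. This is a toy illustration, not the paper's model: the two predictors (a credit-risk index and a macroeconomic indicator) and the four observations are invented, and the fit uses plain stochastic gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic default-probability model by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# invented sample: [credit-risk index, GDP growth]; 1 = firm defaulted
X = [[0.9, -0.02], [0.8, -0.01], [0.2, 0.03], [0.3, 0.02]]
y = [1, 1, 0, 0]
w, b = fit_logistic(X, y)
probs = [sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) for xi in X]
print([round(p, 2) for p in probs])
```

    A model comparison of the kind reported would then refit with and without the macroeconomic column and compare goodness-of-fit and ROC measures.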

  2. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  3. A Water Treatment Case Study for Quantifying Model Performance with Multilevel Flow Modelling

    DEFF Research Database (Denmark)

    Nielsen, Emil Krabbe; Bram, Mads Valentin; Frutiger, Jerome

    2018-01-01

    Decision support systems are a key focus of research on developing control rooms to aid operators in making reliable decisions, and reducing incidents caused by human errors. For this purpose, models of complex systems can be developed to diagnose causes or consequences for specific alarms. Models … during operation, this work aims to synthesize a procedure to measure model performance according to diagnostic requirements. A simple procedure is proposed for validating and evaluating the concept of Multilevel Flow Modelling. For this purpose, expert statements, dynamic process simulations, and pilot …

  4. A Novel Feed-Forward Modeling System Leads to Sustained Improvements in Attention and Academic Performance.

    Science.gov (United States)

    McDermott, Ashley F; Rose, Maya; Norris, Troy; Gordon, Eric

    2016-01-28

    This study tested a novel feed-forward modeling (FFM) system as a nonpharmacological intervention for the treatment of children with ADHD and the training of cognitive skills that improve academic performance. The study implemented a randomized, controlled, parallel design comparing the FFM with a nonpharmacological community care intervention. Improvements were measured on parent- and clinician-rated scales of ADHD symptomatology and on academic performance tests completed by the participant. Participants were followed for 3 months after training. Participants in the FFM training group showed significant improvements in ADHD symptomatology and academic performance, while the control group did not. Improvements from FFM were sustained 3 months later. The FFM thus appeared to be an effective intervention for treating ADHD and improving academic performance, and shows promise as a first-line treatment for ADHD. © The Author(s) 2016.

  5. Analysis of the Sheltered Instruction Observation Protocol Model on Academic Performance of English Language Learners

    Science.gov (United States)

    Ingram, Sandra W.

    This quantitative comparative descriptive study involved analyzing archival data from end-of-course (EOC) test scores in biology of English language learners (ELLs) taught or not taught using the sheltered instruction observation protocol (SIOP) model. The study includes descriptions and explanations of the benefits of the SIOP model to ELLs, especially in content-area subjects such as biology. Researchers have shown that ELLs in high school lag behind their peers in academic achievement in content-area subjects. Much of the research on the SIOP model took place in elementary and middle school, and more research was necessary at the high school level. This study involved analyzing student records from archival data to describe and explain whether the SIOP model had an effect on the EOC test scores of ELLs taught or not taught using it. The sample consisted of 527 Hispanic students (283 females and 244 males) from Grades 9-12. An independent-samples t-test was used to determine whether a significant difference existed in the mean EOC test scores of ELLs taught using the SIOP model as opposed to ELLs not taught using the SIOP model. The results indicated that a significant difference existed between the EOC test scores of the two groups (p = .02). A regression analysis indicated that a significant difference existed in the academic performance of ELLs taught using the SIOP model in high school science, controlling for free and reduced-price lunch (p = .001), in predicting passing scores on the EOC test in biology at the school level. Free and reduced-price lunch considered together with SIOP status, however, was not significant (p = .175) for predicting passing scores on the EOC test in high school biology. Future researchers should repeat the study with student-level data as opposed to school-level data, and data should span at least three years.
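    The independent-samples t-test used above can be reproduced with a short helper. This is a generic pooled-variance (Student's) implementation with made-up example scores, not the study's data:

```python
import math
import statistics

def independent_t(sample_a, sample_b):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# hypothetical EOC scores: SIOP-taught group vs. comparison group
t_stat, df = independent_t([80, 85, 90], [70, 75, 80])
print(round(t_stat, 3), df)
```

    The t statistic and degrees of freedom would then be compared against the t distribution to obtain the p-value reported in the abstract.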

  6. Finite element modelling of radial lentotomy cuts to improve the accommodation performance of the human lens.

    Science.gov (United States)

    Burd, H J; Wilde, G S

    2016-04-01

    The use of a femtosecond laser to form planes of cavitation bubbles within the ocular lens has been proposed as a potential treatment for presbyopia. The intended purpose of these planes of cavitation bubbles (referred to in this paper as 'cutting planes') is to increase the compliance of the lens, with a consequential increase in the amplitude of accommodation. The current paper describes a computational modelling study, based on three-dimensional finite element analysis, to investigate the relationship between the geometric arrangement of the cutting planes and the resulting improvement in lens accommodation performance. The study is limited to radial cutting planes. The effectiveness of a variety of cutting plane geometries was investigated by means of modelling studies conducted on a 45-year-old human lens. The results obtained from the analyses depend on the particular modelling procedures that are employed. When the lens substance is modelled as an incompressible material, radial cutting planes are found to be ineffective. However, when a poroelastic model is employed for the lens substance, radial cuts are shown to cause an increase in the computed accommodation performance of the lens. In this case, radial cuts made in the peripheral regions of the lens have a relatively small influence on the accommodation performance of the lens; the lentotomy process is seen to be more effective when cuts are made near to the polar axis. When the lens substance is modelled as a poroelastic material, the computational results suggest that useful improvements in lens accommodation performance can be achieved, provided that the radial cuts are extended to the polar axis. Radial cuts are ineffective when the lens substance is modelled as an incompressible material. Significant challenges remain in developing a safe and effective surgical procedure based on this lentotomy technique.

  7. Managerial performance and cost efficiency of Japanese local public hospitals: a latent class stochastic frontier model.

    Science.gov (United States)

    Besstremyannaya, Galina

    2011-09-01

    The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positive and significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that the unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the case for raising the efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.

  8. A Novel Environmental Performance Evaluation of Thailand’s Food Industry Using Structural Equation Modeling and Fuzzy Analytic Hierarchy Techniques

    Directory of Open Access Journals (Sweden)

    Anirut Pipatprapa

    2016-03-01

    Currently, the environment and sustainability are important topics for every industry. The food industry is particularly complicated in this regard because of the dynamic and complex character of food products and their production. This study uses structural equation modeling (SEM) and a fuzzy analytic hierarchy process (FAHP) to investigate which factors are suitable for evaluating the environmental performance of Thailand's food industry. A first-stage questionnaire survey was conducted with 178 managers in the food industry who had obtained a certificate from the Department of Industrial Work of Thailand, to synthesize the performance measurement model and the significance of the relationships between the indicators. A second-stage questionnaire measured 18 experts' priorities regarding the criteria and sub-factors involved in the different aspects and assessment items of environmental performance. SEM showed that quality management, market orientation, and innovation capability have a significantly positive effect on environmental performance. The FAHP showed that the experts were most concerned about quality management, followed by market orientation and innovation capability; the assessment items for quality policy, quality assurance, and customer orientation were of the most concern. The findings of this study can be referenced and can support managerial decision making when monitoring environmental performance.
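    In its crisp (non-fuzzy) form, the AHP prioritization step reduces to deriving a priority vector from a pairwise comparison matrix. The sketch below uses the row geometric-mean method on an invented Saaty-scale matrix for the three factors named above; the fuzzy extension (FAHP) would replace the crisp entries with fuzzy numbers and defuzzify the resulting weights.

```python
import math

# invented Saaty-scale pairwise comparisons for the three factors:
# quality management (QM), market orientation (MO), innovation capability (IC)
A = [
    [1.0, 2.0, 3.0],   # QM compared with (QM, MO, IC)
    [1/2, 1.0, 2.0],   # MO
    [1/3, 1/2, 1.0],   # IC
]

def ahp_weights(matrix):
    """Priority vector via the row geometric-mean approximation."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(A)
print([round(w, 3) for w in weights])
```

    With this invented matrix, quality management receives the largest weight, mirroring the ordering the experts produced in the study.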

  9. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
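    ERD/ERS percentages of the kind analyzed here are conventionally computed as the band-power change relative to a pre-event baseline (Pfurtscheller's convention, under which negative values indicate desynchronization and positive values synchronization). A minimal helper, with illustrative numbers only:

```python
def erd_ers_percent(baseline_power, task_power):
    """Relative band-power change: negative = ERD, positive = ERS."""
    return (task_power - baseline_power) / baseline_power * 100.0

# e.g. alpha band power dropping from 10 to 8 (arbitrary units) before the shot
print(erd_ers_percent(10.0, 8.0))
```

    In the study, such percentages were computed per frequency band and electrode site for the three seconds preceding each shot, then entered into the repeated-measures ANOVA.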

  10. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than for a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
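    The criteria compared in the study are simple functions of a fitted model's maximized log-likelihood, parameter count, and sample size. A minimal sketch of the selection rule follows (DIC, which requires posterior simulation, is omitted; the two candidate fits are invented numbers, not the study's results):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion."""
    return k * math.log(n) - 2 * log_lik

def caic(log_lik, k, n):
    """Consistent AIC (BIC with an extra +k penalty)."""
    return k * (math.log(n) + 1) - 2 * log_lik

# hypothetical fits: (name, maximized log-likelihood, number of parameters)
fits = [("standard ECM", -102.0, 4), ("complex ECM", -99.5, 7)]
n = 200
best_by_bic = min(fits, key=lambda f: bic(f[1], f[2], n))
print(best_by_bic[0])
```

    Because BIC and CAIC penalize parameters more heavily than AIC, they tend to favor the simpler specification here, which is the pattern the Monte Carlo results describe.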

  11. Cloud service performance evaluation: status, challenges, and opportunities – a survey from the system modeling perspective

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2017-05-01

    With the rapid advancement of Cloud computing and networking technologies, a wide spectrum of Cloud services have been developed by various providers and utilized by numerous organizations as indispensable ingredients of their information systems. Cloud service performance has a significant impact on the performance of the future information infrastructure. Thorough evaluation of Cloud service performance is crucial and beneficial to both service providers and consumers, and thus forms an active research area. Some key technologies for Cloud computing, such as virtualization and the Service-Oriented Architecture (SOA), bring special challenges to service performance evaluation. A tremendous amount of effort has been put in by the research community to address these challenges, and exciting progress has been made. Among the work on Cloud performance analysis, evaluation approaches developed from a system modeling perspective play an important role. However, related works have been reported in different sections of the literature, and the area thus lacks a big picture that shows its latest status. The objective of this article is to present a survey that reflects the state of the art of Cloud service performance evaluation from the system modeling perspective. This article also examines open issues and challenges for the surveyed evaluation approaches and identifies possible opportunities for future research in this important field.

  12. Proposing a Model to present Factors which Affect e-SCM Risk and their Impacts on Organizational Performance

    Directory of Open Access Journals (Sweden)

    ali rajabzadeh ghatari

    2010-03-01

    Companies strive to improve market share, grow corporate profit, and gain strategic advantage. In order to achieve these goals, supply chain competency must be placed at the heart of a company's business model. Using e-Commerce and information and communication technologies (ICT) in today's changing business environment has made organizations more responsive and flexible. E-Commerce and the Internet have changed the nature of supply chains and redefined how customers learn about, select, purchase, and use products and services. The advent of ICT and the new business environment has led to the emergence of electronic supply chains. This research proposes a model of the factors that affect electronic supply chain risk, together with the influence of that risk on financial and non-financial indicators of organizational performance. The influence of the risk on organizational performance is studied in a sample of electronics and telecommunications companies. Measurement of these relationships using correlation and structural equation modeling (SEM) techniques showed that electronic supply chain risk identification and management have a significant impact on improving organizational performance.

  13. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of the aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with a mean squared error of 1.5%. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 were modeled with a mean squared error of 2.45%.
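    A scaled-down version of such a network can be written directly: one input (flow coefficient), one tanh hidden layer, one output (efficiency), trained by stochastic gradient descent on an invented performance curve. The architecture, data, and hyper-parameters here are all illustrative and are not those of the paper:

```python
import math
import random

random.seed(0)

# invented "performance map": stage efficiency vs. flow coefficient
xs = [0.01 + 0.005 * i for i in range(15)]           # flow coefficient, 0.01-0.08
ys = [0.85 - 60.0 * (x - 0.045) ** 2 for x in xs]    # made-up efficiency curve

H = 6                                                # hidden tanh neurons
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0
SCALE = 20.0                                         # bring inputs to O(1)

def forward(x):
    h = [math.tanh(w1[j] * x * SCALE + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
lr = 0.01
for _ in range(3000):                                # stochastic gradient descent
    for x, y in zip(xs, ys):
        out, h = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            w1[j] -= lr * grad_h * x * SCALE
        b2 -= lr * err
loss_after = mse()
print(loss_before, loss_after)
```

    The paper's networks were trained on measured stage data over the same flow-coefficient range; the structure above merely shows the function-approximation idea behind the reported mean squared errors.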

  14. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. The objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development.

  15. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  16. Team performance modeling for HRA in dynamic situations

    International Nuclear Information System (INIS)

    Shu Yufei; Furuta, Kazuo; Kondo, Shunsuke

    2002-01-01

    This paper proposes a team behavior network model that can simulate and analyze response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models, which describe the context of team performance. They are task model, event model, team model and human-machine interface model. Each operator demonstrates aspects of his/her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. Individual human factors, which determine the basis of communication and interaction between individuals, and cognitive process of an operator, such as information acquisition, state-recognition, decision-making and action execution during development of an event scenario are modeled. A case of feed and bleed operation in pressurized water reactor under an emergency situation was studied and the result was compared with an experiment to check the validity of the proposed model

  17. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance the system performance by exploiting the channel's degrees of freedom in the elevation through the dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models like COST-based models, 3GPP SCM, WINNER, and ITU have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performances of advanced signal processing techniques over realistic channels. Given that existing channel models are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. 
The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  18. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    This research focuses on analysis of the evolution of board roles, which took place alongside the development of different corporate governance theories and perspectives. The purpose of this paper is to provide an understanding of the key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper's contribution is the development of an integrative model that aims to establish the relationship between board structure and processes on the one hand, and board role performance on the other.

  19. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed not only to quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O'Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
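    The full Kennedy and O'Hagan framework is involved, but its core, sampling the posterior of a model parameter given plant measurements, can be sketched with a random-walk Metropolis chain. Everything below is a toy stand-in: the one-parameter "plant model", the synthetic measurements, the flat prior, and the proposal width are all invented for illustration.

```python
import math
import random

random.seed(1)

def model_eff(t_amb, k):
    """Toy plant model: efficiency falling linearly with ambient temperature."""
    return 0.58 - k * (t_amb - 15.0)

# synthetic "measurements" generated from a true k = 0.002 with sensor noise
temps = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
true_k, noise_sd = 0.002, 0.005
meas = [model_eff(t, true_k) + random.gauss(0.0, noise_sd) for t in temps]

def log_post(k):
    """Gaussian likelihood with a flat prior on k in [0, 0.01]."""
    if not 0.0 <= k <= 0.01:
        return -math.inf
    sq = sum((m - model_eff(t, k)) ** 2 for t, m in zip(temps, meas))
    return -sq / (2.0 * noise_sd ** 2)

# random-walk Metropolis over the single calibration parameter
k, samples = 0.005, []
for i in range(20000):
    proposal = k + random.gauss(0.0, 0.0005)
    if random.random() < math.exp(min(0.0, log_post(proposal) - log_post(k))):
        k = proposal
    if i >= 5000:                       # discard burn-in
        samples.append(k)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 4))
```

    The posterior spread of `k` plays the role of the parameter uncertainty that the article propagates into efficiency predictions; the Kennedy and O'Hagan framework additionally models measurement error and systematic model discrepancy as separate terms.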

  20. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance in information technology adoption. The new model was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts might lack theoretical grounding. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  1. Fiber Bragg grating-based performance monitoring of a slope model subjected to seepage

    Science.gov (United States)

    Zhu, Hong-Hu; Shi, Bin; Yan, Jun-Fan; Zhang, Jie; Zhang, Cheng-Cheng; Wang, Bao-Jun

    2014-09-01

    In the past few years, fiber optic sensing technologies have played an increasingly important role in the health monitoring of civil infrastructures. These innovative sensing technologies have recently been successfully applied to the performance monitoring of a series of geotechnical structures. Fiber optic sensors have shown many unique advantages in comparison with conventional sensors, including immunity to electrical noise, higher precision and improved durability and embedding capabilities; fiber optic sensors are also smaller in size and lighter in weight. In order to explore the mechanism of seepage-induced slope instability, a small-scale 1 g model test of the soil slope has been performed in the laboratory. During the model's construction, specially fabricated sensing fibers containing nine fiber Bragg grating (FBG) strain sensors connected in a series were horizontally and vertically embedded into the soil mass. The surcharge load was applied on the slope crest, and the groundwater level inside of the slope was subsequently varied using two water chambers installed beside the slope model. The fiber optic sensing data of the vertical and horizontal strains within the slope model were automatically recorded by an FBG interrogator and a computer during the test. The test results are presented and interpreted in detail. It is found that the gradually accumulated deformation of the slope model subjected to seepage can be accurately captured by the quasi-distributed FBG strain sensors. The test results also demonstrate that the slope stability is significantly affected by ground water seepage, which fits well with the results that were calculated using finite element and limit equilibrium methods. The relationship between the strain measurements and the safety factors is further analyzed, together with a discussion on the residual strains. The performance evaluation of a soil slope using fiber optic strain sensors is proved to be a potentially effective
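    For reference, the conversion from an FBG peak-wavelength shift to axial strain follows the standard relation Δλ/λ = (1 − p_e)·ε when temperature effects are neglected, where p_e ≈ 0.22 is a typical photo-elastic coefficient for silica fibre. The wavelength values below are illustrative, not measurements from this test:

```python
def fbg_strain(bragg_nm, shift_nm, p_e=0.22):
    """Axial strain from an FBG wavelength shift: shift/bragg = (1 - p_e) * strain.
    Temperature cross-sensitivity is ignored; p_e ~0.22 is typical for silica."""
    return shift_nm / (bragg_nm * (1.0 - p_e))

# illustrative numbers: a 1550 nm grating whose peak shifts by 0.012 nm
strain = fbg_strain(1550.0, 0.012)
print(round(strain * 1e6, 2), "microstrain")
```

    In practice a reference (strain-free) grating or a temperature sensor is interrogated alongside, so that thermal wavelength shifts can be subtracted before applying this relation.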

  2. Fiber Bragg grating-based performance monitoring of a slope model subjected to seepage

    International Nuclear Information System (INIS)

    Zhu, Hong-Hu; Shi, Bin; Yan, Jun-Fan; Zhang, Cheng-Cheng; Wang, Bao-Jun; Zhang, Jie

    2014-01-01

    In the past few years, fiber optic sensing technologies have played an increasingly important role in the health monitoring of civil infrastructures. These innovative sensing technologies have recently been applied successfully to the performance monitoring of a series of geotechnical structures. Fiber optic sensors offer many unique advantages over conventional sensors, including immunity to electrical noise, higher precision, and improved durability and embedding capabilities; fiber optic sensors are also smaller and lighter. In order to explore the mechanism of seepage-induced slope instability, a small-scale 1 g model test of a soil slope was performed in the laboratory. During the model’s construction, specially fabricated sensing fibers containing nine fiber Bragg grating (FBG) strain sensors connected in series were embedded horizontally and vertically into the soil mass. A surcharge load was applied on the slope crest, and the groundwater level inside the slope was subsequently varied using two water chambers installed beside the slope model. The fiber optic sensing data of the vertical and horizontal strains within the slope model were recorded automatically by an FBG interrogator and a computer during the test. The test results are presented and interpreted in detail. It is found that the gradually accumulated deformation of the slope model subjected to seepage can be accurately captured by the quasi-distributed FBG strain sensors. The test results also demonstrate that the slope stability is significantly affected by groundwater seepage, which fits well with the results calculated using finite element and limit equilibrium methods. The relationship between the strain measurements and the safety factors is further analyzed, together with a discussion of the residual strains. The performance evaluation of a soil slope using fiber optic strain sensors is proved to be a potentially effective
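
    As a rough illustration of how quasi-distributed FBG readings become strain values, the standard first-order Bragg relation can be sketched as follows; the photo-elastic coefficient and wavelengths are typical textbook values, not figures from this study:

```python
# Convert an FBG Bragg-wavelength shift to strain: a minimal sketch.
# First-order relation: d(lambda)/lambda0 = (1 - p_e) * strain, where p_e is
# the effective photo-elastic coefficient (~0.22 for silica fiber).
# All numeric values here are illustrative, not from the paper.

P_E = 0.22  # effective photo-elastic coefficient (typical for silica)

def wavelength_shift_to_microstrain(lambda0_nm, shifted_nm, p_e=P_E):
    """Return strain in microstrain from a Bragg wavelength shift."""
    d_lambda = shifted_nm - lambda0_nm
    strain = d_lambda / lambda0_nm / (1.0 - p_e)
    return strain * 1e6  # convert to microstrain

# Example: a 1550.000 nm grating that shifts to 1550.012 nm under tension
eps = wavelength_shift_to_microstrain(1550.000, 1550.012)
print(round(eps, 1))
```

    Temperature cross-sensitivity is ignored in this sketch; in practice a reference grating or a thermo-optic correction term is needed.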

  3. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, for some structures and performance objectives, these design forces may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy by evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.

  4. Overall models and experimental database for UO2 and MOX fuel increasing performance

    International Nuclear Information System (INIS)

    Bernard, L.C.; Blanpain, P.

    2001-01-01

    COPERNIC is an advanced fuel rod performance code developed by Framatome. It is based on the TRANSURANUS code, which has a clear and flexible architecture and offers many modeling possibilities. The main objectives of COPERNIC are to accurately predict steady-state and transient fuel operations at high burnups and to incorporate advanced materials such as the Framatome M5-alloy cladding. An extensive development program was undertaken to benchmark the code to very high burnups and to new M5-alloy cladding data. New models were developed for the M5-alloy cladding, and the COPERNIC thermal models were upgraded and improved to extend the predictions to burnups over 100 GWd/tM. Since key phenomena, like fission gas release, are strongly temperature dependent, many other models were also upgraded. The COPERNIC qualification range extends to 67, 55, and 53 GWd/tM, respectively, for UO2, UO2-Gd2O3, and MOX fuels with Zircaloy-4 claddings. The range extends to 63 GWd/tM with UO2 fuel and the advanced M5-alloy cladding. The paper focuses on thermal and fission gas release models, and on MOX fuel modeling. The COPERNIC thermal model consists of several submodels: gap conductance, gap closure, fuel thermal conductivity, radial power profile, and fuel rim. The fuel thermal conductivity and the gap closure models, in particular, have been significantly improved. The model was benchmarked with 3400 fuel centerline temperature data from many French and international programs. There are no measured-to-predicted statistical biases with respect to linear heat generation rate or burnup. The overall quality of the model is state-of-the-art, as the model uncertainty is below 10%. The fission gas release model takes into account athermal and thermally activated mechanisms. The model was adapted to MOX and gadolinia fuels. For the heterogeneous MOX MIMAS fuels, an effective burnup is used for the incubation threshold. For gadolinia fuels, a scaled temperature effect is used. The

  5. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  6. Performance limits of coated particle fuel. Part I. The significance of empirical performance diagrams and mathematical models in fuel development and power reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Graham, L. W.; Hick, H.

    1973-06-15

    This report introduces a general survey of our present knowledge and understanding of coated particle fuel performance. It first defines the reference power reactor conditions and the reference coated particle design on which the survey is centred. It then describes the typical strategy that has been followed in coated particle fuel development by the Dragon Project R & D Branch. Finally, it shows the priorities which have governed the time scale and scope of fuel development and of the present review.

  7. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance type × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for the theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands, and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  8. Range performance calculations using the NVEOL-Georgia Tech Research Institute 0.1- to 100-GHz radar performance model

    Science.gov (United States)

    Rodak, S. P.; Thomas, N. I.

    1983-05-01

    A computer model that can be used to calculate radar range performance at any frequency in the 0.1- to 100-GHz electromagnetic spectrum is described. Several numerical examples are used to demonstrate how to use the radar range performance model. Input/output documentation is included for each case that was run on the MERADCOM CDC 6600 computer at Fort Belvoir, Virginia.

  9. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93%, and delay the peak flow by 0 to 30 min, and thereby decrease pollution, flooding, and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics, making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the stormwater management model (SWMM), soil-water-atmosphere-plant (SWAP), SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow and for delay of peak flow; however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.
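
    The sensitivity to storm size and antecedent conditions described above can be illustrated with a toy water-balance "bucket" model; the capacity and evapotranspiration values below are invented for illustration and are not from any reviewed study:

```python
# Toy water-balance "bucket" model of green roof retention, illustrating why
# reported runoff reductions vary with storm size and antecedent moisture.
# Capacity and ET values are illustrative assumptions only.

def simulate(rain_mm, capacity_mm=30.0, et_mm_per_step=0.1, storage_mm=0.0):
    """Return (total_runoff_mm, retention_fraction) for a rainfall series."""
    runoff = 0.0
    for r in rain_mm:
        storage_mm = max(0.0, storage_mm - et_mm_per_step)  # evapotranspiration
        storage_mm += r
        if storage_mm > capacity_mm:                        # overflow becomes runoff
            runoff += storage_mm - capacity_mm
            storage_mm = capacity_mm
    total = sum(rain_mm)
    return runoff, (1.0 - runoff / total) if total else 1.0

# Small storm: fully retained; large storm: only partial retention.
print(simulate([5, 5, 5]))
print(simulate([20, 20, 20]))
```

    Starting `storage_mm` above zero mimics wet antecedent conditions and lowers the retention fraction, matching the qualitative behaviour the review reports.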

  10. A resilience-based model for performance evaluation of information systems: the case of a gas company

    Science.gov (United States)

    Azadeh, A.; Salehi, V.; Salehi, R.

    2017-10-01

    Information systems (IS) are strongly influenced by changes in new technology and should react swiftly in response to external conditions. Resilience engineering is a new method that can enable these systems to absorb changes. In this study, a new framework is presented for the performance evaluation of IS that includes DeLone and McLean's factors of success in addition to resilience. Hence, this study is an attempt to evaluate the impact of resilience on IS with the proposed model in the Iranian Gas Engineering and Development Company, using data obtained from questionnaires and the Fuzzy Data Envelopment Analysis (FDEA) approach. First, the FDEA model with α-cut = 0.05 was identified as the most suitable model for this application by running both the Banker, Charnes and Cooper (BCC) and the Charnes, Cooper and Rhodes (CCR) variants of FDEA and selecting the appropriate model based on maximum mean efficiency. Then, the factors were ranked based on the results of a sensitivity analysis, which showed that resilience had a significantly higher impact on the proposed model than the other factors. The results of this study were then verified by conducting the related ANOVA test. This is the first study that examines the impact of resilience on IS by statistical and mathematical approaches.

  11. Loss model for off-design performance analysis of radial turbines with pivoting-vane, variable-area stators

    Science.gov (United States)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

    An off-design performance loss model for a radial turbine with pivoting, variable-area stators is developed through a combination of analytical modeling and experimental data analysis. A viscous loss model is used for the variation in stator loss with setting angle, and stator vane end-clearance leakage effects are predicted by a clearance flow model. The variation of rotor loss coefficient with stator setting angle is obtained by means of an analytical matching of experimental data for a rotor that was tested with six stators, having throat areas from 20 to 144% of the design area. An incidence loss model is selected to obtain best agreement with experimental data. The stator vane end-clearance leakage model predicts increasing mass flow and decreasing efficiency as a result of end-clearances, with changes becoming significantly larger with decreasing stator area.

  12. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
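
    One common way to operationalize the analytical strategies such studies compare is a likelihood-ratio test between nested logistic regression models (matching score only vs. score plus group membership). The sketch below, with simulated item responses and a plain gradient-ascent fitter, is illustrative only and is not the authors' procedure:

```python
# Nested logistic regression models for uniform DIF, compared with a
# likelihood-ratio (deviance difference) statistic. Simulated data and a
# simple batch gradient-ascent fitter; illustrative, not the study's code.
import math, random

def fit_logistic(X, y, lr=1.0, iters=2500):
    """Fit logistic regression by gradient ascent; return (weights, deviance)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            pr = 1.0 / (1.0 + math.exp(-z))
            for j in range(p):
                grad[j] += (yi - pr) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    dev = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        pr = 1.0 / (1.0 + math.exp(-z))
        dev -= 2.0 * (yi * math.log(pr) + (1 - yi) * math.log(1.0 - pr))
    return w, dev

# Simulated responses with uniform DIF against the focal group (illustrative).
random.seed(1)
data = []
for _ in range(300):
    s = random.random()          # matching criterion (scaled total score)
    g = random.randint(0, 1)     # 0 = reference group, 1 = focal group
    z = -1.0 + 3.0 * s - 1.2 * g
    y = 1 if random.random() < 1.0 / (1.0 + math.exp(-z)) else 0
    data.append((s, g, y))

X_base = [[1.0, s] for s, g, y in data]        # matching score only
X_aug  = [[1.0, s, g] for s, g, y in data]     # score + group (uniform DIF)
ys = [y for s, g, y in data]
_, dev_base = fit_logistic(X_base, ys)
_, dev_aug  = fit_logistic(X_aug, ys)
lr_chi2 = dev_base - dev_aug   # compare to a chi-square with 1 df
print(round(lr_chi2, 2))
```

    A non-uniform DIF test would add a score-by-group interaction term as a third nested model; the Wald alternative instead divides the fitted group coefficient by its standard error.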

  13. Weather model performance on extreme rainfall event simulations over the Western Iberian Peninsula

    Science.gov (United States)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the rainy season of mainland Portugal. The periods of heavy to extremely heavy rainfall were due to several low surface-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (south of the country) to 1175 mm (north of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location is included (RunObsN); (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but model performance of the other two runs was good too, so that the selected extreme rainfall episode was successfully reproduced.
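
    A minimal sketch of the kind of statistical parameters used for such station-by-station evaluation (bias, RMSE, Pearson correlation); the data values are invented for illustration:

```python
# Minimal versions of common verification statistics for hourly precipitation
# evaluation. Observation/simulation values are illustrative only.
import math

def bias(obs, sim):
    """Mean error; positive values indicate overestimation."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    """Root-mean-square error."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def pearson_r(obs, sim):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

obs = [0.0, 1.2, 3.4, 0.5, 0.0, 2.1]   # mm/h, invented
sim = [0.1, 1.8, 4.0, 0.9, 0.0, 2.5]   # a model that overestimates slightly
print(round(bias(obs, sim), 2), round(rmse(obs, sim), 2),
      round(pearson_r(obs, sim), 3))
```

    A positive bias with high correlation, as in this toy case, matches the paper's finding that the temporal pattern is captured while amounts are overestimated.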

  14. A refined index of model performance: a rejoinder

    Science.gov (United States)

    Legates, David R.; McCabe, Gregory J.

    2013-01-01

    Willmott et al. [Willmott CJ, Robeson SM, Matsuura K. 2012. A refined index of model performance. International Journal of Climatology, forthcoming. DOI: 10.1002/joc.2419.] recently suggested a refined index of model performance (dr) that they purport to be superior to other methods. Their refined index ranges from −1.0 to 1.0 to resemble a correlation coefficient, but it is merely a linear rescaling of our modified coefficient of efficiency (E1) over the positive portion of the domain of dr. We disagree with Willmott et al. (2012) that dr provides a better interpretation; rather, E1 is more easily interpreted: a value of E1 = 1.0 indicates a perfect model (no errors), while E1 = 0.0 indicates a model that is no better than the baseline comparison (usually the observed mean). Negative values of E1 (and, for that matter, of dr) indicate a model that performs worse than the baseline, an issue previously discussed by Legates and McCabe [Legates DR, McCabe GJ. 1999. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.] and by Schaefli and Gupta [Schaefli B, Gupta HV. 2007. Do Nash values have value? Hydrological Processes 21: 2075-2080. DOI: 10.1002/hyp.6825.]. This important discussion focuses on the appropriate baseline comparison to use, and on why the observed mean often may be an inadequate choice for model evaluation and development.
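
    The algebraic relationship at issue can be made concrete: on the upper branch of its domain, dr is exactly (1 + E1)/2. A minimal sketch with invented data:

```python
# E1 (modified coefficient of efficiency) and dr (refined index of agreement),
# showing that dr is a linear rescaling of E1 whenever the summed absolute
# error does not exceed twice the summed absolute deviation from the observed
# mean. Data values are invented for illustration.

def e1(obs, sim):
    """Modified coefficient of efficiency (Legates & McCabe 1999)."""
    mean_o = sum(obs) / len(obs)
    err = sum(abs(s - o) for o, s in zip(obs, sim))
    dev = sum(abs(o - mean_o) for o in obs)
    return 1.0 - err / dev

def dr(obs, sim):
    """Refined index of agreement (Willmott et al. 2012)."""
    mean_o = sum(obs) / len(obs)
    err = sum(abs(s - o) for o, s in zip(obs, sim))
    dev = sum(abs(o - mean_o) for o in obs)
    if err <= 2.0 * dev:
        return 1.0 - err / (2.0 * dev)
    return 2.0 * dev / err - 1.0

obs = [2.0, 4.0, 6.0, 8.0]
sim = [2.5, 3.5, 6.5, 7.0]
# On the upper branch, dr == (1 + E1) / 2 exactly.
print(e1(obs, sim), dr(obs, sim))
```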

  15. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects: flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with a half-million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  16. A strategic management model for evaluation of health, safety and environmental performance.

    Science.gov (United States)

    Abbaspour, Majid; Toutounchian, Solmaz; Roayaei, Emad; Nassiri, Parvin

    2012-05-01

    A strategic health, safety, and environmental management system (HSE-MS) involves systematic and cooperative planning in each phase of the lifecycle of a project to ensure that interaction among the industry group, client, contractor, stakeholder, and host community takes place with the highest level of health, safety, and environmental performance standards. Therefore, it seems necessary to assess the HSE-MS performance of contractor(s) with a comparative strategic management model aimed at continuous improvement. The present Strategic Management Model (SMM) is illustrated by a case study, and the results show that the model is a suitable management tool for decision making in a contract environment, especially in oil and gas fields, based on accepted international standards within the framework of the Deming management cycle. To develop this model, a data bank has been created, which includes statistical data calculated by converting qualitative HSE performance data into quantitative values. The structure of the model has been formed by defining HSE performance indicators according to the HSE-MS model: 178 indicators have been selected and grouped into four attributes. Model output provides quantitative measures of HSE-MS performance as a percentage of an ideal level, with a maximum possible score for each attribute. Defining the strengths and weaknesses of the contractor(s) is another capability of this model. In addition, the model provides a ranking that can be used as the basis for decision making at the contractors' pre-qualification phase or during the execution of the project.

  17. A Five-Year CMAQ PM2.5 Model Performance for Wildfires and Prescribed Fires

    Science.gov (United States)

    Wilkins, J. L.; Pouliot, G.; Foley, K.; Rappold, A.; Pierce, T. E.

    2016-12-01

    Biomass burning has been identified as an important contributor to the degradation of air quality because of its impact on ozone and particulate matter. Two components of the biomass burning inventory, wildfires and prescribed fires, are routinely estimated in the national emissions inventory. However, there is a large amount of uncertainty in the development of these emission inventory sectors. We have completed a 5-year set of CMAQ model simulations (2008-2012) in which we simulated regional air quality with and without the wildfire and prescribed fire inventory. We will examine CMAQ model performance over regions with significant PM2.5 and ozone contributions from prescribed fires and wildfires. We will also review plume rise to see how it affects model bias, and compare the current CMAQ fire emissions input to an hourly dataset from FLAMBE.

  18. Evaluation of CFVS Performance with SPARC Model and Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Il; Na, Young Su; Ha, Kwang Soon; Cho, Song Won [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Containment Filtered Venting System (CFVS) is one of the important safety features for reducing the amount of fission products released into the environment by depressurizing the containment. KAERI has been conducting an integrated performance verification test of the CFVS as part of a Ministry of Trade, Industry and Energy (MOTIE) project. For wet-type filters, several codes are generally used, such as SPARC, BUSCA, and SUPRA; in particular, the SPARC model is included in MELCOR to calculate the fission product removal rate through pool scrubbing. In this study, CFVS performance is evaluated using the SPARC model in MELCOR according to the steam fraction in the containment. The calculation mainly focuses on the effect of the steam fraction in the containment, and the result is explained with the aerosol removal model in SPARC. A previous study on the OPR 1000 is applied to the result. There were two CFVS valve opening periods, and it is found that the CFVS performance differs in each case. The results of the study provide fundamental data that can be used to decide the CFVS operation time; however, more calculation data are necessary to generalize the result.

  19. Exploring the relationships among performance-based functional ability, self-rated disability, perceived instrumental support, and depression: a structural equation model analysis.

    Science.gov (United States)

    Weil, Joyce; Hutchinson, Susan R; Traxler, Karen

    2014-11-01

    Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.

  20. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.
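
    The validation idea argued for here can be shown in miniature: calibrate a simple log-linear effort model and score it by leave-one-out cross-validation rather than fit-to-all-data accuracy. The data points and model form below are illustrative assumptions, not the paper's dataset or method:

```python
# Miniature calibration/validation exercise: least-squares fit of
# log(effort) = a + b*log(size), scored by leave-one-out cross-validation
# using mean magnitude of relative error (MRE). Synthetic data, not COCOMO.
import math

# (size in KSLOC, effort in person-months) -- invented for illustration
data = [(10, 24), (20, 55), (40, 120), (80, 260), (15, 40), (60, 190)]

def fit(points):
    """Simple linear regression in log-log space; returns (a, b)."""
    xs = [math.log(s) for s, _ in points]
    ys = [math.log(e) for _, e in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def predict(a, b, size):
    return math.exp(a + b * math.log(size))

# Leave-one-out: refit without each point, record its relative error.
mres = []
for i, (size, effort) in enumerate(data):
    a, b = fit(data[:i] + data[i + 1:])
    mres.append(abs(predict(a, b, size) - effort) / effort)
print(round(sum(mres) / len(mres), 3))  # mean MRE under LOOCV
```

    With many correlated effort multipliers and few records, the LOOCV score degrades sharply even when the fit-to-all-data error looks small, which is the large variance problem the paper describes.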

  1. Effect of roughness formulation on the performance of a coupled wave, hydrodynamic, and sediment transport model

    Science.gov (United States)

    Ganju, Neil K.; Sherwood, Christopher R.

    2010-01-01

    A variety of algorithms are available for parameterizing the hydrodynamic bottom roughness associated with grain size, saltation, bedforms, and wave–current interaction in coastal ocean models. These parameterizations give rise to spatially and temporally variable bottom-drag coefficients that ostensibly provide better representations of physical processes than uniform and constant coefficients. However, few studies have been performed to determine whether improved representation of these variable bottom roughness components translates into measurable improvements in model skill. We test the hypothesis that improved representation of variable bottom roughness improves performance with respect to near-bed circulation, bottom stresses, or turbulence dissipation. The inner shelf south of Martha’s Vineyard, Massachusetts, is the site of sorted grain-size features that exhibit sharp alongshore variations in grain size and ripple geometry over gentle bathymetric relief; this area provides a suitable testing ground for roughness parameterizations. We first establish the skill of a nested regional model for currents, waves, stresses, and turbulent quantities using a uniform and constant roughness; we then gauge model skill with various parameterizations of roughness, which account for the influence of the wave-boundary layer, grain size, saltation, and rippled bedforms. We find that commonly used representations of ripple-induced roughness, when combined with a wave–current interaction routine, do not significantly improve skill for circulation, and significantly decrease skill with respect to stresses and turbulence dissipation. Ripple orientation with respect to dominant currents and ripple shape may be responsible for complicating a straightforward estimate of the roughness contribution from ripples. In addition, sediment-induced stratification may be responsible for lower stresses than predicted by the wave–current interaction model.

  2. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  3. A systematic experimental investigation of significant parameters affecting model tire hydroplaning

    Science.gov (United States)

    Wray, G. A.; Ehrlich, I. R.

    1973-01-01

    The results of a comprehensive parametric study of model and small pneumatic tires operating on a wet surface are presented. Hydroplaning inception (spin down) and rolling restoration (spin up) are discussed. Conclusions indicate that hydroplaning inception occurs at a speed significantly higher than the rolling restoration speed. Hydroplaning speed increases considerably with tread depth, surface roughness, and tire inflation pressure or footprint pressure, and only moderately with increased load. Water film thickness affects spin down speed only slightly: spin down speed varies inversely as approximately the one-sixth power of film thickness. Empirical equations relating tire inflation pressure, normal load, tire diameter, and water film thickness have been generated for various tire tread and surface configurations.

  4. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both onshore and offshore wind farms. The work was developed in the frame of the Anemos project (an EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
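
    A typical reference model in such evaluation protocols is persistence (the forecast equals the last measured value), against which a candidate model's error can be expressed as an improvement score. The sketch below uses invented numbers and is not the Anemos protocol itself:

```python
# Persistence reference model for wind power forecasting and the improvement
# score of a candidate model over it. All values are illustrative assumptions.
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def persistence_forecast(series, horizon):
    """Naive reference: power at t+h is predicted to equal power at t.
    Returns (forecast, observed) pairs."""
    return [(series[t], series[t + horizon]) for t in range(len(series) - horizon)]

# Normalized power measurements (per unit of installed capacity), invented.
power = [0.30, 0.35, 0.45, 0.50, 0.42, 0.38, 0.55, 0.60, 0.52, 0.48]
pairs = persistence_forecast(power, horizon=2)
rmse_ref = rmse([f - o for f, o in pairs])

# Suppose a candidate prediction model achieves this RMSE on the same cases:
rmse_model = 0.05
improvement = 1.0 - rmse_model / rmse_ref   # > 0 means better than persistence
print(round(rmse_ref, 3), round(improvement, 3))
```

    Scoring against a naive reference rather than in absolute terms is what makes results comparable across sites with different wind climates.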

  5. The relationship between inter-organizational trust and performance

    Directory of Open Access Journals (Sweden)

    Roman Fiala

    2012-01-01

    Full Text Available The article deals with an investigation of the relationship between inter-organizational trust and performance. Using data obtained in a questionnaire survey of 373 organizations with more than 20 employees headquartered in the Czech Republic, we found a relationship between inter-organizational trust and supplier performance, mediated by the level of conflict. Also, a statistically significant negative relationship between inter-organizational trust and the costs of negotiation and a statistically significant positive relationship between supplier performance and perceived performance were confirmed. The hypothesis of a statistically significant relationship between inter-organizational trust and negotiating costs was not confirmed. The structural equation modelling technique was used in the calculations. The calculated model fit indices (CFI, NFI, NNFI), with values over 0.9, demonstrate a very good quality of the model.

  6. Analysis report for WIPP colloid model constraints and performance assessment parameters

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul E.; Sassani, David Carl

    2014-03-01

    An analysis of the Waste Isolation Pilot Plant (WIPP) colloid model constraints and parameter values was performed. The focus of this work was primarily on intrinsic colloids, mineral fragment colloids, and humic substance colloids, with a lesser focus on microbial colloids. Comments by the US Environmental Protection Agency (EPA) concerning intrinsic Th(IV) colloids and Mg-Cl-OH mineral fragment colloids were addressed in detail, assumptions and data used to constrain colloid model calculations were evaluated, and inconsistencies between data and model parameter values were identified. This work resulted in a list of specific conclusions regarding model integrity, model conservatism, and opportunities for improvement related to each of the four colloid types included in the WIPP performance assessment.

  7. Approach to modeling of human performance for purposes of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Swain, A.D.

    1983-01-01

This paper describes the general approach taken in NUREG/CR-1278 to model human performance in sufficient detail to permit probabilistic risk assessments of nuclear power plant operations. To show the basis for the more specific models in the above NUREG, a simplified model of the human component in man-machine systems is presented, the role of performance shaping factors is discussed, and special problems in modeling the cognitive aspects of behavior are described.

  8. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    International Nuclear Information System (INIS)

    Gauthier, J.H.

    1994-01-01

Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The ''weeps model'' now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.

  9. Evaluating Internal Model Strength and Performance of Myoelectric Prosthesis Control Strategies.

    Science.gov (United States)

    Shehata, Ahmed W; Scheme, Erik J; Sensinger, Jonathon W

    2018-05-01

On-going developments in myoelectric prosthesis control have provided prosthesis users with an assortment of control strategies that vary in reliability and performance. Many studies have focused on improving performance by providing feedback to the user but have overlooked the effect of this feedback on internal model development, which is key to improving long-term performance. In this paper, the strength of internal models developed for two commonly used myoelectric control strategies, raw control with raw feedback (using a regression-based approach) and filtered control with filtered feedback (using a classifier-based approach), was evaluated using two psychometric measures: trial-by-trial adaptation and just-noticeable difference. The performance of both strategies was also evaluated using a Schmidt's style target acquisition task. Results obtained from 24 able-bodied subjects showed that although filtered control with filtered feedback had better short-term performance in path efficiency, raw control with raw feedback resulted in stronger internal model development, which may lead to better long-term performance. Despite inherent noise in the control signals of the regression controller, these findings suggest that the rich feedback associated with regression control may be used to improve human understanding of the myoelectric control system.

  10. Use of 5-mm Laparoscopic Stapler to Perform Open Small Bowel Anastomosis in a Neonatal Animal Model.

    Science.gov (United States)

    Glenn, Ian C; Bruns, Nicholas E; Ponsky, Todd A

    2016-10-01

While adult bowel anastomoses are typically performed with staplers, neonatal small bowel anastomoses have traditionally been performed in a hand-sewn manner due to the large size of surgical staplers. The purpose of this study was to compare stapled anastomosis using a newly available, 5-mm laparoscopic stapler to hand-sewn anastomosis in an open animal model. Twenty anastomoses were performed by two general surgery residents (10 stapled and 10 hand-sewn) in an adult New Zealand white rabbit. The small bowel was divided with a scalpel. Surgical technique was alternated between single-layer hand-sewn and stapled anastomoses. Each anastomosis was resected for ex vivo testing. Measurements collected were outer diameter of the bowel before division, time to perform the anastomosis, anastomosis inner diameter (ID), and leak test. IDs were measured by cutting the anastomosis in cross-section, taking a photograph, and measuring the diameter with computer software. In addition, the surgeons qualitatively evaluated the anastomoses for hemostasis and overall quality. Statistical significance was determined using the Student's t-test. There were statistically significant differences between stapled and hand-sewn anastomoses, respectively, for average operative time (4 minutes 2 seconds versus 16 minutes 6 seconds). In this animal model, a 5-mm stapled anastomosis is an acceptable alternative to hand-sewn small bowel anastomosis. The stapler is faster and creates a larger-diameter anastomosis; however, there was one leak when closing the enterotomy in the stapled group, and overlapping staple lines should be avoided.
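The between-group comparison above uses Student's t-test; a minimal sketch of the pooled-variance t statistic follows. The sample times are hypothetical stand-ins, not the study's raw data.

```python
import math

def two_sample_t(a, b):
    """Pooled-variance Student's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical anastomosis times in minutes (illustrative only):
stapled = [3.8, 4.1, 4.3, 3.9, 4.0]
hand_sewn = [15.9, 16.2, 16.4, 15.8, 16.1]
t = two_sample_t(stapled, hand_sewn)
print(t < -10)  # a large-magnitude t: stapled is much faster
```

In practice the statistic would be compared against the t distribution with na + nb − 2 degrees of freedom to obtain a p-value.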

  11. Investigation into the performance of different models for predicting stutter.

    Science.gov (United States)

    Bright, Jo-Anne; Curran, James M; Buckleton, John S

    2013-07-01

In this paper we have examined five possible models for the behaviour of the stutter ratio, SR: two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model was chosen with different behaviours of variance; at each locus SR was described with two distributions, both with the same mean. The distributions have different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single-source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single-source profiles was compared to the calculated SR after application of the models. Model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
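The model comparison above rests on the Akaike information criterion, AIC = 2k − 2 ln L, which penalizes extra parameters. A minimal sketch, using illustrative log-likelihood values rather than the paper's fitted ones:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L. Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Illustrative log-likelihoods for two candidate stutter-ratio models
# (made-up numbers, not the paper's fitted values):
ll_lognormal = -152.4   # 2 free parameters
ll_mixture = -140.1     # 4 free parameters (shared mean, two variances, weight)

delta = aic(ll_lognormal, 2) - aic(ll_mixture, 4)
# delta > 0 favours the mixture model despite its extra parameters.
print(delta > 0)  # → True
```

The mixture model "wins" here only because its likelihood gain outweighs the 2-per-parameter penalty, which is exactly the trade-off the abstract describes.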

  12. Significant Effect of a Pre-Exercise High-Fat Meal after a 3-Day High-Carbohydrate Diet on Endurance Performance

    Directory of Open Access Journals (Sweden)

    Ikuma Murakami

    2012-06-01

We investigated the effect of the macronutrient composition of pre-exercise meals on endurance performance. Subjects consumed a high-carbohydrate diet at each meal for 3 days, followed by a high-fat meal (HFM; 1007 ± 21 kcal, 30% CHO, 55% F and 15% P) or a high-carbohydrate meal (HCM; 1007 ± 21 kcal, 71% CHO, 20% F and 9% P) 4 h before exercise. Furthermore, just prior to the test, subjects in the HFM group ingested either maltodextrin jelly (M) or a placebo jelly (P), while subjects in the HCM group ingested a placebo jelly. Endurance performance was measured as running time until exhaustion at a speed between the lactate threshold and the onset of blood lactate accumulation. All subjects participated in each trial, randomly assigned at weekly intervals. We observed that the time until exhaustion was significantly longer in the HFM + M condition (p < 0.05) than in the HFM + P and HCM + P conditions. Furthermore, the total amount of fat oxidation during exercise was significantly higher in HFM + M and HFM + P than in HCM + P (p < 0.05). These results suggest that ingestion of a HFM prior to exercise is more favorable for endurance performance than a HCM. In addition, HFM and maltodextrin ingestion following 3 days of carbohydrate loading enhances endurance running performance.

  13. Evaluating the performance of unhealthy junk food consumption based on health belief model in elementary school girls

    Directory of Open Access Journals (Sweden)

    Azam Fathi

    2017-06-01

Background and objective: Nowadays, owing to changes in eating patterns, worthless junk foods have replaced useful foods among children. This study aimed to evaluate the consumption of unhealthy junk food, based on the health belief model, in elementary school girls. Methods: This descriptive-analytic cross-sectional study with multi-stage sampling (208 samples) was carried out in 2016. The survey instrument was a valid and reliable questionnaire based on the health belief model (70 items). Data were analyzed with SPSS software at a significance level of 0.05. Results: Students had relatively high perceived sensitivity (49%) and self-efficacy (53.8%), and better perceived benefits (73.1%) and social support (68.3%). All health belief model constructs correlated significantly with the outcome (junk food intake). There were also significant differences by parental education in sensitivity, perceived severity, self-efficacy, social support, and outcome (p < 0.05). Conclusion: Students showed relatively favorable sensitivity, self-efficacy, perceived benefits, and social support regarding unhealthy snacks. Given the importance of unhealthy snack consumption among school children and the usefulness of the health belief model in predicting nutritional behaviors, we suggest that this model be used as a framework for school feeding programs. Paper Type: Research Article.

  14. Modeling electrochemical performance in large scale proton exchange membrane fuel cell stacks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., NM (United States); Lalk, T R [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering; Appleby, A J [Center for Electrochemical Studies and Hydrogen Research, Texas Engineering Experimentation Station, Texas A and M Univ., College Station, TX (United States)

    1998-02-01

The processes, losses, and electrical characteristics of a Membrane-Electrode Assembly (MEA) of a Proton Exchange Membrane Fuel Cell (PEMFC) are described. In addition, a technique for numerically modeling the electrochemical performance of a MEA, developed specifically to be implemented as part of a numerical model of a complete fuel cell stack, is presented. The technique of calculating electrochemical performance was demonstrated by modeling the MEA of a 350 cm², 125-cell PEMFC and combining it with a dynamic fuel cell stack model developed by the authors. Results from the demonstration that pertain to the MEA sub-model are given and described. These include plots of the temperature, pressure, humidity, and oxygen partial pressure distributions for the middle MEA of the modeled stack as well as the corresponding current produced by that MEA. The demonstration showed that models developed using this technique produce results that are reasonable when compared to established performance expectations and experimental results. (orig.)

  15. Modeling and analysis to quantify MSE wall behavior and performance.

    Science.gov (United States)

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  16. Rotary engine performance limits predicted by a zero-dimensional model

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows the extent to which increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine; performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980's. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single-rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  17. Modelling of Box Type Solar Cooker Performance in a Tropical ...

    African Journals Online (AJOL)

    Thermal performance model of box type solar cooker with loaded water is presented. The model was developed using the method of Funk to estimate cooking power in terms of climatic and design parameters for box type solar cooker in a tropical environment. Coefficients for each term used in the model were determined ...

  18. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    Science.gov (United States)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

An adaptation method was proposed to improve the modeling accuracy of overall and local performances of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  19. Economic performance of photovoltaic water pumping systems with business model innovation in China

    International Nuclear Information System (INIS)

    Zhang, Chi; Campana, Pietro Elia; Yang, Jin; Yan, Jinyue

    2017-01-01

Highlights: • A new business model is proposed for PV water pumping (PVWP) systems. • Three PVWP scenarios and one PV-roof scenario are analysed to estimate economic performance. • The impacts of market incentives in the four PV scenarios are insubstantial for their economic payback. • A PVWP system with added-value products improves the economic potential. - Abstract: Expansion by photovoltaic (PV) technologies in the renewable energy market requires exploring added value integrated with business model innovation. In recent years, a pilot trial of PV water pumping (PVWP) technologies for the conservation of grassland and farmland has been conducted in China. In this paper, we studied the added value of PVWP technologies with an emphasis on the integration of the value proposition with the operation system and customer segmentation. Using the widely used existing PV business model (PV-roof) as a reference, we evaluated discounted cash flow (DCF) and net present value (NPV) under the scenarios of traditional PV roof, PVWP pilot, PVWP scale-up, and PVWP social network, where further added value via a social network was included in the business model. The results show that the integrated PVWP system with social network products significantly improves performance in measures such as the discounted payback period, internal rate of return (IRR), and return on investment (ROI). We conclude that the PVWP social network scenario, with business model innovation, can result in added value, new sources of revenue, and market incentives. The paper also suggests that current policy incentives for the PV industry are not efficient, owing to a limited source of revenue and complex procedures of feed-in tariff verification.
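The DCF/NPV evaluation described above can be sketched as follows; the discount rate and cash flows are hypothetical placeholders, not the paper's figures.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time zero (the investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def discounted_payback(rate, cash_flows):
    """First period in which cumulative discounted cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf / (1 + rate) ** t
        if total >= 0:
            return t
    return None  # investment never recovered over the horizon

# Hypothetical PVWP project: initial investment, then equal annual net revenues.
flows = [-100_000] + [18_000] * 10
print(npv(0.06, flows) > 0)          # → True
print(discounted_payback(0.06, flows))  # → 7
```

IRR would be the rate at which `npv` crosses zero, and ROI the total discounted return over the initial outlay; both build directly on the same discounting step.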

  20. Critical research issues in development of biomathematical models of fatigue and performance.

    Science.gov (United States)

    Dinges, David F

    2004-03-01

    This article reviews the scientific research needed to ensure the continued development, validation, and operational transition of biomathematical models of fatigue and performance. These models originated from the need to ascertain the formal underlying relationships among sleep and circadian dynamics in the control of alertness and neurobehavioral performance capability. Priority should be given to research that further establishes their basic validity, including the accuracy of the core mathematical formulae and parameters that instantiate the interactions of sleep/wake and circadian processes. Since individuals can differ markedly and reliably in their responses to sleep loss and to countermeasures for it, models must incorporate estimates of these inter-individual differences, and research should identify predictors of them. To ensure models accurately predict recovery of function with sleep of varying durations, dose-response curves for recovery of performance as a function of prior sleep homeostatic load and the number of days of recovery are needed. It is also necessary to establish whether the accuracy of models is affected by using work/rest schedules as surrogates for sleep/wake inputs to models. Given the importance of light as both a circadian entraining agent and an alerting agent, research should determine the extent to which light input could incrementally improve model predictions of performance, especially in persons exposed to night work, jet lag, and prolonged work. Models seek to estimate behavioral capability and/or the relative risk of adverse events in a fatigued state. Research is needed on how best to scale and interpret metrics of behavioral capability, and incorporate factors that amplify or diminish the relationship between model predictions of performance and risk outcomes.

  1. Modeling take-over performance in level 3 conditionally automated vehicles.

    Science.gov (United States)

    Gold, Christian; Happee, Riender; Bengler, Klaus

    2017-11-28

Taking over vehicle control from a Level 3 conditionally automated vehicle can be a demanding task for a driver. The take-over determines the controllability of automated vehicle functions and thereby also traffic safety. This paper presents models predicting the main take-over performance variables: take-over time, minimum time-to-collision, brake application, and crash probability. These variables are considered in relation to the situational and driver-related factors time budget, traffic density, non-driving-related task, repetition, the current lane, and driver's age. Regression models were developed using 753 take-over situations recorded in a series of driving simulator experiments. The models were validated with data from five other driving simulator experiments, mostly by unrelated authors, with another 729 take-over situations. The models accurately captured take-over time, time-to-collision, and crash probability, and moderately predicted brake application. The time budget, traffic density, and repetition especially strongly influenced take-over performance, while the non-driving-related tasks, the lane, and driver's age explained a minor portion of the variance in take-over performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  3. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated.

  4. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    Science.gov (United States)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a minimum variance and pole placement coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, even though the MVPPC control law provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.

  5. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water, and energy resources. Flotation deinking is considered to be one of the key methods for the separation of ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for the prediction of flotation performance, based on Support Vector Regression (SVR), is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values, and flotation residence times. A predictive model was created and trained on these data samples, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
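SVR fits a regression under the ε-insensitive loss, which charges nothing for errors inside an ε-wide tube around the prediction and linearly thereafter. A minimal sketch of that loss, with made-up flotation yields (not the paper's data):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Total epsilon-insensitive loss used by SVR: errors within the
    eps tube cost nothing; beyond it, the cost grows linearly."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))

# Hypothetical flotation-deinking yields (fraction of ink removed):
measured = [0.82, 0.75, 0.90]
predicted = [0.80, 0.78, 0.70]
loss = eps_insensitive_loss(measured, predicted, eps=0.05)
# Only the third point falls outside the 0.05 tube, contributing |0.20| - 0.05.
print(loss)
```

A full SVR additionally regularizes the model weights and solves for the flattest function achieving this loss; libraries such as scikit-learn expose it ready-made.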

  6. Comparative study of turbulence model performance for axisymmetric sudden expansion flow

    International Nuclear Information System (INIS)

    Bae, Youngmin; Kim, Young In; Kim, Keung Koo; Yoon, Juhyeon

    2013-01-01

In this study, the performance of turbulence models in predicting the turbulent flow in an axisymmetric sudden expansion with an expansion ratio of 4 is assessed for a Reynolds number of 5.6 × 10⁴. The comparisons show that the standard k-ε and RSM models provide the best agreement with the experimental data, whereas the standard k-ω model gives poor predictions. Owing to its computational efficiency, the Reynolds Averaged Navier-Stokes (RANS) approach has been widely used for the prediction of turbulent flows and associated pressure losses in a variety of internal flow systems such as a diffuser, orifice, converging nozzle, and pipes with sudden expansion. However, the lack of a general turbulence model often leads to limited applications of a RANS approach, i.e., the accuracy and validity of solutions obtained from RANS equations vary with the turbulence model, flow regime, near-wall treatment, and configuration of the problem. In light of the foregoing, a large amount of turbulence research has been conducted to assess the performance of existing turbulence models for different flow fields. In this paper, the turbulent flow in an axisymmetric sudden expansion is numerically investigated for a Reynolds number of 5.6 × 10⁴, with the aim of examining the performance of several turbulence models.

  7. Multiscale modeling and characterization for performance and safety of lithium-ion batteries

    International Nuclear Information System (INIS)

    Pannala, S.; Turner, J. A.; Allu, S.; Elwasif, W. R.; Kalnaus, S.; Simunovic, S.; Kumar, A.; Billings, J. J.; Wang, H.; Nanda, J.

    2015-01-01

Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. Gaining an understanding of the role of these processes, as well as developing predictive capabilities for the design of better performing batteries, requires synergy between theory, modeling, and simulation, and fundamental experimental work to support the models. This paper presents an overview of the work performed by the authors, spanning both experimental and computational efforts. We describe a new, open source computational environment for battery simulations with an initial focus on lithium-ion systems but designed to support a variety of model types and formulations. This system has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. This paper also provides an overview of the experimental techniques to obtain crucial validation data to benchmark the simulations at various scales for performance as well as abuse. We detail some initial validation using characterization experiments such as infrared and neutron imaging and micro-Raman mapping. In addition, we identify opportunities for future integration of theory, modeling, and experiments.

  8. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
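Viewing the reactor as several equal CSTRs in series gives a closed-form steady-state effluent concentration for first-order substrate removal; the sketch below uses hypothetical operating values, and the paper's actual kinetics and parameters may well differ.

```python
def substrate_out(c_in, k, tau, n):
    """Steady-state effluent substrate concentration for n equal CSTRs in
    series with first-order removal (rate constant k, residence time tau
    per compartment): each stage dilutes by a factor of 1 / (1 + k * tau)."""
    return c_in / (1.0 + k * tau) ** n

# Hypothetical UASB operating point: inlet COD 2000 mg/L, k = 0.5 1/h,
# total residence time 8 h split over 4 compartments.
c_out = substrate_out(2000.0, k=0.5, tau=2.0, n=4)
print(round(c_out, 1))  # → 125.0
```

Splitting the same total residence time over more compartments pushes the behavior toward plug flow, which is why the compartments-in-series view captures the sludge-bed profile better than a single stirred tank.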

  9. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI₀, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.

  10. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    Science.gov (United States)

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null-hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
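
    As a rough illustration of the mixture idea (a maximum-likelihood EM analogue, not the authors' Bayesian implementation, which they provide as a web application), the proportion of significant p values stemming from H₀ can be estimated with a two-component fit. The truncated-exponential alternative density and its rate are assumptions made for this sketch:

```python
import math, random

ALPHA = 0.05
LAM = 80.0          # assumed rate of the H1 (truncated exponential) density

def f0(p):
    """Density of significant p values under H0: uniform on (0, ALPHA)."""
    return 1.0 / ALPHA

def f1(p):
    """Assumed H1 density: exponential truncated to (0, ALPHA), peaked near 0."""
    return LAM * math.exp(-LAM * p) / (1.0 - math.exp(-LAM * ALPHA))

def em_h0_proportion(pvals, iters=200):
    """EM estimate of the share of significant p values stemming from H0,
    plus each p value's posterior probability of H0."""
    pi0, post = 0.5, []
    for _ in range(iters):
        post = [pi0 * f0(p) / (pi0 * f0(p) + (1.0 - pi0) * f1(p)) for p in pvals]
        pi0 = sum(post) / len(post)
    return pi0, post

# Synthetic p-curve: 30% of significant results from H0, 70% from H1.
random.seed(1)
null_ps = [random.uniform(0.0, ALPHA) for _ in range(300)]
alt_ps = []
while len(alt_ps) < 700:
    p = random.expovariate(LAM)
    if p < ALPHA:
        alt_ps.append(p)

pi0, post = em_h0_proportion(null_ps + alt_ps)
print(round(pi0, 2))   # close to the 0.30 mixing weight used above
```

    The per-p posterior probabilities in `post` mirror the paper's second output: the chance that any specific significant p value originates from H₀.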

  11. THE EFFECT OF INTELLECTUAL CAPITAL AND ISLAMICITY PERFORMANCE INDEX TO THE PERFORMANCE OF ISLAMIC BANK IN INDONESIA 2010-2014 PERIODS

    Directory of Open Access Journals (Sweden)

    Pandu Dewanata

    2016-09-01

    Full Text Available The purpose of this research is to determine the influence of intellectual capital and the islamicity performance index, proxied by the profit-sharing ratio, zakat performance ratio, and equitable distribution ratio, on the performance of Islamic banks in Indonesia in the period 2010-2014. The data used in this research are the financial statements of 11 Islamic banks in Indonesia for the 2010-2014 periods, analysed with a panel-data regression using the Fixed Effect Model. The results show that intellectual capital and the zakat performance ratio have a significant and positive impact on ROA, the equitable distribution ratio has no significant impact on ROA, and the profit-sharing ratio has a significant and positive impact on ROA.
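
    The fixed-effects panel estimation mentioned above can be sketched with a within-transformation (entity-demeaned) estimator. The data, the single regressor, and the coefficient below are synthetic stand-ins, not the study's ratios or its multi-regressor specification:

```python
# Minimal within-transformation fixed-effects estimator for panel data with
# one regressor, illustrating the Fixed Effect Model. Synthetic data:
# 11 entities ("banks") observed over 5 years, as in the study's panel shape.
import random

def fixed_effects_slope(panel):
    """panel: {entity: [(x, y), ...]} -> within-estimator slope."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        num += sum((x - mx) * (y - my) for x, y in obs)
        den += sum((x - mx) ** 2 for x, _ in obs)
    return num / den

random.seed(0)
TRUE_BETA = 0.8
panel = {}
for bank in range(11):
    effect = random.gauss(0, 2)          # bank-specific intercept
    panel[bank] = []
    for year in range(5):
        x = random.gauss(0, 1)
        y = effect + TRUE_BETA * x + random.gauss(0, 0.1)
        panel[bank].append((x, y))

print(round(fixed_effects_slope(panel), 2))   # close to the true 0.8
```

    Demeaning within each bank removes the bank-specific intercepts, which is what distinguishes the Fixed Effect Model from pooled OLS.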

  12. A Model for Effective Performance in the Indonesian Navy.

    Science.gov (United States)

    1987-06-01

    NAVY LEADERSHIP AND MANAGEMENT COMPETENCY MODEL .................................. 15 D. MCBER COMPETENT MANAGERS MODEL ................ IS E. SUMM... leadership and managerial skills which emphasize effective performance of the officers in managing the human resources under their command and supervision. By effective performance we mean officers who not only know about management theories, but who possess the characteristics, knowledge, skill, and

  13. Some useful characteristics of performance models

    International Nuclear Information System (INIS)

    Worledge, D.H.

    1985-01-01

    This paper examines the demands placed upon models of human cognitive decision processes in application to Probabilistic Risk Assessment. Successful models, for this purpose, should 1) be based on proven or plausible psychological knowledge, e.g., Rasmussen's mental schematic, 2) incorporate opportunities for slips, 3) take account of the recursive nature, in time, of corrections to mistaken actions, and 4) depend on the crew's predominant mental states that accompany such recursions. The latter is equivalent to an explicit coupling between input and output of Rasmussen's mental schematic. A family of such models is proposed with observable rate processes mediating the (conscious) mental states involved. It is expected that the cumulative probability distributions corresponding to the individual rate processes can be identified with probability-time correlations of the HCR (Human Cognitive Reliability) type discussed elsewhere in this session. The functional forms of the conditional rates are intuitively shown to have simple characteristics that lead to a strongly recursive stochastic process with significant predictive capability. Models of the type proposed have few parts and form a representation that is intentionally far short of a fully transparent exposition of the mental process, in order to avoid making impossible demands on data
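
    Probability-time correlations of the HCR type referred to here are commonly written in a Weibull form giving the probability of crew non-response as a function of time normalized by the median response time. The shape below is a generic illustration with made-up parameters, not a correlation from this session:

```python
import math

def p_nonresponse(t, t_median, beta=1.2, gamma=0.7, eta=0.4):
    """Probability the crew has not yet taken the correct action by time t
    (Weibull-type time-reliability correlation; all parameters illustrative).
    tau = t / t_median is the normalized time; below gamma nothing has
    happened yet, so non-response probability is 1."""
    tau = t / t_median
    if tau <= gamma:
        return 1.0
    return math.exp(-(((tau - gamma) / eta) ** beta))

# Non-response probability falls as available time grows past the median.
for t in (30.0, 60.0, 120.0):
    print(round(p_nonresponse(t, t_median=60.0), 3))
```

    With these illustrative parameters the curve passes near 0.5 at the median response time and drops steeply afterwards, the qualitative behavior such correlations are meant to capture.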

  14. Predicting the Consequences of Workload Management Strategies with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Samma, Charneta

    2011-01-01

    Human performance modelers at the US Army Research Laboratory have developed an approach for establishing high Soldier workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next, they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the baseline and the alternative design condition to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools, or conducting experiments or field tests, can use the same approach.

  15. INFORMATION SYSTEM FOR MODELING ECONOMIC AND FINANCIAL PERFORMANCES

    Directory of Open Access Journals (Sweden)

    Boldeanu Dana Maria

    2009-05-01

    Full Text Available The analysis of the most important financial and economic indicators at the level of some organizations from the same sector of activity, the selection of performance ratios and generating a particular analysis model help companies to move from the desire

  16. Asymptotic performance modelling of DCF protocol with prioritized channel access

    Science.gov (United States)

    Choi, Woo-Yong

    2017-11-01

    Recently, the modification of the DCF (Distributed Coordination Function) protocol by the prioritized channel access was proposed to resolve the problem that the DCF performance worsens exponentially as more nodes exist in IEEE 802.11 wireless LANs. In this paper, an asymptotic analytical performance model is presented to analyze the MAC performance of the DCF protocol with the prioritized channel access.

  17. The Effects of the Flipped Model of Instruction on Student Engagement and Performance in the Secondary Mathematics Classroom

    Directory of Open Access Journals (Sweden)

    Kevin R. Clark

    2015-01-01

    Full Text Available In many of the secondary classrooms across the country, students are passively engaged in the mathematics content, and academic performance can be described, at best, as mediocre. This research study sought to bring about improvements in student engagement and performance in the secondary mathematics classroom through the implementation of the flipped model of instruction and compared student interaction in the flipped classroom with a traditional format. The flipped model of instruction is a relatively new teaching strategy attempting to improve student engagement and performance by moving the lecture outside the classroom via technology and moving homework and exercises with concepts inside the classroom via learning activities. Changes in the student participants’ perceptions and attitudes were evidenced and evaluated through the completion of a pre- and post-survey, a teacher-created unit test, random interviews, and a focus group session. In addition, the researcher documented observations, experiences, thoughts, and insights regarding the intervention in a journal on a daily basis. Quantitative results and qualitative findings revealed the student participants responded favorably to the flipped model of instruction and experienced an increase in their engagement and communication when compared to the traditional classroom experience. The student participants also recognized improvements in the quality of instruction and use of class time with the flipped model of instruction. In terms of academic performance, no significant changes were demonstrated between the flipped model of instruction students and those taught in a traditional classroom environment.

  18. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    Science.gov (United States)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called the NTCM-Klobpar model for single-frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and consequently can be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each shows that the 24-hour prediction performance of NTCM-Klobpar is better than that of the GPS Klobuchar model in global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m less for NTCM-Klobpar compared to the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single-frequency mass-market devices with only a little software modification.

  19. The Quadruple Helix Model Enhancing Innovative Performance Of Indonesian Creative Industry

    Directory of Open Access Journals (Sweden)

    Sri Wahyu Lelly Hana Setyanti

    2017-11-01

    Full Text Available The creative industry in Indonesia has contributed positively to national economic growth. The creative industry grows from the creativity and innovation performance of its business actors. The challenge for the creative industry is how to fully understand the creative and innovative processes in business management. Therefore, it requires an approach that combines the synergy between academicians, entrepreneurs, government, and society in a quadruple helix model. The objective of this research is to develop a creativity model through a quadruple helix model for improving the innovation performance of the creative industry.

  20. A model for the training effects in swimming demonstrates a strong relationship between parasympathetic activity, performance and index of fatigue.

    Directory of Open Access Journals (Sweden)

    Sébastien Chalencon

    Full Text Available Competitive swimming as a physical activity results in changes to the activity level of the autonomic nervous system (ANS). However, the precise relationship between ANS activity, fatigue and sports performance remains contentious. To address this problem and build a model to support a consistent relationship, data were gathered from national and regional swimmers during two periods of 30 consecutive weeks of training. Nocturnal ANS activity was measured weekly and quantified through wavelet transform analysis of the recorded heart rate variability. Performance was then measured through a subsequent morning 400 meters freestyle time-trial. A model was proposed where indices of fatigue were computed using Banister's two antagonistic component model of fatigue and adaptation applied to both the ANS activity and the performance. This demonstrated that a logarithmic relationship existed between performance and ANS activity for each subject. There was a high degree of model fit between the measured and calculated performance (R² = 0.84 ± 0.14, p < 0.01) and the measured and calculated High Frequency (HF) power of the ANS activity (R² = 0.79 ± 0.07, p < 0.01). During the taper periods, improvements in measured performance and measured HF were strongly related. In the model, variations in performance were related to significant reductions in the level of 'Negative Influences' rather than increases in 'Positive Influences'. Furthermore, the delay needed to return to the initial performance level was highly correlated to the delay required to return to the initial HF power level (p < 0.01). The delay required to reach peak performance was highly correlated to the delay required to reach the maximal level of HF power (p = 0.02). Building the ANS/performance identity of a subject, including the time to peak HF, may help predict the maximal performance that could be obtained at a given time.
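
    Banister's two antagonistic-component model used in this study can be sketched as a baseline plus a slow-decaying fitness term minus a fast-decaying fatigue term, each an exponentially weighted sum of past training loads. The parameter values, loads, and taper schedule below are illustrative, not the fitted swimmer data:

```python
import math

def banister_performance(loads, p0=400.0, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
    """Daily performance = p0 + k1*fitness - k2*fatigue, where fitness and
    fatigue are exponentially weighted sums of past daily loads with time
    constants tau1 > tau2 (all parameter values illustrative)."""
    perf, fitness, fatigue = [], 0.0, 0.0
    for w in loads:
        fitness = fitness * math.exp(-1.0 / tau1) + w
        fatigue = fatigue * math.exp(-1.0 / tau2) + w
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# 60 days of heavy training followed by a 14-day taper: fatigue decays
# faster than fitness, so modelled performance rebounds during the taper.
p = banister_performance([10.0] * 60 + [3.0] * 14)
print(round(p[59], 1), round(p[-1], 1))
```

    The rebound during the taper is the mechanism behind the study's observation that taper-period gains track reductions in the 'Negative Influences' (fatigue) component.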

  1. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  2. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
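
    The reported effect of the sampling approach can be illustrated with a toy Monte Carlo: sampling one transport-resistance value per streamline (limited variability) produces more fast pathways than sampling independently per segment, where extremes average out. The distributions, segment count, and threshold below are invented for illustration:

```python
import random

random.seed(42)
N_SEG, N_REAL = 10, 5000

def pathway_time(per_segment):
    """Total travel time along a streamline of N_SEG segments whose
    resistance is lognormally distributed. If per_segment is False, the
    whole pathway shares one sampled value (limited variability)."""
    if per_segment:
        return sum(random.lognormvariate(0, 1) for _ in range(N_SEG))
    return N_SEG * random.lognormvariate(0, 1)

shared = [pathway_time(False) for _ in range(N_REAL)]
varied = [pathway_time(True) for _ in range(N_REAL)]

def frac_fast(times):
    """Fraction of realizations with a fast pathway (short travel time)."""
    return sum(t < 5.0 for t in times) / len(times)

# Limited variability yields more fast pathways, hence higher dose estimates.
print(frac_fast(shared), frac_fast(varied))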

  3. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    textabstractObjective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  4. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approach and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in sub-threshold domain to develop efficient compact models. The proposed analytical approach gives physical insight of the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  5. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

    Every individual channels information differently based on their preference for the sensory modality or representational system (visual, auditory, or kinesthetic) we tend to favor most (our primary representational system (PRS)). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and may be summed and averaged for a generalization of behavior(s). However, because we have not established a basic understanding of the foundation of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result

  6. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently based on their preference for the sensory modality or representational system (visual, auditory, or kinesthetic) we tend to favor most (our primary representational system (PRS)). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and may be summed and averaged for a generalization of behavior(s). However, because we have not established a basic understanding of the foundation of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  7. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive and a finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D-Massive-MIMO systems.
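
    For the single-receive-antenna case, the CDF of the mutual information can be approximated by Monte Carlo. The i.i.d. Rayleigh channel below is a simplification of the paper's maximum-entropy 3D model, and the antenna count and SNR are arbitrary:

```python
import bisect, math, random

random.seed(7)

def mutual_info(n_tx, snr):
    """MI (bits/s/Hz) of a 1-receive, n_tx-transmit link with isotropic
    input: log2(1 + (snr / n_tx) * ||h||^2), with h drawn i.i.d. complex
    Gaussian here (a simplification of the paper's channel model)."""
    h2 = sum(random.gauss(0.0, math.sqrt(0.5)) ** 2 +
             random.gauss(0.0, math.sqrt(0.5)) ** 2 for _ in range(n_tx))
    return math.log2(1.0 + snr / n_tx * h2)

samples = sorted(mutual_info(n_tx=8, snr=10.0) for _ in range(20000))

def empirical_cdf(x):
    """Fraction of Monte Carlo MI samples at or below x."""
    return bisect.bisect_right(samples, x) / len(samples)

# CDF evaluated at the fixed-channel AWGN capacity log2(1 + snr).
print(round(empirical_cdf(math.log2(11.0)), 2))
```

    An analytical CDF such as the paper's can be validated by comparing it against exactly this kind of empirical curve.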

  8. Performance of chromatographic systems to model soil-water sorption.

    Science.gov (United States)

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

    A systematic approach for evaluating the goodness of chromatographic systems to model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions of variance are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in the emulation of soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models. They have been shown to predict well the soil-water sorption coefficients of several tested herbicides. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients. Copyright © 2012 Elsevier B.V. All rights reserved.
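
    The paper's error budget adds three independent variance contributions. With hypothetical standard deviations (in log units) for one chromatographic system, the expected overall scatter of the correlation would be:

```python
import math

def expected_correlation_sd(sd_soil, sd_chrom, sd_dissim):
    """Overall standard deviation of a soil-water vs. retention correlation,
    combining three independent error sources: the soil-water sorption data,
    the chromatographic data, and the dissimilarity between the two systems.
    The input values below are hypothetical, not the paper's estimates."""
    return math.sqrt(sd_soil ** 2 + sd_chrom ** 2 + sd_dissim ** 2)

# Hypothetical standard deviations (log units) for one chromatographic system.
print(round(expected_correlation_sd(0.2, 0.1, 0.15), 3))
```

    Ranking candidate systems by this predicted overall scatter is the selection logic the abstract describes: the best system minimizes the dissimilarity term, since the first two terms are fixed by the data.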

  9. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  10. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    Science.gov (United States)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. On the contrary, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables which are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing
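
    The three-component metric described here is published as SPAEF = 1 − √((α−1)² + (β−1)² + (γ−1)²), with α the Pearson correlation, β the ratio of coefficients of variation, and γ the overlap of the z-scored histograms. A minimal sketch follows; the histogram bin count is an assumption of this sketch:

```python
import math

def spaef(obs, sim, bins=10):
    """SPAtial EFficiency: 1 - sqrt((a-1)^2 + (b-1)^2 + (c-1)^2), where
    a = Pearson correlation, b = ratio of coefficients of variation, and
    c = overlap of the z-scored histograms. Perfect agreement gives 1."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    a = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    b = (ss / ms) / (so / mo)
    zo = [(o - mo) / so for o in obs]
    zs = [(s - ms) / ss for s in sim]
    lo, hi = min(zo + zs), max(zo + zs)
    w = (hi - lo) / bins or 1.0

    def hist(z):
        return [sum(lo + i * w <= v < lo + (i + 1) * w or
                    (i == bins - 1 and v == hi) for v in z)
                for i in range(bins)]

    ho, hs = hist(zo), hist(zs)
    c = sum(min(x, y) for x, y in zip(ho, hs)) / sum(ho)
    return 1.0 - math.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (c - 1) ** 2)

# Identical spatial patterns score a perfect 1.
pattern = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(round(spaef(pattern, pattern), 2))  # → 1.0
```

    Because the histograms are built on z-scored values, the γ term is bias-insensitive, which is what lets SPAEF compare related variables that differ in unit.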

  11. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was low in several models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. There are existing HF risk prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A water treatment case study for quantifying model performance with multilevel flow modeling

    Directory of Open Access Journals (Sweden)

    Emil K. Nielsen

    2018-05-01

    Full Text Available Decision support systems are a key focus of research on developing control rooms to aid operators in making reliable decisions and reducing incidents caused by human errors. For this purpose, models of complex systems can be developed to diagnose causes or consequences for specific alarms. Models applied in safety systems of complex and safety-critical systems require rigorous and reliable model building and testing. Multilevel flow modeling is a qualitative and discrete method for diagnosing faults and has previously only been validated by subjective and qualitative means. To ensure reliability during operation, this work aims to synthesize a procedure to measure model performance according to diagnostic requirements. A simple procedure is proposed for validating and evaluating the concept of multilevel flow modeling. For this purpose, expert statements, dynamic process simulations, and pilot plant experiments are used for validation of simple multilevel flow modeling models of a hydrocyclone unit for oil removal from produced water. Keywords: Fault Diagnosis, Model Validation, Multilevel Flow Modeling, Produced Water Treatment
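
    A full multilevel flow model distinguishes flow functions, means-end levels, and causal roles; the toy upstream-cause search below only gestures at that propagation idea. The hydrocyclone fault topology is invented for illustration, not taken from the paper's models:

```python
# Toy root-cause search over an influence graph, schematically in the spirit
# of cause propagation in a diagnostic model. Topology is hypothetical.
upstream = {
    "low_oil_removal": ["low_inlet_pressure", "wrong_reject_ratio"],
    "low_inlet_pressure": ["pump_degraded", "inlet_valve_closed"],
    "wrong_reject_ratio": ["reject_valve_fault"],
}

def candidate_causes(alarm):
    """Depth-first walk to root causes that could explain an alarm."""
    causes = upstream.get(alarm)
    if not causes:
        return [alarm]        # nothing further upstream: a root cause
    found = []
    for c in causes:
        found.extend(candidate_causes(c))
    return found

print(candidate_causes("low_oil_removal"))
```

    Quantifying model performance, as the paper proposes, then amounts to checking such diagnostic outputs against expert statements, simulations, and pilot-plant experiments.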

  13. Models for the financial-performance effects of Marketing

    NARCIS (Netherlands)

    Hanssens, D.M.; Dekimpe, Marnik; Wierenga, B.; van der Lans, R.

    We consider marketing-mix models that explicitly include financial performance criteria. These financial metrics are not only comparable across the marketing mix, they also relate well to investors’ evaluation of the firm. To that extent, we treat marketing as an investment in customer value

  14. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
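
    The core idea of the updated model, regressing past maneuver data to capture a long-term trend and improve the next prediction, can be sketched in a few lines. The maneuver indices and performance ratios below are invented for illustration; the actual Aqua/Aura parameters are not given in this abstract.

```python
# Hedged sketch: ordinary least squares on past maneuver performance ratios,
# then extrapolate the fitted trend to predict the next maneuver.
def fit_line(t, y):
    """OLS fit for y = a + b*t."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    b = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
         / sum((ti - tm) ** 2 for ti in t))
    a = ym - b * tm
    return a, b

# hypothetical maneuver index vs. achieved/planned delta-v ratio
t = [1, 2, 3, 4, 5]
y = [0.980, 0.978, 0.975, 0.974, 0.971]
a, b = fit_line(t, y)
print(round(a + b * 6, 3))  # trend-based prediction for maneuver 6 → 0.969
```

    With a statistically significant slope, the extrapolated value replaces a naive average of past performance when planning the next burn.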

  15. Modelling vocal anatomy's significant effect on speech

    NARCIS (Netherlands)

    de Boer, B.

    2010-01-01

    This paper investigates the effect of larynx position on the articulatory abilities of a humanlike vocal tract. Previous work has investigated models that were built to resemble the anatomy of existing species or fossil ancestors. This has led to conflicting conclusions about the relation between

  16. A framework for performance evaluation of model-based optical trackers

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.

    2008-01-01

    We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera

  17. Contributions of Organizational Performance Measurement Model Performance Prism to the Balanced Scorecard: a study from the stakeholder’s perspective

    Directory of Open Access Journals (Sweden)

    Sady Darcy da Silva Jr

    2013-12-01

    Full Text Available Measuring organizational performance is a major challenge for companies, and the Balanced Scorecard (BSC) stands out among organizational performance measurement models. The Performance Prism (PP) model, however, emphasizes organizational stakeholders and argues that the BSC treats them superficially, giving more importance to shareholders and customers. The objective of this research is to identify the contributions of the PP to the BSC from the stakeholders' perspective. To this end, interviews following a semi-structured script were conducted with professionals from the strategy area. In parallel, the structures of the two models were compared to enrich the results and complement the analysis of the respondents' perceptions. The results proved highly relevant, revealing important contributions of the PP to the BSC while countering the PP's original criticisms, which became questionable in light of the respondents' perceptions and the comparison between the models.

  18. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

    Full Text Available This study extends prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 usable individual questionnaires were returned and qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducted structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM revealed that the models met the goodness-of-fit criteria, so the interpretation of the sequential models fits the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between the six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual
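
    The mediation question tested above (does innovation strategy mediate the effect of leadership orientation on performance?) can be illustrated without full SEM. A minimal Frisch-Waugh style check on invented toy data: the direct effect of x on y after controlling a mediator m is the slope of the y-residuals (from y ~ m) against the x-residuals (from x ~ m).

```python
# Hedged sketch with toy data; the study itself used SEM and hierarchical
# multiple regression, which this simple residual check only approximates.
def slope(x, y):
    xm, ym = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - xm) * (b - ym) for a, b in zip(x, y))
            / sum((a - xm) ** 2 for a in x))

def residuals(x, y):
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

x = [1, 2, 3, 4]            # leadership orientation (toy scores)
m = [2.1, 3.9, 6.2, 7.8]    # innovation strategy (toy mediator)
y = [2.1, 3.9, 6.2, 7.8]    # performance, here fully driven by m

total  = slope(x, y)                               # effect ignoring the mediator
direct = slope(residuals(m, x), residuals(m, y))   # effect controlling for m
print(round(total, 2), round(direct, 2))  # → 1.94 0.0 (full mediation)
```

    A total effect that shrinks toward zero once the mediator is controlled is the signature of mediation that the study's hierarchical regressions test.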

  19. Modelling the thermodynamic performance of a concentrated solar power plant with a novel modular air-cooled condenser

    International Nuclear Information System (INIS)

    Moore, J.; Grimes, R.; Walsh, E.; O'Donovan, A.

    2014-01-01

    This paper aims to develop a novel air-cooled condenser for concentrated solar power plants. The condenser offers two significant advantages over the existing state of the art. Firstly, it can be installed in a modular format where pre-assembled condenser modules reduce installation costs. Secondly, instead of using large fixed-speed fans, smaller speed-controlled fans are incorporated into the individual modules. This allows the operating point of the condenser to change and continuously maximise plant efficiency. A thorough experimental analysis was performed on a number of prototype condenser designs. This analysis investigated the validity and accuracy of correlations from the literature in predicting the thermal and aerodynamic characteristics of different designs. These measurements were used to develop a thermodynamic model to predict the performance of a 50 MW CSP (Concentrated Solar Power) plant with various condenser designs installed. In order to compare different designs with respect to the specific plant capital cost, a techno-economic analysis was performed which identified the optimum size of each condenser. The results show that a single-row plate finned tube design, a four-row, and a two-row circular finned tube design are all similar in terms of their techno-economic performance and offer significant savings over other designs. - Highlights: • A novel air-cooled condenser for CSP (Concentrated Solar Power) applications is proposed. • A thorough experimental analysis of various condenser designs was performed. • Heat transfer and flow friction correlations validated for fan-generated air flow. • A thermodynamic model to calculate CSP plant output is presented. • Results show the proposed condenser design can continually optimise plant output
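
    The speed-control idea above is an optimization trade-off: faster fans lower the condensing temperature (better cycle efficiency) but consume power with roughly a cubic fan law. The toy model below illustrates that trade-off; every constant is an invented placeholder, not a value from the paper's 50 MW plant model.

```python
# Hedged sketch: scan fan speed for the setting that maximizes net plant output
# in a crude effectiveness-NTU condenser model with assumed constants.
import math

def net_power(fan_speed):
    """Toy net output (W) as a function of normalized fan speed in (0, 1]."""
    m_air = 4000.0 * fan_speed                 # kg/s, assumed proportional to speed
    ua = 8.0e6                                 # W/K, assumed condenser conductance
    ntu = ua / (m_air * 1005.0)                # NTU with air-side capacity rate
    eff = 1.0 - math.exp(-ntu)                 # effectiveness (condensing side, Cr = 0)
    t_cond = 30.0 + 100.0e6 / (eff * m_air * 1005.0)  # °C, rejecting ~100 MW
    gross = 50.0e6 * (1.0 - 0.006 * (t_cond - 40.0))  # W, efficiency penalty with t_cond
    fan = 6.0e6 * fan_speed ** 3               # W, cubic fan law
    return gross - fan

best = max((s / 100 for s in range(10, 101)), key=net_power)
print(round(best, 2))  # interior optimum: neither minimum nor maximum speed
```

    The interior optimum is the point the paper's speed-controlled fans track as ambient conditions change; a fixed-speed fan is pinned to one speed regardless of conditions.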

  20. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, and dips. For model validation, measurements of the free dynamic response per unit voltage and the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal good agreement between the model and experiment. Another experiment tests the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is then performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic
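
    The passive acceleration transmissibility used for validation above has a standard closed form for a single-degree-of-freedom base-excited system, which is the kind of curve such measurements are compared against. The natural frequency and damping ratio below are assumed for illustration, not taken from the paper.

```python
# Hedged sketch: transmissibility |T| of a 1-DOF base-excited system,
# T(r) = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)), r = f/fn.
import math

def transmissibility(f, fn, zeta):
    r = f / fn
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

fn, zeta = 120.0, 0.05  # assumed natural frequency (Hz) and damping ratio
print(round(transmissibility(120.0, fn, zeta), 1))  # at resonance: ~1/(2*zeta) → 10.0
print(transmissibility(600.0, fn, zeta) < 1.0)      # isolation above resonance → True
```

    Fitting fn and zeta so this curve tracks the measured transmissibility is a common first check of the resonances and dips a full model must reproduce.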