WorldWideScience

Sample records for model perform significantly

  1. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is
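The local resampling tests above depend on block resampling of the daily series. As an illustrative sketch (not the authors' code), a generic moving-block bootstrap looks like the following; choosing `block_length` to match the autocorrelation structure of the series is precisely the issue the abstract highlights:

```python
import random

def moving_block_bootstrap(series, block_length, rng=random):
    """Resample a time series in contiguous blocks so that serial
    dependence up to the block length is preserved in each replicate."""
    n = len(series)
    n_blocks = -(-n // block_length)  # ceiling division
    starts = [rng.randrange(n - block_length + 1) for _ in range(n_blocks)]
    resampled = []
    for s in starts:
        resampled.extend(series[s:s + block_length])
    return resampled[:n]  # trim the last block to the original length
```

Each bootstrap replicate feeds one recomputation of the local test statistic; the spread of the statistic over replicates yields the local p-value at a grid cell.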

  2. Field significance of performance measures in the context of regional climate model evaluation. Part 1: temperature

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Monthly temperature climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. In winter and in most regions in summer, the downscaled distributions are statistically indistinguishable from the observed ones. A systematic cold summer bias occurs in deep river valleys due to overestimated elevations, in coastal areas due probably to enhanced sea breeze circulation, and over large lakes due to the interpolation of water temperatures. Urban areas in concave topography forms have a warm summer bias due to the strong heat islands, not reflected in the observations. WRF-NOAH generates appropriate fine-scale features in the monthly temperature field over regions of complex topography, but over spatially homogeneous areas even small biases can lead to significant deteriorations relative to the driving reanalysis. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the
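The field significance step, which controls the proportion of falsely rejected local tests under spatial dependence, is commonly implemented as a false discovery rate procedure. The sketch below uses the standard Benjamini-Hochberg step-up rule as an illustrative stand-in; the paper's exact procedure may differ:

```python
def field_significant(local_p_values, q=0.05):
    """Benjamini-Hochberg step-up rule: return the indices of grid cells
    whose local null hypotheses are rejected while controlling the
    expected proportion of false rejections at level q."""
    m = len(local_p_values)
    order = sorted(range(m), key=lambda i: local_p_values[i])
    k_star = 0  # largest rank k with p_(k) <= q * k / m
    for k, i in enumerate(order, start=1):
        if local_p_values[i] <= q * k / m:
            k_star = k
    return sorted(order[:k_star])
```

The returned index set is what gets mapped and interpreted spatially: if it is non-empty, the result is field significant, and the pattern of rejected cells is itself meaningful.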

  3. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based … The outcome of this study contributes to a better understanding of uncertainty in WWTPs, and explicitly demonstrates the significance of secondary settling processes that are crucial elements of model prediction under dry and wet-weather loading conditions.

  4. On the significance of the noise model for the performance of a linear MPC in closed-loop operation

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab

    2016-01-01

models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation...

  5. Hypoxia: Exposure Time Until Significant Performance Effects

    Science.gov (United States)

    2016-03-07

Naval Medical Research Unit Dayton research article: Hypoxia: Exposure Time Until Significant Performance Effects. Phillips, J.P., Drummond, L.A., …; …Andrews, CAPT, MSC, USN, Commanding Officer. Cited reference: Acute hypoxia fails to influence two aspects of short-term memory: implications for the source of cognitive deficits (1994). Aviation, Space...

  6. Significant improvement of electrochemical performance of Cu ...

    Indian Academy of Sciences (India)

    Significant improvement of electrochemical performance of Cu-coated LiVPO4F cathode material for lithium-ion batteries ... School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China; School of Mechanical Engineering, Shenyang University of Chemical Technology, Shenyang ...

  7. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    Science.gov (United States)

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  8. Modelling vocal anatomy's significant effect on speech

    NARCIS (Netherlands)

    de Boer, B.

    2010-01-01

    This paper investigates the effect of larynx position on the articulatory abilities of a humanlike vocal tract. Previous work has investigated models that were built to resemble the anatomy of existing species or fossil ancestors. This has led to conflicting conclusions about the relation between

  9. Significance of Attaining Users’ Feedback in Building Performance Assessment

    Directory of Open Access Journals (Sweden)

    Khalil Natasha

    2014-01-01

Full Text Available Generally, a building is a structure that provides basic shelter for humans to conduct general activities. In plain terms, the purpose of buildings is to give humans a comfortable working and living space protected from the extremes of climate. However, a building's use depends on its lifespan, and the rate of change over time affects its efficiency of use. Hence, more attention needs to be given to the performance of buildings, as these changes are not static over time. This paper highlights the concept of and requirements for evaluating building performance. It also explores the purposes of building performance assessment and the link between performance and end-users, incorporating their feedback. It concludes that obtaining users' feedback is vital in building performance assessment and that the requirements of the assessment must outline the performance criteria and mandates for the building in question.

  10. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  11. Firm Sustainability Performance Index Modeling

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah Bt Wan Mohamed Radzi

    2015-12-01

Full Text Available The main objective of this paper is to present a model for a firm sustainability performance index by applying both classical and Bayesian structural equation modeling (parametric and semi-parametric modeling). Both techniques are applied to research data collected through a survey of the food manufacturing industry in China, Taiwan, and Malaysia. For estimating the firm sustainability performance index, we consider three main indicators: knowledge management, organizational learning, and business strategy. Based on both the Bayesian and classical methodologies, we confirmed that knowledge management and business strategy have a significant impact on the firm sustainability performance index.

  12. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  13. Bayesian Test of Significance for Conditional Independence: The Multinomial Model

    Directory of Open Access Journals (Sweden)

    Pablo de Morais Andrade

    2014-03-01

Full Text Available Conditional independence tests have received special attention lately in the machine learning and computational intelligence literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, as an alternative to frequentist significance tests (characterized by the calculation of the p-value).

  14. Multiresolution wavelet-ANN model for significant wave height forecasting.

    Digital Repository Service at National Institute of Oceanography (India)

    Deka, P.C.; Mandal, S.; Prahlada, R.

    (ANN) modeling. The transformed output data are used as inputs to ANN models. Various decomposition levels have been tried for a db3 wavelet to obtain optimal results. It is found that the performance of hybrid WLNN is better than that of ANN when lead...
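The multiresolution preprocessing can be illustrated with a single-level discrete wavelet step. The study uses a db3 wavelet; the sketch below substitutes the Haar wavelet only because its filter coefficients are trivial, so treat it as a structural illustration rather than the authors' transform:

```python
def haar_step(signal):
    """One level of a Haar discrete wavelet transform: split the signal
    into a half-length approximation (low-pass) and detail (high-pass)."""
    s = 2 ** 0.5
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) / s for a, b in pairs]
    detail = [(a - b) / s for a, b in pairs]
    return approx, detail

def multiresolution(signal, levels):
    """Repeatedly decompose the approximation, collecting the detail
    series from each level; these become the ANN input features."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details
```

In a hybrid wavelet-ANN setup, the per-level detail series (plus the final approximation) of the observed wave-height record would be the inputs to the network, with the number of levels tuned empirically as the abstract describes.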

  15. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  16. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  17. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  18. Performance on the Farnsworth-Munsell 100-Hue Test Is Significantly Related to Nonverbal IQ.

    Science.gov (United States)

    Cranwell, Matthew B; Pearce, Bradley; Loveridge, Camilla; Hurlbert, Anya C

    2015-05-01

    The Farnsworth-Munsell 100-Hue test (FM100) is a standardized measure of chromatic discrimination, based on colored cap-sorting, which has been widely used in both adults and children. Its dependence on seriation ability raises questions as to its universal suitability and accuracy in assessing purely sensory discrimination. This study investigates how general intellectual ability relates to performance on both the FM100 and a new computer-based chromatic discrimination threshold test, across different age groups in both typical and atypical development. Participants were divided into two main age groups, children (6-15 years) and young adults (16-25 years), with each group further subdivided into typically developing (TD; three groups; TD 6-7 years, TD 8-9 years, TD Adult) individuals and atypically developing individuals, all but one carrying a diagnosis of Autism Spectrum Disorders (ASD; two groups; atypically developing [ATY] child 7-15 years, ASD Adult). General intelligence was measured using the Wechsler Abbreviated Intelligence Scale and Wechsler Intelligence Scale for Children. All participants completed the FM100. Both child groups also completed a computer-based chromatic discrimination threshold test, which assessed discrimination along cone-opponent ("red-green," "blue-yellow") and luminance cardinal axes using a controlled staircase procedure. Farnsworth-Munsell 100-Hue test performance was better in adults than in children. Furthermore, performance significantly positively correlated with nonverbal intelligence quotient (NVIQ) for all child groups and the young adult ASD group. The slope of this relationship was steeper for the ASD than TD groups. Performance on the chromatic discrimination threshold test was not significantly related to any IQ measure. Regression models reveal that chromatic discrimination threshold, although a significant predictor of FM100 performance when used alone, is a weaker predictor than NVIQ used alone or in combination

  19. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing different variables affecting supply chain performance. Causal relationships among the different variables have been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables over a period of 10 years on the performance of a case supply chain in the auto business.

  20. Air Conditioner Compressor Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After completing these tests, different modeling approaches were proposed, among them a performance modeling approach that proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task to analyze the motor testing data and derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles. So, the performance model is a fair representation of the motor behavior in both running and stalling states.

  1. Alcohol and driving-related performance - A comprehensive meta-analysis focusing the significance of the non-significant

    OpenAIRE

    Schnabel, Eva

    2012-01-01

    The present work reviews the experimental literature on the acute effects of alcohol on human behaviour related to driving performance. A meta-analysis was conducted which includes studies published between 1954 and 2007 in order to provide a comprehensive knowledge of the substance alcohol. 450 studies reporting 5,300 findings were selected from over 12,000 references after applying certain in- and exclusion criteria. Thus, the present meta-analysis comprises far more studies than reviews on...

  2. ARMA modeling of stochastic processes in nuclear reactor with significant detection noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.

    1992-01-01

The theoretical basis of ARMA modelling of stochastic processes in a nuclear reactor was presented in a previous paper, neglecting observational noise. The identification of real reactor data indicated that in some experiments the detection noise is significant. Thus a more rigorous theoretical modelling of stochastic processes in a nuclear reactor is performed. Starting from the fundamental stochastic differential equations of the Langevin type for the interaction of the detector with the neutron field, a new theoretical ARMA model is developed. Preliminary identification results confirm the theoretical expectations. (author)

  3. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating that the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the required level of model abstraction is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach to repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision

  4. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    DONG Jing

    2015-06-01

Full Text Available Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history; these models, to some extent, predict the possibility of HCC development. Generally, the risk factors in each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significance of currently used predictive models for HBV-associated HCC and assesses the advantages and limits of risk calculators. The updated REACH-B model and the LSM-HCC model show better negative predictive values and better performance in predicting the outcomes of patients with chronic hepatitis B (CHB). These models can be applied to stratified screening for HCC and can also serve as an assessment tool for the management of CHB patients.

  5. Performance feedback, paraeducators, and literacy instruction for students with significant disabilities.

    Science.gov (United States)

    Westover, Jennifer M; Martin, Emma J

    2014-12-01

    Literacy skills are fundamental for all learners. For students with significant disabilities, strong literacy skills provide a gateway to generative communication, genuine friendships, improved access to academic opportunities, access to information technology, and future employment opportunities. Unfortunately, many educators lack the knowledge to design or implement appropriate evidence-based literacy instruction for students with significant disabilities. Furthermore, students with significant disabilities often receive the majority of their instruction from paraeducators. This single-subject design study examined the effects of performance feedback on the delivery skills of paraeducators during systematic and explicit literacy instruction for students with significant disabilities. The specific skills targeted for feedback were planned opportunities for student responses and correct academic responses. Findings suggested that delivery of feedback on performance resulted in increased pacing, accuracy in student responses, and subsequent attainment of literacy skills for students with significant disabilities. Implications for the use of performance feedback as an evaluation and training tool for increasing effective instructional practices are provided. © The Author(s) 2014.

  6. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  7. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
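The Erlangian queuing analysis underlying the model can be illustrated with the classical Erlang C formula for an M/M/c queue. This is a sketch of the core waiting-time computation, not the authors' per-layer implementation:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must queue (Erlang C)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization >= 1")
    tail = a ** servers / factorial(servers) / (1.0 - rho)
    base = sum(a ** k / factorial(k) for k in range(servers))
    return tail / (base + tail)

def mean_wait(arrival_rate, service_rate, servers):
    """Mean time a request spends waiting before service begins."""
    pc = erlang_c(arrival_rate, service_rate, servers)
    return pc / (servers * service_rate - arrival_rate)
```

Evaluating `mean_wait` for candidate allocations of service-desk resources across the network, transport, and application layers is the kind of comparison the resource-allocation study performs.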

  8. Identifying significant uncertainties in thermally dependent processes for repository performance analysis

    International Nuclear Information System (INIS)

    Gansemer, J.D.; Lamont, A.

    1994-01-01

    In order to study the performance of the potential Yucca Mountain Nuclear Waste Repository, scientific investigations are being conducted to reduce the uncertainty about process models and system parameters. This paper is intended to demonstrate a method for determining a strategy for the cost effective management of these investigations. It is not meant to be a complete study of all processes and interactions, but does outline a method which can be applied to more in-depth investigations

  9. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

Brown-VanHoozer, S. A.

    1999-07-23

Every individual channels information differently, based on their preference for a sensory modality or representational system (visual, auditory, or kinesthetic), i.e., the primary representational system (PRS) we tend to favor most. Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch); which in turn establishes our information processing patterns and strategies and external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and these may be summed and averaged for a generalization of behavior(s). However, without a basic understanding of how the behavior was generated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was produced. What is seen is the end result.

  10. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

Every individual channels information differently, based on their preference for a sensory modality or representational system (visual, auditory, or kinesthetic), i.e., the primary representational system (PRS) we tend to favor most. Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch); which in turn establishes our information processing patterns and strategies and external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and these may be summed and averaged for a generalization of behavior(s). However, without a basic understanding of how the behavior was generated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was produced. What is seen is the end result.

  11. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  12. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
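    The AR-derived feature described above can be sketched in a few lines: fit an AR(5) model to an SEMG window by least squares and take the mean magnitude of the model's poles. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name and the least-squares fitting choice are mine.

```python
import numpy as np

def ar_pole_mean_magnitude(signal, order=5):
    """Fit an AR(order) model by least squares and return the mean
    magnitude of its poles (roots of the characteristic polynomial)."""
    x = np.asarray(signal, dtype=float)
    # Lagged design matrix: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles are roots of z^p - a1*z^(p-1) - ... - ap = 0
    poles = np.roots(np.concatenate(([1.0], -coeffs)))
    return float(np.mean(np.abs(poles)))
```

    In the study this scalar would be tracked repetition by repetition and regressed against the maximum repetition count Rmax.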

  13. Parameter definition using vibration prediction software leads to significant drilling performance improvements

    Energy Technology Data Exchange (ETDEWEB)

    Amorim, Dalmo; Hanley, Chris Hanley; Fonseca, Isaac; Santos, Juliana [National Oilwell Varco, Houston TX (United States); Leite, Daltro J.; Borella, Augusto; Gozzi, Danilo [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The understanding and mitigation of downhole vibration has been a heavily researched subject in the oil industry, as vibrations significantly diminish the amount of effective drilling energy available to the bit and generate forces that can push the bit or the Bottom Hole Assembly (BHA) off its concentric axis of rotation, producing high-magnitude impacts with the borehole wall and, consequently, more expensive drilling operations. In order to drill ahead, a sufficient amount of energy must be supplied by the rig to overcome the resistance of the drilling system, including the reactive torque of the system, drag forces, fluid pressure losses and energy dissipated by downhole vibrations, thereby providing the bit with the energy required to fail the rock. If the drill string enters resonant modes of vibration, it not only decreases the amount of energy available to drill, but also increases the potential for catastrophic failures of downhole equipment and drill bits. In this sense, the mitigation of downhole vibrations will result in faster, smoother, and cheaper drilling operations. A software tool using Finite Element Analysis (FEA) has been developed to provide a better understanding of downhole vibration phenomena in drilling environments. The software tool calculates the response of the drilling system at various input conditions, based on the design of the wellbore along with the geometry of the Bottom Hole Assembly (BHA) and the drill string. It identifies where undesired levels of resonant vibration will be driven by certain combinations of specific drilling parameters, and also which combinations of drilling parameters will result in lower levels of vibration, so that the fewest shocks, the highest penetration rate and the lowest cost per foot can be achieved. With the growing performance of personal computers, complex software systems modeling drilling vibrations using FEA have become accessible to a wider audience of field users, further complementing real-time

  14. The European Academy laparoscopic “Suturing Training and Testing” (SUTT) significantly improves surgeons’ performance

    Science.gov (United States)

    Sleiman, Z.; Tanos, V.; Van Belle, Y.; Carvalho, J.L.; Campo, R.

    2015-01-01

    The efficiency of the suturing training and testing (SUTT) model by laparoscopy was evaluated, measuring the suturing skill acquisition of trainee gynecologists at the beginning and at the end of a teaching course. During a workshop organized by the European Academy of Gynecological Surgery (EAGS), 25 participants with three different experience levels in laparoscopy (minor, intermediate and major) performed the 4 exercises of the SUTT model (Ex 1: both hands stitching and continuous suturing; Ex 2: right hand stitching and intracorporeal knotting; Ex 3: left hand stitching and intracorporeal knotting; Ex 4: dominant hand stitching, tissue approximation and intracorporeal knotting). The time needed to perform the exercises was recorded for each trainee and group, and statistical analysis was used to note the differences. Overall, all trainees achieved significant improvement in suturing time. Keywords: psychomotor skills, surgery, teaching, training suturing model. PMID:26977264

  15. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
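    The GBLUP predictor used in the study can be sketched as ridge-type prediction with a VanRaden genomic relationship matrix. This is a minimal illustration under simplifying assumptions (a known heritability standing in for the variance-component ratio, no fixed effects beyond the mean); the function name and marker coding are illustrative, not the authors' pipeline.

```python
import numpy as np

def gblup_predict(M_train, y_train, M_test, h2=0.5):
    """Minimal GBLUP sketch: VanRaden genomic relationship matrix plus a
    ridge-type solution of the mixed model. M_*: marker matrices coded
    0/1/2 (rows = lines); h2: assumed heritability."""
    p = M_train.mean(axis=0) / 2.0                 # allele frequencies
    Z_tr = M_train - 2 * p                         # centered markers
    Z_te = M_test - 2 * p
    c = 2 * np.sum(p * (1 - p))                    # VanRaden scaling constant
    G = Z_tr @ Z_tr.T / c                          # train-train relationships
    G_te = Z_te @ Z_tr.T / c                       # test-train relationships
    lam = (1 - h2) / h2                            # variance-component ratio
    mu = y_train.mean()
    alpha = np.linalg.solve(G + lam * np.eye(len(y_train)), y_train - mu)
    return mu + G_te @ alpha
```

    Calibrating across cycles then simply means stacking the marker and phenotype data of several cycles into `M_train` and `y_train`.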

  16. Focused R&D For Electrochromic Smart Windowsa: Significant Performance and Yield Enhancements

    Energy Technology Data Exchange (ETDEWEB)

    Mark Burdis; Neil Sbar

    2003-01-31

    There is a need to improve the energy efficiency of building envelopes as they are the primary factor governing the heating, cooling, lighting and ventilation requirements of buildings, influencing 53% of building energy use. In particular, windows contribute significantly to the overall energy performance of building envelopes, thus there is a need to develop advanced energy-efficient window and glazing systems. Electrochromic (EC) windows represent the next generation of advanced glazing technology that will (1) reduce the energy consumed in buildings, (2) improve the overall comfort of the building occupants, and (3) improve the thermal performance of the building envelope. “Switchable” EC windows provide, on demand, dynamic control of visible light, solar heat gain, and glare without blocking the view. As exterior light levels change, the window's performance can be electronically adjusted to suit conditions. A schematic illustrating how SageGlass® electrochromic windows work is shown in Figure I.1. SageGlass® EC glazings offer the potential to save cooling and lighting costs, with the added benefit of improving thermal and visual comfort. Control over solar heat gain will also result in the use of smaller HVAC equipment. If a step change in the energy efficiency and performance of buildings is to be achieved, there is a clear need to bring EC technology to the marketplace. This project addresses accelerating the widespread introduction of EC windows in buildings and thus maximizing total energy savings in the U.S. and worldwide. We report on R&D activities to improve the optical performance needed to broadly penetrate the full range of architectural markets. Also, processing enhancements have been implemented to reduce manufacturing costs. Finally, tests are being conducted to demonstrate the durability of the EC device and the dual-pane insulating glass unit (IGU) to be at least equal to that of conventional

  17. Behavioral Change and Building Performance: Strategies for Significant, Persistent, and Measurable Institutional Change

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Amy K.; Malone, Elizabeth L.; Heerwagen, Judith H.; Dion, Jerome P.

    2014-04-01

    The people who use Federal buildings — Federal employees, operations and maintenance staff, and the general public — can significantly impact a building’s environmental performance and the consumption of energy, water, and materials. Many factors influence building occupants’ use of resources (use behaviors), including work process requirements; ability to fulfill agency missions; new and possibly unfamiliar high-efficiency/high-performance building technologies; a lack of understanding, education, and training; inaccessible information or ineffective feedback mechanisms; and cultural norms and institutional rules and requirements, among others. While many strategies have been used to introduce new occupant use behaviors that promote sustainability and reduced resource consumption, few have been verified in the scientific literature or have properly documented case study results. This paper documents validated strategies that have been shown to encourage new use behaviors that can result in significant, persistent, and measurable reductions in resource consumption. From the peer-reviewed literature, the paper identifies relevant strategies for Federal facilities and commercial buildings that focus on the individual, groups of individuals (e.g., work groups), and institutions — their policies, requirements, and culture. The paper documents methods with evidence of success in changing use behaviors and enabling occupants to effectively interact with new technologies/designs. It also provides a case study of the strategies used at a Federal facility — Fort Carson, Colorado. The paper documents gaps in the current literature and approaches, and provides topics for future research.

  18. Performance characteristics of SCC radioimmunoassay and clinical significance serum SCC Ag assay in patients with malignancy

    International Nuclear Information System (INIS)

    Kim, Dong Youn

    1986-01-01

    To evaluate the performance characteristics of the SCC RIA and the clinical significance of the serum SCC Ag assay in patients with malignancy, serum SCC Ag levels were measured with an SCC RIA kit in 40 normal controls and 35 patients with various untreated malignancies who visited Chonju Presbyterian Medical Center. The results were as follows. 1. The SCC RIA was simple to perform and could be completed in two workdays, and the standard curve and reproducibility were both good. 2. The mean serum SCC Ag level in normal controls was 1.64 ± 0.93 ng/mL, and the normal upper limit of serum SCC Ag was defined as 2.6 ng/mL; 3 out of 40 (7.5%) normal controls showed SCC Ag levels above this limit. 3. Of the 35 patients with various untreated malignancies, 18 (51.4%) showed elevated serum SCC Ag levels: 59.1% of 22 patients with cervical cancer, 80% of 5 patients with lung cancer, 33% of 3 patients with esophageal cancer, 0% of 2 patients with rectal cancer and 0% of 3 patients with breast cancer. These results indicate that the SCC RIA is a simple method to perform, with a good standard curve and reproducibility, and may be a useful diagnostic indicator in patients with cervical cancer and lung cancer.

  19. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    Science.gov (United States)

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was re-calculated and controlled at a confident level of FDR ≤ 1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. A >60% increase in total quantified spectra/peptides was achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide-retrieval strategy significantly improved quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity to confidently discover significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide-recovery strategy without compromising confidence of protein identification, and it can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesize that more quantifiable spectra and peptides in a protein, even including less confident peptides, could help reduce variations and improve protein quantification. Hence the peptide-retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins using the standard target-decoy search strategy was fixed and more spectra/peptides with less
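    The target-decoy FDR control underlying the retrieval strategy can be sketched as follows: walk the peptide-spectrum matches from best to worst score and keep the lowest score at which the running decoy/target ratio stays within the FDR limit. This is a simplified sketch (a production tool would compute monotonic q-values and restrict retrieval to spectra matching confidently identified proteins); the function name is illustrative.

```python
def psm_fdr_threshold(scores, is_decoy, fdr_max=0.01):
    """Target-decoy FDR sketch: return the lowest score threshold at
    which the estimated FDR (#decoys / #targets) stays within fdr_max.
    scores: PSM scores (higher is better); is_decoy: parallel flags."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    targets = decoys = 0
    best = None
    for i in order:
        if is_decoy[i]:
            decoys += 1
        else:
            targets += 1
        # Running FDR estimate after accepting this PSM
        if decoys / max(targets, 1) <= fdr_max:
            best = scores[i]
    return best
```

    In the retrieval step, the previously discarded PSMs of confident proteins would be pooled back in and this threshold recomputed on the enlarged set.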

  20. The Significant of Model School in Pluralistic Society of the Three Southern Border Provinces of Thailand

    Directory of Open Access Journals (Sweden)

    Haji-Awang Faisol

    2016-01-01

    The results of the study show that the significant traits of the model schools in the multi-cultural society are not merely that they perform well in administrative procedures and the teaching and learning process, but also that these schools are able to bring real social norms and religious beliefs into the communities’ practical life as a truly “Malay-Muslim” society. That is, the schools are able to run integrated programs under the philosophy of Islamic education in parallel with the national education aims, ensuring that the outcomes of the programs serve both sides: national education on the one hand, and the Malay-Muslim communities’ satisfaction on the other.

  1. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    Science.gov (United States)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. To this end, we not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also practically proved the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.
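    The real-time estimation of process-model parameters that such adaptive control laws rely on is commonly done with recursive least squares (RLS). The sketch below shows one generic RLS update with a forgetting factor; it illustrates the estimation step only, not the authors' controller, and the function name and forgetting-factor value are assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam, for
    on-line estimation of process-model parameters.
    theta: (d, 1) parameter estimate; P: (d, d) covariance matrix;
    phi: length-d regressor vector; y: new scalar measurement."""
    phi = np.asarray(phi, dtype=float).reshape(-1, 1)
    K = P @ phi / (lam + (phi.T @ P @ phi).item())  # gain vector
    err = y - (phi.T @ theta).item()                # one-step prediction error
    theta = theta + K * err                         # parameter correction
    P = (P - K @ phi.T @ P) / lam                   # covariance update
    return theta, P
```

    In an adaptive controller, each estimate `theta` would then feed the control law that tunes the discharging cycle for the next sampling interval.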

  2. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  3. Examining significant factors in micro and small enterprises performance: case study in Amhara region, Ethiopia

    Science.gov (United States)

    Cherkos, Tomas; Zegeye, Muluken; Tilahun, Shimelis; Avvari, Muralidhar

    2017-07-01

    Furniture-manufacturing micro and small enterprises (MSEs) are confronted with several factors that affect their performance. Some enterprises fail to survive, others remain for long periods of time without transforming, and most produce similar, non-standard products. The main aim of this manuscript is to improve the performance and contribution of MSEs by analyzing the impact of significant internal and external factors. Data were collected via a questionnaire, group discussions with experts and interviews. Eight randomly selected representative main cities of the Amhara region, with 120 furniture-manufacturing enterprises, are considered. Data analysis and presentation were made using SPSS tools (correlation, proximity, and t-test) and an impact-effort analysis matrix. The correlation analysis shows that politico-legal with infrastructure, leadership with entrepreneurship skills, and finance and credit with marketing are the factor pairs with the highest correlations, with Pearson values of r = 0.988, 0.983, and 0.939, respectively. The study finds that the most critical factors faced by MSEs are work premises, access to finance, infrastructure, entrepreneurship and business managerial problems. The impact of these factors is found to be high and is confirmed by the 50% drop-out rate in 2014/2015. Furthermore, MSEs were challenged by more than 25% of work time lost daily to power interruptions and by work-premises problems affecting around 65% of them. Further, an impact-effort matrix was developed to help the MSEs prioritize the affecting factors.

  4. Effects of significance of auditory location changes on event related brain potentials and pitch discrimination performance.

    Science.gov (United States)

    Koistinen, Sonja; Rinne, Teemu; Cederström, Sebastian; Alho, Kimmo

    2012-01-03

    We examined effects of significance of task irrelevant changes in the location of tones on the mismatch negativity (MMN) and P3a event related brain potentials. The participants were to discriminate between two frequency modulated tones differing from each other in the direction of frequency glide. Each tone was delivered through one of five loudspeakers in front of the participant. On most trials, a tone was presented from the same location as the preceding tone, but occasionally the location changed. In the Varying Location Condition, these changes, although irrelevant with regard to pitch discrimination, were still significant for performance as the following tones were presented from the new location where attention had to be therefore shifted. In the Fixed Location Condition, the location changes were less significant as the tones following a location change were presented from the original location. In both conditions, the location changes were associated with decreased hit rates and increased reaction times in the pitch discrimination task. However, the hit rate decrease was larger in the Fixed Location Condition suggesting that in this condition the location changes were just distractors. MMN and P3a responses were elicited by location changes in both conditions. In the Fixed Location Condition, a P3a was also elicited by the first tone following a location change at the original location while the MMN was not. Thus, the P3a appeared to be related to shifting of attention in space and was not tightly coupled with MMN elicitation. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Incorporating representation of agricultural ecosystems and management within a dynamic biosphere model: Approach, validation, and significance

    Science.gov (United States)

    Kucharik, C.

    2004-12-01

    At the scale of individual fields, crop models have long been used to examine the interactions between soils, vegetation, the atmosphere and human management, using varied levels of numerical sophistication. While previous efforts have contributed significantly towards the advancement of modeling tools, the models themselves are not typically applied across larger continental scales due to a lack of crucial data. Furthermore, crop models are often used to study a single quantity, process, or cycle in isolation, limiting their value in considering the important tradeoffs between competing ecosystem services such as food production, water quality, and sequestered carbon. In response to the need for a more integrated agricultural modeling approach across the continental scale, an updated agricultural version of a dynamic biosphere model (IBIS) now integrates representations of land-surface physics and soil physics, canopy physiology, terrestrial carbon and nitrogen balance, crop phenology, solute transport, and farm management into a single framework. This version of the IBIS model (Agro-IBIS) uses a short 20- to 60-minute timestep to simulate the rapid exchange of energy, carbon, water, and momentum between soils, vegetative canopies, and the atmosphere. The model can be driven either by site-specific meteorological data or by gridded climate datasets. Mechanistic crop models for corn, soybean, and wheat use physiologically-based representations of leaf photosynthesis, stomatal conductance, and plant respiration. Model validation has been performed using data at a variety of temporal scales, collected at the following spatial scales: (1) the precision-agriculture scale (5 m), (2) the individual field experiment scale (AmeriFlux), and (3) regional and continental scales using annual USDA county-level yield data and monthly satellite (AVHRR) observations of vegetation characteristics at 0.5 degree resolution. To date, the model has been used with great success to

  6. Human Performance Models of Pilot Behavior

    Science.gov (United States)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPM) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety-related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances, and future directions and challenges for human performance modeling in aviation.

  7. ROLE AND SIGNIFICANCE OF STATEMENT OF OTHER COMPREHENSIVE INCOME– IN RESPECT OF REPORTING COMPANIES’ PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Ildiko Orban

    2014-07-01

    A commonly accepted rule system, the International Financial Reporting Standards (IFRS), created the framework for representing financial performance and other facts related to a company's health. Under IFRS, profit is not equal to income less expenses; this deviation led to the term other comprehensive income (OCI). IFRS created the term of other comprehensive income, but knowledge and use of it are not widespread. In this paper I present the meaning and essence of this income category and reveal how it works in corporate practice. As the basis of the research, definitions and formats related to the statement of comprehensive income are presented first. In order to get a clear picture of the differences between the income statements, I compare the IFRS and the Hungarian Accounting Act in the field of performance representation. As a result of my comparison, I find that the EU accepted the international financial reporting standards for presenting the financial performance of publicly traded companies, and as an EU member state this is obligatory for Hungarian companies as well; this is why Hungary's present task is adopting the IFRS mentality. After the comparative analysis, I examined the statement of other comprehensive income in the practice of 11 companies listed on the Budapest Stock Exchange. The Premium category includes those companies' series of liquid shares which have a broader investor base. The aim of this examination was to reveal whether the most significant listed companies calculate other comprehensive income and what kind of items they present in the statement of OCI. As a result of the research we can state that the statement of other comprehensive income is, in general, part of the statement of total comprehensive income and not an individual statement. Main items of the other comprehensive income of the examined companies are the

  8. Identifying the most significant indicators of the total road safety performance index.

    Science.gov (United States)

    Tešić, Milan; Hermans, Elke; Lipovac, Krsto; Pešić, Dalibor

    2018-04-01

    The review of the national and international literature dealing with the assessment of road safety levels has shown great efforts by authors who have tried to define a methodology for calculating a composite road safety index for a territory (region, state, etc.). The procedure for obtaining a road safety composite index of an area has been largely harmonized. The question that has not been fully resolved yet concerns the selection of indicators. There is a wide range of road safety indicators used to describe the road safety situation in a territory. A road safety performance index (RSPI) obtained on the basis of a larger number of safety performance indicators (SPIs) enables decision makers to define earlier goal-oriented actions more precisely. Moreover, recording a broader comprehensive set of SPIs helps identify the strengths and weaknesses of a country's road safety system. Providing high-quality national and international databases that would include comparable SPIs seems difficult, since only a small number of identical indicators are available for use across a larger number of countries. Therefore, there is a need for calculating a road safety performance index with a limited number of indicators (RSPI_lnn) which will provide a comparison of sufficient quality for as many countries as possible. The application of the Data Envelopment Analysis (DEA) method and correlation analysis has helped to check whether the RSPI_lnn is likely to be of sufficient quality. A strong correlation between the RSPI_lnn and the RSPI has been identified using the proposed methodology. Based on this, the most contributing indicators, and methodologies for gradual monitoring of SPIs, have been defined for each country analyzed. The indicator monitoring phases in the analyzed countries have been defined in the following way: Phase 1: the indicators relating to alcohol, speed and protective systems; Phase 2: the indicators relating to roads; and Phase 3: the indicators relating to
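    A composite index of the kind discussed can be sketched as a weighted aggregation of min-max normalized indicators. This is a generic illustration, not the paper's DEA-based methodology; the function name and the equal default weights are assumptions.

```python
import numpy as np

def composite_index(indicators, weights=None):
    """Sketch of a composite performance index: min-max normalise each
    indicator across countries, then take a weighted average.
    indicators: rows = countries, columns = indicators (higher = better)."""
    X = np.asarray(indicators, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Guard against constant columns to avoid division by zero
    norm = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    if weights is None:
        weights = np.full(X.shape[1], 1.0 / X.shape[1])
    return norm @ weights
```

    Restricting the columns to a small, widely available subset of SPIs corresponds to the limited-indicator index discussed in the abstract.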

  9. Modelling and Motivating Academic Performance.

    Science.gov (United States)

    Brennan, Geoffrey; Pettit, Philip

    1991-01-01

    Three possible motivators for college teachers (individual economic interest, academic virtue, and academic honor) suggest mechanisms that can be used to improve performance. Policies need to address all three motivators; economic levers alone may undermine alternative ways of supporting good work. (MSE)

  10. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  11. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-09-01

    The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this work, a methodology is proposed to reduce cycle time and the time lost to important factors such as equipment failure, inventory shortage, absenteeism, set-up, material handling, rejection and fatigue, thereby improving output within given cost constraints. Relationships between these factors, the corresponding costs and output are established using a scientific approach. The methodology is validated in three different vehicle assembly plants, and may help practitioners optimize assembly lines using lean techniques.

  12. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  13. Evidence That Bimanual Motor Timing Performance Is Not a Significant Factor in Developmental Stuttering

    Science.gov (United States)

    Hilger, Allison I.; Zelaznik, Howard; Smith, Anne

    2016-01-01

    Purpose: Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems. Method: As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70…

  14. Student-Led Project Teams: Significance of Regulation Strategies in High- and Low-Performing Teams

    Science.gov (United States)

    Ainsworth, Judith

    2016-01-01

    We studied group and individual co-regulatory and self-regulatory strategies of self-managed student project teams using data from intragroup peer evaluations and a postproject survey. We found that high team performers shared their research and knowledge with others, collaborated to advise and give constructive criticism, and demonstrated moral…

  15. How Often Is the Misfit of Item Response Theory Models Practically Significant?

    Science.gov (United States)

    Sinharay, Sandip; Haberman, Shelby J.

    2014-01-01

    Standard 3.9 of the Standards for Educational and Psychological Testing (1999) demands evidence of model fit when item response theory (IRT) models are fitted to data from tests. Hambleton and Han (2005) and Sinharay (2005) recommended assessing the practical significance of misfit of IRT models, but…

  16. A Probabilistic Approach to Symbolic Performance Modeling of Parallel Systems

    NARCIS (Netherlands)

    Gautama, H.

    2004-01-01

    Performance modeling plays a significant role in predicting the effects of a particular design choice or in diagnosing the cause of some observed performance behavior. Especially for complex systems such as parallel computers, an intended performance typically cannot be achieved without recourse to

  17. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
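The residual-symmetry idea described in this abstract can be sketched numerically. The toy simulation below is not the authors' code; the exponential predictor, noise level, and sample size are illustrative assumptions. It regresses in both directions and compares the skewness of the two residual sets: the correctly specified direction should leave more symmetric residuals.

```python
import random
import statistics

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def ols_residuals(x, y):
    """Residuals from a simple least-squares regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

random.seed(1)
# True data-generating model: X is skewed (non-normal), Y = X + Gaussian noise,
# so X -> Y is the correct direction of effect.
x = [random.expovariate(1.0) for _ in range(5000)]
y = [xi + random.gauss(0, 0.5) for xi in x]

skew_correct = abs(skewness(ols_residuals(x, y)))  # residuals of Y ~ X
skew_wrong = abs(skewness(ols_residuals(y, x)))    # residuals of X ~ Y
# Correct-direction residuals come out markedly more symmetric.
print(round(skew_correct, 2), round(skew_wrong, 2))
```

A significance test along the lines discussed in the abstract would then compare these skewness estimates (or their difference) against their sampling distribution.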

  18. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. In this paper we study a Bayesian inference methodology for project performance modelling in ...

  19. ORGANIZATIONAL LEARNING AND PERFORMANCE. A CONCEPTUAL MODEL

    OpenAIRE

    Alexandra Luciana GUŢĂ

    2013-01-01

    Throught this paper, our main objective is to propose a conceptual model that links the notions of organizational learning (as capability and as a process) and organizational performance. Our contribution consists in analyzing the literature on organizational learning and organizational performance and in proposing an integrated model, that comprises: organizational learning capability, the process of organizational learning, organizational performance, human capital (the value and uniqueness...

  20. FOCUSED R&D FOR ELECTROCHROMIC SMART WINDOWS: SIGNIFICANT PERFORMANCE AND YIELD ENHANCEMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Marcus Milling

    2004-09-23

    Developments made under this program will play a key role in underpinning the technology for producing EC devices. It is anticipated that the work begun during this period will continue to improve materials properties, drive yields up and costs down, increase durability, and make manufacture simpler and more cost-effective. It is hoped that this will contribute to a successful and profitable industry, which will help reduce energy consumption and improve comfort for building occupants worldwide. The first major task involved improvements to the materials used in the process. The improvements made as a result of the work done during this project have contributed to enhanced performance, including dynamic range, uniformity, and electrical characteristics. Another major objective of the project was to develop technology to improve yield, reduce cost, and facilitate manufacturing of EC products. Improvements directly attributable to the work carried out as part of this project, seen in the overall EC device performance, have been accompanied by an improvement in the repeatability and consistency of the production process. Innovative test facilities for characterizing devices in a timely and well-defined manner have been developed. The equipment has been designed in such a way as to make scaling up to accommodate the higher throughput necessary for manufacturing relatively straightforward. Finally, the third major goal was to assure the durability of the EC product, both through developments aimed at improving product performance and through the development of novel procedures to test the durability of this new product. Both aspects have been demonstrated, by carrying out a number of different durability tests, both in-house and by independent third-party testers, and by developing several novel durability tests.

  1. In surgeons performing cardiothoracic surgery is sleep deprivation significant in its impact on morbidity or mortality?

    Science.gov (United States)

    Asfour, Leila; Asfour, Victoria; McCormack, David; Attia, Rizwan

    2014-09-01

    A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was: is there a difference in cardiothoracic surgery outcomes, in terms of morbidity or mortality, between patients operated on by a sleep-deprived surgeon and those operated on by a non-sleep-deprived surgeon? The reported search criteria yielded 77 papers, of which 15 were deemed to represent the best evidence on the topic. Three studies related directly to cardiothoracic surgery and 12 studies to non-cardiothoracic surgery. Recommendations are based on 18 121 cardiothoracic patients and 214 666 non-cardiothoracic surgical patients. Different definitions of sleep deprivation were used in the studies, reviewing either the surgeon's sleeping hours or out-of-hours operating. Surgical outcomes reviewed included: mortality rate; neurological, renal, pulmonary and infectious complications; length of stay; length of intensive care stay; cardiopulmonary bypass times; and aortic cross-clamp times. There were no significant differences in mortality or intraoperative complications between the groups of patients operated on by sleep-deprived versus non-sleep-deprived surgeons in the cardiothoracic studies. One study showed a significant increase in the rate of septicaemia in patients operated on by severely sleep-deprived surgeons (3.6%) compared with the moderately sleep-deprived (0.9%) and non-sleep-deprived groups (0.8%) (P = 0.03). In the non-cardiothoracic studies, 7 of the 12 studies demonstrated a statistically significant higher reoperation rate in trauma cases. The cardiothoracic studies provide no evidence of a significant effect of sleep deprivation in cardiothoracic surgeons on morbidity or mortality. However, overall the non-cardiothoracic studies have demonstrated that operative time and sleep deprivation can have a significant impact on overall morbidity and mortality. It is likely that other confounding factors concomitantly affect outcomes in out-of-hours surgery. © The Author 2014. 
Published by Oxford University Press on behalf of

  2. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", together with a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model using an example taken from a management study.
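As an illustration of the kind of quantitative performance measures such a validation procedure might use (the paper's specific measures are not reproduced here; labels and probabilities below are made up), the sketch computes classification accuracy and ROC AUC for a hypothetical hold-out set:

```python
def accuracy(y_true, p_pred, threshold=0.5):
    """Fraction of correct classifications at a given probability threshold."""
    return sum((p >= threshold) == bool(y)
               for y, p in zip(y_true, p_pred)) / len(y_true)

def auc(y_true, p_pred):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
    the probability a random positive is scored above a random negative."""
    pos = [p for y, p in zip(y_true, p_pred) if y]
    neg = [p for y, p in zip(y_true, p_pred) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy hold-out set: true labels and model-predicted probabilities.
y = [1, 1, 1, 0, 0, 1, 0, 0]
p = [0.9, 0.8, 0.6, 0.65, 0.4, 0.7, 0.2, 0.1]
print(accuracy(y, p))  # → 0.875
print(auc(y, p))       # → 0.9375
```

Validation on an independent sample, as the abstract describes, amounts to recomputing such measures on data not used to fit the model and checking for degradation.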

  3. More Use of Peritoneal Dialysis Gives Significant Savings: A Systematic Review and Health Economic Decision Model.

    Science.gov (United States)

    Pike, Eva; Hamidi, Vida; Ringerike, Tove; Wisloff, Torbjorn; Klemp, Marianne

    2017-02-01

    Patients with end-stage renal disease (ESRD) need renal replacement therapy in the form of dialysis and/or transplantation. The prevalence of ESRD, and thus the need for dialysis, is constantly growing. The dialysis modalities are either peritoneal dialysis (PD), performed at home, or hemodialysis (HD), performed in-center (hospital or satellite) or at home. We examined the effectiveness and cost-effectiveness of HD performed at different locations (hospital, satellite, and home) and of PD at home in the Norwegian setting. We conducted a systematic review in several databases for patients above 18 years with end-stage renal failure requiring dialysis and performed several meta-analyses of the existing literature. Mortality and major complications were our main clinical outcomes. The quality of the evidence for each outcome was evaluated using GRADE. Cost-effectiveness was assessed by developing a probabilistic Markov model. The analysis was carried out from a societal perspective, and effects were expressed in quality-adjusted life-years. Uncertainties in the base-case parameter values were explored with a probabilistic sensitivity analysis. Scenario analyses were conducted by increasing the proportion of patients receiving PD, with a corresponding reduction in HD patients in-center, both for Norway and the European Union. We assumed an annual growth rate of 4% in the number of dialysis patients and a relative distribution between PD and HD in-center of 30% and 70%, respectively. From a societal perspective and over a 5-year time horizon, PD was the most cost-effective dialysis alternative. We found no significant difference in mortality between peritoneal and HD modalities. Our scenario analyses showed that a shift toward more patients on PD (as a first choice), with a corresponding reduction in HD in-center, gave savings over a 5-year period of 32 and 10,623 million EURO for Norway and the European Union, respectively. PD was the most cost-effective dialysis alternative.
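The structure of such a Markov cost-effectiveness model can be sketched in a few lines. The cohort model below uses purely hypothetical inputs (the costs, utility, mortality, and discount rate are illustrative placeholders, not the study's estimates) to show how discounted costs and QALYs accumulate over a 5-year horizon:

```python
def markov_cohort(annual_cost, utility, p_death, years=5, discount=0.04):
    """Two-state (alive-on-dialysis / dead) Markov cohort model.
    Returns (discounted cost, discounted QALYs) per starting patient."""
    alive, cost, qalys = 1.0, 0.0, 0.0
    for t in range(years):
        d = 1 / (1 + discount) ** t      # discount factor for cycle t
        cost += alive * annual_cost * d
        qalys += alive * utility * d
        alive *= (1 - p_death)           # cohort fraction surviving the cycle
    return cost, qalys

# Hypothetical inputs (NOT the study's estimates): identical mortality and
# utility for both modalities, lower annual cost for home-based PD.
pd_cost, pd_qaly = markov_cohort(annual_cost=55_000, utility=0.60, p_death=0.15)
hd_cost, hd_qaly = markov_cohort(annual_cost=80_000, utility=0.60, p_death=0.15)
print(round(hd_cost - pd_cost))         # cost saved per patient by PD over 5 years
print(round(pd_qaly - hd_qaly, 6))      # → 0.0
```

With equal utility and mortality the QALY difference is zero and the comparison reduces to the cost difference, mirroring the review's finding that PD dominates on cost with no significant mortality difference. A probabilistic version would draw the input parameters from distributions and repeat this calculation many times.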

  4. Using ballistocardiography to measure cardiac performance: a brief review of its history and future significance.

    Science.gov (United States)

    Vogt, Emelie; MacQuarrie, David; Neary, John Patrick

    2012-11-01

    Ballistocardiography (BCG) is a non-invasive technology that has been used to record ultra-low-frequency vibrations of the heart allowing for the measurement of cardiac cycle events including timing and amplitudes of contraction. Recent developments in BCG have made this technology simple to use, as well as time- and cost-efficient in comparison with other more complicated and invasive techniques used to evaluate cardiac performance. Recent technological advances are considerably greater since the advent of microprocessors and laptop computers. Along with the history of BCG, this paper reviews the present and future potential benefits of using BCG to measure cardiac cycle events and its application to clinical and applied research. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.

  5. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance, have proliferated while keeping pace with rapid changes in basic PV technology and extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  6. Authoring experience: the significance and performance of storytelling in Socratic dialogue with rehabilitating cancer patients.

    Science.gov (United States)

    Knox, Jeanette Bresson Ladegaard; Svendsen, Mette Nordahl

    2015-08-01

    This article examines the storytelling aspect in philosophizing with rehabilitating cancer patients in small Socratic dialogue groups (SDG). Recounting an experience to illustrate a philosophical question chosen by the participants is the traditional point of departure for the dialogical exchange. However, narrating is much more than a beginning point or the skeletal framework of events and it deserves more scholarly attention than hitherto given. Storytelling pervades the whole Socratic process and impacts the conceptual analysis in a SDG. In this article we show how the narrative aspect became a rich resource for the compassionate bond between participants and how their stories cultivated the abstract reflection in the group. In addition, the aim of the article is to reveal the different layers in the performance of storytelling, or of authoring experience. By picking, poking and dissecting an experience through a collaborative effort, most participants had their initial experience existentially refined and the chosen concept of which the experience served as an illustration transformed into a moral compass to be used in self-orientation post cancer.

  7. The Significance of the Bystander Effect: Modeling, Experiments, and More Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, David J.

    2009-07-22

    Non-targeted (bystander) effects of ionizing radiation are caused by intercellular signaling; they include production of DNA damage and alterations in cell fate (i.e. apoptosis, differentiation, senescence or proliferation). Biophysical models capable of quantifying these effects may improve cancer risk estimation at radiation doses below the epidemiological detection threshold. Understanding the spatial patterns of bystander responses is important, because it provides estimates of how many bystander cells are affected per irradiated cell. In a first approach to modeling of bystander spatial effects in a three-dimensional artificial tissue, we assumed the following: (1) The bystander phenomenon results from signaling molecules (S) that rapidly propagate from irradiated cells and decrease in concentration (exponentially in the case of planar symmetry) as distance increases. (2) These signals can convert cells to a long-lived epigenetically activated state, e.g. a state of oxidative stress; cells in this state are more prone to DNA damage and behavior alterations than normal and therefore exhibit an increased response (R) for many end points (e.g. apoptosis, differentiation, micronucleation). These assumptions were implemented by a mathematical formalism and computational algorithms. The model adequately described data on bystander responses in the 3D system using a small number of adjustable parameters. Mathematical models of radiation carcinogenesis are important for understanding mechanisms and for interpreting or extrapolating risk. There are two classes of such models: (1) long-term formalisms that track pre-malignant cell numbers throughout an entire lifetime but treat initial radiation dose-response simplistically and (2) short-term formalisms that provide a detailed initial dose-response even for complicated radiation protocols, but address its modulation during the subsequent cancer latency period only indirectly. We argue that integrating short- and long
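Assumption (1) above, a signal concentration that decreases exponentially with distance from the irradiated cell under planar symmetry, together with a saturating response, can be sketched as follows. The decay length and activation scale are illustrative placeholders, not fitted parameters from the study:

```python
import math

def bystander_response(distance_um, decay_length_um=50.0, activation_scale=1.0):
    """Probability that a bystander cell at a given distance enters the
    epigenetically activated state: signal decays exponentially with distance
    (planar symmetry), and response saturates with signal concentration."""
    signal = math.exp(-distance_um / decay_length_um)
    return 1 - math.exp(-activation_scale * signal)

# Expected fraction of responding bystander cells at increasing distances
# from an irradiated cell (micrometres).
for d in (0, 25, 50, 100, 200):
    print(d, round(bystander_response(d), 3))
```

Integrating such a response profile over the cell positions in a 3D tissue gives the quantity the abstract highlights: how many bystander cells are affected per irradiated cell.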

  8. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Sperna Weiland, F.; Vrugt, J.A.; Beek, van P.H.; Weerts, A.H.; Bierkens, M.F.P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  9. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Weiland, Frederiek C. Sperna; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  10. Clinical Significance of Myocardial Uptake on F-18 FDG PET/CT Performed in Oncologic Patients

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Ho Jin; Cho, Eung Hyuck; Lee, Jong Doo; Kang, Won Jun [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2009-12-15

    F-18 fluorodeoxyglucose (FDG) uptake of the myocardium is influenced by various factors. Increased glycolysis, and subsequently increased F-18 FDG uptake, has been reported in ischemic cardiomyopathy. However, the clinical significance of incidentally found myocardial F-18 FDG uptake has not been clarified. We retrospectively reviewed the degree and pattern of myocardial uptake in patients without a history of ischemic heart disease who underwent torso F-18 FDG PET/CT for evaluation of neoplastic disease. From January 2005 to June 2009, 77 patients who underwent F-18 FDG PET/CT and Tc-99m sestamibi stress/rest SPECT within 3 months were enrolled. Of the 77 patients, 55 (71.4%) showed increased F-18 FDG uptake in the myocardium. In this population, 40 showed a uniform uptake pattern, while 15 showed focal uptake. Of the patients with uniform uptake, 17 showed decreased uptake in the septum without a perfusion defect on myocardial SPECT. The remaining 23 patients showed uniform uptake, with 1 reversible perfusion defect and 1 fixed perfusion defect. Of the 15 patients with focal uptake, 9 showed increased F-18 FDG uptake in the base, and only 1 of them showed a reversible perfusion defect on myocardial SPECT. Of the remaining 6 patients with focal uptake, 4 had a reversible perfusion defect in the corresponding wall and 1 had apical hypertrophy. We demonstrated that the septal defect pattern and basal uptake pattern in the myocardium may represent normal variants. Focal myocardial uptake other than these normal variants on oncologic torso F-18 FDG PET/CT with a routine fasting protocol may suggest ischemic heart disease, and thus further evaluation is warranted.

  11. AREVA - 2012 annual results: significant turnaround in performance one year after launching the Action 2016 plan

    International Nuclear Information System (INIS)

    Duperray, Julien; Berezowskyj, Katherine; Kempkes, Vincent; Rosso, Jerome; Thebault, Alexandre; Scorbiac, Marie de; Repaire, Philippine du

    2013-01-01

    One year after launching Areva's Action 2016 strategic plan, the first results are in. AREVA is ahead of schedule in executing its recovery plan. While pursuing its efforts in the management of a few difficult projects (such as OL3), Areva group was able to return to a virtuous performance cycle rooted in strong growth in nuclear order intake and good progress on its cost reduction program. Commercially, despite the difficult economic environment, AREVA was able to capitalize on its leadership in the installed base and on its long-term partnerships with strategic customers, beginning with EDF, with which AREVA renewed a confident and constructive working relationship. Areva has secured 80% of its objective of one billion euros of savings by the end of 2015 to improve its competitiveness. The group also continued efforts to optimize working capital requirement and control the capital expenditure trajectory. Together, these results enabled AREVA to exceed the objectives set for 2012 for two key indicators of its strategic plan: EBITDA and free operating cash flow. Nearly 60% of the 2.1 billion euros devoted to capital expenditures for future growth in 2012 were funded by operations, a quasi-doubled share compared to 2011. Areva's floor target for asset disposals was achieved one year ahead of schedule, also helping the Group to control its net debt, which remained below 4 billion euros. In 2013, Areva is continuing to implement the Action 2016 plan to keep its turnaround on track. In summary: - Backlog renewed over the year 2012 to euro 45.4 bn thanks to the increase in nuclear order intake; - Sales revenue growth: euro 9.342 bn (+5.3% vs. 2011), led by nuclear and renewables operations; - Very sharp upturn in EBITDA: euro 1.007 bn (+euro 586 m vs. 2011) - Very net improvement in free operating cash flow: -euro 854 m (+euro 512 m vs. 2011); - Back to positive reported operating income: euro 118 m (+euro 1.984 bn vs. 
2011); - 2012-2013 floor target for asset disposals

  12. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War. This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The...

  13. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  14. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
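The benchmark statistics mentioned here, root mean square error and the correlation coefficient between simulated and observed NEE, are straightforward to compute. A minimal sketch with made-up flux values (illustrating a model biased toward too strong a carbon source, not actual site data):

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def pearson_r(obs, sim):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    return cov / math.sqrt(sum((o - mo) ** 2 for o in obs) *
                           sum((s - ms) ** 2 for s in sim))

# Illustrative NEE values (gC m-2 day-1); negative = net carbon uptake.
observed = [-0.2, -0.1, 0.3, 1.1, 0.8, 0.1, -0.4, -0.3]
modelled = [0.5, 0.6, 0.7, 0.9, 0.8, 0.7, 0.5, 0.4]  # biased toward a source
print(round(rmse(observed, modelled), 3))  # → 0.596
print(round(pearson_r(observed, modelled), 3))
```

A benchmarking workflow such as PEcAn's repeats exactly this kind of comparison across sites, variables, and models so results stay comparable over time.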

  15. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    [Abstract not recovered: the extracted text consists of figure-caption fragments describing performance predictions under scheduled sleep versus total sleep deprivation, with gray bars marking scheduled sleep periods and the first hours of each waking period omitted to avoid confounds from sleep inertia.]

  16. Driver Performance Model: 1. Conceptual Framework

    National Research Council Canada - National Science Library

    Heimerl, Joseph

    2001-01-01

    ...'. At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  17. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  18. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  19. Biofilm carrier migration model describes reactor performance.

    Science.gov (United States)

    Boltz, Joshua P; Johnson, Bruce R; Takács, Imre; Daigger, Glen T; Morgenroth, Eberhard; Brockmann, Doris; Kovács, Róbert; Calhoun, Jason M; Choubert, Jean-Marc; Derlon, Nicolas

    2017-06-01

    The accuracy of a biofilm reactor model depends on the extent to which physical system conditions (particularly bulk-liquid hydrodynamics and their influence on biofilm dynamics) deviate from the ideal conditions upon which the model is based. It follows that an improved capacity to model a biofilm reactor does not necessarily rely on an improved biofilm model, but does rely on an improved mathematical description of the biofilm reactor and its components. Existing biofilm reactor models typically include a one-dimensional biofilm model, a process (biokinetic and stoichiometric) model, and a continuous flow stirred tank reactor (CFSTR) mass balance that [when organizing CFSTRs in series] creates a pseudo two-dimensional (2-D) model of bulk-liquid hydrodynamics approaching plug flow. In such a biofilm reactor model, the user-defined biofilm area is specified for each CFSTR; thereby, X carrier does not exit the boundaries of the CFSTR to which they are assigned or exchange boundaries with other CFSTRs in the series. The error introduced by this pseudo 2-D biofilm reactor modeling approach may adversely affect model results and limit model-user capacity to accurately calibrate a model. This paper presents a new sub-model that describes the migration of X carrier and associated biofilms, and evaluates the impact that X carrier migration and axial dispersion has on simulated system performance. Relevance of the new biofilm reactor model to engineering situations is discussed by applying it to known biofilm reactor types and operational conditions.
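The difference between the pseudo 2-D approach (carriers fixed to their assigned CFSTR) and a migration sub-model can be illustrated with a toy carrier balance. The exchange rule below is a generic discrete-diffusion sketch, not the paper's actual sub-model; the tank count and migration fraction are arbitrary:

```python
def migrate(carriers, fraction=0.1):
    """One mixing step for CFSTRs in series: each tank sends `fraction` of its
    carriers to each neighbouring tank (a crude stand-in for carrier migration
    plus axial dispersion). Total carrier count is conserved."""
    n = len(carriers)
    new = list(carriers)
    for i, c in enumerate(carriers):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                new[i] -= c * fraction
                new[j] += c * fraction
    return new

# Pseudo 2-D model: all 1000 carriers would stay in tank 1 forever.
# With migration, the same carriers spread toward a uniform distribution.
tanks = [1000.0, 0.0, 0.0, 0.0]
for _ in range(200):
    tanks = migrate(tanks)
print([round(t, 1) for t in tanks])  # approaches a uniform 250 per tank
```

Attaching a biofilm model to each tank's carrier inventory then lets simulated biofilm area, and hence reactor performance, respond to where the carriers actually are.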

  20. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large-buffer and many-sources asymptotics that play an important role in understanding statistical multiplexing.

  1. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up about a 5 × 10⁻⁴ fraction of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  2. Estuarine modeling: Does a higher grid resolution improve model performance?

    Science.gov (United States)

    Ecological models are useful tools to explore cause effect relationships, test hypothesis and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  3. Significance of categorization and the modeling of age related factors for radiation protection

    International Nuclear Information System (INIS)

    Matsuoka, Osamu

    1987-01-01

    It is proposed that categorization and modelling of the age-related factors of radionuclide metabolism are necessary for the radiation protection of the public. In order to use age-related information in a model for lifetime risk estimates for the public, it must be generalized and simplified according to categorized model patterns. Since the patterns of age-related change in the various parameters of radionuclide metabolism are rather simple, they can be categorized into eleven types of model patterns. Among these, five are selected as significant models to be considered. Examples are shown of fitting representative physiological parameters and metabolic parameters of radionuclides to the proposed models. The range of deviation from the adult standard value is also analyzed for each model. Fitting each parameter to the categorized models, and comparing the results, provides useful information on the physiological basis of radionuclide metabolism. Problems encountered in applying the available age-related information to radiation protection of the public are discussed, i.e. the distribution of categorized parameters, the period of life covered, the range of deviation from adult values, and the implications for other dosimetric and pathological models and for the final estimation. 5 refs.; 3 figs.; 4 tabs

  4. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  5. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  6. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    Science.gov (United States)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high-resolution SAR data of the Veridian X-band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
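
    The abstract's emphasis on sample size can be made concrete: the confidence interval around a measured detection probability shrinks roughly as 1/sqrt(n). Below is a minimal Python sketch using the Wilson score interval; the 90% detection rate and the trial counts are invented for illustration and are not taken from the paper.

```python
import math

def detection_ci(detections, trials, z=1.96):
    """Wilson score interval for an estimated detection probability,
    i.e. the uncertainty attached to a single point on a ROC curve."""
    p = detections / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return centre - half, centre + half

# The same measured Pd of 0.90 is far more trustworthy with more trials.
for n in (30, 300, 3000):
    lo, hi = detection_ci(round(0.9 * n), n)
    print(f"n={n}: 95% CI = ({lo:.3f}, {hi:.3f})")
```

    The interval for n = 30 spans roughly 0.74 to 0.97, which is why small data sets yield "anecdotal" ROC results.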

  7. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ...

  8. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

    To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort including 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for an abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, with an area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates the mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and a reduction of the risk of over-detection of insignificant prostate cancer, at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
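
    The shape of such a model is easy to sketch in code. In the Python illustration below, the logistic coefficients are entirely hypothetical (the published model must be consulted for real risk estimates), and the concordance statistic (AUC) is computed with the rank-based Mann-Whitney formulation.

```python
import numpy as np

def risk_probability(age, psa, pirads, volume, prior_biopsy, dre_abnormal):
    """Logistic model of the same general form as the paper's predictor.
    All coefficients here are invented, for illustration only."""
    z = (-7.0 + 0.03 * age + 0.10 * psa + 0.9 * pirads
         - 0.02 * volume - 0.6 * prior_biopsy + 0.8 * dre_abnormal)
    return 1.0 / (1.0 + np.exp(-z))

def auc(y_true, scores):
    """Concordance statistic (AUC) via the rank-sum formulation."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# A higher PIRADS score raises the predicted risk, all else equal.
print(risk_probability(age=65, psa=8.0, pirads=5, volume=40,
                       prior_biopsy=0, dre_abnormal=1))
```

    An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation of significant from insignificant cases.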

  9. Evaluation of models in performance assessment

    International Nuclear Information System (INIS)

    Dormuth, K.W.

    1993-01-01

    The reliability of models used for performance assessment for high-level waste repositories is a key factor in making decisions regarding the management of high-level waste. Model reliability may be viewed as a measure of the confidence that regulators and others have in the use of these models to provide information for decision making. The degree of reliability required for the models will increase as implementation of disposal proceeds and decisions become increasingly important to safety. Evaluation of the models by using observations of real systems provides information that assists the assessment analysts and reviewers in establishing confidence in the conclusions reached in the assessment. A continuing process of model calibration, evaluation, and refinement should lead to increasing reliability of models as implementation proceeds. However, uncertainty in the model predictions cannot be eliminated, so decisions will always be made under some uncertainty. Examples from the Canadian program illustrate the process of model evaluation using observations of real systems and its relationship to performance assessment. 21 refs., 2 figs

  10. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  11. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior (non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches) that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  12. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST – 1 show details for our two models (Model A and Model B).Performance of Model A after adding of 32 negative dataset of MiRTif on our testing set(MiRecords) ...

  13. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision-making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).

  14. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on developing a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure for a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  15. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  16. Data Model Performance in Data Warehousing

    Science.gov (United States)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have become increasingly important in organizations that have large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in such organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs stable and consistent interfaces over a long period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, which has three main tasks: data collection and organization, analysis of data, and interpretation of data. The results are discussed using statistical analysis, which shows that there is no statistically significant difference among the data models used in data warehousing. An organization can therefore utilize any of the four proposed data models when designing and developing a data warehouse.

  17. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC sponsored LLL assessment procedure for MC and A systems which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance
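
    The detection/false-alarm trade-off described here reduces, in its simplest form, to a threshold test on an intruder-induced grey-level change buried in Gaussian sensor noise. A minimal Python sketch follows; the reflectance change, noise level, and threshold values are assumed for illustration and are not taken from the report.

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pd_pfa(signal, sigma, threshold):
    """Detection and false-alarm probabilities for a simple threshold
    test on a grey-level change corrupted by zero-mean Gaussian noise."""
    return q((threshold - signal) / sigma), q(threshold / sigma)

# Assumed numbers: a 10% reflectance change, noise sigma of 2%, threshold at 6%.
pd, pfa = pd_pfa(10.0, 2.0, 6.0)
print(f"Pd = {pd:.4f}, Pfa = {pfa:.2e}")
```

    Raising the threshold trades detection probability for a lower false-alarm rate, which is exactly the design tension the monitor model quantifies.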

  18. Significance of Various Experimental Models and Assay Techniques in Cancer Diagnosis.

    Science.gov (United States)

    Ghanghoria, Raksha; Kesharwani, Prashant; Jain, Narendra K

    2017-01-01

    The experimental models are of vital significance in providing information regarding the biological as well as genetic factors that control the phenotypic characteristics of the disease, and they serve as the foundation for the development of rational intervention strategies. This review highlights the importance of experimental models in the field of cancer management. The process of pathogenesis in cancer progression, invasion and metastasis can be successfully explained by employing clinically relevant laboratory models of the disease. Cancer cell lines have been used extensively to monitor the process of cancer pathogenesis by controlling growth regulation and chemo-sensitivity for the evaluation of novel therapeutics in both in vitro and xenograft models. The experimental models have been used for the elaboration of diagnostic or therapeutic protocols, and thus employed in preclinical studies of bioactive agents relevant for cancer prevention. The outcome of this review should provide useful information in understanding and selecting various models in accordance with the stage of cancer. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Scoping review identifies significant number of knowledge translation theories, models and frameworks with limited use.

    Science.gov (United States)

    Strifler, Lisa; Cardoso, Roberta; McGowan, Jessie; Cogo, Elise; Nincic, Vera; Khan, Paul A; Scott, Alistair; Ghassemi, Marco; MacDonald, Heather; Lai, Yonda; Treister, Victoria; Tricco, Andrea C; Straus, Sharon E

    2018-04-13

    To conduct a scoping review of knowledge translation (KT) theories, models and frameworks that have been used to guide dissemination or implementation of evidence-based interventions targeted to prevention and/or management of cancer or other chronic diseases. We used a comprehensive multistage search process from 2000-2016, which included traditional bibliographic database searching, searching using names of theories, models and frameworks, and cited reference searching. Two reviewers independently screened the literature and abstracted data. We found 596 studies reporting on the use of 159 KT theories, models or frameworks. A majority (87%) of the identified theories, models or frameworks were used in five or fewer studies, with 60% used once. The theories, models and frameworks were most commonly used to inform planning/design, implementation and evaluation activities, and least commonly used to inform dissemination and sustainability/scalability activities. Twenty-six were used across the full implementation spectrum (from planning/design to sustainability/scalability) either within or across studies. All were used for at least individual-level behavior change, while 48% were used for organization-level, 33% for community-level and 17% for system-level change. We found a significant number of KT theories, models and frameworks with a limited evidence base describing their use. Copyright © 2018. Published by Elsevier Inc.

  20. Assessing The Performance of Hydrological Models

    Science.gov (United States)

    van der Knijff, Johan

    The performance of hydrological models is often characterized using the coefficient of efficiency, E. The sensitivity of E to extreme streamflow values, and the difficulty of deciding what value of E should be used as a threshold to identify 'good' models or model parameterizations, have proven to be serious shortcomings of this index. This paper reviews some alternative performance indices that have appeared in the literature. Legates and McCabe (1999) suggested a more generalized form of E, E'(j,B). Here, j is a parameter that controls how much emphasis is put on extreme streamflow values, and B defines a benchmark or 'null hypothesis' against which the results of the model are tested. E'(j,B) was used to evaluate a large number of parameterizations of a conceptual rainfall-runoff model, using 6 different combinations of j and B. First, the effect of j and B is explained. Second, it is demonstrated how the index can be used to explicitly test hypotheses about the model and the data. This approach appears to be particularly attractive if the index is used as a likelihood measure within a GLUE-type analysis.
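
    The generalized index E'(j,B) of Legates and McCabe (1999) is compact enough to state in code. A minimal numpy sketch with toy streamflow data follows; the default benchmark is taken to be the observed mean, which recovers the classic coefficient of efficiency at j = 2.

```python
import numpy as np

def efficiency(obs, pred, j=2.0, benchmark=None):
    """Generalized coefficient of efficiency E'(j, B).

    j controls the emphasis on extreme streamflow values (j=1 uses
    absolute errors); benchmark B is the 'null hypothesis' series the
    model is tested against, defaulting to the observed mean (which
    gives the classic Nash-Sutcliffe E when j=2).
    """
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    b = np.full_like(obs, obs.mean()) if benchmark is None else np.asarray(benchmark, float)
    return 1.0 - np.sum(np.abs(obs - pred) ** j) / np.sum(np.abs(obs - b) ** j)

obs = np.array([1.0, 3.0, 2.0, 8.0, 2.5])    # toy streamflow observations
pred = np.array([1.2, 2.6, 2.2, 7.1, 2.4])   # toy model predictions
print(efficiency(obs, pred))         # classic Nash-Sutcliffe E
print(efficiency(obs, pred, j=1.0))  # absolute-error variant, less weight on the peak
```

    Passing a seasonal climatology as `benchmark` instead of the mean turns the index into an explicit hypothesis test against that baseline.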

  1. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  2. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis, and hence various methods have been proposed to compute the spatial significance of entities based on their spatial distances to other entities within the cluster. This paper studies the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the spatial significance index (SSI) of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest value of SSI is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the areas of the mountain objects decrease with increasing scale, the distances between the mountain objects increase, resulting in increasing values of SSI. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to other mountain objects and thus its spatial significance.
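
    The SSI computation amounts to counting dilation steps. Below is a minimal numpy sketch on a toy labelled grid; the 4-connected structuring element and the three single-pixel "mountains" are assumptions for illustration, not the paper's DEM data.

```python
import numpy as np

def dilate(mask):
    """One binary dilation step with a 4-connected (cross) structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def spatial_significance(labels):
    """SSI per object: dilation iterations until the object's mask covers
    every pixel of all the other objects (lowest SSI = most significant)."""
    ssi = {}
    for k in (k for k in np.unique(labels) if k != 0):
        grown = labels == k
        others = (labels != 0) & (labels != k)
        count = 0
        while not grown[others].all():
            grown = dilate(grown)
            count += 1
        ssi[int(k)] = count
    return ssi

labels = np.zeros((9, 9), dtype=int)
labels[1, 1], labels[4, 4], labels[7, 7] = 1, 2, 3   # three toy "mountains"
print(spatial_significance(labels))   # label 2, the central object, gets the lowest SSI
```

    The central object reaches both neighbours in the fewest iterations, mirroring the paper's finding that central location outweighs size.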

  3. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    As port performance is becoming correlated with national competitiveness, the issue of port performance evaluation has gained significant attention. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time before berthing, etc.) as well as the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port's performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.
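
    A stochastic berth model of the kind described can be sketched as a queueing microsimulation. In the Python toy below, the arrival and service rates are invented placeholders (the paper's MATLAB/Simulink model is far richer); exponential interarrival and service times make it an M/M/c queue, and the output is the mean wait before berthing.

```python
import random
import statistics

def simulate_port(n_ships=20_000, berths=2, arrival_rate=0.8, service_rate=0.5, seed=1):
    """Toy stochastic berth model: ships arrive with exponential
    interarrival times and occupy the earliest-free berth; returns the
    mean waiting time before berthing."""
    rng = random.Random(seed)
    free_at = [0.0] * berths          # time at which each berth next becomes free
    t, waits = 0.0, []
    for _ in range(n_ships):
        t += rng.expovariate(arrival_rate)           # next ship arrival
        b = min(range(berths), key=free_at.__getitem__)
        start = max(t, free_at[b])                   # wait if all berths are busy
        waits.append(start - t)
        free_at[b] = start + rng.expovariate(service_rate)
    return statistics.mean(waits)

print(f"mean wait: {simulate_port():.2f} time units")
```

    Re-running with more berths shows the nonlinear drop in waiting time that a deterministic average-based evaluation would miss.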

  4. Significance of primary factors influencing students' performance at the College of Dentistry, King Saud University, Saudi Arabia.

    Science.gov (United States)

    Al-Amri, Mohammad; Al-Madi, Ebtissam; Sadig, Walid Mahmoud; Ahmedani, Muhammad Shoaib; Salameh, Ziad

    2012-08-01

    To determine the effect of different enabling factors such as the curriculum, the role of faculty, academic advising, and the availability of learning resources and supportive services on the performance of students pursuing their Bachelor's degree in dentistry. Data were collected from the male and female students of the College of Dentistry, King Saud University, during the academic year 2008-2009. All undergraduate students (576) constituted the total sample size of the study. The respondents were requested to fill in a questionnaire specially designed in accordance with the requirements of the Association for Dental Education in Europe (ADEE). The questionnaire comprised 45 questions addressing all aspects of the relevant factors, and a five-point Likert scale was used to evaluate the feedback. All the responses (239) were thoroughly examined and only the completely filled forms (169) were subjected to regression analyses, taking the student's CGPA as the dependent factor and a depiction of their performance. t-tests were also performed to evaluate variations in the responses of male and female students to each sub-factor. The study showed a significant impact of faculty and of learning resources and support services on a student's achievement (alpha = 0.05). Surprisingly, academic advising and the dental curriculum had a non-significant effect at the 95% confidence level. However, the critical analyses acknowledged that the non-significant impact was due to poor performance of these two factors. The role of faculty and of learning resources and support services had a significant effect on students' performance. However, there is an immense need to improve the level of academic advising and revise the curriculum so that these factors have a significant impact on students' achievements.

  5. New Diagnostics to Assess Model Performance

    Science.gov (United States)

    Koh, Tieh-Yong

    2013-04-01

    The comparison of model performance between the tropics and the mid-latitudes is particularly problematic for observables like temperature and humidity: in the tropics, these observables have little variation and so may give an apparent impression that model predictions are often close to observations; on the contrary, they vary widely in mid-latitudes and so the discrepancy between model predictions and observations might be unnecessarily over-emphasized. We have developed a suite of mathematically rigorous diagnostics that measures normalized errors accounting for the observed and modeled variability of the observables themselves. Another issue in evaluating model performance is the relative importance of getting the variance of an observable right versus getting the modeled variation to be in phase with the observed. The correlation-similarity diagram was designed to analyse the pattern error of a model by breaking it down into contributions from amplitude and phase errors. A final and important question pertains to the generalization of scalar diagnostics to analyse vector observables like wind. In particular, measures of variance and correlation must be properly derived to avoid the mistake of ignoring the covariance between north-south and east-west winds (hence wrongly assuming that the north-south and east-west directions form a privileged vector basis for error analysis). There is also a need to quantify systematic preferences in the direction of vector wind errors, which we make possible by means of an error anisotropy diagram. Although the suite of diagnostics is mentioned with reference to model verification here, it is generally applicable to quantify differences between two datasets (e.g. from two observation platforms). Reference publication: Koh, T. Y. et al. (2012), J. Geophys. Res., 117, D13109, doi:10.1029/2011JD017103. also available at http://www.ntu.edu.sg/home/kohty

  6. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  7. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    Science.gov (United States)

    Tauxe, J.

    2004-12-01

    A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of the potential future risk to human receptors from the disposal of low-level waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating them through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that supports their decision making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. It does not represent any particular site and is meant to be a generic example.
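    The Monte Carlo workflow described above can be sketched in a few lines: sample uncertain inputs from probability distributions, propagate each realization through a (here deliberately fictitious) release-dilution-dose chain, and summarise the output distribution against a performance objective. All distributions, parameter values and the 0.25 mSv/y objective below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo realisations

# entirely fictitious input distributions, in the spirit of the generic model
inventory_bq = rng.lognormal(np.log(1e9), 0.5, n)   # disposed inventory (Bq)
release_frac = rng.uniform(1e-5, 1e-3, n)           # fraction released per year
dilution_m3 = rng.lognormal(np.log(1e6), 0.7, n)    # aquifer dilution (m3/y)
intake_m3 = 0.73                                    # drinking-water intake (m3/y)
dcf_sv_per_bq = 1e-8                                # ingestion dose coefficient

conc = inventory_bq * release_frac / dilution_m3    # well concentration (Bq/m3)
dose_sv = conc * intake_m3 * dcf_sv_per_bq          # annual dose per realisation

# statistical summary, compared against a hypothetical 0.25 mSv/y objective
mean_dose = dose_sv.mean()
p95 = np.percentile(dose_sv, 95)
frac_exceeding = (dose_sv > 0.25e-3).mean()
```

    Each realisation is one possible combination of input parameter values; the spread of `dose_sv` is the overall system uncertainty the record refers to.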

  8. Confirming the Value of Swimming-Performance Models for Adolescents.

    Science.gov (United States)

    Dormehl, Shilo J; Robertson, Samuel J; Barker, Alan R; Williams, Craig A

    2017-10-01

    To evaluate the efficacy of existing performance models to assess the progression of male and female adolescent swimmers through a quantitative and qualitative mixed-methods approach. Fourteen published models were tested using retrospective data from an independent sample of Dutch junior national-level swimmers from when they were 12-18 y of age (n = 13). The degree of association by Pearson correlations was compared between the calculated differences from the models and quadratic functions derived from the Dutch junior national qualifying times. Swimmers were grouped based on their differences from the models and compared with their swimming histories that were extracted from questionnaires and follow-up interviews. Correlations of the deviations from both the models and quadratic functions derived from the Dutch qualifying times were all significant except for the 100-m breaststroke and butterfly and the 200-m freestyle for females (P backstroke for males and 200-m freestyle for males and females were almost directly proportional. In general, deviations from the models were accounted for by the swimmers' training histories. Higher levels of retrospective motivation appeared to be synonymous with higher-level career performance. This mixed-methods approach helped confirm the validity of the models that were found to be applicable to adolescent swimmers at all levels, allowing coaches to track performance and set goals. The value of the models in being able to account for the expected performance gains during adolescence enables quantification of peripheral factors that could affect performance.
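    The benchmarking step used in this study can be illustrated with a toy example: fit a quadratic curve to age-group qualifying times, then express an individual swimmer's results as deviations from that benchmark. The times below are invented stand-ins; the real study used Dutch junior national qualifying standards.

```python
import numpy as np

ages = np.array([12, 13, 14, 15, 16, 17, 18], dtype=float)
# hypothetical 100-m qualifying times (s), improving with age
qual_times = np.array([68.0, 64.5, 61.8, 59.9, 58.6, 57.8, 57.4])

# quadratic benchmark t(age) = a*age^2 + b*age + c, as in the study's approach
a, b, c = np.polyfit(ages, qual_times, deg=2)
benchmark = lambda x: a * x ** 2 + b * x + c

# one swimmer's season bests; negative deviation = faster than the benchmark
swimmer = np.array([69.2, 64.0, 61.0, 59.5, 58.9, 57.5, 57.0])
deviation = swimmer - benchmark(ages)

# degree of association between the swimmer's progression and the standard
r = np.corrcoef(swimmer, qual_times)[0, 1]
```

    Grouping swimmers by the sign and size of `deviation` is then what the qualitative interview data were compared against.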

  9. The significance of using satellite imagery data only in Ecological Niche Modelling of Iberian herps

    Directory of Open Access Journals (Sweden)

    Neftalí Sillero

    2012-12-01

    Full Text Available The environmental data used to calculate ecological niche models (ENM) are obtained mainly from ground-based maps (e.g., interpolated climatic surfaces). These data are often not available for less developed areas, or may be at an inappropriate scale, and thus obtaining this information requires fieldwork. An alternative source of eco-geographical data comes from satellite imagery. Three sets of ENM were calculated exclusively with variables obtained (1) from optical and radar images only and (2) from climatic and altitude maps obtained by ground-based methods. These models were compared to evaluate whether satellite imagery can accurately generate ENM. These comparisons must be made in areas with well-known species distribution and with available satellite imagery and ground-based data. Thus, the study area was the south-western part of Salamanca (Spain), using amphibians and reptiles as model species. Models’ discrimination capacity was measured with ROC plots. Models’ covariation was measured with a spatial Spearman correlation. Four modelling techniques were used (Bioclim, Mahalanobis distance, GARP and Maxent). The results of this comparison showed that there were no significant differences between models generated using remotely sensed imagery or ground-based data. However, the models built with satellite imagery data exhibited a larger diversity of values, probably related to the higher spatial resolution of the satellite imagery. Satellite imagery can produce accurate ENM, independently of the modelling technique or the dataset used. Therefore, biogeographical analysis of species distribution in remote areas can be accurately developed using only variables from satellite imagery.
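    The two evaluation metrics in this record, ROC discrimination and rank correlation between suitability maps, are simple to compute. A minimal sketch (tie handling omitted; variable names invented):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.
    labels: 1 = presence, 0 = absence. Assumes no tied scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def spearman(a, b):
    """Spearman rank correlation between two suitability maps (flattened);
    no tie correction."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]
```

    `auc` scores each model's discrimination against presence/absence records, while `spearman` measures how similarly two models (e.g., satellite-based vs. ground-based) rank the same grid cells.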

  10. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  11. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

    DOSDIM was developed to assess the impact on man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, in contrast to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs

  12. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
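    The core of the approach above is scoring a gesture sequence against an expert-trained HMM by its log-likelihood. A self-contained sketch with a tiny discrete HMM (the states, symbols and probabilities below are invented; the study trained on real kinematic data):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM.
    pi: initial state probs, A: transitions, B[state, symbol]: emissions."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# toy "expert model": states {smooth, hesitant}, symbols = quantised tool motion
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],   # smooth state: mostly fluent-motion symbols
              [0.1, 0.3, 0.6]])  # hesitant state: mostly slow/jerky symbols

expert_seq = [0, 0, 1, 0, 0, 0, 1, 0]
novice_seq = [2, 2, 1, 2, 2, 2, 1, 2]
# per-symbol log-likelihood under the expert model: higher = more expert-like
expert_score = forward_loglik(expert_seq, pi, A, B) / len(expert_seq)
novice_score = forward_loglik(novice_seq, pi, A, B) / len(novice_seq)
```

    In the paper's scheme, such likelihood-based scores against the expert reference model are what separate experienced from novice surgeons.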

  13. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  14. Intriguing model significantly reduces boarding of psychiatric patients, need for inpatient hospitalization.

    Science.gov (United States)

    2015-01-01

    As new approaches to the care of psychiatric emergencies emerge, one solution is gaining particular traction. Under the Alameda model, which has been put into practice in Alameda County, CA, patients who are brought to regional EDs with emergency psychiatric issues are quickly transferred to a designated emergency psychiatric facility as soon as they are medically stabilized. This alleviates boarding problems in area EDs while also quickly connecting patients with specialized care. With data in hand on the model's effectiveness, developers believe the approach could alleviate boarding problems in other communities as well. The model is funded through a billing code established by California's Medicaid program for crisis stabilization services. Currently, only 22% of the patients brought to the emergency psychiatric facility ultimately need to be hospitalized; the other 78% are able to go home or to an alternative situation. In a 30-day study of the model, involving five community hospitals in Alameda County, CA, researchers found that ED boarding times were as much as 80% lower than comparable ED averages, and that patients were stabilized at least 75% of the time, significantly reducing the need for inpatient hospitalization.

  15. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the large gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realization of the collected zakat fund, which reaches only three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and the quality of zakat management organizations have to be improved. This means a performance evaluation model is needed as a tool to evaluate performance. The construct is a performance evaluation model that can be implemented by zakat management organizations. Organizational performance evaluation with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO and TQM. This research uses an explanatory method and SEM/PLS as the data analysis tool. Data collection techniques are questionnaires, interviews and documentation. The result of this research shows that PI, BO and TQM, simultaneously and partially, have a significant effect on organizational performance.

  16. A Parallelized Pumpless Artificial Placenta System Significantly Prolonged Survival Time in a Preterm Lamb Model.

    Science.gov (United States)

    Miura, Yuichiro; Matsuda, Tadashi; Usuda, Haruo; Watanabe, Shimpei; Kitanishi, Ryuta; Saito, Masatoshi; Hanita, Takushi; Kobayashi, Yoshiyasu

    2016-05-01

    An artificial placenta (AP) is an arterio-venous extracorporeal life support system that is connected to the fetal circulation via the umbilical vasculature. Previously, we published an article describing a pumpless AP system with a small priming volume. We subsequently developed a parallelized system, hypothesizing that the reduced circuit resistance conveyed by this modification would enable healthy fetal survival time to be prolonged. We conducted experiments using a premature lamb model to test this hypothesis. As a result, the fetal survival period was significantly prolonged (60.4 ± 3.8 vs. 18.2 ± 3.2 h, P lamb fetuses to survive for a significantly longer period when compared with previous studies. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals Inc.

  17. Food restriction alters salivary cortisol and α-amylase responses to a simulated weightlifting competition without significant performance modification.

    Science.gov (United States)

    Durguerian, Alexandre; Filaire, Edith; Drogou, Catherine; Bougard, Clément; Chennaoui, Mounir

    2018-03-01

    The aim of this investigation was to evaluate the effect of a 6-day food restriction period on the physiological responses and performance of 11 high-level weightlifters. After a period of weight maintenance (T2), they were assigned into two groups depending on whether they lost (Diet group, n = 6) or maintained their body weight (Control group, n = 5) during the course of those 6 days. An evaluation of performance and the measurement of salivary cortisol concentrations and salivary α-amylase (sAA) activity were performed during a simulated weightlifting competition which took place at T2, after a 6-day period of food restriction (T3). Dietary data were collected using a 6-day diet record. We noted a 41.8% decrease in mean energy intake during the dietary restriction period, leading to a 4.34% weight loss for the Diet group. Dietary restriction did not modify absolute performance levels, whilst a significant improvement was noted for the Control group. Furthermore, we noted a response of decreased salivary cortisol and increased sAA activity to the simulated competition stress at T3 for the Diet group. These results may indicate that dietary reduction led to a dissociation of the hypothalamo-pituitary-adrenal axis and the sympatho-adreno-medullary system, which could impair training adaptations and absolute performance development.

  18. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces the Bayesian Network to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two ways of inference for the BN, probability prediction and probabilistic diagnosis, are used and some interesting conclusions are drawn, which could provide data support to make interventions for human error management in aviation safety.
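    The "leaky noisy MAX" elicitation mentioned above reduces, for binary nodes, to the leaky noisy-OR: each active cause independently fails to produce the effect with probability 1 − p_i, and a leak term covers unmodelled causes. A minimal sketch (the causal factors and probabilities are invented, not taken from the 484-accident analysis):

```python
def leaky_noisy_or(active_causes, link_probs, leak):
    """P(effect) under a leaky noisy-OR, the binary special case of the
    leaky noisy-MAX. link_probs[i] = P(effect | only cause i active);
    leak = P(effect | no modelled cause active)."""
    q = 1.0 - leak
    for active, p in zip(active_causes, link_probs):
        if active:
            q *= 1.0 - p
    return 1.0 - q

# hypothetical causal factors for a crew-performance failure node:
# (fatigue, poor CRM communication, inadequate training)
link = [0.30, 0.50, 0.40]
p_fail = leaky_noisy_or([1, 1, 0], link, leak=0.02)
```

    The appeal for Delphi elicitation is that experts only supply one probability per cause plus a leak, instead of a full exponential-size conditional probability table.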

  19. Modelling fuel cell performance using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Ogaji, S.O.T.; Singh, R.; Pilidis, P.; Diacakis, M. [Power Propulsion and Aerospace Engineering Department, Centre for Diagnostics and Life Cycle Costs, Cranfield University (United Kingdom)

    2006-03-09

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the amount of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. On the other hand, the sphere of application of artificial neural networks has widened, covering such endeavours as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). Artificial neural networks have been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, they can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various network design parameters such as the network size, training algorithm and activation functions, and their effects on the effectiveness of the performance modelling, are discussed. Results from the analysis as well as the limitations of the approach are presented and discussed. (author)

  20. Modelling fuel cell performance using artificial intelligence

    Science.gov (United States)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the amount of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. On the other hand, the sphere of application of artificial neural networks has widened, covering such endeavours as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). Artificial neural networks have been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, they can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various network design parameters such as the network size, training algorithm and activation functions, and their effects on the effectiveness of the performance modelling, are discussed. Results from the analysis as well as the limitations of the approach are presented and discussed.
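    The surrogate-modelling idea in these two records can be sketched with a tiny fully-connected network trained on synthetic "stack" data. Everything below, the voltage function, noise level, network size and learning rate, is an invented stand-in for the real operating-condition data the paper used:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for test data: cell voltage as a smooth function of
# current density (A/cm^2) and operating temperature (K), plus noise
X = rng.uniform([0.1, 320.0], [1.2, 360.0], size=(200, 2))
y = 1.0 - 0.35 * X[:, 0] + 0.002 * (X[:, 1] - 340.0) + 0.02 * rng.standard_normal(200)

# normalise inputs; one tanh hidden layer trained by full-batch gradient descent
mu, sd = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sd
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(Xn @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    err = pred - y                          # squared-error gradients
    gW2 = H.T @ err[:, None] / len(y)       # (factor 2 folded into lr)
    gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)
    gW1 = Xn.T @ dH / len(y)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
mse = float(np.mean((pred - y) ** 2))
```

    Once trained, such a surrogate evaluates in microseconds, which is what makes it attractive for assessing "a large number of alternative options in the least possible time".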

  1. Relevance and clinical significance of serum resistin level in obese T2DM rhesus monkey models.

    Science.gov (United States)

    Qi, S-D; He, Z-L; Chen, Y; Ma, J; Yu, W-H; Li, Y-Y; Yang, F-M; Wang, J-B; Chen, L-X; Zhao, Y; Lu, S-Y

    2015-09-01

    Resistin is a type of hormone-like adipocytokine secreted specifically by adipocytes. It may be a key factor in the development of type 2 diabetes mellitus (T2DM) from obesity-associated insulin resistance, given results showing a close relationship with insulin resistance in rodents. We utilized rhesus monkeys as study subjects to preliminarily test the association with glucose metabolism and to conduct a correlation analysis between clinical parameters and serum resistin levels in obese rhesus monkey models of T2DM. The results suggested that resistin was significantly increased in T2DM monkeys (P insulin (FPI) and glycated hemoglobin (HbA1c), insulin resistance index (HOMA-IR), but a negative correlation with islet β-cell function (HOMA-β). In the course of glucose metabolism, a reverse release change of resistin and insulin occurred in T2DM monkeys, but this phenomenon was not observed in the control group. These findings indicated that resistin negatively regulates and interferes with carbohydrate metabolism in T2DM monkey models. The character of the releasing change of resistin might be a unique process in T2DM. Therefore, all of the results could provide references for clinical diagnostic criteria for human cases of T2DM, and could have clinical significance for obese T2DM diagnosis and the degree of insulin resistance. © Georg Thieme Verlag KG Stuttgart · New York.
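    The HOMA indices cited in this record follow the standard Matthews et al. formulas; a minimal sketch (the sample values in the test are illustrative, not from the monkey data):

```python
def homa_indices(fpg_mmol_l, fpi_micro_u_ml):
    """Homeostasis Model Assessment (Matthews et al.): insulin resistance
    (HOMA-IR) and beta-cell function (HOMA-beta) from fasting plasma
    glucose (mmol/L) and fasting plasma insulin (microU/mL)."""
    homa_ir = fpg_mmol_l * fpi_micro_u_ml / 22.5
    homa_beta = 20.0 * fpi_micro_u_ml / (fpg_mmol_l - 3.5)
    return homa_ir, homa_beta
```

    These are the quantities the study correlated against serum resistin (positively for HOMA-IR, negatively for HOMA-β).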

  2. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
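    The concordance between predicted risk scores and actual survival times discussed above is usually quantified with Harrell's C-index. A minimal sketch (O(n^2) pair scan, ties in risk counted as 1/2; function name invented):

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index: among usable pairs (the earlier failure observed),
    the fraction where the patient who failed first had the higher risk."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    risk = np.asarray(risk, dtype=float)
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:           # censored patients cannot anchor a pair
            continue
        for j in range(n):
            if times[j] > times[i]:  # i failed before j: a usable pair
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

    A C-index of 1.0 is perfect risk ranking, 0.5 is no better than chance, which is why conflicting metrics (as in the Results above) are worth checking against it.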

  3. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can emit to the environment effluents containing radionuclides, which are dispersed and/or accumulate in the atmosphere, the terrestrial surface and surface waters. Evaluations of radiological impact require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available, nor is it possible to measure them, so carrying out the evaluation requires an extensive search of the literature for the possible values of each parameter under conditions similar to those of the object of study; this work can be extensive. This paper describes the characteristics of the VALORA database system, developed with the purpose of organizing and automating significant information that appears in different sources (scientific or technical literature) on the parameters used in modelling the behavior of pollutants in the environment, and on the values assigned to these parameters in the evaluation of potential radiological impact. VALORA allows the consultation and selection of the parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is a component of a group of computational tools whose objective is to help solve dispersion and pollutant transfer models. (Author)

  4. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
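    The modelling step described above, logistic regression of default on micro- and macroeconomic covariates, can be sketched on synthetic data. The covariates, coefficients and sample below are invented stand-ins for the study's 10 349 Taiwanese observations; the fit uses standard iteratively reweighted least squares (IRLS):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# hypothetical covariates in the spirit of the study: a credit-risk index
# (1 = best grade), firm asset growth, and a shared macro factor (GDP growth)
tcri = rng.integers(1, 10, n).astype(float)
asset_growth = rng.normal(0.05, 0.10, n)
gdp_growth = rng.normal(0.03, 0.02, n)

true_logit = -4.0 + 0.45 * tcri - 3.0 * asset_growth - 25.0 * gdp_growth
default = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# logistic regression fitted by IRLS (Newton's method on the log-likelihood)
X = np.column_stack([np.ones(n), tcri, asset_growth, gdp_growth])
w = np.zeros(4)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    W = p * (1.0 - p)                              # Bernoulli variance weights
    H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(4)  # Fisher information (ridged)
    w = w + np.linalg.solve(H, X.T @ (default - p))
```

    A worse credit grade should carry a positive coefficient and favourable GDP growth a negative one, mirroring the paper's finding that macro variables add explanatory power.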

  5. Longitudinal modeling in sports: young swimmers' performance and biomechanics profile.

    Science.gov (United States)

    Morais, Jorge E; Marques, Mário C; Marinho, Daniel A; Silva, António J; Barbosa, Tiago M

    2014-10-01

    New theories about dynamical systems highlight the multi-factorial interplay between determinant factors to achieve higher sports performances, including in swimming. Longitudinal research does provide useful information on the sportsmen's changes and on how training helps them to excel. These questions may be addressed in one single procedure such as latent growth modeling. The aim of the study was to model a latent growth curve of young swimmers' performance and biomechanics over a season. Fourteen boys (12.33 ± 0.65 years-old) and 16 girls (11.15 ± 0.55 years-old) were evaluated. Performance, stroke frequency, speed fluctuation, arm's propelling efficiency, active drag, active drag coefficient and power to overcome drag were collected in four different moments of the season. Latent growth curve modeling was computed to understand the longitudinal variation of performance (endogenous variables) over the season according to the biomechanics (exogenous variables). Latent growth curve modeling showed a high inter- and intra-subject variability in the performance growth. Gender had a significant effect at the baseline and during the performance growth. In each evaluation moment, different variables had a meaningful effect on performance (M1: Da, β = -0.62; M2: Da, β = -0.53; M3: η(p), β = 0.59; M4: SF, β = -0.57; all P < .001). The models' goodness-of-fit was 1.40 ⩽ χ(2)/df ⩽ 3.74 (good-reasonable). Latent modeling is a comprehensive way to gather insight about young swimmers' performance over time. Different variables were the main responsible for the performance improvement. A gender gap, intra- and inter-subject variability was verified. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Phasic firing in vasopressin cells: understanding its functional significance through computational models.

    Directory of Open Access Journals (Sweden)

    Duncan J MacGregor

    Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic-input-driven spike-firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism, involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when driven by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells which lacked the ability to fire phasically. We then studied how these two populations differed in the way they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, whereas phasic cells respond in a way that is independent of background levels, and show a similar strong linearization of the response
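    The mechanism described above (a slow activity-dependent DAP opposed by slower dynorphin feedback, on top of an integrate-and-fire cell driven by random input) can be sketched as a toy simulation; all parameters below are illustrative stand-ins, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 1.0, 50_000                # ms; 50 s of simulated time
v_rest, v_thresh = -62.0, -50.0          # mV (illustrative, not fitted, values)

v, dap, dyn = v_rest, 0.0, 0.0           # voltage, DAP and dynorphin traces
spikes = []
for i in range(n_steps):
    syn = rng.normal(0.0, 2.0)           # random perturbations mimicking synaptic input
    drive = dap * max(0.0, 1.0 - dyn)    # dynorphin inactivates the DAP
    v += dt * (-(v - v_rest) / 20.0 + drive) + syn
    dap *= np.exp(-dt / 2000.0)          # slow DAP decay (~2 s)
    dyn *= np.exp(-dt / 10000.0)         # slower dynorphin decay (~10 s)
    if v >= v_thresh:                    # integrate-and-fire spike and reset
        spikes.append(i * dt)
        v = v_rest
        dap += 0.02                      # activity-dependent DAP build-up
        dyn += 0.004                     # activity-dependent dynorphin build-up

print(f"{len(spikes)} spikes in {n_steps * dt / 1000:.0f} s")
```

    With the DAP dominating at first and dynorphin slowly catching up, spike clusters alternate with quieter periods, the qualitative signature of phasic firing.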

  7. High Performance Modeling of Novel Diagnostics Configuration

    Science.gov (United States)

    Smith, Dalton; Gibson, John; Lodes, Rylie; Malcolm, Hayden; Nakamoto, Teagan; Parrack, Kristina; Trujillo, Christopher; Wilde, Zak; Los Alamos Laboratories Q-6 Students Team

    2017-06-01

    A novel diagnostics method to measure the Hayes Electric Effect was tested and verified against computational models. Where standard PVDF diagnostics utilize piezoelectric materials to measure detonation pressure through strain-induced electrical signals, here the PVDF was used in a novel way by also detecting the detonation's induced electric field. The ALE-3D hydrocode predicted the performance by calculating detonation velocities, pressures, and arrival times. These theoretical results then validated the experimental use of the PVDF repurposed specifically to track the Hayes Electric Effect. Los Alamos National Laboratories Q-6.

  8. A Combat Mission Team Performance Model: Development and initial Application

    National Research Council Canada - National Science Library

    Silverman, Denise

    1997-01-01

    ... realistic combat scenarios. We present a conceptual model of team performance measurement in which aircrew coordination, team performance, mission performance and their interrelationships are operationally defined...

  9. Lack of significant associations with early career performance suggests no link between the DMRT3 "Gait Keeper" mutation and precocity in Coldblooded trotters.

    Directory of Open Access Journals (Sweden)

    Kim Jäderkvist Fegraeus

    The Swedish-Norwegian Coldblooded trotter (CBT) is a local breed in Sweden and Norway mainly used for harness racing. Previous studies have shown that a mutation from cytosine (C) to adenine (A) in the doublesex and mab-3 related transcription factor 3 (DMRT3) gene has a major impact on the harness racing performance of different breeds. An association of the DMRT3 mutation with early career performance has also been suggested. The aim of the current study was to investigate this proposed association in a randomly selected group of CBTs. 769 CBTs (485 raced, 284 unraced) were genotyped for the DMRT3 mutation. The association with racing performance was investigated for 13 performance traits and three different age intervals: 3 years, 3 to 6 years, and 7 to 10 years of age, using the statistical software R. Each performance trait was analyzed for association with DMRT3 using linear models. The results suggest no association of the DMRT3 mutation with precocity (i.e. performance at 3 years of age). Only two traits (race time and number of disqualifications) were significantly different between the genotypes, with AA horses having the fastest times and CC horses having the highest number of disqualifications at 3 years of age. The frequency of the AA genotype was significantly lower in the raced CBT sample than in the unraced sample, and less than 50% of the AA horses participated in a race. For the age intervals 3 to 6 and 7 to 10 years, the AA horses also failed to demonstrate significantly better performance than the other genotypes. Although suggested as the most favorable genotype for racing performance in Standardbreds and Finnhorses across all ages, the AA genotype does not appear to be associated with superior performance, early or late, in the racing career of CBTs.
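    The per-trait analysis above (a linear model of each performance trait on DMRT3 genotype, run in R) can be sketched as follows; the additive genotype coding, sample size and effect size are invented for illustration, not the CBT data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Hypothetical additive coding of the DMRT3 genotype: CC=0, CA=1, AA=2.
genotype = rng.integers(0, 3, n).astype(float)
# Simulated trait (e.g. race time per km, s) with an invented genotype effect.
race_time = 80.0 - 0.8 * genotype + rng.normal(0.0, 3.0, n)

# Ordinary least squares, the equivalent of lm(race_time ~ genotype) in R.
X = np.column_stack([np.ones(n), genotype])
beta = np.linalg.lstsq(X, race_time, rcond=None)[0]
resid = race_time - X @ beta
se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
print(f"genotype effect {beta[1]:.2f} s per A allele (t = {beta[1] / se:.1f})")
```

    A negative fitted slope corresponds to AA horses having the fastest times, as reported for race time at 3 years of age.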

  10. Brake response time is significantly impaired after total knee arthroplasty: investigation of performing an emergency stop while driving a car.

    Science.gov (United States)

    Jordan, Maurice; Hofmann, Ulf-Krister; Rondak, Ina; Götze, Marco; Kluba, Torsten; Ipach, Ingmar

    2015-09-01

    The objective of this study was to investigate whether total knee arthroplasty (TKA) impairs the ability to perform an emergency stop. An automatic-transmission brake simulator was developed to evaluate total brake response time. A prospective repeated-measures design was used. Forty patients (20 left/20 right) were measured 8 days and 6, 12, and 52 wks after surgery. Eight days postoperatively, total brake response time had increased significantly by 30% in right TKA and insignificantly by 2% in left TKA. Brake force decreased significantly by 35% in right TKA and by 25% in left TKA during this period. Baseline values were reached at week 12 in right TKA; the impairment of outcome measures, however, was no longer significant at week 6 compared with preoperative values. Total brake response time and brake force in left TKA fell below baseline values at weeks 6 and 12. Brake force in left TKA was the only outcome measure significantly impaired 8 days postoperatively. This study highlights that categorical statements cannot be provided. Its findings on automatic-transmission driving suggest that right TKA patients may resume driving 6 wks postoperatively. Fitness to drive in left TKA is not fully recovered 8 days postoperatively. If testing is not available, patients should refrain from driving until they return from rehabilitation.

  11. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate each would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  12. Models and Mechanisms of Acquired Antihormone Resistance in Breast Cancer: Significant Clinical Progress Despite Limitations

    Science.gov (United States)

    Sweeney, Elizabeth E.; McDaniel, Russell E.; Maximov, Philipp Y.; Fan, Ping; Jordan, V. Craig

    2012-01-01

    Translational research for the treatment and prevention of breast cancer depends upon the four Ms: models, molecules, and mechanisms in order to create the fourth, medicines. The process of targeting the estrogen receptor (ER) in estrogen-dependent breast cancer has yielded significant advances in patient survivorship and the first approved medicines (tamoxifen and raloxifene) to reduce the incidence of any cancer in high- or low-risk women. This review focuses on the critical role of the few ER-positive cell lines (MCF-7, T47D, BT474, ZR-75) that continue to advance our understanding of the estrogen-regulated biology of breast cancer. More importantly, these model cell lines have provided an opportunity to document the development and evolution of acquired antihormone resistance. The description of this evolutionary process, which occurs in micrometastatic disease during up to a decade of adjuvant therapy, would not be possible in the patient. The use of the MCF-7 breast cancer cell line in particular has been instrumental in discovering a vulnerability of ER-positive breast cancer exhaustively treated with antihormone therapy: physiologic estradiol acts as an apoptotic trigger to cause tumor regression. These unanticipated laboratory findings have translated into clinical advances in our knowledge of the paradoxical role of estrogen in the life and death of breast cancer. PMID:23308083

  13. Analysis of significance of environmental factors in landslide susceptibility modeling: Case study Jemma drainage network, Ethiopia

    Directory of Open Access Journals (Sweden)

    Vít Maca

    2017-06-01

    The aim of this paper is to describe a methodology for calculating the significance of environmental factors in landslide susceptibility modeling and to present the results of applying it. Part of the Jemma basin in the Ethiopian Highlands, a locality highly affected by mass-movement processes, is used as the study area. In the first part, all major factors and their influence are described briefly. The majority of the work focuses on a review of methodologies used in other susceptibility models and the design of our own methodology. Unlike most methods in use, this method is completely objective, so it is not possible to intervene in the results. All inputs and outputs of the method are described, as well as all stages of the calculations, and the results are illustrated with specific examples. In the study area, the most important factor for landslide susceptibility is slope, while the least important is land cover. Finally, a landslide susceptibility map is created. The article closes with a discussion of the results and possible improvements to the methodology.

  14. Multilevel linear modelling of the response-contingent learning of young children with significant developmental delays.

    Science.gov (United States)

    Raab, Melinda; Dunst, Carl J; Hamby, Deborah W

    2018-02-27

    The purpose of the study was to isolate the sources of variation in the rates of response-contingent learning among young children with multiple disabilities and significant developmental delays randomly assigned to contrasting types of early childhood intervention. Multilevel, hierarchical linear growth curve modelling was used to analyze four different measures of child response-contingent learning, where repeated child learning measures were nested within individual children (Level-1), children were nested within practitioners (Level-2), and practitioners were nested within the contrasting types of intervention (Level-3). Findings showed that sources of variation in rates of child response-contingent learning were associated almost entirely with type of intervention after the variance associated with differences in practitioners nested within groups was accounted for. Rates of child learning were greater among children whose existing behaviour was used as the building block for promoting child competence (asset-based practices) compared to children for whom the focus of intervention was promoting child acquisition of missing skills (needs-based practices). The methods of analysis illustrate a practical approach to clustered data analysis and to presenting results in ways that highlight sources of variation in the rates of response-contingent learning among young children with multiple disabilities and significant developmental delays. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
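    The three-level nesting described above (repeated measures within children, children within practitioners, practitioners within intervention type) can be illustrated with simulated data and a crude variance decomposition; the group counts and effect sizes below are invented, and a real analysis would fit a mixed-effects model rather than compare raw variances:

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented three-level structure: 4 repeated measures per child, 6 children
# per practitioner, 5 practitioners per intervention type, 2 types.
n_types, n_pract, n_child, n_obs = 2, 5, 6, 4
type_eff = np.array([0.0, 1.2])                       # intervention-level effect
pract_eff = rng.normal(0.0, 0.3, (n_types, n_pract))  # practitioner deviations
child_eff = rng.normal(0.0, 0.2, (n_types, n_pract, n_child))
noise = rng.normal(0.0, 0.5, (n_types, n_pract, n_child, n_obs))
y = (type_eff[:, None, None, None] + pract_eff[:, :, None, None]
     + child_eff[:, :, :, None] + noise)

# Crude decomposition: how much of the total variance sits between the two
# intervention types (a stand-in for the Level-3 variance component).
between_type = y.mean(axis=(1, 2, 3)).var()
share = between_type / y.var()
print(f"share of variance between intervention types: {share:.2f}")
```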

  15. Significant manipulation of output performance of a bridge-structured spin valve magnetoresistance sensor via an electric field

    Science.gov (United States)

    Zhang, Yue; Yan, Baiqian; Ou-Yang, Jun; Wang, Xianghao; Zhu, Benpeng; Chen, Shi; Yang, Xiaofei

    2016-01-01

    Through principles of the spin-valve giant magnetoresistance (SV-GMR) effect and its application in magnetic sensors, we have investigated electric-field control of the output performance of a bridge-structured Co/Cu/NiFe/IrMn SV-GMR sensor on a PZN-PT piezoelectric substrate using micro-magnetic simulation. We centered on the influence of the variation of the uniaxial magnetic anisotropy constant (K) of Co on the output of the bridge; K was manipulated via the stress of Co, which is generated from the strain of the piezoelectric substrate under an electric field. The results indicate that when K varies between 2 × 10^4 J/m^3 and 10 × 10^4 J/m^3, the output performance can be significantly manipulated: the linear range alters from between -330 Oe and 330 Oe to between -650 Oe and 650 Oe, and the sensitivity is tuned by a factor of almost 7, making it possible to measure magnetic fields with very different ranges. According to the converse piezoelectric effect, this variation of K can be realized by applying an electric field of about 2-20 kV/cm to a PZN-PT piezoelectric substrate, which is realistic in application. This result means that electric-field control of the SV-GMR effect has potential application in developing SV-GMR sensors with improved performance.

  16. Some useful characteristics of performance models

    International Nuclear Information System (INIS)

    Worledge, D.H.

    1985-01-01

    This paper examines the demands placed upon models of human cognitive decision processes in application to Probabilistic Risk Assessment. Successful models for this purpose should: 1) be based on proven or plausible psychological knowledge, e.g., Rasmussen's mental schematic; 2) incorporate opportunities for slips; 3) take account of the recursive nature, in time, of corrections to mistaken actions; and 4) depend on the crew's predominant mental states that accompany such recursions. The latter is equivalent to an explicit coupling between input and output of Rasmussen's mental schematic. A family of such models is proposed, with observable rate processes mediating the (conscious) mental states involved. It is expected that the cumulative probability distributions corresponding to the individual rate processes can be identified with probability-time correlations of the HCR (Human Cognitive Reliability) type discussed elsewhere in this session. The functional forms of the conditional rates are intuitively shown to have simple characteristics that lead to a strongly recursive stochastic process with significant predictive capability. Models of the type proposed have few parts and form a representation that is intentionally far short of a fully transparent exposition of the mental process, in order to avoid making impossible demands on data.

  17. Urban Growth Causes Significant increase in Extreme Rainfall - A modelling study

    Science.gov (United States)

    Pathirana, Assela

    2010-05-01

    The world's urban centers are growing rapidly, causing the impact of extreme rainfall events to be felt much more severely due to relatively well-understood phenomena like decreased infiltration and flow resistance. However, an increasing body of evidence (e.g. the heavy rainfall event observed at Nerima, in the central part of the Tokyo metropolitan area, on 21 July 1999) suggests that extreme rainfall, the driving force itself, increases as a result of the microclimatic changes due to urban growth. Urban heat islands (UHI), due to the heat anomalies of urban sprawl, act as virtual mountains, resulting in a local atmosphere more conducive to heavy rainfall. In this study, we employ a popular mesoscale atmospheric model to numerically simulate UHI-induced rainfall enhancement. Initial idealized experiments conducted under tropical atmospheric conditions indicated that changes in land use due to significant urban growth will indeed cause more intense rainfall events. This is largely due to increased convective breakup, creating a favourable situation for convective cloud systems. Five historical heavy rainfall events that caused floods in five urban centres (Dhaka, Mumbai, Colombo, Lyon and Taipei) were selected from historical records. Numerical simulations were set up to ascertain what the amount of rainfall would have been if the same large-scale atmospheric situations (forcings) had occurred under a hypothetical situation of doubled urbanization for these events. Significant increases (up to 50%) of extreme rainfall were indicated for many of the events. Under major assumptions, these simulations were used to estimate the anticipated changes in the Intensity-Duration-Frequency (IDF) relationships. The magnitude of the 30-min event with a 25-year return period increased by about 20 percent. Without considering any changes in external forcing, urban growth alone could cause a very significant increase in local rainfall.

  18. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. To fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of resting on the authoritativeness of some scholar or on an episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  19. Colchicine application significantly affects plant performance in the second generation of synthetic polyploids and its effects vary between populations.

    Science.gov (United States)

    Münzbergová, Zuzana

    2017-08-01

    Understanding the direct consequences of polyploidization is necessary for assessing the evolutionary significance of this mode of speciation. Previous studies have not examined the degree of between-population variation in these effects. Although it is assumed that the effects of the substances used to create synthetic polyploids disappear in second-generation synthetic polyploids, this has not been tested. The direct consequences of polyploidization were assessed and separated from the effects of subsequent evolution in Vicia cracca, a naturally occurring species with diploid and autotetraploid cytotypes. Synthetic tetraploids were created from diploids of four mixed-ploidy populations. Performance of natural diploids and tetraploids was compared with that of synthetic tetraploids. Diploid offspring of the synthetic tetraploid mothers were also included in the comparison. In this way, the effects of colchicine application in the maternal generation on offspring performance could be compared independently of the effects of polyploidization. The sizes of seeds and stomata were primarily affected by cytotype, while plant performance differed between natural and synthetic polyploids. Most performance traits were also determined by colchicine application to the mothers, and most of these results were largely population specific. Because the consequences of colchicine application are still apparent in the second generation of the plants, at least third-generation polyploids should be considered in future comparisons. The specificities of the colchicine-treated plants may also be caused by strong selection pressures during the creation of synthetic polyploids. This could be tested by comparing the initial sizes of plants that survived the colchicine treatments with those of plants that did not. High variation between populations also suggests that different polyploids follow different evolutionary trajectories, and this should be considered when

  20. Human performance modeling for system of systems analytics: soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P, 1995). To this end, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC), established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex, network-centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, it addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  1. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the collection and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance-shaping factors (i.e. workload). 3 refs., 1 tab

  2. Cyclosporin A significantly improves preeclampsia signs and suppresses inflammation in a rat model.

    Science.gov (United States)

    Hu, Bihui; Yang, Jinying; Huang, Qian; Bao, Junjie; Brennecke, Shaun Patrick; Liu, Huishu

    2016-05-01

    Preeclampsia is associated with an increased inflammatory response. Immune suppression might be an effective treatment. The aim of this study was to examine whether Cyclosporin A (CsA), an immunosuppressant, improves the clinical characteristics of preeclampsia and suppresses inflammation in a lipopolysaccharide (LPS) induced preeclampsia rat model. Pregnant rats were randomly divided into 4 groups: group 1 (PE) rats each received LPS via the tail vein on gestational day (GD) 14; group 2 (PE+CsA5) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and were then treated with CsA (5 mg/kg, ip) on GDs 16, 17 and 18; group 3 (PE+CsA10) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and were then treated with CsA (10 mg/kg, ip) on GDs 16, 17 and 18; group 4 (pregnant control, PC) rats were treated with the vehicle (saline) used for groups 1, 2 and 3. Systolic blood pressure, urinary albumin, biometric parameters and the levels of serum cytokines were measured on day 20. CsA treatment significantly reduced LPS-induced systolic blood pressure and the mean 24-h urinary albumin excretion. The pro-inflammatory cytokines IL-6, IL-17, IFN-γ and TNF-α were increased in the LPS treatment group but were reduced in the LPS+CsA groups (P < 0.05). CsA treatment improved preeclampsia signs and attenuated inflammatory responses in the LPS-induced preeclampsia rat model, which suggests that immunosuppression might be an alternative management option for preeclampsia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.
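    A minimal sketch of the first model's valve-flow treatment described above (isentropic orifice flow augmented by a discharge coefficient, including choking at the critical pressure ratio); the discharge coefficient, pressures and valve area below are assumed illustrative values, not the thesis's calibrated ones:

```python
import math

def valve_mass_flow(p0, T0, p_down, area, cd=0.8, gamma=1.4, R=287.0):
    """Isentropic orifice mass flow (kg/s) with a discharge coefficient cd.

    p0, T0: upstream stagnation pressure (Pa) and temperature (K);
    p_down: downstream static pressure (Pa); area: effective flow area (m^2).
    """
    # Below the critical pressure ratio the flow chokes: further lowering
    # the downstream pressure no longer increases the mass flow.
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    pr = max(p_down / p0, pr_crit)
    flux = math.sqrt(2.0 * gamma / ((gamma - 1.0) * R * T0)
                     * (pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma)))
    return cd * area * p0 * flux

# Intake example: 1 atm plenum into a 0.6 bar cylinder through 3 cm^2.
mdot = valve_mass_flow(p0=101_325.0, T0=300.0, p_down=60_000.0, area=3e-4)
print(f"mass flow: {mdot * 1000:.1f} g/s")
```

    Comparing this relation against a time-accurate Q1D solution is exactly the kind of test that exposes the fluid mechanics the isentropic model neglects.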

  4. Kernel density surface modelling as a means to identify significant concentrations of vulnerable marine ecosystem indicators.

    Directory of Open Access Journals (Sweden)

    Ellen Kenchington

    The United Nations General Assembly Resolution 61/105, concerning sustainable fisheries in the marine ecosystem, calls for the protection of vulnerable marine ecosystems (VMEs) from destructive fishing practices. Subsequently, the Food and Agriculture Organization (FAO) produced guidelines for the identification of VME indicator species/taxa to assist in the implementation of the resolution, but recommended the development of case-specific operational definitions for their application. We applied kernel density estimation (KDE) to research vessel trawl survey data from inside the fishing footprint of the Northwest Atlantic Fisheries Organization (NAFO) Regulatory Area in the high seas of the northwest Atlantic to create biomass density surfaces for four VME indicator taxa: large-sized sponges, sea pens, and small and large gorgonian corals. These VME indicator taxa were identified previously by NAFO using the fragility, life history characteristics and structural complexity criteria presented by the FAO, along with an evaluation of their recovery trajectories. KDE, a non-parametric, neighbour-based smoothing function, has been used previously in ecology to identify hotspots, that is, areas of relatively high biomass/abundance. We present a novel approach of examining relative changes in the area under polygons created by encircling successive biomass categories on the KDE surface to identify "significant concentrations" of biomass, which we equate to VMEs. This allows identification of the VMEs from the broader distribution of the species in the study area. We provide independent assessments of the VMEs so identified using underwater images, benthic sampling with other gear types (dredges, cores), and/or published species distribution models of probability of occurrence, as available. For each VME indicator taxon we provide a brief review of its ecological function, which will be important in future assessments of significant adverse impact on these habitats here
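    The density-surface step can be sketched with `scipy.stats.gaussian_kde`, using station biomass as weights and examining the area enclosed by successive density thresholds; the station layout and biomass values below are simulated, not NAFO survey data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Hypothetical trawl stations: positions (km) and sponge biomass (kg) per tow,
# with one dense cluster of high-biomass stations around (5, 5).
xy = np.concatenate([rng.normal(0, 1, (2, 200)), rng.normal(5, 1, (2, 200))], axis=1)
biomass = np.concatenate([rng.gamma(2.0, 1.0, 200), rng.gamma(2.0, 5.0, 200)])

kde = gaussian_kde(xy, weights=biomass)   # biomass-weighted density surface

# Evaluate on a grid; the area above successive density thresholds shrinks as
# the threshold rises, and abrupt changes in that area delineate the
# "significant concentrations" the approach equates with VMEs.
gx, gy = np.meshgrid(np.linspace(-4, 9, 100), np.linspace(-4, 9, 100))
dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
cell = (13 / 99) ** 2                     # grid cell area, km^2
for q in (0.5, 0.9, 0.99):
    thr = np.quantile(dens, q)
    print(f"area above {q:.0%} density quantile: {(dens > thr).sum() * cell:.1f} km^2")
```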

  5. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  8. Radiogenic heat production variability of some common lithological groups and its significance to lithospheric thermal modeling

    Science.gov (United States)

    Vilà, M.; Fernández, M.; Jiménez-Munt, I.

    2010-07-01

    Determining the temperature distribution within the lithosphere requires knowledge of the radiogenic heat production (RHP) distribution within the crust and the lithospheric mantle. RHP of crustal rocks varies considerably at different scales as a result of the petrogenetic processes responsible for their formation, and therefore RHP depends on the considered lithologies. In this work we address the RHP variability of some common lithological groups from a compilation of a total of 2188 representative U, Th and K concentrations of different worldwide rock types derived from 102 published studies. To optimize the use of the generated RHP database, we have classified and renamed the rock-type denominations of the original works following a petrologic classification scheme with a hierarchical structure. The RHP data of each lithological group are presented in cumulative distribution plots, and we report a table with the mean, the standard deviation, the minimum and maximum values, and the significant percentiles of these lithological groups. We discuss the reported RHP distribution for the different igneous, sedimentary and metamorphic lithological groups from a petrogenetic viewpoint and give some useful guidelines to assign RHP values in lithospheric thermal modeling.
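
    Bulk RHP is conventionally computed from a rock's U, Th and K concentrations and its density. A minimal sketch using the widely cited Rybach (1976) coefficients (an assumption: the paper may use a different calibration; the granite-like test values are illustrative, not from the compilation):

```python
def radiogenic_heat_production(rho, c_u, c_th, c_k):
    """Bulk radiogenic heat production in microW/m^3 (Rybach-style relation).

    rho  : rock density in kg/m^3
    c_u  : uranium concentration in ppm
    c_th : thorium concentration in ppm
    c_k  : potassium concentration in wt%
    """
    return 1e-5 * rho * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k)
```

A typical granite (rho ≈ 2670 kg/m³, 4 ppm U, 15 ppm Th, 3.5 wt% K) then yields roughly 2.4 microW/m³, in the range usually quoted for felsic rocks.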

  9. Hidden symmetry in asymmetric morphology: significance of Hjortsjo's anatomical model in liver surgery.

    Science.gov (United States)

    Shindoh, Junichi; Satou, Shoichi; Aoki, Taku; Beck, Yoshifumi; Hasegawa, Kiyoshi; Sugawara, Yasuhiko; Kokudo, Norihiro

    2012-01-01

    Several studies have recently reappraised the liver classification proposed by Hjortsjo in the 1940s and reported it as a surgically relevant theory. However, its clinical relevance and significance in liver surgery have not yet been well documented. Three-dimensional (3D) simulations of the livers of 100 healthy donors for living donor liver transplantation were reviewed. The adequacy of Hjortsjo's model was evaluated using 3D simulations and its clinical relevance was demonstrated in donor surgery. Both portal and hepatic venous branches exhibited symmetrical configuration on either side of the Rex-Cantlie line on the 3D images. In terms of this symmetry, the right paramedian sector seemed to be subdivided into two longitudinal parts, namely the "ventral" and "dorsal" parts. Volume analysis revealed that these longitudinal parts occupied relatively large areas of the liver (15.7% and 20.9% of the whole liver for the ventral and dorsal parts, respectively). Postoperative CT imaging confirmed marked congestion and/or impaired regeneration of these areas due to deprivation of the middle or right hepatic veins. Considering the symmetry of intrahepatic vascular distributions and its clinical relevance, Hjortsjo's classification offers an important viewpoint for surgeons handling the liver based on both the portal and venous distributions.

  10. High-fat diet induces significant metabolic disorders in a mouse model of polycystic ovary syndrome.

    Science.gov (United States)

    Lai, Hao; Jia, Xiao; Yu, Qiuxiao; Zhang, Chenglu; Qiao, Jie; Guan, Youfei; Kang, Jihong

    2014-11-01

    Polycystic ovary syndrome (PCOS) is the most common female endocrinopathy associated with both reproductive and metabolic disorders. Dehydroepiandrosterone (DHEA) is currently used to induce a PCOS mouse model. High-fat diet (HFD) has been shown to cause obesity and infertility in female mice. The possible effect of an HFD on the phenotype of DHEA-induced PCOS mice is unknown. The aim of the present study was to investigate both reproductive and metabolic features of DHEA-induced PCOS mice fed a normal chow or a 60% HFD. Prepubertal C57BL/6 mice (age 25 days) on the normal chow or an HFD were injected (s.c.) daily with the vehicle sesame oil or DHEA for 20 consecutive days. At the end of the experiment, both reproductive and metabolic characteristics were assessed. Our data show that an HFD did not affect the reproductive phenotype of DHEA-treated mice. HFD feeding, however, caused significant metabolic alterations in DHEA-treated mice, including obesity, glucose intolerance, dyslipidemia, and pronounced liver steatosis. These findings suggest that HFD induces distinct metabolic features in DHEA-induced PCOS mice. The combined DHEA and HFD treatment may thus serve as a means of studying the mechanisms involved in metabolic derangements of this syndrome, particularly the high prevalence of hepatic steatosis in women with PCOS. © 2014 by the Society for the Study of Reproduction, Inc.

  11. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve prediction, we need to assess the predictive performance and stability of different SDMs. We collected distribution data for five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05). These four SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide the six SDMs into two categories: a high-performance and high-stability group comprising MAHAL, RF, MAXENT, and SVM, and a low-performance and low-stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
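
    The per-trial accuracy and stability metrics described above can be sketched in plain Python. The kappa function assumes binary presence/absence vectors, and the stability summary uses a normal-approximation 99% confidence interval (an assumption; the paper does not state its CI method):

```python
import math
from statistics import mean, stdev

def cohens_kappa(obs, pred):
    """Cohen's kappa for binary (0/1) presence/absence predictions."""
    n = len(obs)
    po = sum(o == p for o, p in zip(obs, pred)) / n          # observed agreement
    p_obs, p_pred = sum(obs) / n, sum(pred) / n
    pe = p_obs * p_pred + (1 - p_obs) * (1 - p_pred)          # chance agreement
    return (po - pe) / (1 - pe)

def stability_summary(scores, z=2.576):
    """Mean, SD, coefficient of variation and 99% CI of per-trial scores."""
    m, s = mean(scores), stdev(scores)
    half = z * s / math.sqrt(len(scores))
    return {"mean": m, "sd": s, "cv": s / m, "ci99": (m - half, m + half)}
```

Narrower `ci99` intervals and smaller `cv` across the 100 trials correspond to the "more stable" models in the abstract.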

  12. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

    Dengue is a global health problem and of concern to travelers and deploying military personnel with development and licensure of an effective tetravalent dengue vaccine a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
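
    As a toy illustration of the energy-dissipation property these discretizations target, consider a 1-D explicit-Euler step of the Allen-Cahn equation (a simplification: explicit Euler only dissipates the discrete free energy for sufficiently small time steps, whereas the schemes in the abstract guarantee it by construction):

```python
def allen_cahn_step(phi, dt, dx, eps):
    """One explicit-Euler step of the 1-D Allen-Cahn equation
    phi_t = eps^2 * phi_xx - (phi^3 - phi), periodic boundaries."""
    n = len(phi)
    new = []
    for i in range(n):
        lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx**2
        new.append(phi[i] + dt * (eps**2 * lap - (phi[i]**3 - phi[i])))
    return new

def free_energy(phi, dx, eps):
    """Discrete Ginzburg-Landau free energy; it should not increase in time."""
    n = len(phi)
    e = 0.0
    for i in range(n):
        grad = (phi[(i + 1) % n] - phi[i]) / dx
        e += (0.5 * eps**2 * grad**2 + 0.25 * (phi[i]**2 - 1)**2) * dx
    return e
```

Monitoring `free_energy` after each step is the discrete analogue of the energy-stability proofs referenced above.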

  14. Theoretical models regarding factors influencing switching regimes and the hydrological and erosional significance of hydrophobicity

    Science.gov (United States)

    Walsh, Rory; Urbanek, Emilia; Ferreira, Carla; Shakesby, Richard; Bento, Celia; Ferreira, Antonio

    2013-04-01

    The influence which soil hydrophobicity may have on hillslope hydrology and erosion in any location will depend on the proportion of storm events in which it is spatially contiguous. This in turn is dependent upon (a) the speed and three-dimensional pattern with which it disappears in wet weather and (b) the speed, three-dimensional pattern and degree of re-establishment of hydrophobicity in dry weather following hydrophilic or partially hydrophilic episodes. This paper draws upon results of laboratory and field investigations of changes through time in hydrophobicity, as well as recent advances in knowledge of switching mechanisms, to develop theory relating to hydrophobicity, its three-dimensional temporal dynamics and controls, and its influence on overland flow and slopewash. Particular attention is given to modelling temporal change following fire. Use is made of key findings from (1) a field study of changes over a 4.2-year period, January 2009 to March 2013, in hydrophobicity at two 10 m x 10 m grids (270 points, surface and 5 cm depth) on heather moorland in Central Portugal, where one grid was burned by an experimental fire in February 2009 and the other was an immediately adjacent unburned control; (2) a laboratory study of three-dimensional change in hydrophobicity with wetting (by an 8 mm simulated rainfall) and at different stages in an 80-hour drying phase of three different but initially equally hydrophobic soils, each comprising variants with and without artificial vertical routeways (simulated roots or linear cracks) and with or without drainage impedance at 2.5 cm depth. A series of theoretical models are presented addressing (1) factors and mechanisms influencing post-fire temporal change in hydrophobicity and (2) factors and mechanisms controlling the significance and temporal dynamics of hydrophobicity influence on overland flow and erosion (i) in unburned terrain and (ii) following fire. 
The field evidence from Portugal suggests a three

  15. Comparison of Far-field Noise for Three Significantly Different Model Turbofans

    Science.gov (United States)

    Woodward, Richard P.

    2008-01-01

    Far-field noise sound power level (PWL) spectra and overall sound pressure level (OASPL) directivities were compared for three significantly different model fan stages which were tested in the NASA Glenn 9- by 15-Foot Low-Speed Wind Tunnel. The test fans included the Advanced Ducted Propulsor (ADP) Fan1, the baseline Source Diagnostic Test (SDT) fan, and the Quiet High Speed Fan2 (QHSF2). These fans had design rotor tangential tip speeds from 840 to 1474 ft/s and stage pressure ratios from 1.29 to 1.82. Additional parameters included rotor-stator spacing, stator sweep, and downstream support struts. Acoustic comparison points were selected on the basis of stage thrust. Acoustic results for the low tip speed/low pressure ratio fan (ADP Fan1) were thrust-adjusted to show how a geometrically scaled version of this fan might compare at the higher design thrust levels of the other two fans. Lowest noise levels were typically observed for ADP Fan1 (which had a radial stator) and for the intermediate tip speed fan (Source Diagnostic Test, SDT, R4 rotor) with a swept stator. Projecting the ADP fan noise levels to the SDT swept stator configuration at design point conditions showed the fans to have similar noise levels. However, it is possible that the ADP fan could be 2 to 3 dB quieter with incorporation of a swept stator. Benefits of a scaled ADP fan include avoidance of multiple pure tones associated with transonic and higher blade tip speeds. Penalties of a larger ADP fan would include increased nacelle size and drag.

  16. Does the amount of tagged stool and fluid significantly affect the radiation exposure in low-dose CT colonography performed with an automatic exposure control?

    International Nuclear Information System (INIS)

    Lim, Hyun Kyong; Lee, Kyoung Ho; Kim, So Yeon; Kim, Young Hoon; Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Park, Seong Ho; Yanof, Jeffrey H.; Hwang, Seung-sik

    2011-01-01

    To determine whether the amount of tagged stool and fluid significantly affects the radiation exposure in low-dose screening CT colonography performed with an automatic tube-current modulation technique. The study included 311 patients. The tagging agent was barium (n = 271) or iodine (n = 40). Correlation was measured between mean volume CT dose index (CTDIvol) and the estimated x-ray attenuation of the tagged stool and fluid (ATT). Multiple linear regression analyses were performed to determine the effect of ATT on CTDIvol and the effect of ATT on image noise while adjusting for other variables including abdominal circumference. CTDIvol varied from 0.88 to 2.54 mGy. There was no significant correlation between CTDIvol and ATT (p = 0.61). ATT did not significantly affect CTDIvol (p = 0.93), while abdominal circumference was the only factor significantly affecting CTDIvol (p < 0.001). Image noise ranged from 59.5 to 64.1 HU. The p value for the regression model explaining the noise was 0.38. The amount of stool and fluid tagging does not significantly affect radiation exposure. (orig.)
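
    The core of such an analysis, correlating a dose index with tagging attenuation, reduces to a Pearson correlation coefficient. A stdlib-only sketch (variable names are illustrative, not taken from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # sqrt of SS_x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # sqrt of SS_y
    return sxy / (sx * sy)
```

A value of `pearson_r(ctdi_vol, att)` near zero, with a large p value, is what the abstract reports as "no significant correlation".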

  17. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    African Journals Online (AJOL)

    The centroid method of defuzzification gave good results irrespective of the type of membership function, with error less than 5%. However, other defuzzification methods gave good results only for some types of membership function. Results for the different input data tested did not vary significantly (P > 0.05). It can therefore be concluded ...
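
    Centroid (centre-of-gravity) defuzzification, the method the abstract reports as most robust, maps a discretized output membership function to a single crisp value, x* = Σ xᵢ·μ(xᵢ) / Σ μ(xᵢ). A minimal sketch on a sampled universe of discourse:

```python
def centroid_defuzzify(xs, mus):
    """Centroid (centre-of-gravity) defuzzification of a sampled
    membership function: x* = sum(x_i * mu_i) / sum(mu_i)."""
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den
```

For a symmetric membership function the centroid falls at the axis of symmetry, which is why the method is insensitive to the membership-function shape.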

  18. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs

  19. An automated nowcasting model of significant instability events in the flight terminal area of Rio de Janeiro, Brazil

    Science.gov (United States)

    Borges França, Gutemberg; Valdonel de Almeida, Manoel; Rosette, Alessana C.

    2016-05-01

    This paper presents a novel model, based on neural network techniques, to produce short-term, site-specific forecasts of significant instability for flights in the terminal area of Galeão Airport, Rio de Janeiro, Brazil. Twelve years of data were used for neural network training/validation and testing. Data come from four sources: (1) hourly meteorological observations from surface meteorological stations at five airports distributed around the study area; (2) atmospheric profiles collected twice a day at the meteorological station at Galeão Airport; (3) rain rate data collected from a network of 29 rain gauges in the study area; and (4) lightning data regularly collected by national detection networks. An investigation was undertaken regarding the capability of a neural network to produce early warning signs - as a nowcasting tool - for significant instability events in the study area. The automated nowcasting model was evaluated with five categorical statistics, whose values for the first-, second-, and third-hour forecasts are given in parentheses: proportion correct (0.99, 0.97, and 0.94), BIAS (1.10, 1.42, and 2.31), probability of detection (0.79, 0.78, and 0.67), false-alarm ratio (0.28, 0.45, and 0.73), and threat score (0.61, 0.47, and 0.25). Possible sources of error related to the test procedure are presented and discussed. The test showed that the proposed neural network can capture the physical content of the data set, and its performance is quite encouraging for the first and second hours for nowcasting significant instability events in the study area.
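
    The five categorical statistics quoted above all derive from a 2x2 contingency table of forecast versus observed events. A sketch with hypothetical counts (the counts in the test are illustrative, not from the study):

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 verification scores for yes/no event forecasts."""
    n = hits + false_alarms + misses + correct_negatives
    return {
        "proportion_correct": (hits + correct_negatives) / n,
        "bias": (hits + false_alarms) / (hits + misses),     # forecast/observed frequency
        "pod": hits / (hits + misses),                       # probability of detection
        "far": false_alarms / (hits + false_alarms),         # false-alarm ratio
        "threat_score": hits / (hits + misses + false_alarms),
    }
```

The degradation pattern in the abstract (BIAS rising, threat score falling with lead time) corresponds to growing `false_alarms` relative to `hits` at longer forecast horizons.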

  20. The landscape of GPGPU performance modeling tools

    NARCIS (Netherlands)

    Madougou, S.; Varbanescu, A.; de Laat, C.; van Nieuwpoort, R.

    GPUs are gaining fast adoption as high-performance computing architectures, mainly because of their impressive peak performance. Yet most applications only achieve small fractions of this performance. While both programmers and architects have clear opinions about the causes of this performance gap,

  1. Diagnostic performance of 64-channel multislice computed tomography in assessment of significant coronary artery disease in symptomatic subjects.

    Science.gov (United States)

    Shabestari, Abbas Arjmand; Abdi, Seifollah; Akhlaghpoor, Shahram; Azadi, Mitra; Baharjoo, Hamidreza; Pajouh, Mohammad Danesh; Emami, Zyae; Esfahani, Fatemeh; Firouzi, Iraj; Hashemian, Mahmoud; Kouhi, Morad; Mozafari, Mahmoud; Nazeri, Iraj; Roshani, Mahmoud; Salevatipour, Babak; Tavalla, Hedayatollah; Tehrai, Mahmoud; Zarrabi, Ali

    2007-06-15

    The recent development of 64-channel multislice computed tomography (MSCT) has improved noninvasive coronary artery imaging. This study was conducted to determine the accuracy of 64-slice MSCT in a relatively unselected group of 143 patients with presentations suggestive of coronary artery disease, including those with unstable angina pectoris, who underwent both coronary computed tomographic angiography and invasive coronary angiography. No arrhythmia was considered an exclusion criterion except for atrial fibrillation or frequent extrasystoles. In patients with fast heart rates, a beta blocker was administered orally. Data were obtained using electrocardiography-gated 64-slice MSCT. Computed tomographic angiography and invasive coronary angiography findings for each coronary segment were compared to determine the sensitivity, specificity, positive predictive value, and negative predictive value of MSCT in distinguishing normal or insignificantly stenosed (<50% diameter decrease) segments from those with significant (≥50% diameter decrease) stenosis or total occlusion. In per-patient assessment, the calculated sensitivity, specificity, positive predictive value, and negative predictive value of MSCT were 96%, 67%, 91%, and 83%, respectively. These values in per-artery evaluation were 94%, 94%, 87%, and 97%, and corresponding values in per-segment analysis were 92%, 97%, 77%, and 99%, respectively. In conclusion, computed tomographic angiography has high diagnostic performance in the assessment of significant coronary artery disease in most patients in daily routine practice, including those presenting with unstable angina pectoris symptoms.
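
    The per-patient, per-artery and per-segment figures above all come from the same 2x2 confusion-table formulas. A sketch with illustrative counts (not the study's actual data):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic table.

    tp/fp/fn/tn: true-positive, false-positive, false-negative and
    true-negative counts against the reference standard (here, invasive
    coronary angiography).
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence, which is why the per-segment PPV (77%) is much lower than the per-patient PPV (91%) despite similar sensitivity.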

  2. Sentinel node positive melanoma patients: prediction and prognostic significance of nonsentinel node metastases and development of a survival tree model.

    Science.gov (United States)

    Wiener, Martin; Acland, Katharine M; Shaw, Helen M; Soong, Seng-Jaw; Lin, Hui-Yi; Chen, Dung-Tsa; Scolyer, Richard A; Winstanley, Julie B; Thompson, John F

    2010-08-01

    Completion lymph node dissection (CLND) following positive sentinel node biopsy (SNB) for melanoma detects additional nonsentinel node (NSN) metastases in approximately 20% of cases. This study aimed to establish whether NSN status can be predicted, to determine its effect on survival, and to develop survival tree models for the sentinel node (SN) positive population. Sydney Melanoma Unit (SMU) patients with at least 1 positive SN, meeting inclusion criteria and treated between October 1992 and June 2005, were identified from the Unit database. Survival characteristics, potential predictors of survival, and NSN status were assessed using the Kaplan-Meier method, Cox regression model, and logistic regression analyses, respectively. Classification tree analysis was performed to identify groups with distinctly different survival characteristics. A total of 323 SN-positive melanoma patients met the inclusion criteria. On multivariate analysis, age, gender, primary tumor thickness, mitotic rate, number of positive NSNs, and total number of positive nodes were statistically significant predictors of survival. NSN metastasis, found at CLND in 19% of patients, was only predicted to a statistically significant degree by ulceration. Multivariate analyses demonstrated that survival was more closely related to the number of positive NSNs than to the total number of positive nodes. Classification tree analysis revealed 4 prognostically distinct survival groups. Patients with NSN metastases could not be reliably identified prior to CLND. Prognosis following CLND was more closely related to the number of positive NSNs than to the total number of positive nodes. Classification tree analysis defined distinctly different survival groups more accurately than single-factor analysis.
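
    The survival analysis described above rests on the Kaplan-Meier product-limit estimator, which handles right-censored follow-up. A stdlib-only sketch (the test data are hypothetical):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimator.

    times  : follow-up time for each patient
    events : 1 if the event (e.g. death) occurred at that time, 0 if censored
    Returns (time, survival probability) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        # group all patients sharing this follow-up time
        j, deaths = i, 0
        while j < len(data) and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk   # product-limit update
            curve.append((t, surv))
        n_at_risk -= j - i                     # events and censorings leave the risk set
        i = j
    return curve
```

Censored patients (event = 0) reduce the risk set without dropping the survival curve, which is the estimator's key property.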

  3. Methylphenidate significantly improves driving performance of adults with attention-deficit hyperactivity disorder: a randomized crossover trial.

    NARCIS (Netherlands)

    Verster, J.C.; Bekker, E.M.; Roos, M.; Minova, A.; Eijken, E.J.; Kooij, J.J.S.; Buitelaar, J.K.; Kenemans, J.L.; Verbaten, M.N.; Olivier, B.; Volkerts, E.R.

    2008-01-01

    Although patients with attention-deficit hyperactivity disorder (ADHD) have reported improved driving performance on methylphenidate, limited evidence exists to support an effect of treatment on driving performance and some regions prohibit driving on methylphenidate. A randomized, crossover trial

  4. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    OpenAIRE

    DONG Jing

    2015-01-01

    Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significa...

  5. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  6. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 
One
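
    The source-driven balance that distinguishes subcritical ADS kinetics from critical-reactor kinetics can be illustrated with a one-delayed-group point-kinetics sketch (all parameter values are assumed for illustration; this is far simpler than the SIMMER/KIN3D models discussed above):

```python
def point_kinetics(rho, source, t_end, dt,
                   beta=0.0065, lam=0.08, gen_time=1e-5):
    """Explicit-Euler integration of one-delayed-group point kinetics
    with an external neutron source (subcritical, ADS-like configuration).

    dn/dt = ((rho - beta)/Lambda) * n + lam * c + source
    dc/dt = (beta/Lambda) * n - lam * c
    Returns the neutron level n at t_end, starting from n = 1 with the
    precursor concentration c at its equilibrium value.
    """
    n = 1.0
    c = beta * n / (lam * gen_time)   # equilibrium precursor level
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / gen_time) * n + lam * c + source
        dc = (beta / gen_time) * n - lam * c
        n += dt * dn
        c += dt * dc
        t += dt
    return n
```

At steady state the source exactly balances the subcritical leakage, n = -source * Lambda / rho, which is why flux-shape changes from beam trips or reactivity swings, not captured by this point model, matter so much in ADS transients.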

  7. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  8. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation (DAE) systems. Subsequently, MatLab/Simulink has been applied for carrying out the simulations. To verify the simulated results, experiments have been carried out on a full-scale boiler plant.
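
    The simplest conceivable water-side sub-model in such a boiler model is a lumped energy balance, m·cp·dT/dt = Q_in. A toy sketch with assumed mass and heat-input values (the paper's actual sub-models are far more detailed DAE systems including boiling and the steel mass):

```python
def water_side_temperature(q_in, t_end, dt=1.0,
                           mass=5000.0, cp=4186.0, t0=293.15):
    """Lumped water-side energy balance: m * cp * dT/dt = Q_in.

    q_in : heat input from the flue gas in W (assumed constant)
    mass : water inventory in kg, cp in J/(kg K), temperatures in K
    Integrates with explicit Euler and returns the temperature at t_end.
    """
    temp = t0
    for _ in range(int(round(t_end / dt))):
        temp += dt * q_in / (mass * cp)
    return temp
```

In the full model this balance would be coupled algebraically to the flue-gas side and the drum pressure, which is what makes the combined system a DAE rather than a plain ODE.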

  9. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

    This paper presents a new model to simulate the energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided, but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperatures in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode, and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with different specifications of each indoor unit, so that the modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51% to 85%, while the TDV (Time Dependent Valuation) energy savings range from 31% to 66% compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in the Fresno climate, followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. 
Actual performance of the VRF systems for real

  10. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  11. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    OpenAIRE

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models marketing performance as a sequence of intermediate performance measures ultimately leading to financial performance. This framework, called the Hierarchical Marketing Performance (HMP) framework, starts ...

  12. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    : a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and two models for the water/steam zone (the boiling) and the steam, respectively. The dynamic model has been developed as a Differential-Algebraic-Equation (DAE) system. Subsequently Mat...

  13. Models and criteria for waste repository performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1981-03-01

    A primary objective of the Waste Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in assuring that this objective is met. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. Criteria development needs and the relation between criteria and models are also discussed

  14. Significance of kinetics for sorption on inorganic colloids: modeling and experiment interpretation issues.

    Science.gov (United States)

    Painter, S; Cvetkovic, V; Pickett, D; Turner, D R

    2002-12-15

    A two-site kinetic model for solute sorption on inorganic colloids is developed. The model quantifies linear first-order sorption on two types of sites ("fast" and "slow") characterized by two pairs of rates (forward and reverse). We use the model to explore data requirements for long-term predictive calculations of colloid-facilitated transport and to evaluate the laboratory kinetic sorption data of Lu et al. Five batch sorption data sets are considered with plutonium as the tracer and montmorillonite, hematite, silica, and smectite as colloids. Using asymptotic results applicable on the time scale of limited duration experiments, a robust estimation procedure is developed for the fast-site partitioning coefficient K(C) and the slow forward rate alpha. The estimated range of K(C) is 1.1-76 L/g, and the range for alpha is 0.0017-0.02 1/h. The fast reverse rate k(r) is estimated in the range 0.012-0.1 1/h. Comparison of one-site and two-site sorption interpretations reveals the difficulty in discriminating between the two models for montmorillonite and to a lesser extent for hematite. For silica and smectite, the two-site model clearly provides a better representation of the data as compared with a single site model. Kinetic data for silica are available for different colloid concentrations (0.2 g/L and 1 g/L). For the range of experimental conditions considered, alpha appears to be independent of colloid concentration.
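    The two-site model described in this record can be sketched numerically. The following is a minimal illustration, not the authors' code: it integrates linear first-order exchange on reversible "fast" sites plus an effectively irreversible "slow" site. The parameter values are picked from within the ranges estimated in the abstract, and the colloid concentration and initial tracer concentration are assumptions.

```python
# Two-site kinetic sorption sketch: dissolved tracer C exchanges with
# reversible fast sites Sf (forward rate kf = Kc * kr) and a slow,
# effectively irreversible site Ss (forward rate alpha). Values are
# illustrative, taken from inside the ranges quoted in the abstract.

def simulate(Kc=10.0, kr=0.05, alpha=0.01, m=0.2, C0=1.0, dt=0.01, hours=200.0):
    """Euler integration of the two-site model.
    Kc [L/g] fast-site partition coefficient, kr [1/h] fast reverse rate,
    alpha [1/h] slow forward rate, m [g/L] colloid concentration."""
    kf = Kc * kr                 # fast forward rate implied by Kc = kf / kr
    C, Sf, Ss = C0, 0.0, 0.0     # dissolved conc. [per L]; sorbed loads [per g]
    for _ in range(int(hours / dt)):
        dSf = kf * C - kr * Sf   # reversible fast sites
        dSs = alpha * C          # slow sites, irreversible on lab time scales
        C -= m * (dSf + dSs) * dt  # mass balance couples the phases via m
        Sf += dSf * dt
        Ss += dSs * dt
    return C, Sf, Ss
```

The update conserves total tracer mass (C + m*(Sf + Ss)) exactly at each step, which is a useful sanity check when fitting rates to batch data.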

  15. Performance of bed-load transport equations relative to geomorphic significance: Predicting effective discharge and its transport rate

    Science.gov (United States)

    Jeffrey J. Barry; John M. Buffington; Peter Goodwin; John .G. King; William W. Emmett

    2008-01-01

    Previous studies assessing the accuracy of bed-load transport equations have considered equation performance statistically based on paired observations of measured and predicted bed-load transport rates. However, transport measurements were typically taken during low flows, biasing the assessment of equation performance toward low discharges, and because equation...

  16. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While much empirical work has centered on market orientation, the generalisability of its impact on the performance of the Food and Beverages organisations in the Nigerian context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approaches). Data was collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the use of the Statistical Package for Social Sciences (SPSS version 10. Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of

  17. An Ecological-Transactional Model of Significant Risk Factors for Child Psychopathology in Outer Mongolia

    Science.gov (United States)

    Kohrt, Holbrook E.; Kohrt, Brandon A.; Waldman, Irwin; Saltzman, Kasey; Carrion, Victor G.

    2004-01-01

    The present study examined significant risk factors, including child maltreatment, for child psychopathology in a cross-cultural setting. Ninety-nine Mongolian boys, ages 3-10 years, were assessed. Primary caregivers (PCG) completed structured interviews including the Emory Combined Rating Scale (ECRS) and the Mood and Feelings Questionnaire…

  18. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-12-01

    A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  19. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-01-01

    A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  20. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
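    The stability measures this record reports (standard deviation, coefficient of variation, and a 99% confidence interval of AUC or Kappa across repeated trials) can be computed as below. The scores are synthetic stand-ins, not the study's data, and the normal-approximation confidence interval is an assumption about how the interval was formed.

```python
# Summarize the stability of a per-trial performance score (AUC or Kappa):
# mean, standard deviation, coefficient of variation, and a 99% CI using
# the normal approximation (z = 2.576). Scores below are synthetic.
import math
import statistics

def stability_summary(scores, z99=2.576):
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)            # sample standard deviation
    cv = sd / mean                           # coefficient of variation
    half = z99 * sd / math.sqrt(len(scores)) # 99% CI half-width
    return mean, sd, cv, (mean - half, mean + half)

auc_trials = [0.91, 0.93, 0.92, 0.90, 0.94, 0.92]  # e.g. a few of 100 trials
mean, sd, cv, ci = stability_summary(auc_trials)
```

A narrower interval and smaller coefficient of variation across the 100 trials is exactly what distinguishes the study's "high stability" group from the "low stability" group.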

  1. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.
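    One common way to separate magnitude error from sequence error, sketched below, is to compare the paired series (combined error) with the sorted series (magnitude-only error). This is a hedged illustration; MPESA's exact formulas may differ.

```python
# Magnitude vs. sequence error decomposition sketch: RMSE on paired values
# mixes both error types; RMSE on sorted series isolates magnitude error,
# since sorting discards ordering. The gap between the two suggests how
# much error comes from mis-sequencing. (Generic sketch, not MPESA's code.)
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def decompose(observed, modeled):
    combined = rmse(observed, modeled)                 # magnitude + sequence
    magnitude = rmse(sorted(observed), sorted(modeled))  # magnitude only
    return combined, magnitude

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
mod = [1.1, 2.0, 3.1, 4.0, 5.2]   # roughly right magnitudes, wrong order
combined, magnitude = decompose(obs, mod)
```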

  2. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    Science.gov (United States)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from UAV.

  3. PERFORMANCE EVALUATION OF 3D MODELING SOFTWARE FOR UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    H. Yanagi

    2016-06-01

    Full Text Available UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from UAV.

  4. Modelling Flexible Pavement Response and Performance

    DEFF Research Database (Denmark)

    Ullidtz, Per

    This textbook is primarily concerned with models for predicting the future condition of flexible pavements, as a function of traffic loading, climate, materials, etc., using analytical-empirical methods.

  5. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    This work develops a model for interpreting implementation progress. The proposed progress monitoring model uses existing implementation artefact metrics, tries .... determine implementation velocity. As noted by McConnell [28] this velocity increases at the beginning and decreases near the end. A formal implementation.

  6. Modeling, simulation and performance evaluation of parabolic ...

    African Journals Online (AJOL)

    Model of a parabolic trough power plant, taking into consideration the different losses associated with collection of the solar irradiance and thermal losses is presented. MATLAB software is employed to model the power plant at reference state points. The code is then used to find the different reference values which are ...

  7. Detailed Performance Model for Photovoltaic Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tian, H.; Mancilla-David, F.; Ellis, K.; Muljadi, E.; Jenkins, P.

    2012-07-01

    This paper presents a modified current-voltage relationship for the single diode model. The single-diode model has been derived from the well-known equivalent circuit for a single photovoltaic cell. The modification presented in this paper accounts for both parallel and series connections in an array.
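    The standard single-diode current-voltage relationship the record starts from can be sketched as follows. The parameter values and the series/parallel scaling are illustrative assumptions, not the paper's specific modification.

```python
# Single-diode PV model sketch: solve the implicit I-V relation
#   I = Iph - I0 * (exp((V + I*Rs) / (n*Vt)) - 1) - (V + I*Rs) / Rsh
# for I at a given V by fixed-point iteration. Parameters (photocurrent
# Iph, saturation current I0, series/shunt resistances, ideality n,
# thermal voltage Vt) are illustrative. For an array of Ns-series by
# Np-parallel identical modules, voltage scales by Ns and current by Np.
import math

def cell_current(V, Iph=8.0, I0=1e-9, Rs=0.01, Rsh=100.0, n=1.3, Vt=0.02585):
    I = Iph                      # initial guess: short-circuit photocurrent
    for _ in range(200):         # simple fixed-point iteration
        I = (Iph
             - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
             - (V + I * Rs) / Rsh)
    return I

def array_current(V, Ns=10, Np=2, **kw):
    return Np * cell_current(V / Ns, **kw)  # series scales V, parallel scales I
```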

  8. Light water reactor sequence timing: its significance to probabilistic safety assessment modeling

    International Nuclear Information System (INIS)

    Bley, D.C.; Buttemer, D.R.; Stetkar, J.W.

    1988-01-01

    This paper examines event sequence timing in light water reactor plants from the viewpoint of probabilistic safety assessment (PSA). The analytical basis for the ideas presented here comes primarily from the authors' work in support of more than 20 PSA studies over the past several years. Timing effects are important for establishing success criteria for support and safety system response and for identifying the time available for operator recovery actions. The principal results of this paper are as follows: 1. Analysis of event sequence timing is necessary for meaningful probabilistic safety assessment - both the success criteria for systems performance and the probability of recovery are tightly linked to sequence timing. 2. Simple engineering analyses based on first principles are often sufficient to provide adequate resolution of the time available for recovery of PSA scenarios. Only those parameters that influence sequence timing and its variability and uncertainty need be examined. 3. Time available for recovery is the basic criterion for evaluation of human performance, whether time is an explicit parameter of the operator actions analysis or not. (author)

  9. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed greater improvement in pass ratings on the second rotation, but this did not reach statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of the VR status.

  10. Research Pearls: The Significance of Statistics and Perils of Pooling. Part 2: Predictive Modeling.

    Science.gov (United States)

    Hohmann, Erik; Wetzler, Merrick J; D'Agostino, Ralph B

    2017-07-01

    The focus of predictive modeling or predictive analytics is to use statistical techniques to predict outcomes and/or the results of an intervention or observation for patients that are conditional on a specific set of measurements taken on the patients prior to the outcomes occurring. Statistical methods to estimate these models include using such techniques as Bayesian methods; data mining methods, such as machine learning; and classical statistical models of regression such as logistic (for binary outcomes), linear (for continuous outcomes), and survival (Cox proportional hazards) for time-to-event outcomes. A Bayesian approach incorporates a prior estimate that the outcome of interest is true, which is made prior to data collection, and then this prior probability is updated to reflect the information provided by the data. In principle, data mining uses specific algorithms to identify patterns in data sets and allows a researcher to make predictions about outcomes. Regression models describe the relations between 2 or more variables where the primary difference among methods concerns the form of the outcome variable, whether it is measured as a binary variable (i.e., success/failure), continuous measure (i.e., pain score at 6 months postop), or time to event (i.e., time to surgical revision). The outcome variable is the variable of interest, and the predictor variable(s) are used to predict outcomes. The predictor variable is also referred to as the independent variable and is assumed to be something the researcher can modify in order to see its impact on the outcome (i.e., using one of several possible surgical approaches). Survival analysis investigates the time until an event occurs. This can be an event such as failure of a medical device or death. It allows the inclusion of censored data, meaning that not all patients need to have the event (i.e., die) prior to the study's completion.
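    The Bayesian updating idea described in this record can be made concrete with a conjugate Beta-Binomial sketch. This choice of model is an assumption (the abstract names no specific prior): a Beta prior on the probability that the outcome occurs is updated by observed successes and failures into a Beta posterior.

```python
# Beta-Binomial sketch of Bayesian updating: a Beta(a, b) prior on the
# outcome probability, updated with new success/failure counts. The
# prior and the data below are hypothetical illustrations.

def beta_update(a_prior, b_prior, successes, failures):
    a_post = a_prior + successes      # conjugate update of the Beta prior
    b_post = b_prior + failures
    mean = a_post / (a_post + b_post)  # posterior mean estimate
    return a_post, b_post, mean

# Prior belief: outcome occurs about 30% of the time (Beta(3, 7));
# then 18 of 20 newly observed patients show the outcome.
a, b, p = beta_update(3, 7, 18, 2)
```

The posterior mean moves from the prior's 0.3 toward the data's 0.9 in proportion to how much evidence each side carries, which is the "prior probability updated to reflect the information provided by the data" described above.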

  11. Model for Agile Software Development Performance Monitoring

    OpenAIRE

    Žabkar, Nataša

    2013-01-01

    Agile methodologies have been in use for more than ten years and during this time they proved to be efficient, even though number of empirical research is scarce, especially regarding agile software development performance monitoring. The most popular agile framework Scrum is using only one measure of performance: the amount of work remaining for implementation of User Story from the Product Backlog or for implementation of Task from the Sprint Backlog. In time the need for additional me...

  12. Summary of Calculation Performed with NPIC's New FGR Model

    International Nuclear Information System (INIS)

    Jiao Yongjun; Li Wenjie; Zhou Yi; Xing Shuo

    2013-01-01

    1. Introduction The NPIC modeling group has performed calculations on both real cases and idealized cases in the FUMEX II and III data packages. The performance code we used is COPERNIC 2.4, developed by AREVA, to which a new FGR model has been added. A comparison study has therefore been made between the Bernard model (V2.2) and the new model, in order to evaluate the performance of the new model. As mentioned before, the focus of our study lies in thermal fission gas release, or more specifically the grain boundary bubble behaviors. 2. Calculation method There are some differences between the calculated burnup and measured burnup in many real cases. Considering that FGR is significantly dependent on rod average burnup, a multiplicative factor on fuel rod linear power, i.e. FQE, is applied and adjusted in the calculations to ensure the calculated burnup generally equals the measured burnup. Also, a multiplicative factor on upper plenum volume, i.e. AOPL, is applied and adjusted in the calculations to ensure the calculated free volume equals the pre-irradiation data of total free volume in the rod. Cladding temperatures were entered if they were provided; otherwise the cladding temperatures are calculated from the inlet coolant temperature. The results are presented in Excel form as an attachment of this paper, including thirteen real cases and three idealized cases. Three real cases (BK353, BK370, US PWR TSQ022) are excluded from validation of the new model, because the predicted athermal release is even greater than the measured release, which implies a negative thermal release. Obviously this is not reasonable for validation, but the results are also listed in Excel (sheet 'Cases excluded from validation'). 3. Results The results of 10 real cases are listed in sheet 'Steady case summary', which summarizes measured and predicted values of Bu and FGR for each case, and plots the M/P ratio of the FGR calculation by different models in COPERNIC. A statistical comparison was also made with three indexes, i

  13. A significant advantage for trapped field magnet applications—A failure of the critical state model

    Science.gov (United States)

    Weinstein, Roy; Parks, Drew; Sawh, Ravi-Persad; Carpenter, Keith; Davey, Kent

    2015-10-01

    Ongoing research has increased achievable field in trapped field magnets (TFMs) to multi-Tesla levels. This has greatly increased the attractiveness of TFMs for applications. However, it also increases the already very difficult problem of in situ activation and reactivation of the TFMs. The pulsed zero-field-cool (ZFC) method of activation is used in most applications because it can be accomplished with much lower power and more modest equipment than field-cool activation. The critical state model (CSM) has been a reliable theoretical tool for experimental analysis and engineering design of TFMs and their applications for over a half-century. The activating field, BA, required to fully magnetize a TFM to its maximum trappable field, BT,max, using pulsed-ZFC is predicted by CSM to be R ≡ BA/BT,max ≥ 2.0. We report here experiments on R as a function of Jc, which find a monotonic decrease of R to 1.0 as Jc increases. The reduction to R = 1.0 reduces the power needed to magnetize TFMs by about an order of magnitude. This is a critical advantage for TFM applications. The results also indicate the limits of applicability of CSM, and shed light on the physics omitted from the model. The experimental results rule out heating effects and pinning center geometry as causes of the decrease in R. A possible physical cause is proposed.

  14. On the selection of significant variables in a model for the deteriorating process of facades

    Science.gov (United States)

    Serrat, C.; Gibert, V.; Casas, J. R.; Rapinski, J.

    2017-10-01

    In previous works the authors of this paper have introduced a predictive system that uses survival analysis techniques for the study of time-to-failure in the facades of a building stock. The approach is population based, in order to obtain information on the evolution of the stock across time and to help the manager in the decision making process on global maintenance strategies. For the decision making it is crucial to determine those covariates (like materials, morphology and characteristics of the facade, orientation or environmental conditions) that play a significant role in the progression of different failures. The proposed platform also incorporates an open source GIS plugin that includes survival and test modules that allow the investigator to model the time until a lesion appears, taking into account the variables collected during the inspection process. The aim of this paper is twofold: a) to briefly introduce the predictive system, as well as the inspection and analysis methodologies, and b) to introduce and illustrate the modeling strategy for the deteriorating process of an urban front. The illustration is focused on the city of L’Hospitalet de Llobregat (Barcelona, Spain), in which more than 14,000 facades have been inspected and analyzed.

  15. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  16. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony

  17. The significance of parks to physical activity and public health: a conceptual model.

    Science.gov (United States)

    Bedimo-Rung, Ariane L; Mowen, Andrew J; Cohen, Deborah A

    2005-02-01

    Park-based physical activity is a promising means to satisfy current physical activity requirements. However, there is little research concerning what park environmental and policy characteristics might enhance physical activity levels. This study proposes a conceptual model to guide thinking and suggest hypotheses. This framework describes the relationships between park benefits, park use, and physical activity, and the antecedents/correlates of park use. In this classification scheme, the discussion focuses on park environmental characteristics that could be related to physical activity, including park features, condition, access, aesthetics, safety, and policies. Data for these categories should be collected within specific geographic areas in or around the park, including activity areas, supporting areas, the overall park, and the surrounding neighborhood. Future research should focus on how to operationalize specific measures and methodologies for collecting data, as well as measuring associations between individual physical activity levels and specific park characteristics. Collaboration among many disciplines is needed.

  18. Integrated thermodynamic model for ignition target performance

    Directory of Open Access Journals (Sweden)

    Springer P.T.

    2013-11-01

    Full Text Available We have derived a 3-dimensional synthetic model for NIF implosion conditions, by predicting and optimizing fits to a broad set of x-ray and nuclear diagnostics obtained on each shot. By matching x-ray images, burn width, neutron time-of-flight ion temperature, yield, and fuel ρr, we obtain nearly unique constraints on conditions in the hotspot and fuel in a model that is entirely consistent with the observables. This model allows us to determine hotspot density, pressure, areal density (ρr), total energy, and other ignition-relevant parameters not available from any single diagnostic. This article describes the model and its application to National Ignition Facility (NIF) tritium–hydrogen–deuterium (THD) and DT implosion data, and provides an explanation for the large yield and ρr degradation compared to numerical code predictions.

  19. Mathematical Modeling of Circadian/Performance Countermeasures

    Data.gov (United States)

    National Aeronautics and Space Administration — We developed and refined our current mathematical model of circadian rhythms to incorporate melatonin as a marker rhythm. We used an existing physiologically based...

  20. Hydrologic Evaluation of Landfill Performance (HELP) Model

    Science.gov (United States)

    The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.

  1. Breast cancer-associated metastasis is significantly increased in a model of autoimmune arthritis

    Science.gov (United States)

    Das Roy, Lopamudra; Pathangey, Latha B; Tinder, Teresa L; Schettini, Jorge L; Gruber, Helen E; Mukherjee, Pinku

    2009-01-01

  2. Breast-cancer-associated metastasis is significantly increased in a model of autoimmune arthritis.

    Science.gov (United States)

    Das Roy, Lopamudra; Pathangey, Latha B; Tinder, Teresa L; Schettini, Jorge L; Gruber, Helen E; Mukherjee, Pinku

    2009-01-01

    Sites of chronic inflammation are often associated with the establishment and growth of various malignancies including breast cancer. A common inflammatory condition in humans is autoimmune arthritis (AA) that causes inflammation and deformity of the joints. Other systemic effects associated with arthritis include increased cellular infiltration and inflammation of the lungs. Several studies have reported statistically significant risk ratios between AA and breast cancer. Despite this knowledge, available for a decade, it has never been questioned if the site of chronic inflammation linked to AA creates a milieu that attracts tumor cells to home and grow in the inflamed bones and lungs which are frequent sites of breast cancer metastasis. To determine if chronic inflammation induced by autoimmune arthritis contributes to increased breast cancer-associated metastasis, we generated mammary gland tumors in SKG mice that were genetically prone to develop AA. Two breast cancer cell lines, one highly metastatic (4T1) and the other non-metastatic (TUBO) were used to generate the tumors in the mammary fat pad. Lung and bone metastasis and the associated inflammatory milieu were evaluated in the arthritic versus the non-arthritic mice. We report a three-fold increase in lung metastasis and a significant increase in the incidence of bone metastasis in the pro-arthritic and arthritic mice compared to non-arthritic control mice. We also report that the metastatic breast cancer cells augment the severity of arthritis resulting in a vicious cycle that increases both bone destruction and metastasis. 
Enhanced neutrophilic and granulocytic infiltration in lungs and bone of the pro-arthritic and arthritic mice and subsequent increase in circulating levels of proinflammatory cytokines, such as macrophage colony stimulating factor (M-CSF), interleukin-17 (IL-17), interleukin-6 (IL-6), vascular endothelial growth factor (VEGF), and tumor necrosis factor-alpha (TNF-alpha) may contribute

  3. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark

    2006-01-25

    End-to-End fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be examined to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In this paper we propose new techniques to detect network performance problems proactively in close to real time without relying on static thresholds or SNMP-MIB information. We describe and compare several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our
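
    The static thresholds the abstract criticizes can be contrasted with a simple adaptive detector. The sketch below is not one of the paper's algorithms; it is a minimal rolling z-score detector (window size and threshold k are illustrative choices) showing the general idea of flagging anomalous variations relative to recent history:

```python
from collections import deque

def detect_anomalies(series, window=50, k=3.0):
    """Flag indices whose value departs from the trailing-window mean by
    more than k sample standard deviations (a simple rolling z-score)."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(series):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / (window - 1)
            # the 1e-6 floor guards the degenerate constant-history case
            if abs(x - mean) > max(k * var ** 0.5, 1e-6):
                anomalies.append(i)
        history.append(x)
    return anomalies
```

    A real deployment would additionally have to tolerate missed measurements, large outliers, and seasonal effects, which the abstract identifies as the hard cases.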

  4. Fleet equipment performance measure preventive maintenance model.

    Science.gov (United States)

    2013-02-28

    The Texas Department of Transportation : (TxDOT) operates a large fleet of on-road and : off-road equipment. Consequently, fleet : maintenance procedures (specifically preventive : maintenance such as oil changes) represent a : significant cost to th...

  5. Thermophysical modeling of asteroids from WISE thermal infrared data - Significance of the shape model and the pole orientation uncertainties

    Science.gov (United States)

    Hanuš, J.; Delbo', M.; Ďurech, J.; Alí-Lagoa, V.

    2015-08-01

    In the analysis of thermal infrared data of asteroids by means of thermophysical models (TPMs) it is a common practice to neglect the uncertainty of the shape model and the rotational state, which are taken as an input for the model. Here, we present a novel method of investigating the importance of the shape model and the pole orientation uncertainties in the thermophysical modeling - the varied shape TPM (VS-TPM). Our method uses optical photometric data to generate various shape models that map the uncertainty in the shape and the rotational state. The TPM procedure is then run for all these shape models. We apply the implementation of the classical TPM as well as our VS-TPM to the convex shape models of several asteroids together with their thermal infrared data acquired by the NASA's Wide-field Infrared Survey Explorer (WISE) and compare the results. These show that the uncertainties of the shape model and the pole orientation can be very important (e.g., for the determination of the thermal inertia) and should be considered in the thermophysical analyses. We present thermophysical properties for six asteroids - (624) Hektor, (771) Libera, (1036) Ganymed, (1472) Muonio, (1627) Ivar, and (2606) Odessa.

  6. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  7. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  8. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  9. Modeling vibrato and portamento in music performance

    NARCIS (Netherlands)

    Desain, P.W.M.; Honing, H.J.

    1999-01-01

    Research in the psychology of music dealing with expression is often concerned with the discrete aspects of music performance, and mainly concentrates on the study of piano music (partly because of the ease with which piano music can be reduced to discrete note events). However, on other

  10. WirelessHART modeling and performance evaluation

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Wu, Xian

    2013-01-01

    In process industries wired supervisory and control networks are more and more replaced by wireless systems. Wireless communication inevitably introduces time delays and message losses, which may degrade the system reliability and performance. WirelessHART, as the first international standard for

  11. Myriocin significantly increases the mortality of a non-mammalian model host during Candida pathogenesis.

    Directory of Open Access Journals (Sweden)

    Nadja Rodrigues de Melo

    Full Text Available Candida albicans is a major human pathogen whose treatment is challenging due to antifungal drug toxicity, drug resistance and paucity of antifungal agents available. Myriocin (MYR) inhibits sphingosine synthesis, a precursor of sphingolipids, an important cell membrane and signaling molecule component. MYR also has dual immune suppressive and antifungal properties, potentially modulating mammalian immunity and simultaneously reducing fungal infection risk. Wax moth (Galleria mellonella) larvae, alternatives to mice, were used to establish if MYR suppressed insect immunity and increased survival of C. albicans-infected insects. MYR effects were studied in vivo and in vitro, and compared alone and combined with those of approved antifungal drugs, fluconazole (FLC) and amphotericin B (AMPH). Insect immune defenses failed to inhibit C. albicans with high mortalities. In insects pretreated with the drug followed by C. albicans inoculation, MYR+C. albicans significantly increased mortality to 93% from 67% with C. albicans alone 48 h post-infection whilst AMPH+C. albicans and FLC+C. albicans only showed 26% and 0% mortalities, respectively. MYR combinations with other antifungal drugs in vivo also enhanced larval mortalities, contrasting the synergistic antifungal effect of the MYR+AMPH combination in vitro. MYR treatment influenced immunity and stress management gene expression during C. albicans pathogenesis, modulating transcripts putatively associated with signal transduction/regulation of cytokines, I-kappaB kinase/NF-kappaB cascade, G-protein coupled receptor and inflammation. In contrast, all stress management gene expression was down-regulated in FLC and AMPH pretreated C. albicans-infected insects. Results are discussed with their implications for clinical use of MYR to treat sphingolipid-associated disorders.

  12. Myriocin Significantly Increases the Mortality of a Non-Mammalian Model Host during Candida Pathogenesis

    Science.gov (United States)

    de Melo, Nadja Rodrigues; Abdrahman, Ahmed; Greig, Carolyn; Mukherjee, Krishnendu; Thornton, Catherine; Ratcliffe, Norman A.; Vilcinskas, Andreas; Butt, Tariq M.

    2013-01-01

    Candida albicans is a major human pathogen whose treatment is challenging due to antifungal drug toxicity, drug resistance and paucity of antifungal agents available. Myriocin (MYR) inhibits sphingosine synthesis, a precursor of sphingolipids, an important cell membrane and signaling molecule component. MYR also has dual immune suppressive and antifungal properties, potentially modulating mammalian immunity and simultaneously reducing fungal infection risk. Wax moth (Galleria mellonella) larvae, alternatives to mice, were used to establish if MYR suppressed insect immunity and increased survival of C. albicans-infected insects. MYR effects were studied in vivo and in vitro, and compared alone and combined with those of approved antifungal drugs, fluconazole (FLC) and amphotericin B (AMPH). Insect immune defenses failed to inhibit C. albicans with high mortalities. In insects pretreated with the drug followed by C. albicans inoculation, MYR+C. albicans significantly increased mortality to 93% from 67% with C. albicans alone 48 h post-infection whilst AMPH+C. albicans and FLC+C. albicans only showed 26% and 0% mortalities, respectively. MYR combinations with other antifungal drugs in vivo also enhanced larval mortalities, contrasting the synergistic antifungal effect of the MYR+AMPH combination in vitro. MYR treatment influenced immunity and stress management gene expression during C. albicans pathogenesis, modulating transcripts putatively associated with signal transduction/regulation of cytokines, I-kappaB kinase/NF-kappaB cascade, G-protein coupled receptor and inflammation. In contrast, all stress management gene expression was down-regulated in FLC and AMPH pretreated C. albicans-infected insects. Results are discussed with their implications for clinical use of MYR to treat sphingolipid-associated disorders. PMID:24260135

  13. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2014-01-01

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution...

  14. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of some multi-input single-output (MISO) processes, namely: brewery operations (case study 1) and soap production (case study 2) processes. Two ANFIS models were developed to model the performance of the ...

  15. Persistence Modeling for Assessing Marketing Strategy Performance

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique)

    2003-01-01

    textabstractThe question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

  16. Evaluating Performances of Traffic Noise Models | Oyedepo ...

    African Journals Online (AJOL)

    Traffic noise in decibel dB(A) were measured at six locations using 407780A Integrating Sound Level Meter, while spot speed and traffic volume were collected with cine-camera. The predicted sound exposure level (SEL) was evaluated using Burgess, British and FWHA model. The average noise level obtained are 77.64 ...

  17. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    situations. Library Management Design Using Use Case. To model software using object-oriented design, a case study of the Bingham University Library Management System is used. Software is developed to automate the Bingham University manual library. The system will be stand-alone and will be designed with the.

  18. Sustaining Team Performance: A Systems Model

    Science.gov (United States)

    1979-07-31

    a " forgetting curve ." Three cases were tested and retested under four different research schedules. A description c;’ the test cases follows. III-11...available to show fluctuation in Ph due to unit yearly training cycle. Another real-world military example of the classical forgetting curve is found in the...no practice between the acquisition and subsequent test of performance. The classical forgetting curve is believed to apply. The shape of curve depends

  19. Modelling swimming hydrodynamics to enhance performance

    OpenAIRE

    Marinho, D.A.; Rouboa, A.; Barbosa, Tiago M.; Silva, A.J.

    2010-01-01

    Swimming assessment is one of the most complex but outstanding and fascinating topics in biomechanics. Computational fluid dynamics (CFD) methodology is one of the different methods that have been applied in swimming research to observe and understand water movements around the human body and its application to improve swimming performance. CFD has been applied attempting to understand deeply the biomechanical basis of swimming. Several studies have been conducted willing to analy...

  20. Intelligent system for statistically significant expertise knowledge on the basis of the model of self-organizing nonequilibrium dissipative system

    Directory of Open Access Journals (Sweden)

    E. A. Tatokchin

    2017-01-01

    Full Text Available The development of modern educational technologies, driven by the broad introduction of computer testing and the growth of distance education, makes a revision of the methods for examining pupils necessary. This work shows the need to move to mathematical examination criteria that are free of subjectivity. The article reviews the problems arising in this task and offers approaches for their solution. The greatest attention is paid to the problem of objectively transforming the expert's rated estimates onto the scale of student estimates. The discussion concludes that the solution to this problem lies in the creation of specialized intelligent systems. The basis for constructing such an intelligent system is a mathematical model of a self-organizing nonequilibrium dissipative system, here a group of students. The article assumes that the dissipative character of the system is provided by a constant influx of new test items from the expert, and the nonequilibrium character by the individual psychological characteristics of the students in the group. As a result, the system must self-organize into stable patterns. These patterns make it possible, relying on large amounts of data, to obtain a statistically significant assessment of a student. To justify the proposed approach, the work presents a statistical analysis of the results of testing a large sample of students (>90). The conclusions from this analysis allowed the development of an intelligent system for statistically significant examination of student performance. It is based on a data clustering algorithm (k-means) for three key parameters. It is shown that this approach allows the creation of a dynamic and objective expertise evaluation.
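
    The abstract names k-means clustering on three key parameters as the core of the system. The paper's parameters and implementation are not given, so the following is only a generic sketch of Lloyd's algorithm for k-means on small tuples of scores:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:  # assignments stable: converged
            break
        centroids = new
    return centroids, clusters
```

    Each student would be represented as a 3-tuple of the three key parameters; the cluster a student falls into then supports the statistically grounded assessment the abstract describes.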

  1. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  2. Uncertainty assessment in building energy performance with a simplified model

    Directory of Open Access Journals (Sweden)

    Titikpina Fally

    2015-01-01

    Full Text Available To assess a building energy performance, the consumption being predicted or estimated during the design stage is compared to the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of the dynamic and the static input data in the model being used. The evaluation of measurement uncertainty is based on both the knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Measurement Uncertainty (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. Therefore, an office building has been monitored and multiple temperature sensors have been mounted on candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface of 102 m².
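
    Of the three approaches discussed, Monte Carlo Simulation is the most direct to sketch. The toy model below is not the paper's simplified building model; it propagates assumed input distributions (all values hypothetical, apart from the 102 m2 overall surface quoted in the abstract) through a steady-state transmission-loss formula Q = U * A * (Tin - Tout):

```python
import random
import statistics

def heating_power(u_value, area, t_in, t_out):
    """Toy steady-state transmission loss: Q = U * A * (Tin - Tout), in W.
    Illustrative stand-in, not the paper's simplified building model."""
    return u_value * area * (t_in - t_out)

def monte_carlo(n=20000, seed=1):
    """Propagate assumed input uncertainties through the model by sampling."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        q = heating_power(
            rng.gauss(0.5, 0.05),   # U-value, W/(m2 K) -- hypothetical
            rng.gauss(102.0, 1.0),  # area, m2 (overall surface from abstract)
            rng.gauss(21.0, 0.5),   # indoor temperature, deg C -- hypothetical
            rng.gauss(5.0, 1.0),    # outdoor temperature, deg C -- hypothetical
        )
        samples.append(q)
    return statistics.mean(samples), statistics.stdev(samples)
```

    The mean and standard deviation of the sampled outputs are the MCS estimates of the predicted consumption and its uncertainty, which could then be compared against the GUM and BST evaluations.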

  3. A comparison of predictive models for the onset of significant void at low pressures in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Lee, S. C.; Bankoff, S. G.

    1998-01-01

    The predictive models for the Onset of Significant Void (OSV) in forced-convection subcooled boiling are reviewed and compared with extensive data. Three analytical models and seven empirical correlations are considered in this paper. These models and correlations are put onto a common basis and are compared, again on a common basis, with a variety of data. The evaluation of their range of validity and applicability under various operating conditions is discussed. The results show that the correlation of Saha-Zuber (1974) seems to be the best model for predicting OSV in vertical subcooled boiling flow
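
    For reference, the Saha-Zuber (1974) correlation singled out above is commonly quoted as a two-regime expression for the subcooling at OSV: a thermally controlled regime (Nu = 455) at low Peclet number and a hydrodynamically controlled regime (St = 0.0065) above Pe = 70,000. A sketch in SI units (function and argument names are illustrative):

```python
def saha_zuber_osv(q_flux, d_h, g_mass, cp, k_f):
    """Subcooling (K) at the onset of significant void, in the commonly
    quoted form of the Saha-Zuber (1974) correlation. SI units:
    q_flux W/m2, d_h m, g_mass kg/(m2 s), cp J/(kg K), k_f W/(m K)."""
    peclet = g_mass * d_h * cp / k_f
    if peclet <= 70000.0:
        # Thermally controlled regime: Nu = q" d_h / (k dT) = 455
        return 0.0022 * q_flux * d_h / k_f
    # Hydrodynamically controlled regime: St = q" / (G cp dT) = 0.0065
    return 153.8 * q_flux / (g_mass * cp)
```

    The constant 0.0022 is 1/455 and 153.8 is 1/0.0065, so the two branches are just the Nusselt- and Stanton-number forms solved for the subcooling.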

  4. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods Impact of possible threats to and weaknesses of the Clinical Laboratory at Našice General County Hospital business performance and use of strengths and opportunities to improve operating profit were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of clinical laboratory. Results Results of simulation models showed that operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. Conclusion The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
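
    The profit-and-loss arithmetic behind these simulations (operating profit defined as total revenue minus total expenses, perturbed by scenario changes) can be sketched generically; the figures below are placeholders, not the laboratory's actual revenue and expense split:

```python
def operating_profit(total_revenue, total_expenses):
    """Operating profit as defined in the abstract: revenue minus expenses."""
    return total_revenue - total_expenses

def scenario(base_revenue, base_expenses,
             revenue_change=0.0, expense_change=0.0):
    """One-way sensitivity: apply fractional changes (e.g. -0.10 for -10%)
    to the base revenue and expenses, then recompute the profit."""
    return operating_profit(base_revenue * (1.0 + revenue_change),
                            base_expenses * (1.0 + expense_change))
```

    Running `scenario` over combinations of threat (revenue down, expenses up) and opportunity (revenue up, expenses down) assumptions reproduces the kind of economic sensitivity analysis the abstract describes.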

  5. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Impact of possible threats to and weaknesses of the Clinical Laboratory at Našice General County Hospital business performance and use of strengths and opportunities to improve operating profit were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of clinical laboratory. Results of simulation models showed that operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal

  6. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

    Full Text Available Building on prior research related to (1) the impact of information communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) the performance of MSEs. To achieve this focus, the research investigated evaluation models for understanding the value of ICT ORM in MSEs. Multiple regression, Repeated-Measures Analysis of Variance (RM-ANOVA) and Repeated-Measures Multivariate Analysis of Variance (RM-MANOVA) were performed. The findings revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the payback method (β = 0.410, p < .000). It may thus be inferred that the payback method is the prominent variable explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions on (1) the degree of variability explained and (2) the predictors, the results revealed that the variable contributed approximately 88.4% of the variations in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.
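
    The payback method the study identifies as the prominent predictor is itself a simple capital-budgeting calculation: the time needed for cumulative cash inflows to recover the initial ICT investment. A generic sketch (numbers in the usage assertions are illustrative only):

```python
def payback_period(initial_investment, annual_cash_inflows):
    """Years until cumulative inflows recover the investment, with a
    fractional final year; None if the investment is never recovered."""
    remaining = initial_investment
    for year, inflow in enumerate(annual_cash_inflows, start=1):
        if inflow >= remaining:
            return year - 1 + remaining / inflow
        remaining -= inflow
    return None
```

    An enterprise comparing ICT projects by this criterion simply prefers the shorter payback period, which is why the method is popular with smaller firms despite ignoring cash flows after the cutoff.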

  7. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  8. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development and, particularly, the verification of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die and laboratory experiments performed at Ohio State.

  9. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  10. Modeling and analysis to quantify MSE wall behavior and performance.

    Science.gov (United States)

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  11. Analytic models for fuel pin transient performance

    International Nuclear Information System (INIS)

    Bard, F.E.; Fox, G.L.; Washburn, D.F.; Hanson, J.E.

    1976-09-01

    HEDL's ability to analyze various mechanisms that operate within a fuel pin has progressed substantially through development of codes such as PECTCLAD, which solves cladding response, and DSTRESS, which solves fuel response. The PECTCLAD results show good correlation with a variety of mechanical tests on cladding material and also demonstrate the significance of cladding strength when applying the life fraction rule. The DSTRESS results have shown that fuel deforms sufficiently during overpower transient tests that available volumes are filled, whether in the form of a central cavity or start-up cracks

  12. Energy Certificate - Energy Performance Certificate for Buildings as Significant Support to Reducing Consumption Intensity in Croatia by 2050

    International Nuclear Information System (INIS)

    Vivoda, E.; Kurek, J.

    2006-01-01

    Since the Energy Efficiency Certificate, as a certificate of energy efficiency in buildings, is part of DIRECTIVE 2002/91/EC on the energy performance of buildings and can be linked to DIRECTIVE 2006/32/EC on energy end-use efficiency and energy services and to the Technical Regulation of the Republic of Croatia on the rational use of heat energy in buildings, these are presented briefly and their correlation is pointed out. Activities for the joint European energy policy are listed: the Green Book of the EU Commission for the joint European energy policy, Sustainable Energy Paths for Europe - Energy Paths Horizon 2050, and the programme Intelligent Energy in Europe (2003-2006). Issues of the Energy Efficiency Certificate are elaborated in detail, with the emphasis on certification not only of buildings but also of house energy techniques and the class of energy sustainability. The advantages of introducing Energy Certificates are shown for the real estate market as well as for the users. Using the example of Austria, one of the EU leaders in introducing a sustainable energy policy, the issues linked to the introduction of the Energy Efficiency Certificate are shown. (author)

  13. Characterization uncertainty and its effects on models and performance

    Energy Technology Data Exchange (ETDEWEB)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization.
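The logic described above — compute the same performance measure on each statistically indistinguishable replicate and read the spread as uncertainty — can be sketched as follows (the distribution and its parameters are illustrative, not taken from the Yucca Mountain models):

```python
import random
import statistics

random.seed(42)

# Hypothetical example: a ground-water travel time (years) computed on
# each of 50 geostatistical replicate models of the site. The replicates
# honor the same conditioning data, so the variability of the performance
# measure across them quantifies site-characterization uncertainty.
travel_times = [random.lognormvariate(mu=9.2, sigma=0.4) for _ in range(50)]

q = statistics.quantiles(travel_times, n=20)   # 5th, 10th, ..., 95th percentiles
low, high = q[0], q[-1]
print(f"5th-95th percentile travel time: {low:,.0f} to {high:,.0f} years")
```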

  14. Metrics for evaluating performance and uncertainty of Bayesian network models

    Science.gov (United States)

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  15. Modeling and Performance Analysis of Manufacturing Systems in ...

    African Journals Online (AJOL)

    This study deals with modeling and performance analysis of footwear manufacturing using arena simulation modeling software. It was investigated that modeling and simulation is a potential tool for modeling and analysis of manufacturing assembly lines like footwear manufacturing because it allows the researcher to ...

  16. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
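For readers unfamiliar with the criteria named above, the KGE and its three components can be computed in a few lines; a minimal sketch (NumPy, synthetic series):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency with its three components:
    r     - linear correlation between simulated and observed series
    alpha - variability ratio, std(sim) / std(obs)
    beta  - bias ratio, mean(sim) / mean(obs)
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)
    """
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2), alpha, beta, r

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kge(obs, obs)[0])        # a perfect simulation scores KGE = 1
print(kge(2.0 * obs, obs)[0])  # doubling flows inflates alpha and beta
```

Because the three components separate correlation, variability, and bias errors, a parameter that only affects, say, the water balance shows up in beta but not in r.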

  17. Concave pit-containing scaffold surfaces improve stem cell-derived osteoblast performance and lead to significant bone tissue formation.

    Directory of Open Access Journals (Sweden)

    Antonio Graziano

    2007-06-01

    Full Text Available Scaffold surface features are thought to be important regulators of stem cell performance and endurance in tissue engineering applications, but details about these fundamental aspects of stem cell biology remain largely unclear. In the present study, smooth clinical-grade poly(lactide-co-glycolic acid) 85:15 (PLGA) scaffolds were carved as membranes and treated with NMP (N-methyl-pyrrolidone) to create controlled subtractive pits or microcavities. Scanning electron and confocal microscopy revealed that the NMP-treated membranes contained: (i) large microcavities of 80-120 microm in diameter and 40-100 microm in depth, which we termed primary; and (ii) smaller microcavities of 10-20 microm in diameter and 3-10 microm in depth located within the primary cavities, which we termed secondary. We asked whether a microcavity-rich scaffold had distinct bone-forming capabilities compared to a smooth one. To do so, mesenchymal stem cells derived from human dental pulp were seeded onto the two types of scaffold and monitored over time for cytoarchitectural characteristics, differentiation status and production of important factors, including bone morphogenetic protein-2 (BMP-2) and vascular endothelial growth factor (VEGF). We found that the microcavity-rich scaffold enhanced cell adhesion: the cells created intimate contact with secondary microcavities and were polarized. These cytological responses were not seen with the smooth-surface scaffold. Moreover, cells on the microcavity-rich scaffold released larger amounts of BMP-2 and VEGF into the culture medium and expressed higher alkaline phosphatase activity. When this type of scaffold was transplanted into rats, superior bone formation was elicited compared to cells seeded on the smooth scaffold. In conclusion, surface microcavities appear to support a more vigorous osteogenic response of stem cells and should be used in the design of therapeutic substrates to improve bone repair and bioengineering applications in the

  18. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    C=Cost. G=Gamma. CV=Cross Validation. MCC=Matthews Correlation Coefficient. Test 1: C, G, CV, Accuracy, TP, TN, FP, FN ... Conclusion: Without considering the MirTif negative dataset for training the Model A and B classifiers, our Model A and B ...

  19. The performance indicators of model projects. A special evaluation

    International Nuclear Information System (INIS)

    1995-11-01

    As a result of the acknowledgment of the key role of the Model Project concept in the Agency's Technical Co-operation Programme, the present review of the objectives of the model projects which are now in operation, was undertaken, as recommended by the Board of Governors, to determine at an early stage: the extent to which the present objectives have been defined in a measurable way; whether objectively verifiable performance indicators and success criteria had been identified for each project; whether mechanisms to obtain feedback on the achievements had been foreseen. The overall budget for the 23 model projects, as approved from 1994 to 1998, amounts to $32,557,560, of which 45% is funded by Technical Co-operation Fund. This represents an average investment of about $8 million per year, that is over 15% of the annual TC budget. The conceptual importance of the Model Project initiative, as well as the significant funds allocated to them, led the Secretariat to plan the methods to be used to determine their socio-economic impact. 1 tab

  20. A Systemic Cause Analysis Model for Human Performance Technicians

    Science.gov (United States)

    Sostrin, Jesse

    2011-01-01

    This article presents a systemic, research-based cause analysis model for use in the field of human performance technology (HPT). The model organizes the most prominent barriers to workplace learning and performance into a conceptual framework that explains and illuminates the architecture of these barriers that exist within the fabric of everyday…

  1. Modelling of green roofs' hydrologic performance using EPA's SWMM.

    Science.gov (United States)

    Burszta-Adamiak, E; Mrowiec, M

    2013-01-01

    Green roofs significantly increase water retention and thus improve the management of rain water in urban areas. In Poland, as in many other European countries, excess rainwater resulting from snowmelt and heavy rainfall contributes to local flooding in urban areas. The potential to reduce surface runoff and flood risk is among the reasons why green roofs are increasingly likely to be used in this country as well. However, there are relatively few data on their in situ performance. In this study the storm water performance of experimental green roof plots was simulated using the Storm Water Management Model (SWMM) with the Low Impact Development (LID) Controls module (version 5.0.022). The model contains many parameters for each green roof layer, but the simulation results were unsatisfactory with respect to the hydrologic response of the green roofs. For the majority of the tested rain events, the Nash coefficient had negative values, indicating a weak fit between observed and simulated flow-rates. The complexity of the LID module therefore does not translate into greater accuracy. Further research at a technical scale is needed to determine the role of the green roof slope, vegetation cover and the drying process during inter-event periods.
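The Nash coefficient (Nash-Sutcliffe efficiency) reported above has a simple definition that explains why negative values mean a weak fit; a minimal sketch with synthetic data:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of squared model error
    to the variance of the observations about their mean. NSE = 1 is a
    perfect fit; NSE = 0 is no better than predicting the mean observed
    flow; NSE < 0 is worse than predicting the mean."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([0.2, 1.5, 0.8, 0.1, 0.4])   # observed runoff (mm/h)
mean_model = np.full_like(obs, obs.mean())  # always predict the mean
poor_model = obs[::-1]                      # badly mismatched simulation
```

Here `nse(mean_model, obs)` is 0 and `nse(poor_model, obs)` is negative, which is the sense in which the SWMM simulations above underperformed.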

  2. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
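The functional form of such a parallel overhead model can be illustrated with a generic Amdahl-style sketch; the serial fraction and per-processor communication term below are hypothetical, not the report's fitted coefficients:

```python
def parallel_time(t1, p, serial_frac, comm_cost):
    """Predicted wall-clock time on p processors: the serial fraction runs
    unchanged, the parallel fraction scales as 1/p, and a communication
    term grows with the processor count."""
    return t1 * serial_frac + t1 * (1.0 - serial_frac) / p + comm_cost * p

def parallel_overhead(t1, p, serial_frac, comm_cost):
    """Overhead = modelled parallel cost minus the ideal t1/p time."""
    return parallel_time(t1, p, serial_frac, comm_cost) - t1 / p

# With a 5% serial fraction, speedup saturates as p grows and the
# communication term eventually dominates:
for p in (1, 4, 16, 64):
    print(p, round(parallel_time(100.0, p, 0.05, 0.02), 2))
```

Comparing such a model against measured runtimes on standard test problems, as the abstract describes, is what identifies the largest overhead contributors.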

  3. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  4. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  5. Hydrothermal Fe cycling and deep ocean organic carbon scavenging: Model-based evidence for significant POC supply to seafloor sediments

    Digital Repository Service at National Institute of Oceanography (India)

    German, C.R.; Legendre, L.L.; Sander, S.G.;; Niquil, N.; Luther-III, G.W.; LokaBharathi, P.A.; Han, X.; LeBris, N.

    by more than ~10% over background values, what the model does indicate is that scavenging of carbon in association with Fe-rich hydrothermal plume particles should play a significant role in the delivery of particulate organic carbon to deep ocean...

  6. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
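The LCOE combines exactly the cost and performance parameters listed; a simplified per-kW sketch, with an illustrative formula and numbers rather than the project's published assumptions:

```python
def crf(r, n):
    """Capital recovery factor: annualizes an up-front cost over n years
    at discount rate r."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def lcoe(capex_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
         fuel_per_mmbtu, heat_rate_btu_per_kwh,
         capacity_factor, discount_rate, lifetime_yr):
    """Simplified levelized cost of energy in $/MWh. Annualized capital
    plus fixed O&M is spread over annual generation; variable O&M and
    fuel (fuel price x heat rate) are added per MWh."""
    mwh_per_kw_yr = 8.76 * capacity_factor  # 8760 h/yr, kWh -> MWh
    annual_fixed = crf(discount_rate, lifetime_yr) * capex_per_kw + fixed_om_per_kw_yr
    fuel_per_mwh = fuel_per_mmbtu * heat_rate_btu_per_kwh / 1000.0
    return annual_fixed / mwh_per_kw_yr + var_om_per_mwh + fuel_per_mwh

# Illustrative (not sourced) numbers for a gas-fired plant:
print(round(lcoe(1000, 15, 3.5, 5.0, 7000, 0.85, 0.07, 30), 1))
```

The sketch makes visible why the data sets diverge: each parameter listed in the abstract enters the LCOE directly, so differing capital cost or capacity factor assumptions shift the ranking of technologies.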

  7. Asymptotic performance modelling of DCF protocol with prioritized channel access

    Science.gov (United States)

    Choi, Woo-Yong

    2017-11-01

    Recently, the modification of the DCF (Distributed Coordination Function) protocol by the prioritized channel access was proposed to resolve the problem that the DCF performance worsens exponentially as more nodes exist in IEEE 802.11 wireless LANs. In this paper, an asymptotic analytical performance model is presented to analyze the MAC performance of the DCF protocol with the prioritized channel access.

  8. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. Objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  9. Performance and reliability model checking and model construction

    NARCIS (Netherlands)

    Hermanns, H.; Gnesi, Stefania; Schieferdecker, Ina; Rennoch, Axel

    2000-01-01

    Continuous-time Markov chains (CTMCs) are widely used to describe stochastic phenomena in many diverse areas. They are used to estimate performance and reliability characteristics of various nature, for instance to quantify throughputs of manufacturing systems, to locate bottlenecks in communication

  10. Automatic Performance Model Generation for Java Enterprise Edition (EE) Applications

    OpenAIRE

    Brunnert, Andreas; Vögele, Christian; Krcmar, Helmut

    2015-01-01

    The effort required to create performance models for enterprise applications is often out of proportion compared to their benefits. This work aims to reduce this effort by introducing an approach to automatically generate component-based performance models for running Java EE applications. The approach is applicable for all Java EE server products as it relies on standardized component types and interfaces to gather the required data for modeling an application. The feasibility of the approac...

  11. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario. However, it is based upon an earlier model for calculating relations between the released amount of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of radioactivity released from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting with deposition values. 14 refs, 16 figs, 7 tabs

  12. Indonesian Private University Lecturer Performance Improvement Model to Improve a Sustainable Organization Performance

    Science.gov (United States)

    Suryaman

    2018-01-01

    Lecturer performance will affect the quality and carrying capacity of the sustainability of an organization, in this case the university. There are many models developed to measure the performance of teachers, but not much to discuss the influence of faculty performance itself towards sustainability of an organization. This study was conducted in…

  13. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales including diel patterns and other mid-term climatic modes, such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.

  14. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description in fuel performance codes can be achieved by deriving models and parameters from atomistic-scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling the deposition of corrosion products on fuel rods (CRUD). Here we present some results from publications in 2016, carried out using the CASL allocation at LANL.

  15. modeling the effect of bandwidth allocation on network performance

    African Journals Online (AJOL)

    In this paper, a new channel capacity model for interference-limited systems was obtained ... congestion admission control, with the intent of minimizing energy consumption at each terminal.

  16. Modelling of Box Type Solar Cooker Performance in a Tropical ...

    African Journals Online (AJOL)

    Thermal performance model of box type solar cooker with loaded water is presented. The model was developed using the method of Funk to estimate cooking power in terms of climatic and design parameters for box type solar cooker in a tropical environment. Coefficients for each term used in the model were determined ...

  17. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
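The Markov chain approach amounts to propagating a condition-state distribution through a transition matrix estimated from inspections; a minimal sketch with hypothetical states and probabilities:

```python
import numpy as np

# Hypothetical 4-state pavement condition ratings (0 = good ... 3 = failed)
# with a yearly transition matrix estimated from visual inspections; each
# row sums to 1 and deterioration is one-way (upper-triangular chain).
P = np.array([
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])

def condition_after(years):
    """Distribution over condition states after `years` steps, starting
    from a newly built (state 0) pavement."""
    start = np.array([1.0, 0.0, 0.0, 0.0])
    return start @ np.linalg.matrix_power(P, years)

print(condition_after(10).round(3))  # mass drifts toward the failed state
```

This also makes the abstract's caveat concrete: the transition probabilities come from discrete visual ratings, so the model never sees the underlying physical variables.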

  18. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs

  19. A prognostic model for development of significant liver fibrosis in HIV-hepatitis C co-infection.

    Directory of Open Access Journals (Sweden)

    Nasheed Moqueet

    Full Text Available Liver fibrosis progresses rapidly in HIV-Hepatitis C virus (HCV) co-infected individuals, partially due to heightened inflammation. Immune markers targeting stages of fibrogenesis could aid in prognosis of fibrosis. A case-cohort study was nested in the prospective Canadian Co-infection Cohort (n = 1119). HCV RNA positive individuals without fibrosis, end-stage liver disease or chronic Hepatitis B at baseline (n = 679) were eligible. A random subcohort (n = 236) was selected from those eligible. Pro-fibrogenic markers and Interferon Lambda (IFNL) rs8099917 genotype were measured from the first available sample in all fibrosis cases (APRI ≥ 1.5 during follow-up) and the subcohort. We used Cox proportional hazards and compared Model 1 (selected clinical predictors only) to Model 2 (Model 1 plus selected markers) for predicting the 3-year risk of liver fibrosis using weighted Harrell's C and Net Reclassification Improvement indices. 113 individuals developed significant liver fibrosis over 1300 person-years (8.63 per 100 person-years; 95% CI: 7.08, 10.60). Model 1 (age, sex, current alcohol use, HIV RNA, baseline APRI, HCV genotype) was nested in Model 2, which also included IFNL genotype and IL-8, sICAM-1, RANTES, hsCRP, and sCD14. The C indexes (95% CI) for Model 1 vs. Model 2 were 0.720 (0.649, 0.791) and 0.756 (0.688, 0.825), respectively. Model 2 classified risk more appropriately (overall net reclassification improvement, p<0.05). Including IFNL genotype and the inflammatory markers IL-8, sICAM-1, RANTES, hs-CRP, and sCD14 enabled better prediction of the 3-year risk of significant liver fibrosis over clinical predictors alone. Whether this modest improvement in prediction justifies their additional cost requires further cost-benefit analyses.
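Harrell's C used to compare the two models is a rank-concordance measure over comparable pairs; an unweighted minimal sketch (the study itself used a weighted, case-cohort version):

```python
def harrells_c(time, event, risk):
    """Fraction of comparable pairs in which the subject who experienced
    the event earlier also has the higher predicted risk. A pair (i, j)
    is comparable when i fails (event[i] == 1) before j's observed time.
    Ties in predicted risk count as half-concordant."""
    concordant = ties = comparable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Hypothetical data: risks perfectly ordered against event times give C = 1;
# a useless model hovers around 0.5.
c = harrells_c([2, 5, 1, 8], [1, 1, 1, 0], [0.7, 0.4, 0.9, 0.1])
print(c)  # → 1.0
```

In this scale, the reported move from C = 0.720 to 0.756 means the marker-augmented model correctly orders a few percent more of the comparable patient pairs.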

  20. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance. Copyright © 2013 Elsevier Ltd. All rights reserved.
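A conceptual hydrological flux model of this kind reduces to a moisture bucket with spill and moisture-dependent evapotranspiration; the sketch below is illustrative, with hypothetical parameters rather than the paper's calibrated values:

```python
def simulate(rain_mm, smax=30.0, et_pot=0.15):
    """Hourly moisture-bucket model of green roof retention (an
    illustrative sketch, not the paper's calibrated model). Rain fills
    the substrate store; runoff is the spill above the capacity smax;
    evapotranspiration removes the potential hourly rate et_pot scaled
    by relative moisture content, restoring capacity between events."""
    s, runoff = 0.0, []
    for p in rain_mm:
        s += p
        spill = max(0.0, s - smax)  # store spills once full
        s -= spill
        s -= et_pot * (s / smax)    # moisture-dependent ET
        runoff.append(spill)
    return runoff

rain = [0.0] * 24 + [5.0] * 10 + [0.0] * 48  # one 50 mm storm, then dry hours
total_runoff = sum(simulate(rain))
retention = 1.0 - total_runoff / 50.0
print(round(retention, 2))
```

Even this toy version reproduces the qualitative findings above: a large event overwhelms the store (low per-event retention), while the ET term restores capacity during dry periods, so climate controls long-term performance.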

  1. Middle-School Science Students' Scientific Modelling Performances Across Content Areas and Within a Learning Progression

    Science.gov (United States)

    Bamberger, Yael M.; Davis, Elizabeth A.

    2013-01-01

    This paper focuses on students' ability to transfer modelling performances across content areas, taking into consideration their improvement of content knowledge as a result of a model-based instruction. Sixty-five sixth grade students of one science teacher in an urban public school in the Midwestern USA engaged in scientific modelling practices that were incorporated into a curriculum focused on the nature of matter. Concept-process models were embedded in the curriculum, as well as emphasis on meta-modelling knowledge and modelling practices. Pre-post test items that required drawing scientific models of smell, evaporation, and friction were analysed. The level of content understanding was coded and scored, as were the following elements of modelling performance: explanation, comparativeness, abstraction, and labelling. Paired t-tests were conducted to analyse differences in students' pre-post tests scores on content knowledge and on each element of the modelling performances. These are described in terms of the amount of transfer. Students significantly improved in their content knowledge for the smell and the evaporation models, but not for the friction model, which was expected as that topic was not taught during the instruction. However, students significantly improved in some of their modelling performances for all the three models. This improvement serves as evidence that the model-based instruction can help students acquire modelling practices that they can apply in a new content area.

  2. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1992-01-01

    Recently several articles have been published that question the appropriateness of the distribution coefficient, Rd, concept for quantifying radionuclide migration. Several distinct issues surrounding the modeling of nuclide retardation are discussed here. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing which attributes that affect adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution ratio concept, the adequacy of transport calculations that rely on constant retardation factors, and the status of incorporating sophisticated adsorption models into transport codes. 86 refs., 1 fig., 1 tab

  3. End-to-end models for marine ecosystems: Are we on the precipice of a significant advance or just putting lipstick on a pig?

    Directory of Open Access Journals (Sweden)

    Kenneth A. Rose

    2012-02-01

    Full Text Available There has been a rapid rise in the development of end-to-end models for marine ecosystems over the past decade. Some reasons for this rise include the need to predict effects of climate change on biota and dissatisfaction with existing models. While the benefits of a well-implemented end-to-end model are straightforward, there are many challenges. In the short term, my view is that the major role of end-to-end models is to push the modelling community forward, and to identify critical data so that these data can be collected now and thus be available for the next generation of end-to-end models. I think we should emulate physicists and build theoretically-oriented models first, and then collect the data. In the long term, end-to-end models will increase their skill, data collection will catch up, and end-to-end models will move towards site-specific applications with forecasting and management capabilities. One pathway into the future is individual efforts, over-promise, and repackaging of poorly performing component submodels (“lipstick on a pig”). The other pathway is a community-based collaborative effort, with appropriate caution and thoughtfulness, so that the needed improvements are achieved (“significant advance”). The promise of end-to-end modelling is great. We should act now to avoid missing a great opportunity.

  4. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis.

    Science.gov (United States)

    Eekhout, Iris; van de Wiel, Mark A; Heymans, Martijn W

    2017-08-22

    Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels significantly contributes to the model, different methods are available: for example, pooling chi-square tests with multiple degrees of freedom, pooling likelihood ratio test statistics, and pooling based on the covariance matrix of the regression model. These methods are more complex than RR and are not available in all mainstream statistical software packages. In addition, they do not always attain optimal power levels. We argue that the median of the p-values from the overall significance tests from the analyses on the imputed datasets can be used as an alternative pooling rule for categorical variables. The aim of the current study is to compare different methods to test a categorical variable for significance after multiple imputation on applicability and power. In a large simulation study, we demonstrated the control of the type I error and power levels of different pooling methods for categorical variables. This simulation study showed that for non-significant categorical covariates the type I error is controlled, and that the statistical power of the median pooling rule was at least equal to that of current multiple parameter tests. An empirical data example showed similar results. It can therefore be concluded that using the median of the p-values from the imputed data analyses is an attractive and easy-to-use alternative method for significance testing of categorical variables.
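    The median pooling rule itself is trivial to apply once the per-imputation tests have been run; a minimal sketch (the p-values below are made up, and in practice each would come from, e.g., a likelihood-ratio test of the categorical covariate on one imputed dataset):

    ```python
    from statistics import median

    def median_pooled_p(p_values, alpha=0.05):
        """Median-p pooling rule: pool the per-imputation p-values by
        taking their median, then compare that median to alpha."""
        p_pooled = median(p_values)
        return p_pooled, p_pooled < alpha

    # Hypothetical p-values from analyses on 5 imputed datasets
    p, significant = median_pooled_p([0.012, 0.030, 0.048, 0.022, 0.160])
    ```

    Unlike the covariance-based pooling rules, this needs nothing beyond the per-dataset test output, which is why the paper promotes it as an easy-to-use alternative.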

  5. Emerging Carbon Nanotube Electronic Circuits, Modeling, and Performance

    OpenAIRE

    Xu, Yao; Srivastava, Ashok; Sharma, Ashwani K.

    2010-01-01

    Current transport and dynamic models of carbon nanotube field-effect transistors are presented. A model of single-walled carbon nanotube as interconnect is also presented and extended in modeling of single-walled carbon nanotube bundles. These models are applied in studying the performances of circuits such as the complementary carbon nanotube inverter pair and carbon nanotube as interconnect. Cadence/Spectre simulations show that carbon nanotube field-effect transistor circuits can operate a...

  6. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models … performance dimensions. Implications of our findings, and limitations and future research avenues are discussed.

  7. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. There are other models included in the discussion that are not used by or were not adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.
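    None of the SNL models is reproduced here, but the simplest kind of PV performance model they all generalize — rated power scaled by irradiance and derated linearly with cell temperature — can be sketched as follows (the temperature coefficient is a typical assumed value, not from the report):

    ```python
    def pv_dc_power(p_stc_w, irradiance_w_m2, cell_temp_c, gamma_per_c=-0.004):
        """Flat-plate PV DC power: nameplate rating scaled by irradiance
        relative to standard test conditions (1000 W/m^2), derated by a
        linear temperature coefficient above the 25 C reference cell
        temperature."""
        return (p_stc_w * (irradiance_w_m2 / 1000.0)
                * (1.0 + gamma_per_c * (cell_temp_c - 25.0)))

    # A 300 W module at 800 W/m^2 and a 45 C cell temperature
    p_out = pv_dc_power(p_stc_w=300.0, irradiance_w_m2=800.0, cell_temp_c=45.0)
    ```

    Real performance models add spectral, angle-of-incidence, and inverter effects on top of this skeleton, which is where the documented SNL algorithms come in.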

  8. Performance comparison of hydrological model structures during low flows

    Science.gov (United States)

    Staudinger, Maria; Stahl, Kerstin; Tallaksen, Lena M.; Clark, Martyn P.; Seibert, Jan

    2010-05-01

    Low flows are still poorly reproduced by common hydrological models, since these are traditionally designed to reproduce peak flow situations as well as possible. As low flows become increasingly important to several target areas, there is a need to improve available models. We present a study that assesses the impact of model structure on low flow simulations. This is done using the Framework for Understanding Structural Errors (FUSE), which identifies the set of (subjective) decisions made when building a hydrological model, and provides multiple options for each modeling decision. 79 models were built using the FUSE framework and applied to simulate stream flows in the Narsjø catchment in Norway (119 km²). To allow comparison, all new models were calibrated using an automatic optimization method. Low flow and recession analysis of the new models enables us to evaluate model performance with a focus on different aspects by using various objective functions. Additionally, model structures responsible for poor performance, and hence unsuitable, can be detected. We focused on elucidating model performance during summer (August–October) and winter low flows, which evolve from entirely different hydrological processes in the Narsjø catchment: summer low flows develop out of a lack of precipitation, while winter low flows are due to water storage in ice and snow. The results showed that simulations of summer low flows were consistently poorer than simulations of winter low flows when evaluated with an objective function focusing on low flows; here, the model structure influencing winter low flow simulations is the lower layer architecture. Different model structures were found to influence model performance during the summer season. The choice of other objective functions has the potential to affect such an evaluation. These findings call for the use of different model structures tailored to particular needs.
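    The study's own objective functions are not listed in this record, but a common low-flow-oriented choice is Nash–Sutcliffe efficiency computed on log-transformed flows, which up-weights errors during recessions relative to peaks; a minimal sketch:

    ```python
    import math

    def log_nse(observed, simulated, eps=1e-6):
        """Nash-Sutcliffe efficiency on log-transformed flows. The log
        transform compresses peaks, so low-flow errors dominate the
        score; 1.0 is a perfect fit, values <= 0 mean the model is no
        better than the mean of the (log) observations."""
        lo = [math.log(q + eps) for q in observed]
        ls = [math.log(q + eps) for q in simulated]
        mean_lo = sum(lo) / len(lo)
        err = sum((o - s) ** 2 for o, s in zip(lo, ls))
        var = sum((o - mean_lo) ** 2 for o in lo)
        return 1.0 - err / var

    # Hypothetical daily flows (m^3/s): good fit at low flows, slight miss at the peak
    score = log_nse([1.0, 0.5, 0.2, 3.0], [0.9, 0.6, 0.25, 2.5])
    ```

    Calibrating the same FUSE structure against an ordinary NSE versus a log-NSE can rank the 79 structures quite differently, which is the point the abstract makes about the choice of objective function.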

  9. A performance comparison of atmospheric dispersion models over complex topography

    International Nuclear Information System (INIS)

    Kido, Hiroko; Oishi, Ryoko; Hayashi, Keisuke; Kanno, Mitsuhiro; Kurosawa, Naohiro

    2007-01-01

    A code system using a mass-consistent wind field and Gaussian puff model was improved as a new option for atmospheric dispersion research. There are several atmospheric dispersion models for radionuclides. Because different models have both merits and disadvantages, it is necessary to choose the model that is most suitable for the surface conditions of the estimated region, taking into account the calculation time, accuracy, and purpose of the calculations being performed. Some models are less accurate when the topography is complex, so it is important to understand the differences between the models for smooth and complex surfaces. In this study, the performances of the following four models were compared: (1) Gaussian plume model; (2) Gaussian puff model; (3) mass-consistent wind fields and Gaussian puff model, improved in this study from one presented in Aomori Energy Society of Japan, 2005 Fall Meeting, D21; (4) meso-scale meteorological model (RAMS: The Regional Atmospheric Modeling System) and particle-type model (HYPACT: The RAMS Hybrid Particle and Concentration Transport Model) (Reference: ATMET). (author)
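    For reference, model (1) in the comparison, the steady-state Gaussian plume with ground reflection, can be written down directly; the dispersion parameters σy and σz would in practice come from a stability classification at the downwind distance of interest, and the numbers below are illustrative only:

    ```python
    import math

    def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
        """Steady-state Gaussian plume with ground reflection.

        q: release rate (e.g. Bq/s), u: wind speed (m/s),
        sigma_y, sigma_z: dispersion parameters at the downwind distance (m),
        y: crosswind offset (m), z: receptor height (m),
        h: effective release height (m). Returns concentration (Bq/m^3).
        """
        crosswind = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
        vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                    + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
        return q / (2.0 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

    # Ground-level centerline concentration for a 30 m stack release
    c_center = plume_concentration(q=1.0e6, u=5.0, sigma_y=50.0, sigma_z=20.0,
                                   y=0.0, z=0.0, h=30.0)
    ```

    The flat-terrain assumptions baked into this formula are precisely what degrade over complex topography and motivate options (3) and (4).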

  10. Expression and clinical significance of rhubarb on serum amylase and TNF-alpha of rat model of acute pancreatitis.

    Science.gov (United States)

    Zhang, W F; Li, Z T; Fang, J J; Wang, G B; Yu, Y; Liu, Z Q; Wu, Y N; Zheng, S S; Cai, L

    2017-01-01

    The aim of this study was to evaluate the therapeutic effect of rhubarb extract on acute pancreatitis. Ninety-six healthy Sprague Dawley rats, weighing 301±5.12 g, were randomly divided into 4 groups: sham surgery (group A), acute pancreatitis model (group B), acute pancreatitis model with normal saline (group C), and acute pancreatitis model with rhubarb (group D). The levels of serum amylase (AMY) and TNF-α were measured at the 1st, 6th, 12th and 24th hours after modeling, and pancreatic tissue was examined for pathologic changes. Compared to the sham group, the serum AMY and serum tumor necrosis factor (TNF-α) levels were significantly increased in the other groups. Rhubarb reduced the serum AMY and TNF-α levels in rats with acute pancreatitis and reduced the pathological changes of the pancreas and other tissues.

  11. PD-0332991, a CDK4/6 Inhibitor, Significantly Prolongs Survival in a Genetically Engineered Mouse Model of Brainstem Glioma

    Science.gov (United States)

    Barton, Kelly L.; Misuraca, Katherine; Cordero, Francisco; Dobrikova, Elena; Min, Hooney D.; Gromeier, Matthias; Kirsch, David G.; Becher, Oren J.

    2013-01-01

    Diffuse intrinsic pontine glioma (DIPG) is an incurable tumor that arises in the brainstem of children. To date there is not a single approved drug to effectively treat these tumors and thus novel therapies are desperately needed. Recent studies suggest that a significant fraction of these tumors contain alterations in cell cycle regulatory genes including amplification of the D-type cyclins and CDK4/6, and less commonly, loss of Ink4a-ARF leading to aberrant cell proliferation. In this study, we evaluated the therapeutic approach of targeting the cyclin-CDK-Retinoblastoma (Rb) pathway in a genetically engineered PDGF-B-driven brainstem glioma (BSG) mouse model. We found that PD-0332991 (PD), a CDK4/6 inhibitor, induces cell-cycle arrest in our PDGF-B; Ink4a-ARF deficient model both in vitro and in vivo. By contrast, the PDGF-B; p53 deficient model was mostly resistant to treatment with PD. We noted that a 7-day treatment course with PD significantly prolonged survival by 12% in the PDGF-B; Ink4a-ARF deficient BSG model. Furthermore, a single dose of 10 Gy radiation therapy (RT) followed by 7 days of treatment with PD increased the survival by 19% in comparison to RT alone. These findings provide the rationale for evaluating PD in children with Ink4a-ARF deficient gliomas. PMID:24098593

  12. PD-0332991, a CDK4/6 inhibitor, significantly prolongs survival in a genetically engineered mouse model of brainstem glioma.

    Directory of Open Access Journals (Sweden)

    Kelly L Barton

    Full Text Available Diffuse intrinsic pontine glioma (DIPG) is an incurable tumor that arises in the brainstem of children. To date there is not a single approved drug to effectively treat these tumors and thus novel therapies are desperately needed. Recent studies suggest that a significant fraction of these tumors contain alterations in cell cycle regulatory genes including amplification of the D-type cyclins and CDK4/6, and less commonly, loss of Ink4a-ARF leading to aberrant cell proliferation. In this study, we evaluated the therapeutic approach of targeting the cyclin-CDK-Retinoblastoma (Rb) pathway in a genetically engineered PDGF-B-driven brainstem glioma (BSG) mouse model. We found that PD-0332991 (PD), a CDK4/6 inhibitor, induces cell-cycle arrest in our PDGF-B; Ink4a-ARF deficient model both in vitro and in vivo. By contrast, the PDGF-B; p53 deficient model was mostly resistant to treatment with PD. We noted that a 7-day treatment course with PD significantly prolonged survival by 12% in the PDGF-B; Ink4a-ARF deficient BSG model. Furthermore, a single dose of 10 Gy radiation therapy (RT) followed by 7 days of treatment with PD increased the survival by 19% in comparison to RT alone. These findings provide the rationale for evaluating PD in children with Ink4a-ARF deficient gliomas.

  13. A performance model of the OSI communication architecture

    Science.gov (United States)

    Kritzinger, P. S.

    1986-06-01

    An analytical model aimed at predicting the performance of software implementations built according to the OSI basic reference model is proposed. The model uses the peer protocol standard of a layer as the reference description of an implementation of that layer. The model is basically a closed multiclass multichain queueing network with a processor-sharing center, modeling process contention at the processor, and a delay center, modeling time spent waiting for responses from the corresponding peer processes. Each individual transition of the protocol constitutes a different class, and each layer of the architecture forms a closed chain. Performance statistics include queue lengths and response times at the processor as a function of processor speed and the number of open connections. It is shown how to reduce the model should the protocol state space become very large. Numerical results based upon the derived formulas are given.
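    Solving the closed multiclass network requires an MVA-style algorithm, but the behaviour of its processor-sharing centre is easy to illustrate with the open M/M/1-PS special case, where the mean response time is E[T] = 1/(μ − λ); this is a stand-in for intuition, not the paper's full model:

    ```python
    def ps_mean_response(arrival_rate, service_rate):
        """Mean response time at an M/M/1 processor-sharing centre:
        E[T] = 1 / (mu - lambda), valid only while utilization < 1."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable: utilization >= 1")
        return 1.0 / (service_rate - arrival_rate)

    # Pushing the protocol processor toward saturation (more open
    # connections generating transitions) inflates response time sharply.
    t_light = ps_mean_response(arrival_rate=2.0, service_rate=10.0)  # 0.125
    t_heavy = ps_mean_response(arrival_rate=8.0, service_rate=10.0)  # 0.5
    ```

    The same nonlinearity is what the paper's statistics capture as a function of processor speed and the number of open connections.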

  14. Significance of hypoxia for tumor response to radiation: Mathematical modeling and analysis of local control and clonogenic assay data

    International Nuclear Information System (INIS)

    Buffa, Francesca Meteora

    2002-01-01

    Various hypotheses for radiation local tumor control probability (ltcp) were modeled and assessed against local tumor control (LTC) and clonogenic assay (CA) data. For head-and-neck tumors receiving low-LET external-beam irradiation, the best model was a Poisson ltcp accounting for cell repopulation, hypoxia, and tumor volume dependence of radiosensitivity (α). This confirmed that hypoxia limits LTC of these tumors, with the magnitude depending upon tumor volume. However, LTC of cervical carcinoma receiving external-beam irradiation and brachytherapy was well described by a model not accounting for hypoxia. Furthermore, when the survival fraction at 2 Gy (SF2) and colony forming efficiency (CFE) measured for individual patients were incorporated into this model, very good correlation with LTC was seen (p=0.0004). After multivariate analysis, this model was the best independent prognostic factor for LTC and patient survival. Furthermore, no difference in prediction was seen when a model based on birth-and-death stochastic theory was used. Two forms of hypoxia are known to be present in tumors: diffusion-limited, chronic hypoxia (CH), and acute, transient hypoxia (TH). A modeling study on WiDr multicellular spheroids showed that the CH effect on LTC is significantly lower than expected from CA. This could arise from the energy charge depletion accompanying CH, which reduces the number of proliferating clonogenic cells that can repair radiation damage and thus mitigates the radioresistance of CH cells. This suggests that TH, rather than CH, may be the limiting factor for in vivo LTC. Finally, by computing ltcp using Monte Carlo calculated dose distributions, it was shown that Monte Carlo statistical noise can cause an underestimation of ltcp, with the magnitude depending upon the model hypotheses.
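    The Poisson ltcp referred to throughout has, in its simplest form (no repopulation, hypoxia, or volume terms), a one-line expression: the probability that none of N clonogens survives a total dose D delivered in d-Gy fractions, given the per-2-Gy survival fraction SF2. A sketch with illustrative numbers, not values from the thesis:

    ```python
    import math

    def poisson_tcp(n_clonogens, sf2, dose_gy, dose_per_fraction_gy=2.0):
        """Simplest Poisson tumour control probability:
        TCP = exp(-N * SF2**(D / d)), the probability that zero
        clonogenic cells survive D Gy given in d-Gy fractions."""
        surviving = n_clonogens * sf2 ** (dose_gy / dose_per_fraction_gy)
        return math.exp(-surviving)

    # 10^7 clonogens, SF2 = 0.5, 60 Gy in 2 Gy fractions
    tcp = poisson_tcp(n_clonogens=1e7, sf2=0.5, dose_gy=60.0)
    ```

    The models assessed in the thesis extend exactly this expression with repopulation, hypoxic compartments, and inter-patient SF2/CFE variability, which is why noisy Monte Carlo dose inputs bias the (highly nonlinear) result.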

  15. DIFFERENCES IN WATER VAPOR RADIATIVE TRANSFER AMONG 1D MODELS CAN SIGNIFICANTLY AFFECT THE INNER EDGE OF THE HABITABLE ZONE

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jun; Wang, Yuwei [Department of Atmospheric and Oceanic Sciences, School of Physics, Peking University, Beijing (China); Leconte, Jérémy; Forget, François [Laboratoire de Météorologie Dynamique, Institut Pierre Simon Laplace, CNRS, Paris (France); Wolf, Eric T. [Laboratory for Atmospheric and Space Physics, University of Colorado in Boulder, CO (United States); Goldblatt, Colin [School of Earth and Ocean Sciences, University of Victoria, Victoria, BC (Canada); Feldl, Nicole [Division of Geological and Planetary Sciences, California Institute of Technology, CA (United States); Merlis, Timothy [Department of Atmospheric and Oceanic Sciences at McGill University, Montréal (Canada); Koll, Daniel D. B.; Ding, Feng; Abbot, Dorian S., E-mail: junyang@pku.edu.cn, E-mail: abbot@uchicago.edu [Department of the Geophysical Sciences, University of Chicago, Chicago, IL (United States)

    2016-08-01

    An accurate estimate of the inner edge of the habitable zone is critical for determining which exoplanets are potentially habitable and for designing future telescopes to observe them. Here, we explore differences in estimating the inner edge among seven one-dimensional radiative transfer models: two line-by-line codes (SMART and LBLRTM) as well as five band codes (CAM3, CAM4-Wolf, LMDG, SBDART, and AM2) that are currently being used in global climate models. We compare radiative fluxes and spectra in clear-sky conditions around G and M stars, with fixed moist adiabatic profiles for surface temperatures from 250 to 360 K. We find that divergences among the models arise mainly from large uncertainties in water vapor absorption in the window region (10 μm) and in the region between 0.2 and 1.5 μm. Differences in outgoing longwave radiation increase with surface temperature and reach 10–20 W m⁻²; differences in shortwave reach up to 60 W m⁻², especially at the surface and in the troposphere, and are larger for an M-dwarf spectrum than a solar spectrum. Differences between the two line-by-line models are significant, although smaller than among the band models. Our results imply that the uncertainty in estimating the insolation threshold of the inner edge (the runaway greenhouse limit) due only to clear-sky radiative transfer is ≈10% of modern Earth’s solar constant (i.e., ≈34 W m⁻² in global mean) among band models and ≈3% between the two line-by-line models. These comparisons show that future work is needed that focuses on improving water vapor absorption coefficients in both shortwave and longwave, as well as on increasing the resolution of stellar spectra in broadband models.

  16. Performance evaluation:= (process algebra + model checking) x Markov chains

    NARCIS (Netherlands)

    Hermanns, H.; Larsen, K.G.; Nielsen, Mogens; Katoen, Joost P.

    2001-01-01

    Markov chains are widely used in practice to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This tutorial paper shows how successful model specification and analysis techniques from concurrency theory

  17. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  18. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
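    As a minimal instance of the thermodynamic modeling the paper reviews, the ideal Brayton cycle's thermal efficiency depends only on the compressor pressure ratio and the ratio of specific heats; this textbook relation is a starting point, not the component-map-based simulation the paper actually builds:

    ```python
    def brayton_efficiency(pressure_ratio, gamma=1.4):
        """Ideal Brayton-cycle thermal efficiency for a cold-air-standard
        analysis: eta = 1 - r**(-(gamma - 1) / gamma), with r the
        compressor pressure ratio and gamma = cp/cv (1.4 for air)."""
        return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

    # A pressure ratio of 10 gives roughly 48% ideal efficiency
    eta = brayton_efficiency(10.0)
    ```

    Real engine models replace each ideal leg of this cycle with component performance maps and design-point scaling, which is where the techniques highlighted in the paper come in.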

  19. Testing algorithms for a passenger train braking performance model.

    Science.gov (United States)

    2011-09-01

    The Federal Railroad Administration's Office of Research and Development funded a project to establish a performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  20. An Efficient Framework Model for Optimizing Routing Performance in VANETs

    Science.gov (United States)

    Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala

    2018-01-01

    Routing in Vehicular Ad hoc Networks (VANETs) is complicated by their highly dynamic mobility. The efficiency of a routing protocol is influenced by a number of factors such as network density, bandwidth constraints, traffic load, and mobility patterns, which result in frequent changes in network topology. Therefore, Quality of Service (QoS) is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of the network resources to reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a simulation network stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted cost of the factors into a single value, and an optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED). PMID:29462884
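    The function stage's weighted-cost aggregation can be sketched generically; the metric names and weights below are hypothetical, not the paper's:

    ```python
    def weighted_cost(metrics, weights):
        """Aggregate normalized per-factor costs (each in [0, 1], lower is
        better) into a single scalar via a weight-normalized sum;
        candidate routing configurations are then ranked by this value."""
        assert set(metrics) == set(weights), "every metric needs a weight"
        total_w = sum(weights.values())
        return sum(weights[k] * metrics[k] for k in metrics) / total_w

    # Hypothetical normalized costs for one configuration, with delay
    # weighted twice as heavily as loss and routing load
    cost = weighted_cost(
        {"delay": 0.2, "loss": 0.1, "routing_load": 0.4},
        {"delay": 2.0, "loss": 1.0, "routing_load": 1.0},
    )
    ```

    The optimization stage then simply selects the configuration with the lowest aggregated cost across the simulated urban scenarios.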

  1. Dynamic vehicle model for handling performance using experimental data

    Directory of Open Access Journals (Sweden)

    SangDo Na

    2015-11-01

    Full Text Available An analytical vehicle model is essential for the development of vehicle design and performance. Various vehicle models have different complexities, assumptions and limitations depending on the type of vehicle analysis. An accurate full vehicle model is essential to represent the behaviour of the vehicle in order to estimate vehicle dynamic system performance such as ride comfort and handling. An experimental vehicle model is developed in this article, which employs experimental kinematic and compliance data measured between the wheel and chassis. From these data, a vehicle model that includes dynamic effects due to vehicle geometry changes has been developed. The experimental vehicle model was validated using an instrumented experimental vehicle and data such as a step-change steering input. This article shows a process to develop and validate an experimental vehicle model to enhance the accuracy of handling performance, which derives from a precise suspension model measured from experimental vehicle data. The experimental force data obtained from a suspension parameter measuring device are employed for precise modelling of the steering and handling response. The steering system is modelled by a lumped model, with stiffness coefficients defined and identified by comparison against the steering stiffness obtained from the measured data. The outputs, specifically the yaw rate and lateral acceleration of the vehicle, are verified against experimental results.

  2. Evaluation Model of Organizational Performance for Small and Medium Enterprises

    Directory of Open Access Journals (Sweden)

    Carlos Augusto Passos

    2014-12-01

    Full Text Available In the 1980s, many tools for evaluating organizational performance were created. However, most of them are useful only to large companies and do not foster results in small and medium-sized enterprises (SMEs). In light of this fact, this article aims at proposing an Organizational Performance Assessment (OPA) model which is flexible and adaptable to the reality of SMEs, based on the theoretical framework of various models and on comparisons against three major authors' criteria for evaluating OPA models. The research has a descriptive and exploratory character, with a qualitative nature. The MADE-O model, according to the criteria described in the bibliography, is the one that best fits the needs of SMEs, and it was used as a baseline for the model proposed in this study, with adaptations pertaining to the BSC model. The model, called the Overall Performance Indicator – Environment (IDG-E), has as its main differential, in addition to the base of the models mentioned above, the assessment of the external and internal environment weighted in modules of OPA. As SMEs are characterized by having few processes and people, the small number of performance indicators is another positive aspect. Submitted to evaluation against the criteria subscribed by the authors, the model proved to be quite feasible for use in SMEs.

  3. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Through research on the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  4. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance on information technology adoption. The new model to assess individual performance was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts might lack theoretical assumptions. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  5. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  6. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
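    As a toy stand-in for the mixed integer linear model (which would be handed to CPLEX, LINDO, or Excel's solver), a single-product, single-period shipment plan can even be brute-forced at this scale; the supplies, demands, and unit costs below are illustrative, not from the paper:

    ```python
    from itertools import product

    def min_cost_plan(supply, demand, cost):
        """Exhaustively search integer shipment plans for a tiny
        transportation network, returning (total_cost, plan) with
        plan[i][j] = units shipped from facility i to customer j."""
        m, n = len(supply), len(demand)
        ranges = [range(min(supply[i], demand[j]) + 1)
                  for i in range(m) for j in range(n)]
        best = (float("inf"), None)
        for x in product(*ranges):
            ship = [x[i * n:(i + 1) * n] for i in range(m)]
            if any(sum(ship[i]) > supply[i] for i in range(m)):
                continue  # facility capacity violated
            if any(sum(ship[i][j] for i in range(m)) != demand[j] for j in range(n)):
                continue  # customer demand not met exactly
            total = sum(cost[i][j] * ship[i][j]
                        for i in range(m) for j in range(n))
            if total < best[0]:
                best = (total, ship)
        return best

    # Two facilities (supply 5 each), two customers (demand 4 and 3)
    total, plan = min_cost_plan([5, 5], [4, 3], [[2, 5], [4, 1]])
    ```

    A real multi-period, multi-product instance explodes combinatorially, which is exactly why the paper formulates it as a mixed integer linear program for a commercial solver instead.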

  7. Team performance modeling for HRA in dynamic situations

    International Nuclear Information System (INIS)

    Shu Yufei; Furuta, Kazuo; Kondo, Shunsuke

    2002-01-01

    This paper proposes a team behavior network model that can simulate and analyze the response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models, which describe the context of team performance: a task model, an event model, a team model and a human-machine interface model. Each operator demonstrates aspects of his/her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. Individual human factors, which determine the basis of communication and interaction between individuals, and the cognitive processes of an operator, such as information acquisition, state recognition, decision-making and action execution during the development of an event scenario, are modeled. A case of feed-and-bleed operation in a pressurized water reactor under an emergency situation was studied, and the result was compared with an experiment to check the validity of the proposed model.

  8. Toward a Subjective Measurement Model for Firm Performance

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2012-05-01

    Full Text Available Firm performance is a relevant construct in strategic management research and frequently used as a dependent variable. Despite this relevance, there is hardly a consensus about its definition, dimensionality and measurement, which limits advances in research and understanding of the concept. This article proposes and tests a measurement model for firm performance, based on subjective indicators. The model is grounded in stakeholder theory and a review of empirical articles. Confirmatory Factor Analyses, using data from 116 Brazilian senior managers, were used to test its fit and psychometric properties. The final model had six first-order dimensions: profitability, growth, customer satisfaction, employee satisfaction, social performance, and environmental performance. A second-order financial performance construct, influencing growth and profitability, correlated with the first-order intercorrelated, non-financial dimensions. Results suggest dimensions cannot be used interchangeably, since they represent different aspects of firm performance, and corroborate the idea that stakeholders have different demands that need to be managed independently. Researchers and practitioners may use the model to fully treat performance in empirical studies and to understand the impact of strategies on multiple performance facets.

  9. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  10. A Model for Effective Performance in the Indonesian Navy.

    Science.gov (United States)

    1987-06-01

    NAVY LEADERSHIP AND MANAGEMENT COMPETENCY MODEL; MCBER COMPETENT MANAGERS MODEL ... leadership and managerial skills which emphasize effective performance of the officers in managing the human resources under their command and supervision. By effective performance we mean officers who not only know about management theories, but who possess the characteristics, knowledge, skill, and

  11. Discussion of various models related to cloud performance

    OpenAIRE

    Kande, Chaitanya Krishna

    2015-01-01

    This paper discusses the various models related to cloud computing. Knowing the metrics related to infrastructure is very critical to enhance the performance of cloud services. Various metrics related to clouds such as pageview response time, admission control and enforcing elasticity to cloud infrastructure are very crucial in analyzing the characteristics of the cloud to enhance the cloud performance.

  12. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance level introduced by the change of a business model. The selected case is a small family business undergoing substantial changes in reflection of the structural changes of its markets. The authors used the concept of the business model to describe value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the introduced change they demonstrate the role of the business model as the performance differentiator. This is illustrated with the use of the business model canvas constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by a financial analysis of the business conducted over the period 2006–2012, which demonstrates the changes in performance (ROA, ROE and ROS had their lowest levels before the change of the business model was introduced and grew after the introduction of the change, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  13. Conceptual Modeling of Performance Indicators of Higher Education Institutions

    OpenAIRE

    Kahveci, Tuba Canvar; Taşkın, Harun; Toklu, Merve Cengiz

    2013-01-01

    Measuring and analyzing any type of organization is carried out by different actors within the organization. The performance indicators of a performance management system increase in number with the products or services of the organization, and these indicators should be defined for all levels of the organization. All of these characteristics make the performance evaluation process more complex for organizations. In order to manage this complexity, the process should be modeled at the beginnin...

  14. Gold-standard performance for 2D hydrodynamic modeling

    Science.gov (United States)

    Pasternack, G. B.; MacVicar, B. J.

    2013-12-01

    Two-dimensional, depth-averaged hydrodynamic (2D) models are emerging as an increasingly useful tool for environmental water resources engineering. One of the remaining technical hurdles to the wider adoption and acceptance of 2D modeling is the lack of standards for 2D model performance evaluation when the riverbed undulates, causing lateral flow divergence and convergence. The goal of this study was to establish a gold-standard that quantifies the upper limit of model performance for 2D models of undulating riverbeds when topography is perfectly known and surface roughness is well constrained. A review was conducted of published model performance metrics and the value ranges exhibited by models thus far for each one. Typically predicted velocity differs from observed by 20 to 30 % and the coefficient of determination between the two ranges from 0.5 to 0.8, though there tends to be a bias toward overpredicting low velocity and underpredicting high velocity. To establish a gold standard as to the best performance possible for a 2D model of an undulating bed, two straight, rectangular-walled flume experiments were done with no bed slope and only different bed undulations and water surface slopes. One flume tested model performance in the presence of a porous, homogenous gravel bed with a long flat section, then a linear slope down to a flat pool bottom, and then the same linear slope back up to the flat bed. The other flume had a PVC plastic solid bed with a long flat section followed by a sequence of five identical riffle-pool pairs in close proximity, so it tested model performance given frequent undulations. Detailed water surface elevation and velocity measurements were made for both flumes. Comparing predicted versus observed velocity magnitude for 3 discharges with the gravel-bed flume and 1 discharge for the PVC-bed flume, the coefficient of determination ranged from 0.952 to 0.987 and the slope for the regression line was 0.957 to 1.02. Unsigned velocity
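
    The two headline metrics of this record, the coefficient of determination and the slope of the predicted-versus-observed regression, can be computed directly. Below is a minimal sketch on hypothetical velocity data (invented for illustration, not the flume measurements from the study):

    ```python
    def r_squared_and_slope(observed, predicted):
        """Coefficient of determination (squared Pearson r) and the
        least-squares slope of predicted regressed on observed."""
        n = len(observed)
        mean_o = sum(observed) / n
        mean_p = sum(predicted) / n
        cov = sum((o - mean_o) * (p - mean_p)
                  for o, p in zip(observed, predicted))
        var_o = sum((o - mean_o) ** 2 for o in observed)
        var_p = sum((p - mean_p) ** 2 for p in predicted)
        r2 = cov ** 2 / (var_o * var_p)   # squared correlation
        slope = cov / var_o               # regression slope
        return r2, slope

    # Hypothetical velocity magnitudes in m/s.
    obs = [0.10, 0.25, 0.40, 0.55, 0.70, 0.90]
    pred = [0.12, 0.24, 0.42, 0.52, 0.71, 0.88]
    r2, slope = r_squared_and_slope(obs, pred)
    ```

    A slope near 1 with high r2, as reported for the flume experiments, indicates neither systematic over- nor underprediction across the velocity range.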

  15. Improving winter leaf area index estimation in evergreen coniferous forests and its significance in carbon and water fluxes modeling

    Science.gov (United States)

    Wang, R.; Chen, J. M.; Luo, X.

    2016-12-01

    Modeling of carbon and water fluxes at the continental and global scales requires remotely sensed LAI as an input. For evergreen coniferous forests (ENF), severely underestimated winter LAI has been an issue for most available remote sensing products, which can cause a negative bias in the modeling of Gross Primary Productivity (GPP) and evapotranspiration (ET). Unlike deciduous trees, which shed all their leaves in winter, conifers retain part of their needles, and the proportion of retained needles depends on the needle longevity. In this work, the Boreal Ecosystem Productivity Simulator (BEPS) was used to model GPP and ET at eight FLUXNET Canada ENF sites. Two sets of LAI were used as the model inputs: the 250m 10-day University of Toronto (U of T) LAI product Version 2, and a corrected LAI based on the U of T LAI product and the needle longevity of the corresponding tree species at individual sites. Validating modeled daily GPP (gC/m2) against site measurements, the mean RMSE over the eight sites decreases from 1.85 to 1.15, and the bias changes from -0.99 to -0.19. For daily ET (mm), the mean RMSE decreases from 0.63 to 0.33, and the bias changes from -0.31 to -0.16. Most of the improvements occur at the beginning and the end of the growing season, when the correction of LAI is large and temperature is still suitable for photosynthesis and transpiration. For the dormant season, the improvement in ET simulation mostly comes from the increased interception of precipitation brought by the elevated LAI during that time. The results indicate that model performance can be improved by the application of the corrected LAI. Improving the winter RS LAI can make a large impact on the land surface carbon and energy budget.

  16. The Social Responsibility Performance Outcomes Model: Building Socially Responsible Companies through Performance Improvement Outcomes.

    Science.gov (United States)

    Hatcher, Tim

    2000-01-01

    Considers the role of performance improvement professionals and human resources development professionals in helping organizations realize the ethical and financial power of corporate social responsibility. Explains the social responsibility performance outcomes model, which incorporates the concepts of societal needs and outcomes. (LRW)

  17. Faculty Performance Evaluation: The CIPP-SAPS Model.

    Science.gov (United States)

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  18. Technical performance of percutaneous and laminectomy leads analyzed by modeling

    NARCIS (Netherlands)

    Manola, L.; Holsheimer, J.

    2004-01-01

    The objective of this study was to compare the technical performance of laminectomy and percutaneous spinal cord stimulation leads with similar contact spacing by computer modeling. Monopolar and tripolar (guarded cathode) stimulation with both lead types in a low-thoracic spine model was simulated

  19. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    CHIDOZIE CHUKWUEMEKA NWOBI-OKOYE

    2017-11-16

    Nov 16, 2017 ... In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of some ... Every node i in this layer is an adaptive node with a node function ... spectral analysis and parameter optimization using genetic algorithm, the values of v10 and ...

  20. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  1. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  2. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  3. Comparison of the performance of net radiation calculation models

    DEFF Research Database (Denmark)

    Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.

    2009-01-01

    Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. Daily ... The performance of the empirical models was nearly identical at all sites. Since the empirical models were easier to use and simpler to calibrate than the physically based models, the results indicate that the empirical models can be used as a good substitute for the physically based ones when available ...

  4. Review of Methods for Buildings Energy Performance Modelling

    Science.gov (United States)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. It also provides a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling buildings' energy performance. The European EPBD directive obliges the implementation of an energy certification procedure, which gives insight into buildings' energy performance via existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper were developed using data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper; the majority of buildings in the database have already undergone some form of partial retrofitting, such as replacement of windows or installation of thermal insulation, but still have poor energy performance. The case study presented in this paper uses an energy certificate database of residential units in Croatia (over 400 buildings) to determine the dependence between building energy performance and the database variables using statistical dependence tests. Building energy performance in the database is expressed as a building energy efficiency rating (from A+ to G), based on the specific annual energy need for heating under referential climatic data [kWh/(m2a)]. Independent variables in the database are the surface areas and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age, and year of reconstruction. The research results give an insight into the possibilities of the methods used for modelling buildings' energy performance, together with an analysis of the dependencies between building energy performance as the dependent variable and the independent variables from the database. The presented results could be used for development of new building energy performance

  5. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  6. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short time resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated with different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland.
    The models were also evaluated when applied to different rainfall classes depending on the season first and on the
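
    The baseline constant model against which the others are compared simply splits each rainy hour uniformly into six 10mn amounts. Below is a minimal sketch, with a hypothetical weighted variant (the weights are invented for illustration) showing the mass-conservation constraint that all disaggregation models share:

    ```python
    def constant_disaggregation(hourly_mm):
        """Baseline 'constant' model: split an hourly rainfall depth
        into six equal 10-minute amounts."""
        return [hourly_mm / 6.0] * 6

    def weighted_disaggregation(hourly_mm, weights):
        """Generic mass-conserving scheme: distribute the hourly depth
        over six intervals according to non-negative weights."""
        total = sum(weights)
        return [hourly_mm * w / total for w in weights]

    hour = 7.2  # a rainy hour above the 5 mm threshold used in the study
    flat = constant_disaggregation(hour)
    peaked = weighted_disaggregation(hour, [1, 2, 4, 3, 1, 1])

    # Both schemes conserve the hourly total.
    assert abs(sum(flat) - hour) < 1e-9
    assert abs(sum(peaked) - hour) < 1e-9
    ```

    The evaluation criteria in the abstract (variance, distribution of 10mn amounts, extremes) then discriminate between such schemes, since all of them reproduce the hourly totals by construction.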

  7. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  8. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  9. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.

  10. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
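
    The idea behind the record above, continually adapting a model parameter as new measurements arrive, can be sketched with a scalar linear Kalman filter on toy numbers (this is an illustrative simplification, not the extended Kalman filter or the UMP state equations from the study):

    ```python
    def kalman_update(x, P, z, R, Q):
        """One predict+update step of a scalar Kalman filter:
        x  current parameter estimate,  P  its variance,
        z  new noisy measurement,  R  measurement noise,
        Q  process noise (parameter drift between samples)."""
        P = P + Q                  # predict: allow the parameter to drift
        K = P / (P + R)            # Kalman gain
        x = x + K * (z - x)        # correct the estimate toward z
        P = (1.0 - K) * P          # uncertainty shrinks after the update
        return x, P

    # Toy illustration: recover a subject's 'true' parameter of 5.0
    # from noisy observations, starting from a poor population prior.
    x, P = 0.0, 10.0
    for z in [4.6, 5.3, 4.9, 5.2, 4.8, 5.1]:
        x, P = kalman_update(x, P, z, R=0.5, Q=0.01)
    ```

    As in the study, the estimate progressively approaches the value a post-hoc fit over all data would produce, and the variance P quantifies how individualized the model has become.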

  11. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using logistic regression, as well as probit regression. Probit regression was found to yield superior fits and showed that models with not only 2nd order parameter interactions, but also 3rd order parameter interactions performed the best.
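
    The logistic and probit links mentioned above map a linear combination of image-quality factors to a detection probability. A minimal sketch follows; the predictor names and coefficients are illustrative assumptions, not the values fitted to the NIST perception data:

    ```python
    import math

    def logistic_prob(eta):
        """Detection probability under a logistic link."""
        return 1.0 / (1.0 + math.exp(-eta))

    def probit_prob(eta):
        """Detection probability under a probit link
        (standard normal CDF, via the error function)."""
        return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

    def linear_predictor(contrast, noise, b0=-2.0, b_c=4.0, b_n=-1.5):
        """Hypothetical linear predictor from two image-quality factors."""
        return b0 + b_c * contrast + b_n * noise

    eta = linear_predictor(contrast=0.8, noise=0.2)
    p_logit = logistic_prob(eta)
    p_probit = probit_prob(eta)
    ```

    Both links are monotone S-curves through 0.5 at eta = 0; the probit's lighter tails are one reason it can fit threshold-like perception data better, as the record reports.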

  12. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating ever more functionality onto a single chip, forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so ... an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework ... is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project ...

  13. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
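
    The form-free mutual-information criterion used in the record above to detect dependencies can be illustrated on discrete data. Below is a minimal sketch with hypothetical performance factors (the variable names are invented for illustration and are not from the paper):

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """Empirical mutual information I(X;Y) in bits: a form-free
        measure of dependence between two discrete variables."""
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        mi = 0.0
        for (x, y), c in pxy.items():
            # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
            mi += (c / n) * math.log2((c * n) / (px[x] * py[y]))
        return mi

    # Hypothetical binary performance factors.
    cache_misses = [0, 1, 0, 1, 0, 1, 0, 1]
    runtime_high = [0, 1, 0, 1, 0, 1, 0, 1]   # fully determined by cache_misses
    input_size   = [0, 0, 1, 1, 0, 0, 1, 1]   # independent of cache_misses

    dep = mutual_information(cache_misses, runtime_high)   # 1 bit
    ind = mutual_information(cache_misses, input_size)     # 0 bits
    ```

    Unlike a correlation coefficient, this measure flags nonlinear dependencies as well, which is what makes it useful for screening candidate direct relations before structure learning.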

  14. Impact of reactive settler models on simulated WWTP performance.

    Science.gov (United States)

    Gernaey, K V; Jeppsson, U; Batstone, D J; Ingildsen, P

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takács settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takács settler. The second is a fully reactive ASM1 Takács settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  15. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  16. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958 ... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic ... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic ...

  17. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  18. Evaluating the significance of paleophylogeographic species distribution models in reconstructing quaternary range-shifts of nearctic chelonians.

    Directory of Open Access Journals (Sweden)

    Dennis Rödder

Full Text Available The climatic cycles of the Quaternary, during which global mean annual temperatures have regularly changed by 5-10°C, provide a special opportunity for studying the rate, magnitude, and effects of geographic responses to changing climates. During the Quaternary, high- and mid-latitude species were extirpated from regions that were covered by ice or otherwise became unsuitable, persisting in refugial retreats where the environment was compatible with their tolerances. In this study we combine modern geographic range data, phylogeny, Pleistocene paleoclimatic models, and isotopic records of changes in global mean annual temperature, to produce a temporally continuous model of geographic changes in potential habitat for 59 species of North American turtles over the past 320 Ka (three full glacial-interglacial cycles). These paleophylogeographic models indicate the areas where past climates were compatible with the modern ranges of the species and serve as hypotheses for how their geographic ranges would have changed in response to Quaternary climate cycles. We test these hypotheses against physiological, genetic, taxonomic and fossil evidence, and we then use them to measure the effects of Quaternary climate cycles on species distributions. Patterns of range expansion, contraction, and fragmentation in the models are strongly congruent with (i) phylogeographic differentiation; (ii) morphological variation; (iii) physiological tolerances; and (iv) intraspecific genetic variability. Modern species with significant interspecific differentiation have geographic ranges that strongly fluctuated and repeatedly fragmented throughout the Quaternary. Modern species with low genetic diversity have geographic distributions that were highly variable and at times exceedingly small in the past. Our results reveal the potential for paleophylogeographic models to (i) reconstruct past geographic range modifications, (ii) identify geographic processes that result in

  19. PERFORMANCE EVALUATION OF 3D MODELING SOFTWARE FOR UAV PHOTOGRAMMETRY

    OpenAIRE

    H. Yanagi; H. Yanagi; H. Chikatsu

    2016-01-01

UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black box algori...

  20. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
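The qualitative behavior this abstract reports, speedup that saturates as communication overhead overtakes the shrinking per-processor work, can be sketched with a toy fixed-size performance model. The work, latency, and bandwidth parameters below are illustrative assumptions, not values fitted to the iPSC/860 or the Sequent Balance 8000.

```python
# Toy fixed-size parallel performance model:
#   T(p) = W/p + (alpha + beta * m) * p
# where W is total work, alpha is message latency, beta is per-word
# transfer cost, m the message size, and communication is assumed to
# grow linearly with the processor count p (a deliberately crude model).

def run_time(p, work=1.0, alpha=1e-4, beta=1e-6, words=1000):
    comm = (alpha + beta * words) * p  # communication cost grows with p
    return work / p + comm

def speedup(p, **kw):
    return run_time(1, **kw) / run_time(p, **kw)

def efficiency(p, **kw):
    return speedup(p, **kw) / p

for p in (1, 4, 16, 64):
    print(p, round(speedup(p), 2), round(efficiency(p), 2))
```

With these numbers, efficiency is high for a handful of processors but collapses at large processor counts, mirroring the message-passing behavior the abstract describes for small versus massively parallel machines.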

  1. Product Data Model for Performance-driven Design

    Science.gov (United States)

    Hu, Guang-Zhong; Xu, Xin-Jian; Xiao, Shou-Ne; Yang, Guang-Wu; Pu, Fan

    2017-09-01

When designing large-sized complex machinery products, the design focus is always on the overall performance; however, no design theory or method based on performance-driven principles exists. In view of this deficiency of the existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional design theory of "Requirement-Function-Structure" to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established by using performance indices and the mapping relationships between them and the other four domains. When the product data model is applied to high-speed train design, combining existing research results and relevant standards, the corresponding data model and its structure involving the five domains of high-speed trains are established, which can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly achieving a high-speed train scheme design. The five domains provide a reference for the design specification and evaluation criteria of high-speed trains and a new idea for the train's parameter design.

  2. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  3. How Significant is the Slope of the Sea-side Boundary for Modelling Seawater Intrusion in Coastal Aquifers?

    Science.gov (United States)

Walther, Marc; Graf, Thomas; Kolditz, Olaf; Liedl, Rudolf; Post, Vincent

    2017-04-01

A large number of people live in coastal areas using the available water resources, which in (semi-)arid regions are often taken from groundwater resources as the only sufficient source. Compared to surface water, these usually provide a safe water supply due to the remediation and retention capabilities of the subsurface, their high yield, and potentially longer-term stability. With a water withdrawal from a coastal aquifer, coastal water management, however, has to ensure that seawater intrusion is retained in order to keep the water salinity at an acceptable level for all water users (e.g. agriculture, industry, households). Besides monitoring of water levels and saline intrusion, it has become a common practice to use numerical modeling for evaluating the coastal water resources and projecting future scenarios. When applying a model, it is necessary for the simplifications implied during the conceptualization of the setup to include the relevant processes (here variable-density flow and mass transport) and sensitive parameters (for a steady state commonly hydraulic conductivity, density ratio, dispersivity). Additionally, the model's boundary conditions are essential to the simulation results. In order to reduce the number of elements, and thus the computational burden, one simplification that is made in most regional-scale saltwater intrusion applications is to represent the sea-side boundary with a vertical geometry, contrary to the natural conditions, which usually show a very shallow descent of the interface between the aquifer and the open seawater. We use the scientific open-source modeling toolbox OpenGeoSys [1] to quantify the influence of this simplification on the saline intrusion, submarine groundwater discharge, and groundwater residence times. Using an ensemble of different shelf shapes for a steady state setup, we identified a significant dependency of saline intrusion length on the geometric parameters of the sea-side boundary.
Results show that

  4. Genome-wide significant localization for working and spatial memory: Identifying genes for psychosis using models of cognition.

    Science.gov (United States)

    Knowles, Emma E M; Carless, Melanie A; de Almeida, Marcio A A; Curran, Joanne E; McKay, D Reese; Sprooten, Emma; Dyer, Thomas D; Göring, Harald H; Olvera, Rene; Fox, Peter; Almasy, Laura; Duggirala, Ravi; Kent, Jack W; Blangero, John; Glahn, David C

    2014-01-01

It is well established that risk for developing psychosis is largely mediated by the influence of genes, but identifying precisely which genes underlie that risk has been problematic. Focusing on endophenotypes, rather than illness risk, is one solution to this problem. Impaired cognition is a well-established endophenotype of psychosis. Here we aimed to characterize the genetic architecture of cognition using phenotypically detailed models as opposed to relying on general IQ or individual neuropsychological measures. In so doing we hoped to identify genes that mediate cognitive ability, which might also contribute to psychosis risk. Hierarchical factor models of genetically clustered cognitive traits were subjected to linkage analysis followed by QTL region-specific association analyses in a sample of 1,269 Mexican American individuals from extended pedigrees. We identified four genome-wide significant QTLs, two for working and two for spatial memory, and a number of plausible and interesting candidate genes. The creation of detailed models of cognition seemingly enhanced the power to detect genetic effects on cognition and provided a number of possible candidate genes for psychosis. © 2013 Wiley Periodicals, Inc.

  5. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and the more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a survey mailed to parents (n = 1,030). The tailored design method guided survey development and data collection. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children. 
Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable
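The difference between the two competing models in this record can be illustrated on synthetic data: when perceived quality is driven by an importance-weighted product of attribute ratings, a performance-only regression explains noticeably less variance than the importance-performance (multiplicative) model. The data-generating weights below are hypothetical, not estimates from the Florida survey.

```python
# Toy contrast of performance-only vs importance-performance models.
# Synthetic ratings on a 1-5 scale; the 0.4 weight is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 200
importance = rng.uniform(1, 5, n)    # parent-rated attribute importance
performance = rng.uniform(1, 5, n)   # parent-rated attribute performance
# Assume true perceived quality is driven by the weighted product:
quality = 0.4 * importance * performance + rng.normal(0, 0.5, n)

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_perf = r_squared(performance, quality)               # performance-only
r2_ip = r_squared(importance * performance, quality)    # importance-performance
print(f"performance-only R^2: {r2_perf:.2f}")
print(f"importance-performance R^2: {r2_ip:.2f}")
```

Under this data-generating assumption the multiplicative predictor dominates, which is the qualitative finding the study reports.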

  6. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution compared to fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle to Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirably high load peaks in electricity demand and hence impose a supply-demand imbalance and threaten the reliability of the wholesale and retail power market. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible storage energy resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and available generating resources in the DPS. In addition, we adopted a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using Fast Performance Computing resources of DCs. In response, the DPS provides economical and stable power to DCs under strict power quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.

  7. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects as well as flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.
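A greatly simplified version of this residual-based idea, fit a performance regression to fleet data and flag flights whose residuals are extreme, can be sketched as follows. The variable names, the single-level regression, and the threshold are hypothetical illustrations, not the DFM algorithm or the FOQA schema.

```python
# Toy residual-based anomaly flagging on synthetic "fleet" data.
import numpy as np

rng = np.random.default_rng(1)
weight = rng.uniform(50, 80, 500)                 # synthetic flight weights
fuel_flow = 0.1 * weight + rng.normal(0, 0.2, 500)  # assumed performance law
fuel_flow[42] += 3.0                              # one injected anomaly

# Fit fuel_flow ~ a + b * weight by least squares.
X = np.column_stack([np.ones_like(weight), weight])
beta, *_ = np.linalg.lstsq(X, fuel_flow, rcond=None)
resid = fuel_flow - X @ beta

# Flag flights whose standardized residual is extreme.
z = (resid - resid.mean()) / resid.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
print(anomalies)
```

The injected flight stands far outside the residual distribution, which is the mechanism by which model-based monitoring catches what simple exceedance checks miss.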

  8. Model complexity and performance: How far can we simplify?

    Science.gov (United States)

    Raick, C.; Soetaert, K.; Grégoire, M.

    2006-07-01

    Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and can not be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models’ results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g. primary and secondary productions, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g. the phytoplankton competition, the potential carbon or nitrogen limitation of the zooplankton ingestion, the model trophic closure, etc.). In general, a 9 state-variable model that has the functional group diversity removed, but which retains the bacterial loop and the unbalanced algal growth, performs best.
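For readers unfamiliar with the simplest model class mentioned above, a minimal NPZD (nutrient-phytoplankton-zooplankton-detritus) model fits in a few lines. The parameter values below are generic textbook-style choices, not those of the Ligurian Sea model; the formulation conserves total nitrogen by construction.

```python
# Minimal NPZD box model, forward-Euler time stepping.
def npzd_step(N, P, Z, D, dt=0.1,
              mu=1.0, kN=0.5, g=0.4, kP=0.6,
              mP=0.05, mZ=0.05, rem=0.1):
    """One Euler step; all pools in mmol N m^-3 (illustrative units)."""
    uptake = mu * N / (kN + N) * P    # Michaelis-Menten nutrient uptake
    grazing = g * P / (kP + P) * Z    # zooplankton grazing on phytoplankton
    dN = -uptake + rem * D            # remineralized detritus returns to N
    dP = uptake - grazing - mP * P
    dZ = 0.7 * grazing - mZ * Z       # 70% assimilation efficiency (assumed)
    dD = 0.3 * grazing + mP * P + mZ * Z - rem * D
    return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

state = (2.0, 0.5, 0.2, 0.1)  # initial N, P, Z, D
for _ in range(1000):
    state = npzd_step(*state)
print([round(x, 3) for x in state])
```

Because every sink term reappears as a source elsewhere, the total nitrogen inventory stays fixed, a basic sanity check that also applies to the aggregated models discussed in the abstract.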

  9. Thin film bulk acoustic wave devices : performance optimization and modeling

    OpenAIRE

    Pensala, Tuomas

    2011-01-01

    Thin film bulk acoustic wave (BAW) resonators and filters operating in the GHz range are used in mobile phones for the most demanding filtering applications and complement the surface acoustic wave (SAW) based filters. Their main advantages are small size and high performance at frequencies above 2 GHz. This work concentrates on the characterization, performance optimization, and modeling techniques of thin film BAW devices. Laser interferometric vibration measurements together with plat...

  10. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non expressive music scores for polyphonic guitar. We treated guitar as an hexaphonic instrument, obtaining ...

  11. Performance characteristics of CA 19-9 radioimmunoassay and clinical significance of serum CA 19-9 assay in patients with malignancy

    International Nuclear Information System (INIS)

    Kim, S.E.; Shong, Y.K.; Cho, B.Y.; Kim, N.K.; Koh, C.S.; Lee, M.H.; Hong, K.S.

    1985-01-01

    To evaluate the performance characteristics of CA 19-9 radioimmunoassay and the clinical significance of serum CA 19-9 assay in patients with malignancy, serum CA 19-9 levels were measured by radioimmunoassay using monoclonal antibody in 135 normal controls, 81 patients with various untreated malignancy, 9 patients of postoperative colon cancer without recurrence and 20 patients with benign gastrointestinal diseases, who visited Seoul National University Hospital from June, 1984 to March, 1985. (Author)

  12. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: a mask part, an optic part, and a resist part. For an excellent-quality OPC model, each part should be described by first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be done iteratively during the correction until the cost function we want to minimize converges. Normally the optic part in an OPC model is described with the sum of coherent systems (SOCS [1]) method. Thanks to this method we can calculate the aerial image very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so the resist is normally expressed in a simple way, such as an approximation of the first principles or a linear combination of factors that are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well we train the model by fitting it to empirical data. The most popular way of making the mask function is based on the Kirchhoff thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as the finite-difference time-domain (FDTD [2]) method and rigorous coupled-wave analysis (RCWA [3]). But these methods are too time-consuming to be used as part of an OPC model. Many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them we focus on the boundary layer model (BLM) in this paper. 
We mainly investigated the way of optimization of the parameters for the

  13. Physics based performance model of a UV missile seeker

    Science.gov (United States)

    James, I.

    2017-10-01

Electro-optically (EO) guided surface-to-air missiles (SAMs) have developed to use ultraviolet (UV) wavebands supplementary to the more common infrared (IR) wavebands. Missiles such as the US Stinger have been around for some time; these have recently been joined by the Chinese FN-16 and the Russian SA-29 (Verba), and there is a much higher potential proliferation risk. The purpose of this paper is to introduce a first-principles, physics-based model of a typical seeker arrangement. The model is constructed from various calculations that aim to characterise the physical effects that affect the performance of the system. Data has been gathered from a number of sources to provide realism to the variables within the model. It will be demonstrated that many of the variables have the power to dramatically alter the performance of the system as a whole. Further, data will be shown to illustrate the expected detection range of a typical UV detector within a SAM against a variety of target sizes. The trend of detection range against aircraft size and skin reflectivity will be shown to be non-linear; this is to be expected owing to the exponential decay of a signal passing through the atmosphere. Future work will validate the model against real-world performance data for cameras (when this is available) to ensure that it operates within acceptable errors.
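The non-linear range trend the paper predicts follows from a simple signal model: received signal scales with target area and reflectivity, falls with the square of range, and decays exponentially with atmospheric path length (Beer-Lambert). A minimal sketch, with purely illustrative coefficients and threshold rather than real seeker parameters:

```python
import math

def signal(R, area, reflectivity, extinction=0.2):
    """Relative received signal at range R (km); toy coefficients."""
    return area * reflectivity * math.exp(-extinction * R) / R**2

def detection_range(area, reflectivity, threshold=1e-3):
    """Smallest range step at which the signal drops below threshold."""
    R = 0.01
    while signal(R, area, reflectivity) > threshold:
        R += 0.01
    return R

for area in (1, 2, 4, 8):  # relative target sizes
    print(area, round(detection_range(area, 0.05), 2))
```

Doubling the target area increases the detection range by much less than a factor of two, because the exponential term dominates at long range, which is exactly the sub-linear trend described in the abstract.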

  14. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
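The tanks-in-series view of the UASB reactor can be sketched at steady state by chaining the standard CSTR balance, S_out = S_in / (1 + k*tau) for first-order removal. The rate constant, flow, and volume below are illustrative assumptions, not the calibrated values of the published model (which is dynamic and includes dispersion and mass-transfer resistances).

```python
def uasb_profile(S_in, n_tanks=5, Q=1.0, V_total=10.0, k=0.3):
    """Steady-state substrate concentration leaving each CSTR in series.

    S_in: inlet substrate concentration; Q: flow rate; V_total: total
    reactor volume split evenly over n_tanks; k: first-order rate constant.
    """
    tau = (V_total / n_tanks) / Q  # residence time per tank
    profile = []
    S = S_in
    for _ in range(n_tanks):
        S = S / (1 + k * tau)      # CSTR balance with first-order reaction
        profile.append(round(S, 3))
    return profile

print(uasb_profile(1000.0))  # substrate decreases tank by tank up the bed
```

Each tank removes the same fraction of the incoming substrate, so the concentration profile decays geometrically along the sludge bed, approaching plug-flow behavior as the number of tanks grows.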

  15. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-01-01

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  16. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  17. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane by using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
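The pixel-by-pixel verification component described above can be illustrated with a toy example that compares a binary susceptibility map against observed landslide pixels and computes a few common GOF indices, including the ROC-plane coordinates. The synthetic maps and the particular indices shown are illustrative; the paper's package computes eight indices.

```python
# Toy pixel-by-pixel verification of a binary susceptibility map.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.random((50, 50)) < 0.1                  # observed landslide pixels
predicted = observed ^ (rng.random((50, 50)) < 0.05)   # imperfect model map

tp = np.sum(predicted & observed)     # hits
fp = np.sum(predicted & ~observed)    # false alarms
fn = np.sum(~predicted & observed)    # misses
tn = np.sum(~predicted & ~observed)   # correct negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)
tpr = tp / (tp + fn)          # true positive rate: ROC-plane y coordinate
fpr = fp / (fp + tn)          # false positive rate: ROC-plane x coordinate
csi = tp / (tp + fp + fn)     # critical success index
print(round(accuracy, 3), round(tpr, 3), round(fpr, 3), round(csi, 3))
```

A good model plots near the top-left corner of the ROC plane (high TPR, low FPR); comparing such points across parameter sets is step ii) of the procedure in the abstract.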

  18. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research on the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: The model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing different models of development, in order to evolve an efficient and effective service supply chain. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate the performance of this catering supply chain.
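The weighting step at the heart of (fuzzy) AHP can be illustrated in its crisp, non-fuzzy form: derive criterion weights from a pairwise-comparison matrix via the principal eigenvector and check consistency. The comparison values below are hypothetical, loosely echoing the criteria the abstract names (food safety, logistics efficiency, price stability).

```python
# Crisp AHP weighting step on a hypothetical 3x3 comparison matrix.
import numpy as np

# Rows/columns: food safety, logistics efficiency, price stability (assumed).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalized criterion weights
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print(np.round(w, 3), round(CI, 4))
```

A consistency index well below 0.1 indicates the pairwise judgments are acceptably coherent; the fuzzy variant used in the paper replaces the crisp entries with fuzzy numbers but follows the same weighting logic.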

  19. Significant Effect of a Pre-Exercise High-Fat Meal after a 3-Day High-Carbohydrate Diet on Endurance Performance

    Directory of Open Access Journals (Sweden)

    Ikuma Murakami

    2012-06-01

    Full Text Available We investigated the effect of the macronutrient composition of pre-exercise meals on endurance performance. Subjects consumed a high-carbohydrate diet at each meal for 3 days, followed by a high-fat meal (HFM; 1007 ± 21 kcal, 30% CHO, 55% F and 15% P) or a high-carbohydrate meal (HCM; 1007 ± 21 kcal, 71% CHO, 20% F and 9% P) 4 h before exercise. Furthermore, just prior to the test, subjects in the HFM group ingested either a maltodextrin jelly (M) or a placebo jelly (P), while subjects in the HCM group ingested a placebo jelly. Endurance performance was measured as running time until exhaustion at a speed between the lactate threshold and the onset of blood lactate accumulation. All subjects participated in each trial, randomly assigned at weekly intervals. We observed that the time until exhaustion was significantly longer in the HFM + M condition (p < 0.05) than in the HFM + P and HCM + P conditions. Furthermore, the total amount of fat oxidation during exercise was significantly higher in HFM + M and HFM + P than in HCM + P (p < 0.05). These results suggest that ingestion of an HFM prior to exercise is more favorable for endurance performance than an HCM. In addition, HFM plus maltodextrin ingestion following 3 days of carbohydrate loading enhances endurance running performance.

  20. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability

    Directory of Open Access Journals (Sweden)

    Crewther Sheila G

    2008-08-01

    Full Text Available Abstract Background Assessment of the 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children, but that children with ID would perform better on the puzzle form. Methods The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3-week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. Results In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Conclusion Similar performance on the book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  1. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  2. Dynamic Experiments and Constitutive Model Performance for Polycarbonate

    Science.gov (United States)

    2014-07-01

    Storage and loss tangent moduli for PC; DMA experiments performed at 1 Hz and shifted to 100 Hz, showing the α- and β-transition regions using the... The author would also like to thank Dr. Adam D. Mulliken for courteously providing the experimental results and the Abaqus version of the model and... exponential factor. In 1955, the Ree-Eyring model further accounted for microstructural mechanisms by relating molecular motions to yield behavior

  3. Does segmentation always improve model performance in credit scoring?

    OpenAIRE

    Bijak, Katarzyna; Thomas, Lyn C.

    2012-01-01

    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be developed for the entire customer population, e.g. using logistic regression. However, it is often expected that segmentation, i.e. dividing the population into several groups and building separate scorecards for them, will improve the model performance. The most common statistical methods for segmentation are the two-step approaches, where logistic regression follows Classificati...
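The question in the title can be made concrete with a small synthetic experiment: fit one pooled logistic-regression scorecard versus separate scorecards per segment and compare discrimination (AUC). The data below are deliberately constructed so that a risk driver works in opposite directions in two segments, the case where segmentation must help; on real portfolios the answer is empirical, which is the paper's point:

```python
# Pooled vs segmented logistic-regression scorecards on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(x, y, lr=0.5, steps=3000):
    """Intercept + slope for one feature, by plain gradient descent."""
    w = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

def score(x, w):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))

def auc(scores, y):
    """Rank-based AUC: P(random positive outranks random negative)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Two customer segments in which the same characteristic has opposite effects
n = 2000
seg = rng.integers(0, 2, n)
x = rng.normal(size=n)
true_logit = np.where(seg == 0, 2.0 * x, -2.0 * x)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# One scorecard for everyone vs one scorecard per segment
pooled_scores = score(x, fit_logistic(x, y))
seg_scores = np.empty(n)
for s in (0, 1):
    m = seg == s
    seg_scores[m] = score(x[m], fit_logistic(x[m], y[m]))

auc_pooled = auc(pooled_scores, y)   # near 0.5: the pooled model is blind here
auc_seg = auc(seg_scores, y)         # clearly better
```

When segment effects merely differ in magnitude rather than sign, the gap shrinks, which is why segmentation does not always pay off in practice.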

  4. Building Information Modeling (BIM) for Indoor Environmental Performance Analysis

    DEFF Research Database (Denmark)

    The report is part of a research assignment carried out by students in the 5 ECTS course "Project Byggeri – [entitled: Building Information Modeling (BIM) – Modeling & Analysis]" during the 3rd semester of the master's degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. It includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environmental Performance Analysis.

  5. Optical polarization tractography revealed significant fiber disarray in skeletal muscles of a mouse model for Duchenne muscular dystrophy.

    Science.gov (United States)

    Wang, Y; Zhang, K; Wasala, N B; Duan, D; Yao, G

    2015-02-01

    Optical polarization tractography (OPT) was recently developed to visualize tissue fiber architecture with cellular-level resolution and accuracy. In this study, we explored the feasibility of using OPT to study muscle disease in the mdx4cv mouse model of Duchenne muscular dystrophy. The freshly dissected tibialis anterior muscles of mdx4cv and normal mice were imaged. A "fiber disarray index" (FDI) was developed to quantify myofiber disorganization. In necrotic muscle regions of the mdx4cv mice, the FDI was significantly elevated and can be used to segment the 3D necrotic regions for assessing the overall muscle damage. These results demonstrate OPT's capability for imaging microscopic fiber alterations in muscle research.
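The abstract does not give the formula for the FDI; one plausible formulation, consistent with "quantifying myofiber disorganization", is the local circular standard deviation of fiber orientation. The sketch below applies it to a 1-D orientation profile; the windowing, the axial-angle doubling, and the index itself are assumptions for illustration, not the authors' definition:

```python
# A plausible (assumed) fiber-disarray index: local circular standard
# deviation of fiber orientation over a sliding window.
import math

def fiber_disarray_index(angles_deg, window=3):
    """
    Local circular std of orientation along a 1-D profile. Orientations are
    axial (theta and theta + 180 are the same fiber direction), so angles are
    doubled before circular averaging and the std is halved afterwards.
    """
    out = []
    half = window // 2
    for i in range(half, len(angles_deg) - half):
        win = angles_deg[i - half:i + half + 1]
        c = sum(math.cos(2 * math.radians(a)) for a in win) / len(win)
        s = sum(math.sin(2 * math.radians(a)) for a in win) / len(win)
        r = math.hypot(c, s)                     # mean resultant length
        out.append(math.sqrt(-2.0 * math.log(max(r, 1e-12))) / 2.0)
    return out

aligned  = [10, 11, 9, 10, 10, 11]     # healthy: fibers nearly parallel
disarray = [10, 80, 150, 40, 110, 5]   # necrotic: disorganized orientations
fdi_a = fiber_disarray_index(aligned)   # near zero everywhere
fdi_d = fiber_disarray_index(disarray)  # elevated everywhere
```

Thresholding such an index is one way the segmentation of necrotic regions described above could work in principle.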

  6. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    Full Text Available This research focuses on the board role evolution analysis which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide understanding of key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between the board structure and processes on the one hand, and board role performance on the other.

  7. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  8. The integration of intrapreneurship into a performance management model

    Directory of Open Access Journals (Sweden)

    Thabo WL Foba

    2007-02-01

    Full Text Available This study aimed to investigate the feasibility of using the dynamics of intrapreneurship to develop a new generation performance management model based on the structural dynamics of the Balanced Score Card approach. The literature survey covered entrepreneurship, from which the construct, intrapreneurship, was synthesized. Reconstructive logic and Hermeneutic methodology were used in studying the performance management systems and the Balanced Score Card approach. The dynamics were then integrated into a new approach for the management of performance of intrapreneurial employees in the corporate environment. An unstructured opinion survey followed: a sample of intrapreneurship students evaluated and validated the model’s conceptual feasibility and probable practical value.

  9. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    Science.gov (United States)

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    A distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as the main markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 (CV 16.1%), compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than the classic model's, with a significantly longer running time.
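The model comparison quoted above is a likelihood-ratio test: the classic model is nested in the distributed delay model (it fixes the shape parameter), so the drop in -2LL is asymptotically chi-square distributed with one degree of freedom for the extra parameter. A sketch reproducing the quoted figures:

```python
# Likelihood-ratio test for one extra parameter, using the df = 1 chi-square
# survival function in closed form.
import math

def lrt_pvalue_df1(delta_neg2ll):
    """
    P-value of a likelihood-ratio test with one extra parameter: the drop in
    -2*log-likelihood is asymptotically chi-square with df = 1, whose survival
    function is erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(delta_neg2ll / 2.0))

# Figure from the abstract: the distributed delay model improves -2LL by 6.78
# over the classic model at the cost of the one extra shape parameter (nu).
p = lrt_pvalue_df1(6.78)   # ~0.009, matching the P value quoted above
```

For more than one extra parameter the closed form no longer applies and a general chi-square survival function (e.g. via the regularized incomplete gamma function) is needed.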

  10. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives, regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only attaining economic performance; other functions must also be taken into account, such as quality, delivery, costs, and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system whose results were very low. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency, and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function with linear programming yielded precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customer orders, and control stock. The optimization led to a substantial improvement in all four performance indicators concerning deliveries.
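The optimization described above can be illustrated on a toy instance. The paper formulates a linear program; for a two-month example with one product, exhaustive search over integer production plans is enough to show the structure (meet orders, respect capacity and stock limits, minimize transport plus holding cost). All numbers are invented:

```python
# Toy delivery-planning problem: choose monthly production so that orders are
# met, stock stays within bounds, and transport + holding cost is minimal.
# A real instance would be solved as a linear program; brute force suffices
# here to illustrate the formulation.

DEMAND = [30, 50]     # units ordered per month
CAPACITY = 45         # max units produced (and shipped) per month
MAX_STOCK = 20        # warehouse limit
TRANSPORT_COST = 2.0  # per unit shipped
HOLDING_COST = 1.0    # per unit in stock at month end

def plan_cost(plan):
    """Return total cost of a production plan, or None if infeasible."""
    stock, cost = 0, 0.0
    for produced, demand in zip(plan, DEMAND):
        if produced > CAPACITY:
            return None
        available = stock + produced
        if available < demand:          # order not satisfied
            return None
        stock = available - demand
        if stock > MAX_STOCK:
            return None
        cost += TRANSPORT_COST * produced + HOLDING_COST * stock
    return cost

feasible = [(a, b) for a in range(CAPACITY + 1) for b in range(CAPACITY + 1)
            if plan_cost((a, b)) is not None]
best = min(feasible, key=plan_cost)
# Month 2 demand (50) exceeds capacity (45), so the optimum pre-builds exactly
# the shortfall in month 1 and carries it as stock.
```

The LP version replaces the enumeration with continuous decision variables, the same constraints as linear inequalities, and the same cost as the objective.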

  11. Bacteriophage treatment significantly reduces viable Clostridium difficile and prevents toxin production in an in vitro model system.

    Science.gov (United States)

    Meader, Emma; Mayer, Melinda J; Gasson, Michael J; Steverding, Dietmar; Carding, Simon R; Narbad, Arjan

    2010-12-01

    Clostridium difficile is primarily a nosocomial pathogen, causing thousands of cases of antibiotic-associated diarrhoea in the UK each year. In this study, we used a batch fermentation model of a C. difficile-colonised system to evaluate the potential of a prophylactic and a remedial bacteriophage treatment regime to control the pathogen. It is shown that the prophylaxis regime was effective at preventing the growth of C. difficile (p = …); the remedial regime was less effective at reducing viable C. difficile cells (p < 0.0001), but still resulted in a lower level of toxin production relative to the control. The numbers of commensal bacteria, including total aerobes and anaerobes, Bifidobacterium sp., Bacteroides sp., Lactobacillus sp., total Clostridium sp., and Enterobacteriaceae, were not significantly decreased by this therapy, whereas significant detrimental effects were observed with metronidazole treatment. Our study indicates that phage therapy has potential to be used for the control of C. difficile; it highlights the main benefits of this approach and some future challenges. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
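The hourly-output calculation at the heart of such a model can be sketched as a power-curve lookup. This is a simplification of what the SAM wind model does (no hub-height scaling, air-density correction, or wake losses for farm layouts), and the power-curve points below are illustrative, not a real turbine datasheet:

```python
# Hourly turbine output by linear interpolation of a (made-up) power curve.

# (wind speed m/s, output kW): cut-in at 3 m/s, rated 1500 kW from 12 m/s
POWER_CURVE = [(0, 0), (3, 0), (6, 250), (9, 850), (12, 1500), (25, 1500)]
CUT_OUT = 25.0  # m/s: turbine shuts down above this speed

def turbine_output_kw(v):
    """Piecewise-linear interpolation of the power curve; zero above cut-out."""
    if v >= CUT_OUT:
        return 0.0
    for (v0, p0), (v1, p1) in zip(POWER_CURVE, POWER_CURVE[1:]):
        if v0 <= v <= v1:
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    return 0.0

def hourly_energy_kwh(wind_speeds):
    """Total energy for a list of hourly mean wind speeds."""
    return sum(turbine_output_kw(v) for v in wind_speeds)

day = [4, 5, 7, 10, 12, 13, 26, 8]   # hourly wind speeds, m/s
# the 26 m/s hour is above cut-out and contributes nothing
```

A farm-level model repeats this per turbine with wake-adjusted wind speeds, which is where layout information enters.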

  13. Significant improvement of olfactory performance in sleep apnea patients after three months of nasal CPAP therapy - Observational study and randomized trial.

    Science.gov (United States)

    Boerner, Bettina; Tini, Gabrielo M; Fachinger, Patrick; Graber, Sereina M; Irani, Sarosh

    2017-01-01

    The olfactory function highly impacts quality of life (QoL). Continuous positive airway pressure is an effective treatment for obstructive sleep apnea (OSA) and is often applied by nasal masks (nCPAP). The influence of nCPAP on the olfactory performance of OSA patients is unknown. The aim of this study was to assess the sense of smell before initiation of nCPAP and after three months treatment, in moderate and severe OSA patients. The sense of smell was assessed in 35 patients suffering from daytime sleepiness and moderate to severe OSA (apnea/hypopnea index ≥ 15/h), with the aid of a validated test battery (Sniffin' Sticks) before initiation of nCPAP therapy and after three months of treatment. Additionally, adherent subjects were included in a double-blind randomized three weeks CPAP-withdrawal trial (sub-therapeutic CPAP pressure). Twenty five of the 35 patients used the nCPAP therapy for more than four hours per night, and for more than 70% of nights (adherent group). The olfactory performance of these patients improved significantly (p = 0.007) after three months of nCPAP therapy. When considering the entire group of patients, olfaction also improved significantly (p = 0.001). In the randomized phase the sense of smell of six patients deteriorated under sub-therapeutic CPAP pressure (p = 0.046) whereas five patients in the maintenance CPAP group showed no significant difference (p = 0.501). Olfactory performance improved significantly after three months of nCPAP therapy in patients suffering from moderate and severe OSA. It seems that this effect of nCPAP is reversible under sub-therapeutic CPAP pressure. ISRCTN11128866.

  15. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  16. A model of CCTV surveillance operator performance | Donald ...

    African Journals Online (AJOL)

    cognitive processes involved in visual search and monitoring – key activities of operators. The aim of this paper was to integrate the factors into a holistic theoretical model of performance for CCTV operators, drawing on areas such as vigilance, ...

  17. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…

  18. Models for the financial-performance effects of Marketing

    NARCIS (Netherlands)

    Hanssens, D.M.; Dekimpe, Marnik; Wierenga, B.; van der Lans, R.

    We consider marketing-mix models that explicitly include financial performance criteria. These financial metrics are not only comparable across the marketing mix, they also relate well to investors’ evaluation of the firm. To that extent, we treat marketing as an investment in customer value

  19. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques that are developed for modeling unique and new performance evaluation issues. Presents new DEA methodology and techniques via discussions on how to solve managerial problems. Provides an easy-to-use DEA software package, DEAFrontier (www.deafrontier.com), which is an excellent tool for both DEA researchers and practitioners.

  20. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, Ordinary Least Squares (OLS) compares favourably with the Generalized Least Squares (GLS) estimators in ...
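The kind of comparison this abstract describes can be sketched with a small Monte Carlo experiment: simulate a linear model with AR(1) disturbances and an exponentially distributed regressor, then compare the sampling variability of the OLS slope with a GLS-type estimator (here Cochrane-Orcutt quasi-differencing with the autocorrelation coefficient assumed known). The design is illustrative, not the paper's:

```python
# Monte Carlo comparison of OLS vs a GLS-type estimator under AR(1) errors.
import random

random.seed(42)

RHO, ALPHA, BETA, N, REPS = 0.8, 1.0, 2.0, 60, 400

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def gls_slope(x, y, rho=RHO):
    """Cochrane-Orcutt quasi-differencing with known rho, then OLS."""
    xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    return ols_slope(xs, ys)

def simulate():
    """One sample: exponential regressor, AR(1) disturbances."""
    x = [random.expovariate(1.0) for _ in range(N)]
    u, errs = 0.0, []
    for _ in range(N):
        u = RHO * u + random.gauss(0.0, 1.0)
        errs.append(u)
    y = [ALPHA + BETA * xi + ui for xi, ui in zip(x, errs)]
    return x, y

ols_est, gls_est = [], []
for _ in range(REPS):
    x, y = simulate()
    ols_est.append(ols_slope(x, y))
    gls_est.append(gls_slope(x, y))

def variance(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

var_ols, var_gls = variance(ols_est), variance(gls_est)
# Both estimators are approximately unbiased for BETA, but the GLS-type
# estimator has the smaller sampling variance when rho is accounted for.
```

In practice rho must itself be estimated (feasible GLS), which is one source of the small-sample differences such comparison studies report.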

  1. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  2. Quantitative modeling of human performance in complex, dynamic systems

    National Research Council Canada - National Science Library

    Baron, Sheldon; Kruser, Dana S; Huey, Beverly Messick

    1990-01-01

    ... Sheldon Baron, Dana S. Kruser, and Beverly Messick Huey, editors. Panel on Human Performance Modeling, Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. NATIONAL ACADEMY PRESS, Washington, D.C., 1990.

  3. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to their properties when the sample size is infinite, although ...

  4. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    Performances of estimators of a linear model with auto-correlated error terms when the independent variable is normal. ... On the other hand, the same slope coefficients β, under Generalized Least Squares (GLS), decreased with increased autocorrelation when the sample size T is small. Journal of the Nigerian Association ...

  5. Performance Modeling for Heterogeneous Wireless Networks with Multiservice Overflow Traffic

    DEFF Research Database (Denmark)

    Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk

    2009-01-01

    Multiservice loss analysis based on multi-dimensional Markov chains becomes intractable in these networks due to the intensive computations required. This paper focuses on performance modeling for heterogeneous wireless networks based on a hierarchical overlay infrastructure. A method based on decomposition...
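As a minimal illustration of the loss-analysis building block that such models generalize, the classic single-service Erlang-B recursion computes the blocking probability of one cell in closed form; it is the multiservice, multi-dimensional extension (with overflow between overlay layers) that becomes computationally intractable. A sketch:

```python
# Erlang-B blocking probability via the numerically stable recursion
# B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)).

def erlang_b(offered_erlangs, servers):
    """Blocking probability of an M/M/n/n loss system."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# 2 erlangs offered to 2 channels: about 40% of call attempts are blocked
blocking = erlang_b(2.0, 2)
```

Overflow traffic from a blocked cell is "peakier" than Poisson traffic, which is why the multiservice overflow case needs moment-matching or decomposition methods rather than a direct Erlang-B evaluation.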

  6. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  7. Performance in model transformations: experiments with ATL and QVT

    NARCIS (Netherlands)

    van Amstel, Marcel; Bosems, S.; Ivanov, Ivan; Ferreira Pires, Luis; Cabot, Jordi; Visser, Eelco

    Model transformations are increasingly being incorporated in software development processes. However, as systems being developed with transformations grow in size and complexity, the performance of the transformations tends to degrade. In this paper we investigate the factors that have an impact on

  8. Modelling of green roof hydrological performance for urban drainage applications

    DEFF Research Database (Denmark)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen

    2014-01-01

    Green roofs are being widely implemented for stormwater management and their impact on the urban hydrological cycle can be evaluated by incorporating them into urban drainage models. This paper presents a model of green roof long term and single event hydrological performance. The model includes...... surface and subsurface storage components representing the overall retention capacity of the green roof which is continuously re-established by evapotranspiration. The runoff from the model is described through a non-linear reservoir approach. The model was calibrated and validated using measurement data...... from 3 different extensive sedum roofs in Denmark. These data consist of high-resolution measurements of runoff, precipitation and atmospheric variables in the period 2010–2012. The hydrological response of green roofs was quantified based on statistical analysis of the results of a 22-year (1989...
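The storage-plus-non-linear-reservoir structure described above can be sketched as a simple hourly water balance. This is a schematic reading of the model, not the calibrated Danish implementation; the capacity, reservoir parameters, and forcing series are invented:

```python
# Conceptual green roof: finite retention storage refilled by rain, emptied by
# evapotranspiration, drained through a non-linear reservoir q = K * s**N_EXP.

CAPACITY = 20.0      # mm, combined surface + substrate storage
K, N_EXP = 0.1, 1.5  # non-linear reservoir parameters (illustrative)

def simulate(rain_mm, et_mm):
    """Hourly storage balance; returns (runoff series, final storage)."""
    s, runoff = 0.0, []
    for p, et in zip(rain_mm, et_mm):
        s = max(s + p - et, 0.0)                 # rain in, ET out
        overflow = max(s - CAPACITY, 0.0)        # storage exceeded
        s -= overflow
        q = min(K * s ** N_EXP, s)               # non-linear drainage
        s -= q
        runoff.append(q + overflow)
    return runoff, s

rain = [0, 5, 12, 8, 0, 0, 0, 0]   # mm/h, a single storm event
et = [0.1] * len(rain)             # mm/h, constant ET for simplicity
q, s_final = simulate(rain, et)
retained = 1.0 - sum(q) / sum(rain)   # fraction of event rainfall retained
```

Long-term performance, as analyzed in the paper, comes from running such a balance over a multi-year series so that ET between events continuously re-establishes the retention capacity.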

  9. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high-fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis of computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  10. Performance of GeantV EM Physics Models

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  11. Performance modelling for product development of advanced window systems

    DEFF Research Database (Denmark)

    Appelfeld, David

    The research presented in this doctoral thesis shows how the product development (PD) of Complex Fenestration Systems (CFSs) can be facilitated by computer-based analysis to improve the energy efficiency of fenestration systems as well as the indoor environment. The first chapter defines...... and methods, which can address interrelated performance parameters of CFSs, are sought. It is possible to evaluate such systems by measurements; however, the high cost and complexity of the measurements are limiting factors. The studies in this thesis confirmed that the results of performance measurements...... of CFSs can be interpreted by simulations, and hence simulations can be used for the performance analysis of new CFSs. An advanced simulation model must often be developed and needs to be validated by measurements before it can be reused. The validation of simulations against measurements proved

  12. Modeling the seakeeping performance of luxury cruise ships

    Science.gov (United States)

    Cao, Yu; Yu, Bao-Jun; Wang, Jian-Fang

    2010-09-01

    The seakeeping performance of a luxury cruise ship was evaluated during the concept design phase. By comparing numerical predictions based on 3-D linear potential flow theory in the frequency domain with the results of model tests, it was shown that the 3-D method predicted the seakeeping performance of the luxury cruise ship well. Based on the model, the seakeeping features of the luxury cruise ship were analyzed, along with the influence of changes to the primary design parameters (center of gravity, inertial radius, etc.). Based on the results, suggestions were proposed to improve the choice of parameters for luxury cruise ships during the concept design phase and thereby improve seakeeping performance.

  13. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering, and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters, and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization-based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  14. Performance of GeantV EM Physics Models

    CERN Document Server

    Amadio, G; Apostolakis, J; Aurora, A; Bandieramonte, M; Bhattacharyya, A; Bianchini, C; Brun, R; Canal, P; Carminati, F; Cosmo, G; Duhem, L; Elvira, D; Folger, G; Gheata, A; Gheata, M; Goulas, I; Iope, R; Jun, S Y; Lima, G; Mohanty, A; Nikitina, T; Novak, M; Pokorski, W; Ribon, A; Seghal, R; Shadura, O; Vallecorsa, S; Wenzel, S; Zhang, Y

    2017-01-01

    Recent progress in parallel hardware architectures, with deeper vector pipelines and many-core technologies, brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable for identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  15. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    Recent progress in parallel hardware architectures, with deeper vector pipelines and many-core technologies, brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable for identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  16. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results for the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and with ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of the ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results for an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade could lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model was also modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match the deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady-state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  17. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING (translated): There has been a shift in emphasis from maintenance management to asset management, with a focus on reliable and operational equipment as well as effective assets at optimum life-cycle cost. A challenge in the manufacturing industry is the development of a performance model for assets that is integrated with business processes and strategies. The authors developed the APM2 model to meet this need. The model has a generic reference structure, supported by operational instructions that promote operations management. It facilitates performance management, business integration and continuous improvement, while also exposing the industry to the latest developments in asset performance management.

  18. Targeted liquid chromatography quadrupole ion trap mass spectrometry analysis of tachykinin related peptides reveals significant expression differences in a rat model of neuropathic pain.

    Science.gov (United States)

    Pailleux, Floriane; Vachon, Pascal; Lemoine, Jérôme; Beaudry, Francis

    2013-08-01

    Animal models are widely used to perform basic scientific research in pain. The rodent chronic constriction injury (CCI) model is widely used to study neuropathic pain. Animals were tested before and after CCI surgery using behavioral tests (von Frey filaments and the Hargreaves test) to evaluate pain. The brain and the lumbar enlargement of the spinal cord were collected from neuropathic and normal animals. Tachykinin-related peptides were analyzed by high performance liquid chromatography quadrupole ion trap mass spectrometry. Our results reveal that β-tachykinin₅₈₋₇₁, SP and SP₃₋₁₁ up-regulation are closely related to pain behavior. The spinal β-tachykinin₅₈₋₇₁, SP and SP₃₋₁₁ concentrations were significantly up-regulated in neuropathic animals compared with normal animals (p < 0.05). In the brain, β-tachykinin₅₈₋₇₁ and SP concentrations were significantly up-regulated (p < 0.05), whereas no significant modulation was observed for β-tachykinin₅₈₋₇₁, SP₁₋₇ and SP₆₋₁₁ (p > 0.05). The β-tachykinin₅₈₋₇₁, SP and C-terminal SP metabolites could potentially serve as biomarkers in early drug discovery. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Aerodynamic drag modeling of alpine skiers performing giant slalom turns.

    Science.gov (United States)

    Meyer, Frédéric; Le Pelley, David; Borrani, Fabio

    2012-06-01

    Aerodynamic drag plays an important role in performance for athletes practicing sports that involve high-velocity motions. In giant slalom, the skier is continuously changing his/her body posture, and this affects the energy dissipated in aerodynamic drag. It is therefore important to quantify this energy to understand the dynamic behavior of the skier. The aims of this study were to model the aerodynamic drag of alpine skiers in simulated giant slalom conditions and to apply these models in a field experiment to estimate the energy dissipated through aerodynamic drag. The aerodynamic characteristics of 15 recreational male and female skiers were measured in a wind tunnel while holding nine different skiing-specific postures. The drag and the frontal area were recorded simultaneously for each posture. Four generalized and two individualized models of the drag coefficient were built, using different sets of parameters. These models were subsequently applied in a field study designed to compare the aerodynamic energy losses between a dynamic and a compact skiing technique. The generalized models estimated aerodynamic drag with an accuracy of between 11.00% and 14.28%, and the individualized models with an accuracy between 4.52% and 5.30%. The individualized model used for the field study showed that using a dynamic technique led to 10% more aerodynamic drag energy loss than using a compact technique. The individualized models were capable of discriminating different techniques performed by advanced skiers and seemed more accurate than the generalized models. The models presented here offer a simple yet accurate method to estimate the aerodynamic drag acting upon alpine skiers while rapidly moving through the range of positions typical of turning technique.
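    Converting a wind-tunnel drag-area product into an energy loss along a run amounts to integrating drag power over the velocity profile. A minimal sketch of that bookkeeping follows; the CdA values and velocity profile are illustrative assumptions, not the paper's data:

    ```python
    # Estimate energy dissipated by aerodynamic drag along a ski run.
    # Drag power: P = F_d * v = 0.5 * rho * CdA * v^3.
    # CdA values and the velocity profile below are illustrative only.

    RHO = 1.2  # air density, kg/m^3 (approximate low-altitude value)

    def drag_energy(velocities, cda, dt=0.1):
        """Integrate drag power over a sampled velocity profile (m/s)."""
        return sum(0.5 * RHO * cda * v**3 * dt for v in velocities)

    # Hypothetical 10 s run sampled at 10 Hz, accelerating 10 -> 25 m/s.
    profile = [10 + 15 * i / 99 for i in range(100)]

    e_compact = drag_energy(profile, cda=0.30)  # tucked posture (assumed)
    e_dynamic = drag_energy(profile, cda=0.33)  # ~10% larger drag area (assumed)

    print(round(e_compact, 1), round(e_dynamic, 1))
    print(round((e_dynamic - e_compact) / e_compact * 100, 1))  # % extra loss
    ```

    Because the integral is linear in CdA, a 10% larger drag area yields exactly 10% more drag energy here; real postural differences vary along the turn.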

  20. Some concepts of model uncertainty for performance assessments of nuclear waste repositories

    International Nuclear Information System (INIS)

    Eisenberg, N.A.; Sagar, B.; Wittmeyer, G.W.

    1994-01-01

    Models of the performance of nuclear waste repositories will be central to making regulatory decisions regarding the safety of such facilities. The conceptual model of repository performance is represented by mathematical relationships, which are usually implemented as one or more computer codes. A geologic system may allow many conceptual models that are consistent with the observations. These conceptual models may or may not have the same mathematical representation. Experience in modeling the performance of a waste repository (which is, in part, a geologic system) shows that this non-uniqueness of conceptual models is a significant source of model uncertainty. At the same time, each conceptual model has its own set of parameters, and usually it is not possible to completely separate model uncertainty from parameter uncertainty for the repository system. Issues related to the origin of model uncertainty, its relation to parameter uncertainty, and its incorporation in safety assessments are discussed from a broad regulatory perspective. An extended example in which these issues are explored numerically is also provided.

  1. Performance model for telehealth use in home health agencies.

    Science.gov (United States)

    Frey, Jocelyn; Harmonosky, Catherine M; Dansky, Kathryn H

    2005-10-01

    Increasingly, home health agencies (HHAs) are considering the value of implementing telehealth technology. However, questions arise concerning how to manage and use this technology to benefit patients, nurses, and the agency. Performance models will be beneficial to managers and decision makers in the home health field by providing quantitative information for present and future planning of staff and technology usage in the HHA. This paper presents a model that predicts the average daily census of the HHA as a function of statistically identified parameters. Average daily census was chosen as the outcome variable because it is a proxy measure of an agency's capacity. The model suggests that including a telehealth system in the HHA increases average daily census by 40%-90%, depending on the number of nurse full-time equivalents (FTEs) and the amount of travel hours per month. The use of a home telecare system enhances HHA performance.
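    The shape of such a census model can be sketched as a simple function of the abstract's three predictors. Everything numeric below (coefficients, the 60% telehealth gain) is a hypothetical stand-in within the paper's reported 40%-90% range, not the fitted model:

    ```python
    # Illustrative sketch of a census-prediction model of the kind described:
    # average daily census (ADC) as a function of nurse FTEs, monthly travel
    # hours, and telehealth use. All coefficients are hypothetical.

    def predict_adc(nurse_ftes, travel_hours, telehealth):
        base = 8.0 * nurse_ftes - 0.05 * travel_hours  # assumed linear terms
        # Telehealth modeled as a multiplicative capacity gain (assumed 60%,
        # inside the 40%-90% range reported in the abstract).
        return base * (1.6 if telehealth else 1.0)

    without = predict_adc(nurse_ftes=10, travel_hours=200, telehealth=False)
    with_th = predict_adc(nurse_ftes=10, travel_hours=200, telehealth=True)
    print(without, with_th)
    ```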

  2. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  3. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available

    A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of layout design schemes for the action stations in a multitasking operational room. This model was constructed to calculate and compare the theoretical team performance of multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, and human cognitive characteristics. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes for action stations during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.
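    The pairwise weighting the abstract describes can be sketched as a layout score that rewards placing high-traffic, high-importance operator pairs close together. The scoring function and all numbers are invented for illustration; the paper's actual formulation also includes angle and cognitive factors not modeled here:

    ```python
    # Toy layout score: weight each operator pair by communication frequency
    # and importance, and penalize distance. Weighting scheme is assumed.
    import math

    def layout_score(positions, links):
        """positions: {name: (x, y)}; links: [(a, b, freq, importance)]."""
        score = 0.0
        for a, b, freq, imp in links:
            d = math.dist(positions[a], positions[b])
            score += freq * imp / (1.0 + d)  # closer high-traffic pairs score higher
        return score

    # Two hypothetical station layouts and a communication matrix.
    layout_a = {"cmd": (0, 0), "radar": (1, 0), "comms": (2, 0)}
    layout_b = {"cmd": (0, 0), "radar": (3, 0), "comms": (6, 0)}
    links = [("cmd", "radar", 10, 1.0), ("cmd", "comms", 4, 0.5),
             ("radar", "comms", 2, 0.5)]

    print(round(layout_score(layout_a, links), 2),
          round(layout_score(layout_b, links), 2))
    ```

    The compact layout scores higher because the heavily communicating pairs sit closer together, which is the comparison such a model is meant to make across candidate schemes.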

  4. Modeling time-lagged reciprocal psychological empowerment-performance relationships.

    Science.gov (United States)

    Maynard, M Travis; Luciano, Margaret M; D'Innocenzo, Lauren; Mathieu, John E; Dean, Matthew D

    2014-11-01

    Employee psychological empowerment is widely accepted as a means for organizations to compete in increasingly dynamic environments. Previous empirical research and meta-analyses have demonstrated that employee psychological empowerment is positively related to several attitudinal and behavioral outcomes including job performance. While this research positions psychological empowerment as an antecedent influencing such outcomes, a close examination of the literature reveals that this relationship is primarily based on cross-sectional research. Notably, evidence supporting the presumed benefits of empowerment has failed to account for potential reciprocal relationships and endogeneity effects. Accordingly, using a multiwave, time-lagged design, we model reciprocal relationships between psychological empowerment and job performance using a sample of 441 nurses from 5 hospitals. Incorporating temporal effects in a staggered research design and using structural equation modeling techniques, our findings provide support for the conventional positive correlation between empowerment and subsequent performance. Moreover, accounting for the temporal stability of variables over time, we found support for empowerment levels as positive influences on subsequent changes in performance. Finally, we also found support for the reciprocal relationship, as performance levels were shown to relate positively to changes in empowerment over time. Theoretical and practical implications of the reciprocal psychological empowerment-performance relationships are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. Measuring broadband in Europe: development of a market model and performance index using structural equations modelling

    NARCIS (Netherlands)

    Lemstra, W.; Voogt, B.; Gorp, van N.

    2015-01-01

    This contribution reports on the development of a performance index and underlying market model with application to broadband developments in the European Union. The Structure–Conduct–Performance paradigm provides the theoretical grounding. Structural equations modelling was applied to determine the

  6. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
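    The simplest building block of such communication models is the postal (alpha-beta) cost T = n_msgs * latency + bytes / bandwidth; a sketch follows. The message count and payload for the tree exchange are illustrative assumptions, not the paper's validated model:

    ```python
    # Postal (alpha-beta) communication-cost sketch of the kind used to
    # reason about internode FMM traffic. All parameters are illustrative.
    import math

    def comm_time(n_msgs, total_bytes, alpha=2e-6, beta=1e10):
        """alpha: per-message latency (s); beta: bandwidth (bytes/s)."""
        return n_msgs * alpha + total_bytes / beta

    # Hypothetical tree exchange on p ranks: O(log p) neighbor messages
    # per rank, each carrying a 1 MiB payload.
    p = 4096
    msgs = int(math.log2(p))
    t = comm_time(msgs, msgs * 2**20)
    print(t)
    ```

    Refinements of the kind the paper describes add network-topology and multicore-contention penalties on top of this base cost.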

  7. Performance of the cobas HPV Test for the Triage of Atypical Squamous Cells of Undetermined Significance Cytology in Cervical Specimens Collected in SurePath.

    Science.gov (United States)

    Tewari, Devansu; Novak-Weekley, Susan; Hong, Christina; Aslam, Shagufta; Behrens, Catherine M

    2017-11-02

    To determine the performance of the cobas human papillomavirus (HPV) test for triage of atypical squamous cells of undetermined significance (ASC-US) in SurePath, women presenting for routine screening had cervical specimens collected in SurePath and specimen transport medium (STM); those with ASC-US cytology underwent colposcopy. Performance of cobas HPV in SurePath specimens that had undergone a preanalytic procedure to reverse possible cross-linking of HPV DNA was compared with that of Hybrid Capture 2 (hc2) in STM specimens. Among 856 women, HPV prevalence was 45.8%; HPV 16 and HPV 18 prevalences were lower than expected in the 21- to 29-year-old group in this highly vaccinated population. cobas HPV performance in SurePath was comparable to hc2 in STM. Sensitivity and specificity for detection of cervical intraepithelial neoplasia grade 3 or worse were 87.5% (95% confidence interval [CI], 71.9%-95.2%) and 55.5% (95% CI, 52.1%-58.9%) for cobas, and 85.3% (95% CI, 69.9%-93.6%) and 54.7% (95% CI, 51.4%-57.9%) for hc2. Sensitivity was negatively affected by random biopsies performed at colposcopy; comparable sensitivities were achieved in the nonvaccinated and vaccinated populations with disease determined by directed biopsy only. Performance of cobas HPV for ASC-US triage in pretreated SurePath specimens meets criteria for validation. Preliminary data indicate reliable performance of HPV testing in a highly vaccinated population. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
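    Point estimates with 95% confidence intervals like those quoted above are routinely computed from 2x2 counts. A sketch using the Wilson score interval follows; the case counts are hypothetical (chosen so the point estimate matches the reported 87.5% sensitivity), and the paper's exact CI method may differ:

    ```python
    # Sensitivity point estimate with a Wilson 95% confidence interval.
    # The detected/missed counts below are hypothetical illustrations.
    import math

    def wilson_ci(successes, n, z=1.96):
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    tp, fn = 28, 4  # hypothetical CIN3+ cases detected / missed
    sens = tp / (tp + fn)
    lo, hi = wilson_ci(tp, tp + fn)
    print(round(sens * 100, 1), round(lo * 100, 1), round(hi * 100, 1))
    ```

    Note how wide the interval is at this sample size: a sensitivity near 90% estimated from a few dozen cases carries a lower bound in the low 70s, which is why triage-test validations report CIs alongside point estimates.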

  8. GeoSciML v3.0 - a significant upgrade of the CGI-IUGS geoscience data model

    Science.gov (United States)

    Raymond, O.; Duclaux, G.; Boisvert, E.; Cipolloni, C.; Cox, S.; Laxton, J.; Letourneau, F.; Richard, S.; Ritchie, A.; Sen, M.; Serrano, J.-J.; Simons, B.; Vuollo, J.

    2012-04-01

    GeoSciML version 3.0 (http://www.geosciml.org), released in late 2011, is the latest version of the CGI-IUGS* Interoperability Working Group geoscience data interchange standard. The new version is a significant upgrade and refactoring of GeoSciML v2, which was released in 2008. GeoSciML v3 has already been adopted by several major international interoperability initiatives, including OneGeology, the EU INSPIRE program, and the US Geoscience Information Network, as their standard data exchange format for geoscience data. GeoSciML v3 makes use of recently upgraded versions of several Open Geospatial Consortium (OGC) and ISO data transfer standards, including GML v3.2, SWE Common v2.0, and Observations and Measurements v2 (ISO 19156). The GeoSciML v3 data model has been refactored from a single large application schema with many packages into a number of smaller, but related, application schema modules with individual namespaces. This refactoring allows the use and future development of modules of GeoSciML (e.g., GeologicUnit, GeologicStructure, GeologicAge, Borehole) in smaller, more manageable units. As a result of this refactoring and the integration with new OGC and ISO standards, GeoSciML v3 is not backward compatible with previous GeoSciML versions. The scope of GeoSciML has been extended in version 3.0 to include new models for geomorphological data (a Geomorphology application schema), and for geological specimens, geochronological interpretations, and metadata for geochemical and geochronological analyses (a LaboratoryAnalysis-Specimen application schema). In addition, there is better support for borehole data, and the PhysicalProperties model now supports a wider range of petrophysical measurements. The previously used CGI_Value data type has been superseded in favour of externally governed data types provided by OGC's SWE Common v2 and GML v3.2 data standards. The GeoSciML v3 release includes worked examples of best practice in delivering geochemical

  9. Tree-based flood damage modeling of companies: Damage processes and model performance

    Science.gov (United States)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.

  10. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available

    Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment, where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams, as there are no clear guidelines on the performance criteria for managing such teams. Method: A qualitative research methodology was used in this article. The purpose of the content analysis was to explore the literature on the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for virtual project teams: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine its impact on the overall performance. Knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  11. Significance of myoglobin as an oxygen store and oxygen transporter in the intermittently perfused human heart: a model study.

    Science.gov (United States)

    Endeward, Volker; Gros, Gerolf; Jürgens, Klaus D

    2010-07-01

    The mechanisms by which the left ventricular wall escapes anoxia during the systolic phase of low blood perfusion are investigated, especially the role of myoglobin (Mb), which can (i) store oxygen and (ii) facilitate intracellular oxygen transport. The quantitative role of these two Mb functions is studied in the maximally working human heart. Because discrimination between Mb functions has not been achieved experimentally, we use a Krogh cylinder model here. At a heart rate of 200 beats/min and a 1:1 ratio of diastole/systole, the systole lasts for 150 ms. The basic model assumption is that, with mobile Mb, the oxygen stored in the end-diastolic left ventricle wall exactly meets the demand during the 150 ms of systolic cessation of blood flow. The coronary blood flow necessary to achieve this agrees with literature data. By considering Mb immobile or setting its concentration to zero, respectively, we find that, depending on Mb concentration, Mb-facilitated O₂ transport maintains O₂ supply to the left ventricle wall during 22-34 of the 150 ms, while Mb storage function accounts for a further 12-17 ms. When Mb is completely absent, anoxia begins to develop after 116-99 ms. While Mb plays no significant role during diastole, it supplies O₂ to the left ventricular wall for ≤50 ms of the 150 ms systole, whereas capillary haemoglobin is responsible for approximately 80 ms. Slight increases in haemoglobin concentration, blood flow, or capillary density can compensate for the absence of Mb, a finding which agrees well with the observations using Mb knockout mice.
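    The abstract's systolic budget can be tallied as the number of milliseconds of the 150 ms flow stoppage each oxygen compartment can cover. The Hb and Mb figures below are the abstract's own (upper ends of the reported ranges); the "dissolved" remainder is an assumption made to close the budget:

    ```python
    # Bookkeeping sketch of the systolic O2 budget from the abstract.
    # Hb and Mb contributions follow the reported values; the dissolved-O2
    # remainder is assumed so that the compartments sum to one systole.

    SYSTOLE_MS = 150
    contrib_ms = {
        "capillary Hb": 80,                    # ~80 ms per the abstract
        "Mb-facilitated transport": 34,        # 22-34 ms reported
        "Mb O2 store": 16,                     # 12-17 ms reported
        "dissolved O2 (assumed remainder)": 20,
    }

    for name, ms in contrib_ms.items():
        print(f"{name}: {ms} ms ({100 * ms / SYSTOLE_MS:.0f}%)")
    print(sum(contrib_ms.values()))
    ```

    On this accounting Mb covers roughly a third of the systole (transport plus store, ≤50 ms), matching the abstract's statement that modest increases in Hb, flow, or capillary density can compensate for its absence.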

  12. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

    Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation that is up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data coming from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system addressing various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes that are lower than a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
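    The paper's unified model is not reproduced in the abstract; as an illustration of the kind of attenuation model involved, a sketch of the widely used Kim model follows, which ties fog attenuation to visibility V (km) and wavelength (nm):

    ```python
    # Kim model for fog attenuation (dB/km) from visibility, shown as a
    # generic illustration; it is NOT the paper's proposed unified model.
    #   beta = (3.91 / V) * (lam / 550) ** (-q(V))

    def kim_q(v_km):
        """Size-distribution exponent q as a piecewise function of visibility."""
        if v_km > 50:
            return 1.6
        if v_km > 6:
            return 1.3
        if v_km > 1:
            return 0.16 * v_km + 0.34
        if v_km > 0.5:
            return v_km - 0.5
        return 0.0  # dense fog: attenuation becomes wavelength-independent

    def fog_attenuation_db_per_km(v_km, lam_nm=1550):
        return (3.91 / v_km) * (lam_nm / 550) ** (-kim_q(v_km))

    # Dense fog (visibility 200 m): attenuation on the order of tens of dB/km,
    # consistent with the short-range conclusion drawn in the abstract.
    print(round(fog_attenuation_db_per_km(0.2), 1))
    ```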

  13. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling system behaviour. Specifically, they can be used for evaluating streamflow at ungaged catchments, the effects of climate change or best management practices on water resources, or for identifying pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources of the Ergene Watershed are studied first. Streamgages in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and that the main purpose of developing the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
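    The calibration metrics named above have standard definitions; a minimal sketch (sign conventions for PBIAS vary in the literature and may differ from those used in the study):

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 is no better
    # than predicting the observed mean.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    # Percent bias; positive means underestimation with this convention.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rmse(obs, sim):
    # Root mean square error in the units of the observations.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))
```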

  14. The Network Performance Assessment Model - Regulation with a Reference Network

    International Nuclear Information System (INIS)

    Larsson, Mats B.O.

    2003-11-01

    A new model - the Network Performance Assessment Model - has been developed gradually since 1998 in order to evaluate and benchmark local electricity grids. The model is intended as a regulation tool for the Swedish local electricity networks, used by the Swedish Energy Agency. In spring 2004 the Network Performance Assessment Model will come into operation, based on the companies' results for 2003. The mission of the Network Performance Assessment Model is to evaluate the networks from a customers' point of view and establish a fair price level. In order to do that, the performance of the operator is evaluated. The performances are assessed against a price level that the consumer is considered to accept, can agree to as fair and is prepared to pay. This price level is based on an average cost: the cost of an efficient grid that would be built today, with already known technology. The performances are accounted in Customer Values. Customer Values are what can be created by someone but cannot be created better by someone else. The starting point is to look upon the companies from a customers' point of view. The factors that cannot be influenced by the companies are evaluated by fixed rules, valid for all companies. The rules reflect the differences. The cost of a connection is evaluated from the actual facts, i.e. the distances between the subscribers and the capacity demanded by the subscribers. This is done by creating a reference network with the capacity to fulfill the subscribers' demand. This is an efficient grid with no spare capacity and no excess capacity. The companies' existing grids are of no importance, with respect to both dimensioning and technology. Those factors which the company can influence, for example connection reliability, are evaluated from a customer perspective by measuring the actual reliability, measured as the number and length of interruptions. When implemented in the regulation the Network

  15. Performance of a quasi-steady model for hovering hummingbirds

    Directory of Open Access Journals (Sweden)

    Jialei Song

    2015-01-01

    Full Text Available A quasi-steady model describing the aerodynamics of hovering Ruby-throated hummingbirds is presented to study the extent to which the low-order model represents the flow physics of the bird, and also to separately quantify the forces from the translational, rotational, and acceleration effects. Realistic wing kinematics are adopted and the model is calibrated against computational fluid dynamics (CFD) simulations of a corresponding revolving-wing model. The results show that the quasi-steady model is able to predict overall lift production reasonably well but fails to capture detailed force oscillations. The downstroke–upstroke asymmetry is consistent with that in the previous CFD study. Further analysis shows that significant rotational force is produced during mid-stroke rather than at wing reversal.
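    A quasi-steady treatment evaluates aerodynamic force from the instantaneous kinematics alone; a hypothetical minimal sketch of the translational term (the sin(2α) lift-coefficient fit is a common flapping-wing assumption used here for illustration, not the paper's CFD-calibrated coefficients):

```python
import math

def translational_lift(rho, wing_speed, wing_area, cl_max, alpha_deg):
    """Quasi-steady translational lift L = 0.5 * rho * U^2 * S * CL(alpha),
    with CL(alpha) = cl_max * sin(2*alpha)."""
    cl = cl_max * math.sin(2.0 * math.radians(alpha_deg))
    return 0.5 * rho * wing_speed ** 2 * wing_area * cl
```

    With this fit, lift peaks at a 45-degree angle of attack and vanishes at 0 and 90 degrees, which is the qualitative shape usually assumed for revolving-wing calibrations.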

  16. Radionuclide release rates from spent fuel for performance assessment modeling

    International Nuclear Information System (INIS)

    Curtis, D.B.

    1994-01-01

    In a scenario of aqueous transport from a high-level radioactive waste repository, the concentration of radionuclides in water in contact with the waste constitutes the source term for transport models, and as such represents a fundamental component of all performance assessment models. Many laboratory experiments have been done to characterize release rates and understand processes influencing radionuclide release rates from irradiated nuclear fuel. Natural analogues of these waste forms have been studied to obtain information regarding the long-term stability of potential waste forms in complex natural systems. This information from diverse sources must be brought together to develop and defend methods used to define source terms for performance assessment models. In this manuscript examples of measures of radionuclide release rates from spent nuclear fuel or analogues of nuclear fuel are presented. Each example represents a very different approach to obtaining a numerical measure and each has its limitations. There is no way to obtain an unambiguous measure of this or any parameter used in performance assessment codes for evaluating the effects of processes operative over many millennia. The examples are intended to suggest by example that in the absence of the ability to evaluate accuracy and precision, consistency of a broadly based set of data can be used as circumstantial evidence to defend the choice of parameters used in performance assessments

  17. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  18. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  19. A Practical Model to Perform Comprehensive Cybersecurity Audits

    Directory of Open Access Journals (Sweden)

    Regner Sabillon

    2018-03-01

    Full Text Available These days organizations continually face being targets of cyberattacks and cyberthreats; the sophistication and complexity of modern cyberattacks and the modus operandi of cybercriminals, including their Techniques, Tactics and Procedures (TTP), keep growing at unprecedented rates. Cybercriminals are always adopting new strategies to plan and launch cyberattacks based on existing cybersecurity vulnerabilities, exploiting end users by using social engineering techniques. Cybersecurity audits are extremely important to verify that information security controls are in place and to detect weaknesses arising from inexistent or obsolete cybersecurity controls. This article presents an innovative and comprehensive cybersecurity audit model. The CyberSecurity Audit Model (CSAM) can be implemented to perform internal or external cybersecurity audits. This model can be used to perform single cybersecurity audits or can be part of any corporate audit program to improve cybersecurity controls. Any information security or cybersecurity audit team can either perform a full audit covering all cybersecurity domains or select specific domains to audit areas that need control verification and hardening. The CSAM has 18 domains; Domain 1 is specific to Nation States, and Domains 2-18 can be implemented at any organization. The organization can be any small, medium or large enterprise; the model is also applicable to any Non-Profit Organization (NPO).

  20. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model the architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI0, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise for code characterization and empirical/analytical modeling.
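    The basic counter arithmetic behind such models is straightforward; a minimal sketch of the "CPI without memory effect" idea (a simplistic illustration only; the paper's actual formulas are derived from richer counter sets):

```python
def cpi(cycles, instructions):
    # Cycles per instruction from two hardware-counter totals.
    return cycles / instructions

def cpi_no_memory(cycles, mem_stall_cycles, instructions):
    # CPI with memory-stall cycles subtracted out, isolating the
    # processor-architecture contribution to execution time.
    return (cycles - mem_stall_cycles) / instructions
```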

  1. Integrated healthcare networks' performance: a growth curve modeling approach.

    Science.gov (United States)

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  2. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    Full Text Available In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  3. Fracture modelling of a high performance armour steel

    Science.gov (United States)

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulation the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments performed under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, ongoing work will further characterise the fracture behaviour of Armox 500T.

  4. Modeling the Performance of the Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.
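    The tree structure mentioned above drives much of any FMM cost model; a toy sketch of the octree depth under a uniform-distribution assumption (a simplification of this sketch, not the thesis's performance model, which must also account for communication and per-level work):

```python
import math

def octree_depth(n_bodies, bodies_per_leaf):
    """Depth of an FMM octree such that each leaf holds roughly
    bodies_per_leaf particles: each level multiplies the cell count
    by 8, so depth = ceil(log8(N / s)) = ceil(log2(N / s) / 3)."""
    return max(0, math.ceil(math.log2(n_bodies / bodies_per_leaf) / 3.0))
```

    The depth grows only logarithmically in the particle count, which is the structural reason FMM achieves near-O(N) complexity and scales hierarchically.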

  5. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation through dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models, like COST-based models, 3GPP SCM, WINNER and ITU, have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performance of advanced signal processing techniques over realistic channels. Given that the existing channels are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  6. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approaches and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in the sub-threshold domain in order to develop efficient compact models. The proposed analytical approach gives physical insight into the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  7. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and to reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.
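    Scale-down centrifuge prediction conventionally rests on Sigma (equivalent settling area) theory, keeping Q/Σ constant between scales; a minimal sketch of that rule of thumb (the paper's contribution is precisely to correct this for feed-zone shear via CFD, which this sketch does not model):

```python
def scaled_flow_rate(q_lab, sigma_lab, sigma_prod):
    """Flow rate giving equal clarification performance at production
    scale, from equal-Q/Sigma scale-up:
        Q_lab / Sigma_lab = Q_prod / Sigma_prod
    Units cancel, so any consistent choice (e.g. L/h and m^2) works."""
    return q_lab * sigma_prod / sigma_lab
```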

  8. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

    Full Text Available In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding of mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher-order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single- and multiple-solute aqueous solutions were measured and compared with predictions from a number of model treatments. For binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure-component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was not reliably captured by a predictive scheme, and this depended on the composition of the solutes present. For more atmospherically representative higher-order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus
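    The "relatively simple mixing rule" alluded to above can be illustrated in its most basic, mole-fraction-weighted linear form (the study's actual rule incorporates parameters derived from the binary measurements, which this sketch omits):

```python
def mixture_surface_tension(mole_fractions, surface_tensions):
    """Linear mixing rule: sigma_mix = sum_i x_i * sigma_i.
    Inputs are component mole fractions (summing to 1) and the
    corresponding pure/binary surface tensions (e.g. in mN/m)."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return sum(x * s for x, s in zip(mole_fractions, surface_tensions))
```

    For example, an equimolar mix of components with 72 and 30 mN/m yields 51 mN/m; the measured data show that real organic mixtures often deviate from such linearity, which is why fitted parameters are needed.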

  9. Evaluation of the performance of DIAS ionospheric forecasting models

    Directory of Open Access Journals (Sweden)

    Tsagouri Ioanna

    2011-08-01

    Full Text Available Nowcasting and forecasting ionospheric products and services for the European region have been regularly provided since August 2006 through the European Digital upper Atmosphere Server (DIAS, http://dias.space.noa.gr). Currently, DIAS ionospheric forecasts are based on the online implementation of two models: (i) the solar wind driven autoregression model for ionospheric short-term forecast (SWIF), which combines historical and real-time ionospheric observations with solar-wind parameters obtained in real time at the L1 point from the NASA ACE spacecraft, and (ii) the geomagnetically correlated autoregression model (GCAM), which is a time-series forecasting method driven by a synthetic geomagnetic index. In this paper we investigate the operational ability and the accuracy of both DIAS models, carrying out a metrics-based evaluation of their performance under all possible conditions. The analysis was based on the systematic comparison between the models’ predictions and actual observations obtained over almost one solar cycle (1998–2007) at four European ionospheric locations (Athens, Chilton, Juliusruh and Rome), and on the comparison of the models’ performance against two simple prediction strategies, the median- and persistence-based predictions during storm conditions. The results verify operational validity for both models and quantify their prediction accuracy under all possible conditions, in support of operational applications but also of comparative studies in assessing or expanding current ionospheric forecasting capabilities.

  10. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in the case of runoff but not for soil moisture. Furthermore, the most sophisticated PREVAH model shows added value compared to the HBV model only in the case of soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for the prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  11. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

    Purpose: With the recently expanding interest in and development of x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and its comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers, including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
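    The two-alternative forced choice (2AFC) scores reported by the human observers relate to the model observers' detectability index d' through the standard conversion PC = Φ(d'/√2); a minimal sketch:

```python
from math import erf

def pc_2afc(d_prime):
    """Proportion correct in a 2AFC task for detectability index d'.
    PC = Phi(d'/sqrt(2)) with Phi(x) = 0.5*(1 + erf(x/sqrt(2))),
    which simplifies to 0.5*(1 + erf(d_prime / 2))."""
    return 0.5 * (1.0 + erf(d_prime / 2.0))
```

    A d' of 0 gives chance performance (PC = 0.5), and PC saturates towards 1 as d' grows; comparing PC predicted from a model observer's d' against human 2AFC scores is the correlation assessed in studies like this one.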

  12. Modelling of performance of the ATLAS SCT detector

    International Nuclear Information System (INIS)

    Kazi, S.

    2000-01-01

    Full text: The ATLAS detector being built at the LHC will use the SCT (semiconductor tracking) module for particle tracking in the inner core of the detector. An analytical/numerical model of the discriminator threshold dependence and the temperature dependence of the SCT module was derived. Measurements were conducted on the performance of the SCT module versus temperature, and the results were compared with the predictions made by the model. The effect of radiation damage on the SCT detector was also investigated. The detector will operate for approximately 10 years, so a study was carried out on the effects of 10 years of radiation exposure to the SCT

  13. An Integrated Model to Explain How Corporate Social Responsibility Affects Corporate Financial Performance

    Directory of Open Access Journals (Sweden)

    Chin-Shien Lin

    2015-06-01

    Full Text Available The effect of corporate social responsibility (CSR on financial performance has important implications for enterprises, communities, and countries, and the significance of this issue cannot be ignored. Therefore, this paper proposes an integrated model to explain the influence of CSR on financial performance with intellectual capital as a mediator and industry type as a moderator. Empirical results indicate that intellectual capital mediates the relationship between CSR and financial performance, and industry type moderates the direct influence of CSR on financial performance. Such results have critical implications for both academia and practice.

  14. Models for the energy performance of low-energy houses

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff

    of buildings is needed both in order to assess energy-efficiency and to operate modern buildings economically. Energy signatures are a central tool in both energy performance assessment and decision making related to refurbishment of buildings. Also for operation of modern buildings with installations......-building. The building is well-insulated and features large modern energy-efficient windows and floor heating. These features lead to increased non-linear responses to solar radiation and longer time constants. The building is equipped with advanced control and measuring equipment. Experiments are designed and performed...... in order to identify important dynamical properties of the building, and the collected data is used for modeling. The thesis emphasizes the statistical model building and validation needed to identify dynamical systems. It is distinguished from earlier work by focusing on modern low-energy construction...

  15. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-117 prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented

  16. WWER reactor fuel performance, modelling and experimental support. Proceedings

    International Nuclear Information System (INIS)

    Stefanova, S.; Chantoin, P.; Kolev, I.

    1994-01-01

    This publication is a compilation of 36 papers presented at the International Seminar on WWER Reactor Fuel Performance, Modelling and Experimental Support, organised by the Institute for Nuclear Research and Nuclear Energy (BG), in cooperation with the International Atomic Energy Agency. The Seminar was attended by 76 participants from 16 countries, including representatives of all major Russian plants and institutions responsible for WWER reactor fuel manufacturing, design and research. The reports are grouped in four chapters: 1) WWER Fuel Performance and Economics: Status and Improvement Prospects: 2) WWER Fuel Behaviour Modelling and Experimental Support; 3) Licensing of WWER Fuel and Fuel Analysis Codes; 4) Spent Fuel of WWER Plants. The reports from the corresponding four panel discussion sessions are also included. All individual papers are recorded in INIS as separate items

  17. From Performance Measurement to Strategic Management Model: Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Cihat Savsar

    2015-03-01

    Full Text Available Abstract: In today’s competitive markets, one of the main conditions for the survival of enterprises is an effective performance management system. Management must take decisions according to the performance of assets. In the transition from the industrial society to the information society, the structure of businesses has changed and the value of non-financial assets has increased. Consequently, systems have emerged that are based on intangible assets and measure them instead of tangible assets alone. With economic and technological development, single-dimensional evaluation of the business could no longer be sufficient. Performance evaluation methods can be applied in business with an integrated approach through their accordance with business strategy, their link to the reward system, and the cause-effect links established between performance measures. The balanced scorecard is one of the most commonly used measurement methods. While it was first used in 1992 as a performance measurement tool, today it is also used as a strategic management model beyond its conventional uses. The BSC contains the customer, internal-process, and learning-and-growth perspectives besides the financial perspective, with the learning-and-growth perspective a determinant of the other perspectives. It is emphasized that, in order to achieve the objectives set out in the financial perspective, targets in the other dimensions need to be accomplished. Establishing causal links between performance measures and targets, and how to achieve the specified goals with strategy maps, are described.

  18. A Fuzzy Knowledge Representation Model for Student Performance Assessment

    DEFF Research Database (Denmark)

    Badie, Farshad

    Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

  19. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through the explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012, Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  20. System performance modeling of extreme ultraviolet lithographic thermal issues

    International Nuclear Information System (INIS)

    Spence, P. A.; Gianoulakis, S. E.; Moen, C. D.; Kanouff, M. P.; Fisher, A.; Ray-Chaudhuri, A. K.

    1999-01-01

    Numerical simulation is used in the development of an extreme ultraviolet lithography Engineering Test Stand. Extensive modeling was applied to predict the impact of thermal loads on key lithographic parameters such as image placement error, focal shift, and loss of CD control. We show that thermal issues can be effectively managed to ensure that their impact on lithographic performance is maintained within design error budgets. (c) 1999 American Vacuum Society

  1. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... implementation consisting of a distributed PI controller structure, both in terms of minimising the overall cost but also in terms of the ability to minimise deviation, which is the classical objective....

  2. Thermal performance modeling of cross-flow heat exchangers

    CERN Document Server

    Cabezas-Gómez, Luben; Saíz-Jabardo, José Maria

    2014-01-01

    This monograph introduces a numerical computational methodology for thermal performance modeling of cross-flow heat exchangers, with applications in chemical, refrigeration and automobile industries. This methodology allows obtaining effectiveness-number of transfer units (e-NTU) data and has been used for simulating several standard and complex flow arrangements configurations of cross-flow heat exchangers. Simulated results have been validated through comparisons with results from available exact and approximate analytical solutions. Very accurate results have been obtained over wide ranges

  3. A non-traditional model of the metabolic syndrome: the adaptive significance of insulin resistance in fasting-adapted seals

    Directory of Open Access Journals (Sweden)

    Dorian S Houser

    2013-11-01

    Full Text Available Insulin resistance in modern society is perceived as a pathological consequence of excess energy consumption and reduced physical activity. Its presence in relation to the development of cardiovascular risk factors has been termed the metabolic syndrome, which produces increased mortality and morbidity and which is rapidly increasing in human populations. Ironically, insulin resistance likely evolved to assist animals during food shortages by increasing the availability of endogenous lipid for catabolism while protecting protein from use in gluconeogenesis and eventual oxidation. Some species that incorporate fasting as a predictable component of their life history demonstrate physiological traits similar to the metabolic syndrome during prolonged fasts. One such species is the northern elephant seal (Mirounga angustirostris), which fasts from food and water for periods of up to three months. During this time, ~90% of the seal's metabolic demands are met through fat oxidation and circulating non-esterified fatty acids are high (0.7-3.2 mM). All life history stages of elephant seal studied to date demonstrate insulin resistance and fasting hyperglycemia as well as variations in hormones and adipocytokines that reflect the metabolic syndrome to some degree. Elephant seals demonstrate some intriguing adaptations with the potential for medical advancement; for example, ketosis is negligible despite significant and prolonged fatty acid oxidation, and investigation of this feature might provide insight into the treatment of diabetic ketoacidosis. The parallels to the metabolic syndrome are likely reflected to varying degrees in other marine mammals, most of which evolved on diets high in lipid and protein content but essentially devoid of carbohydrate.
Utilization of these natural models of insulin resistance may further our understanding of the pathophysiology of the metabolic syndrome in humans and better assist the development of preventative measures

  4. A non-traditional model of the metabolic syndrome: the adaptive significance of insulin resistance in fasting-adapted seals.

    Science.gov (United States)

    Houser, Dorian S; Champagne, Cory D; Crocker, Daniel E

    2013-11-01

    Insulin resistance in modern society is perceived as a pathological consequence of excess energy consumption and reduced physical activity. Its presence in relation to the development of cardiovascular risk factors has been termed the metabolic syndrome, which produces increased mortality and morbidity and which is rapidly increasing in human populations. Ironically, insulin resistance likely evolved to assist animals during food shortages by increasing the availability of endogenous lipid for catabolism while protecting protein from use in gluconeogenesis and eventual oxidation. Some species that incorporate fasting as a predictable component of their life history demonstrate physiological traits similar to the metabolic syndrome during prolonged fasts. One such species is the northern elephant seal (Mirounga angustirostris), which fasts from food and water for periods of up to 4 months. During this time, ∼90% of the seal's metabolic demands are met through fat oxidation and circulating non-esterified fatty acids are high (0.7-3.2 mM). All life history stages of elephant seal studied to date demonstrate insulin resistance and fasting hyperglycemia as well as variations in hormones and adipocytokines that reflect the metabolic syndrome to some degree. Elephant seals demonstrate some intriguing adaptations with the potential for medical advancement; for example, ketosis is negligible despite significant and prolonged fatty acid oxidation, and investigation of this feature might provide insight into the treatment of diabetic ketoacidosis. The parallels to the metabolic syndrome are likely reflected to varying degrees in other marine mammals, most of which evolved on diets high in lipid and protein content but essentially devoid of carbohydrate. Utilization of these natural models of insulin resistance may further our understanding of the pathophysiology of the metabolic syndrome in humans and better assist the development of preventative measures and therapies.

  5. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive and finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the large number of transmit antennas and paths regime. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D-Massive MIMO systems.

  6. Evaluation of CFVS Performance with SPARC Model and Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Il; Na, Young Su; Ha, Kwang Soon; Cho, Song Won [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The Containment Filtered Venting System (CFVS) is one of the important safety features for reducing the amount of fission products released into the environment by depressurizing the containment. KAERI has been conducting an integrated performance verification test of the CFVS as part of a Ministry of Trade, Industry and Energy (MOTIE) project. For wet-type filters, codes such as SPARC, BUSCA and SUPRA are generally used; in particular, the SPARC model is included in MELCOR to calculate the fission product removal rate through pool scrubbing. In this study, CFVS performance is evaluated using the SPARC model in MELCOR according to the steam fraction in the containment. The calculation focuses mainly on the effect of the steam fraction in the containment, and the result is explained with the aerosol removal model in SPARC. A previous study on the OPR 1000 is applied to the result. There were two CFVS valve opening periods, and it was found that the CFVS performance differed in each case. The results provide fundamental data that can be used to decide the CFVS operation time; however, more calculation data are necessary to generalize the result.

  7. Modelling the Progression of Male Swimmers’ Performances through Adolescence

    Directory of Open Access Journals (Sweden)

    Shilo J. Dormehl

    2016-01-01

    Full Text Available Insufficient data on adolescent athletes is contributing to the challenges facing youth athletic development and accurate talent identification. The purpose of this study was to model the progression of male sub-elite swimmers’ performances during adolescence. The performances of 446 males (12–19 years old) competing in seven individual events (50, 100 and 200 m freestyle; 100 m backstroke, breaststroke and butterfly; 200 m individual medley) over an eight-year period at an annual international schools swimming championship, run under FINA regulations, were collected. Quadratic functions for each event were determined using mixed linear models. Thresholds of peak performance were achieved between the ages of 18.5 ± 0.1 (50 m freestyle and 200 m individual medley) and 19.8 ± 0.1 (100 m butterfly) years. The slowest rate of improvement was observed in the 200 m individual medley (20.7%) and the highest in the 100 m butterfly (26.2%). Butterfly does, however, appear to be one of the last strokes in which males specialise. The models may be useful as talent identification tools, as they predict the age at which an average sub-elite swimmer could potentially peak. The expected rate of improvement could serve as a tool with which to monitor and evaluate benchmarks.
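The peak-age estimates above come from fitting quadratic functions of age to event performance. A minimal sketch of the idea, using plain least squares on synthetic times (the study itself fitted quadratics via mixed linear models to FINA competition data; the values below are invented for illustration):

```python
import numpy as np

# Synthetic example (not the study's data): 100 m butterfly times (s)
# for sub-elite swimmers at ages 12-19, improving then plateauing.
ages = np.array([12, 13, 14, 15, 16, 17, 18, 19], dtype=float)
times = np.array([75.0, 71.0, 68.0, 65.5, 63.8, 62.7, 62.1, 62.0])

# Fit time = a*age^2 + b*age + c by ordinary least squares.
a, b, c = np.polyfit(ages, times, 2)

# The vertex of the parabola estimates the age of peak performance
# (minimum time), provided a > 0 so the curve opens upward.
peak_age = -b / (2 * a)
print(f"estimated peak-performance age: {peak_age:.1f}")
```

With real longitudinal data, the same vertex calculation per event yields the kind of peak-age thresholds reported in the abstract.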

  8. A performance measurement using balanced scorecard and structural equation modeling

    Directory of Open Access Journals (Sweden)

    Rosha Makvandi

    2014-02-01

    Full Text Available During the past few years, the balanced scorecard (BSC) has been widely used as a promising method for performance measurement. BSC studies organizations in terms of four perspectives, including customer, internal processes, learning and growth, and financial figures. This paper presents a hybrid of BSC and structural equation modeling (SEM) to measure the performance of an Iranian university in the province of Alborz, Iran. The proposed study uses this conceptual method, designs a questionnaire, and distributes it among some university students and professors. Using the SEM technique, the survey analyzes the data, and the results indicate that the university did poorly in terms of all four perspectives. The survey derives improvement targets by presenting the attributes necessary for performance improvement.

  9. The application of DEA model in enterprise environmental performance auditing

    Science.gov (United States)

    Li, F.; Zhu, L. Y.; Zhang, J. D.; Liu, C. Y.; Qu, Z. G.; Xiao, M. S.

    2017-01-01

    As a part of society, enterprises have an inescapable responsibility for environmental protection and governance. This article discusses the feasibility and necessity of enterprise environmental performance auditing and uses a DEA model to calculate the environmental performance of Haier as an example. Most of the reference data are selected and sorted from Haier’s environmental reports published in 2008, 2009, 2011 and 2015, with some data taken from published articles and fieldwork. All results are calculated with the DEAP software and have high credibility. The analysis results of this article can give corporate management an idea of how to use environmental performance auditing to adjust corporate environmental investment quotas and change the company’s environmental strategies.
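The abstract does not specify which DEA formulation was used; as a hedged illustration, the classical input-oriented CCR efficiency model can be solved with an off-the-shelf LP solver. The data below are hypothetical, not Haier's reported figures:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1..lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    # inputs:  X^T lambda - theta * x_k <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # outputs: -Y^T lambda <= -y_k  (i.e. Y^T lambda >= y_k)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: 4 units, 2 inputs (energy, water), 1 output (production)
X = np.array([[10.0, 5.0], [8.0, 4.0], [12.0, 7.0], [9.0, 6.0]])
Y = np.array([[100.0], [100.0], [90.0], [80.0]])
scores = [dea_ccr_efficiency(X, Y, k) for k in range(4)]
```

A score of 1 marks an efficient unit; lower scores indicate how much all inputs could be shrunk proportionally while still producing the observed output.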

  10. PHARAO flight model: optical on ground performance tests

    Science.gov (United States)

    Lévèque, T.; Faure, B.; Esnault, F. X.; Grosjean, O.; Delaroche, C.; Massonnet, D.; Escande, C.; Gasc, Ph.; Ratsimandresy, A.; Béraud, S.; Buffe, F.; Torresi, P.; Larivière, Ph.; Bernard, V.; Bomer, T.; Thomin, S.; Salomon, C.; Abgrall, M.; Rovera, D.; Moric, I.; Laurent, Ph.

    2017-11-01

    PHARAO (Projet d'Horloge Atomique par Refroidissement d'Atomes en Orbite), which has been developed by CNES, is the first primary frequency standard specially designed for operation in space. PHARAO is the main instrument of the ESA mission ACES (Atomic Clock Ensemble in Space). ACES payload will be installed on-board the International Space Station (ISS) to perform fundamental physics experiments. All the sub-systems of the Flight Model (FM) have now passed the qualification process and the whole FM of the cold cesium clock, PHARAO, is being assembled and will undergo extensive tests. The expected performances in space are a frequency accuracy of less than 3×10^-16 (with a final goal at 10^-16) and a frequency stability of 10^-13 τ^-1/2. In this paper, we focus on the laser source performances and the main results on the cold atom manipulation.

  11. Advanced transport systems analysis, modeling, and evaluation of performances

    CERN Document Server

    Janić, Milan

    2014-01-01

    This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban area(s), electric and fuel cell passenger cars, high speed tilting trains, High Speed Rail (HSR), Trans Rapid Maglev (TRM), Evacuated Tube Transport system (ETT), advanced commercial subsonic and Supersonic Transport Aircraft (STA), conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans...

  12. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. This review identified prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Performance model-directed data sieving for high-performance I/O

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh; Thakur, Rajeev; Zhuang, Yu

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving for short. It improves the existing data sieving approach in two aspects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces the memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing the memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
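The core data-sieving idea is to read one contiguous extent covering many small requests and slice out the pieces. The sketch below is illustrative only: the function names are invented, and the fixed gap threshold is a crude stand-in for the paper's performance-model-based decision of when sieving pays off:

```python
def sieve_read(read_at, requests, max_gap=65536):
    """Service noncontiguous (offset, length) requests with data sieving:
    read one contiguous extent covering them all, then slice out the
    requested pieces. `read_at(offset, length)` is the underlying I/O call."""
    requests = sorted(requests)
    start = requests[0][0]
    end = max(off + ln for off, ln in requests)
    # Sieving only pays off when the holes between requests are small;
    # otherwise fall back to individual reads.
    holes = (end - start) - sum(ln for _, ln in requests)
    if holes > max_gap:
        return [read_at(off, ln) for off, ln in requests]
    block = read_at(start, end - start)          # one large contiguous read
    return [block[off - start: off - start + ln] for off, ln in requests]

# Usage with an in-memory "file"
data = bytes(range(256))
read_at = lambda off, ln: data[off: off + ln]
parts = sieve_read(read_at, [(10, 4), (30, 2), (50, 6)])
```

The trade-off the paper models analytically is visible here: one large read touches the unrequested "holes" (extra data and buffer memory) in exchange for fewer I/O operations.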

  14. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial scale features that tend to be averaged out over longer periods. These small spatial scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs)

  15. Solutions for Determining the Significance Region Using the Johnson-Neyman Type Procedure in Generalized Linear (Mixed) Models

    Science.gov (United States)

    Lazar, Ann A.; Zerbe, Gary O.

    2011-01-01

    Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…

  16. Challenges and opportunities of modeling plasma-surface interactions in tungsten using high-performance computing

    Science.gov (United States)

    Wirth, Brian D.; Hammond, K. D.; Krasheninnikov, S. I.; Maroudas, D.

    2015-08-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of "fuzz" and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten-helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification.

  17. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    International Nuclear Information System (INIS)

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification

  18. Modeling Windows in Energy Plus with Simple Performance Indices

    Energy Technology Data Exchange (ETDEWEB)

    Arasteh, Dariush; Kohler, Christian; Griffith, Brent

    2009-10-12

    The building energy simulation program, Energy Plus (E+), cannot use standard window performance indices (U, SHGC, VT) to model window energy impacts. Rather, E+ uses more accurate methods which require a physical description of the window. E+ needs to be able to accept U and SHGC indices as window descriptors because, often, these are all that is known about a window and because building codes, standards, and voluntary programs are developed using these terms. This paper outlines a procedure, developed for E+, which will allow it to use standard window performance indices to model window energy impacts. In this 'Block' model, a given U, SHGC, VT are mapped to the properties of a fictitious 'layer' in E+. For thermal conductance calculations, the 'Block' functions as a single solid layer. For solar optical calculations, the model begins by defining a solar transmittance (Ts) at normal incidence based on the SHGC. For properties at non-normal incidence angles, the 'Block' takes on the angular properties of multiple glazing layers; the number and type of layers defined by the U and SHGC. While this procedure is specific to E+, parts of it may have applicability to other window/building simulation programs.

  19. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

    Full Text Available The productivity of a gas well declines over its production life, eventually to the point where it can no longer cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least-squares support vector machine (LSSVM) approach, the adaptive neuro-fuzzy inference system (ANFIS), and the decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as functions of the Arps decline-curve exponent and the ratio of initial gas flow rate to total gas flow rate. It was concluded that the results obtained from the models developed in the current study are in satisfactory agreement with actual gas well production data. Furthermore, the comparative study performed demonstrates that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
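For reference, the Arps decline relations underlying such decline-curve analysis can be sketched as follows (the well parameters below are hypothetical, and this is the standard textbook form rather than the authors' specific formulation):

```python
import math

def arps_rate(qi, Di, b, t):
    """Arps decline-curve rate q(t).
    b = 0: exponential decline; 0 < b < 1: hyperbolic decline."""
    if b == 0:
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def arps_cumulative(qi, Di, b, t):
    """Cumulative production Np(t) under the same decline model."""
    if b == 0:
        return qi / Di * (1.0 - math.exp(-Di * t))
    return qi / (Di * (1.0 - b)) * (1.0 - (1.0 + b * Di * t) ** (1.0 - 1.0 / b))

# Hypothetical well: qi = 1000 Mscf/d, Di = 0.3 /yr, b = 0.5
q5 = arps_rate(1000.0, 0.3, 0.5, 5.0)
```

The decline exponent `b` here is the "Arps decline-curve exponent" used as a model input in the abstract.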

  20. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min, thereby decreasing pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially with design characteristics, making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the Storm Water Management Model (SWMM), Soil-Water-Atmosphere-Plant (SWAP), SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow and for delay of peak flow; however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.
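The retention and peak-reduction percentages quoted in the review come from paired rainfall/runoff series; a minimal sketch of how such metrics are computed (the series below are synthetic):

```python
# Volume retention and peak-flow reduction from a rainfall hyetograph and
# the corresponding roof-runoff series, as reported in green roof studies.

def retention_metrics(rainfall_mm, runoff_mm):
    vol_in, vol_out = sum(rainfall_mm), sum(runoff_mm)
    volume_retention = 1.0 - vol_out / vol_in        # fraction retained
    peak_reduction = 1.0 - max(runoff_mm) / max(rainfall_mm)
    return volume_retention, peak_reduction

rain = [0.0, 2.0, 6.0, 10.0, 4.0, 1.0]   # mm per time interval (synthetic)
roof = [0.0, 0.0, 1.0, 4.0, 3.0, 1.5]    # mm per time interval (synthetic)
vr, pr = retention_metrics(rain, roof)
```

Peak *delay*, the third metric in the review, would additionally compare the time indices of the two maxima.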

  1. A Fluid Model for Performance Analysis in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Coupechoux Marceau

    2010-01-01

    Full Text Available We propose a new framework to study the performance of cellular networks using a fluid model and we derive from this model analytical formulas for interference, outage probability, and spatial outage probability. The key idea of the fluid model is to consider the discrete base station (BS entities as a continuum of transmitters that are spatially distributed in the network. This model allows us to obtain simple analytical expressions to reveal main characteristics of the network. In this paper, we focus on the downlink other-cell interference factor (OCIF, which is defined for a given user as the ratio of its outer cell received power to its inner cell received power. A closed-form formula of the OCIF is provided in this paper. From this formula, we are able to obtain the global outage probability as well as the spatial outage probability, which depends on the location of a mobile station (MS initiating a new call. Our analytical results are compared to Monte Carlo simulations performed in a traditional hexagonal network. Furthermore, we demonstrate an application of the outage probability related to cell breathing and densification of cellular networks.
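The key idea above, replacing discrete base stations by a continuum of transmitters, can be illustrated numerically. This sketch integrates the interference of a transmitter continuum of density rho_bs over a ring beginning at 2*Rc − r (Rc being half the inter-site distance) and divides by the serving-cell power; all parameter values and the simple u**(-eta) path loss are assumptions of the sketch, not the paper's closed-form result.

```python
import numpy as np

def ocif_fluid(r, rc=1000.0, r_network=20000.0,
               rho_bs=1.0 / (np.pi * 1000.0 ** 2), eta=3.5):
    """Other-cell interference factor for a mobile r metres from its BS."""
    # Transmitter continuum: interference from all fluid 'base stations'
    # between the closest interferer ring (2*rc - r) and the network edge.
    u = np.linspace(2 * rc - r, r_network, 200_000)
    du = u[1] - u[0]
    interference = np.sum(rho_bs * 2 * np.pi * u ** (1.0 - eta)) * du
    received = r ** (-eta)            # useful power from the serving BS
    return interference / received    # OCIF = outer-cell / inner-cell power

f_edge = ocif_fluid(r=900.0)    # mobile near the cell edge
f_center = ocif_fluid(r=100.0)  # mobile close to its base station
```

As the paper's closed-form OCIF also shows, the factor grows sharply as the mobile approaches the cell edge.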

  2. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, and dips. For model validation, measurements of the free dynamic response per unit voltage and the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is also performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic

  3. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of research issues and challenges in designing spectrum-energy-efficient CCRN. As an effort to shed light on the design of spectrum-energy-efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy-efficient CCRN are analyzed.

  4. How Good Is Good: Improved Tracking and Managing of Safety Goals, Performance Indicators, Production Targets and Significant Events Using Learning Curves

    International Nuclear Information System (INIS)

    Duffey, Rommey B.; Saull, John W.

    2002-01-01

    We show a new way to track and measure safety and performance using learning curves derived on a mathematical basis. When unusual or abnormal events occur in plants and equipment, the regulator and good management practice require that they be reported, investigated, understood and rectified. In addition to reporting so-called 'significant events', both management and the regulator often set targets for individual and collective performance, which are used for both reward and criticism. For almost completely safe systems, like nuclear power plants, commercial aircraft and chemical facilities, many parameters are tracked and measured. Continuous improvement has to be demonstrated, as well as meeting reduced occurrence rates, which are set as management goals or targets. This process usually takes the form of statistics for availability of plant and equipment, forced or unplanned maintenance outage, loss of safety function, safety or procedural violations, etc. These are often rolled up into a set of so-called 'Performance Indicators' as measures of how well safety and operation are being managed at a given facility. The overall operating standards of an industry are also measured. A whole discipline is formed of tracking, measuring, reporting, managing and understanding the plethora of indicators and data. Decreasing occurrence rates and meeting or exceeding goals are seen and rewarded as virtues. Managers and operators need to know how good the safety management system they have adopted and used (and paid for) is, and whether it can itself be improved. We show the importance of accumulated experience in correctly measuring and tracking the decreasing event and error rates, and speculate a finite minimum rate. We show that the rate of improvement constitutes a measurable 'learning curve', and that the attainment of goals and targets can be affected by the adopted measures. We examine some of the available data on significant events, reportable occurrences, and loss of
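A learning curve of the kind described, an event rate that decays with accumulated experience toward a finite minimum, can be sketched as an exponential model; the functional form and all parameter values here are illustrative assumptions, not the authors' fitted curve.

```python
import math

# Event rate as a function of accumulated experience eps:
# lam(eps) = lam_min + (lam0 - lam_min) * exp(-k * eps),
# decaying from an initial rate lam0 toward a finite minimum lam_min.

def event_rate(experience, lam0=50.0, lam_min=5.0, k=0.4):
    return lam_min + (lam0 - lam_min) * math.exp(-k * experience)

# Track the modelled rate over, e.g., unit-years of accumulated experience.
rates = [event_rate(e) for e in range(0, 11)]
```

Fitting such a curve to reported occurrence data is what lets "how good is good" be asked quantitatively: the asymptote lam_min is the speculated finite minimum rate.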

  5. On the performance of satellite precipitation products in riverine flood modeling: A review

    Science.gov (United States)

    Maggioni, Viviana; Massari, Christian

    2018-03-01

    This work is meant to summarize lessons learned on using satellite precipitation products for riverine flood modeling and to propose future directions in this field of research. Firstly, the most common satellite precipitation products (SPPs) of the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) eras are reviewed. Secondly, we discuss the main errors and uncertainty sources in these datasets that have the potential to affect streamflow and runoff model simulations. Thirdly, past studies that focused on using SPPs for predicting streamflow and runoff are analyzed. As the impact of floods depends not only on the characteristics of the flood itself, but also on the characteristics of the region (population density, land use, geophysical and climatic factors), a regional analysis is required to assess the performance of hydrologic models in monitoring and predicting floods. The performance of SPP-forced hydrological models was shown to depend largely on several factors, including precipitation type, seasonality, hydrological model formulation, and topography. Across several basins around the world, the bias in SPPs was recognized as a major issue, and bias correction methods of different complexity were shown to significantly reduce streamflow errors. Model re-calibration was also raised as a viable option to improve SPP-forced streamflow simulations, but caution is necessary when recalibrating models with SPPs, which may result in unrealistic parameter values. From a general standpoint, there is significant potential for using satellite observations in flood forecasting, but the performance of SPPs in hydrological modeling is still inadequate for operational purposes.
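The simplest of the bias correction methods the review reports as effective is a multiplicative mean-matching adjustment; this toy sketch uses synthetic data, and real corrections in the literature are often more elaborate (quantile mapping, spatially varying factors).

```python
import numpy as np

# Toy multiplicative bias correction: scale a satellite precipitation
# series so its mean matches a gauge reference over a calibration period.

def bias_correct(satellite, gauge):
    factor = np.mean(gauge) / np.mean(satellite)
    return satellite * factor, factor

sat = np.array([2.0, 0.0, 5.0, 1.0, 4.0])  # mm/day, biased low (synthetic)
ref = np.array([3.0, 0.5, 6.0, 1.5, 7.0])  # gauge reference (synthetic)
corrected, f = bias_correct(sat, ref)
# corrected now has the same mean as the gauge series
```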

  6. Incorporating biologic measurements (SF2, CFE) into a tumor control probability model increases their prognostic significance: a study in cervical carcinoma treated with radiation therapy

    International Nuclear Information System (INIS)

    Buffa, Francesca Meteora; Davidson, Susan E.; Hunter, Robert D.; Nahum, Alan E.; West, Catharine M.L.

    2001-01-01

    Purpose: To assess whether incorporation of measurements of surviving fraction at 2 Gy (SF2) and colony-forming efficiency (CFE) into a tumor control probability (tcp) model increases their prognostic significance. Methods and Materials: Measurements of SF2 and CFE were available from a study on carcinoma of the cervix treated with radiation alone. These measurements, as well as tumor volume, dose, and treatment time, were incorporated into a Poisson tcp model (tcp_α,ρ). Regression analysis was performed to assess the prognostic power of tcp_α,ρ vs. the use of either tcp models with biologic parameters fixed to best-fit estimates (but incorporating individual dose, volume, and treatment time) or the use of SF2 and CFE measurements alone. Results: In a univariate regression analysis of 44 patients, tcp_α,ρ was a better prognostic factor for both local control and survival than SF2 alone (p=0.009 for local control, p=0.29 for survival) or CFE alone (p=0.015 for local control, p=0.38 for survival). In multivariate analysis, tcp_α,ρ emerged as the most important prognostic factor for local control; after allowing for tcp_α,ρ, CFE was still a significant independent prognostic factor for local control, whereas SF2 was not. The sensitivities of tcp_α,ρ and SF2 as predictive tests for local control were 87% and 65%, respectively. Specificities were 70% and 77%, respectively. Conclusions: A Poisson tcp model incorporating individual SF2, CFE, dose, tumor volume, and treatment time was found to be the best independent prognostic factor for local control and survival in cervical carcinoma patients.
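A minimal Poisson tcp of the general type used above can be sketched directly from the measured quantities: CFE and tumour volume set the clonogen number, and SF2 sets the per-fraction survival. The assumed cell density and all numerical values are illustrative, and the sketch omits the proliferation/treatment-time term of the study's full model.

```python
import math

CELLS_PER_CM3 = 1e9  # assumed tumour cell density (illustrative)

def poisson_tcp(sf2, cfe, volume_cm3, dose_gy):
    """Poisson tcp for a total dose delivered in 2-Gy fractions."""
    clonogens = CELLS_PER_CM3 * volume_cm3 * cfe   # clonogenic cell number
    surviving = clonogens * sf2 ** (dose_gy / 2.0) # survival after D/2 fractions
    return math.exp(-surviving)  # Poisson probability of zero survivors

tcp = poisson_tcp(sf2=0.4, cfe=0.01, volume_cm3=30.0, dose_gy=70.0)
```

The prognostic gain reported above comes precisely from this combination: SF2 or CFE alone ignores how dose and tumour burden modulate their effect.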

  7. A refined index of model performance: a rejoinder

    Science.gov (United States)

    Legates, David R.; McCabe, Gregory J.

    2013-01-01

    Willmott et al. [Willmott CJ, Robeson SM, Matsuura K. 2012. A refined index of model performance. International Journal of Climatology, forthcoming. DOI:10.1002/joc.2419.] recently suggested a refined index of model performance (dr) that they purport to be superior to other methods. Their refined index ranges from −1.0 to 1.0 to resemble a correlation coefficient, but it is merely a linear rescaling of our modified coefficient of efficiency (E1) over the positive portion of the domain of dr. We disagree with Willmott et al. (2012) that dr provides a better interpretation; rather, E1 is more easily interpreted: a value of E1 = 1.0 indicates a perfect model (no errors), while E1 = 0.0 indicates a model that is no better than the baseline comparison (usually the observed mean). Negative values of E1 (and, for that matter, of dr) indicate a model performing worse than the baseline, a point discussed by Legates and McCabe [Legates DR, McCabe GJ. 1999. Evaluating the use of "goodness-of-fit" measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.] and Schaefli and Gupta [Schaefli B, Gupta HV. 2007. Do Nash values have value? Hydrological Processes 21: 2075-2080. DOI: 10.1002/hyp.6825.]. This important discussion focuses on the appropriate baseline comparison to use, and why the observed mean often may be an inadequate choice for model evaluation and development.
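The rescaling claim above can be checked numerically: with the standard definitions of E1 and dr (scaling constant c = 2), dr = (1 + E1)/2 whenever the error sum stays within the positive branch of dr. The data below are synthetic.

```python
import numpy as np

def e1(obs, pred):
    """Legates-McCabe modified coefficient of efficiency."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.abs(pred - obs).sum() / np.abs(obs - obs.mean()).sum()

def dr(obs, pred, c=2.0):
    """Willmott et al. (2012) refined index of model performance."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    a = np.abs(pred - obs).sum()
    b = np.abs(obs - obs.mean()).sum()
    return 1.0 - a / (c * b) if a <= c * b else c * b / a - 1.0

obs = [2.0, 4.0, 6.0, 8.0, 10.0]
pred = [2.5, 3.5, 6.5, 8.5, 9.0]
# On the positive branch, dr is a linear rescaling of E1:
assert abs(dr(obs, pred) - (1.0 + e1(obs, pred)) / 2.0) < 1e-12
```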

  8. A high performance finite element model for wind farm modeling in forested areas

    Science.gov (United States)

    Owen, Herbert; Avila, Matias; Folch, Arnau; Cosculluela, Luis; Prieto, Luis

    2015-04-01

    Wind energy has grown significantly during the past decade and is expected to continue growing in the fight against climate change. In the search for new land where the impact of wind turbines is small, several wind farms are currently being installed in forested areas. In order to optimize the distribution of the wind turbines within a wind farm, the Reynolds-averaged Navier-Stokes equations are solved over the domain of interest using either commercial or in-house codes. The existence of a canopy alters the atmospheric boundary layer wind profile close to the ground. Therefore, in order to obtain a more accurate representation of the flow in forested areas, modifications to both the Navier-Stokes and turbulence equations need to be introduced. Several existing canopy models have been tested in an academic problem, showing that the one proposed by Sogachev et al. gives the best results. This model has been implemented in an in-house CFD solver named Alya, a high-performance unstructured finite element code designed from scratch to run on the world's biggest supercomputers. Its scalability has recently been tested up to 100,000 processors in both American and European supercomputers. During the past three years the code has been tuned and tested for wind energy problems. Recent efforts have focused on the canopy model, following industry needs. In this work we benchmark our results in a wind farm that is currently being designed by Scottish Power and Iberdrola in Scotland. This is a very interesting real case with extensive experimental data from five different masts with anemometers at several heights, used to benchmark both the wind profiles and the speed-up obtained between different masts. Sixteen different wind directions are simulated. The numerical model provides very satisfactory results both for the masts that are affected by the canopy and for those that are not influenced by it.
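Canopy models of the family mentioned above typically enter the momentum equation as a quadratic drag sink; this sketch shows that generic form, with illustrative coefficient values rather than those of the Sogachev et al. formulation (which also modifies the turbulence equations).

```python
import numpy as np

# Generic canopy drag sink added to the RANS momentum equation:
# S = -Cd * a(z) * |u| * u, with Cd a drag coefficient and a(z) the
# leaf area density. Values below are illustrative.

def canopy_momentum_sink(u, cd=0.15, leaf_area_density=0.5):
    """Momentum sink per unit mass, m/s^2 (negative: opposes the flow)."""
    return -cd * leaf_area_density * np.abs(u) * u

u = np.array([1.0, 3.0, 5.0])      # streamwise velocity samples, m/s
sink = canopy_momentum_sink(u)     # braking grows quadratically with speed
```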

  9. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information Criteria provides an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
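The selection experiment above can be illustrated in miniature: fit a standard and an over-complex model to data generated by the simpler process, and compare AIC and BIC. Polynomial regression stands in here for the error correction models of the study; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)  # true process is linear

def fit_poly_ic(x, y, degree):
    """Least-squares polynomial fit with AIC/BIC computed from the RSS."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    n, k = y.size, degree + 1
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

aic1, bic1 = fit_poly_ic(x, y, 1)  # standard (true) model
aic5, bic5 = fit_poly_ic(x, y, 5)  # over-complex model
# BIC penalizes the extra parameters more heavily than AIC does,
# mirroring the study's finding that BIC recovers the standard model.
```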

  10. Segmentation process significantly influences the accuracy of 3D surface models derived from cone beam computed tomography

    International Nuclear Information System (INIS)

    Fourie, Zacharias; Damstra, Janalt; Schepers, Rutger H.; Gerrits, Peter O.; Ren Yijin

    2012-01-01

    Aims: To assess the accuracy of surface models derived from 3D cone beam computed tomography (CBCT) with two different segmentation protocols. Materials and methods: Seven fresh-frozen cadaver heads were used; there was no conflict of interest in this study. CBCT scans were made of the heads and 3D surface models of the mandible were created using two different segmentation protocols. One series of 3D models was segmented by a commercial software company (CS), while the other was done by an experienced 3D clinician (DS). The heads were then macerated following a standard process. A high-resolution laser surface scanner was used to make a 3D model of the macerated mandibles, which acted as the reference 3D model or "gold standard". The 3D models generated from the two rendering protocols were compared with the "gold standard" using a point-based rigid registration algorithm to superimpose the three 3D models. The linear difference at 25 anatomic and cephalometric landmarks between the laser surface scan (LSS) and the 3D models generated from the two rendering protocols was measured repeatedly in two sessions with a one-week interval. Results: The agreement between the repeated measurements was excellent (ICC = 0.923–1.000). The mean deviation from the gold standard of the 3D models in the CS group was 0.330 ± 0.427 mm, while the mean deviation of the clinician's rendering was 0.763 ± 0.392 mm. The surface models segmented by both the CS and DS protocols tend to be larger than the reference models. In the DS group, the biggest mean differences from the LSS models were found at the points ConLatR (CI: 0.83–1.23), ConMedR (CI: −3.16 to 2.25), CoLatL (CI: −0.68 to 2.23), Spine (CI: 1.19–2.28), ConAntL (CI: 0.84–1.69), ConSupR (CI: −1.12 to 1.47) and RetMolR (CI: 0.84–1.80). Conclusion: The commercially segmented models resembled reality more closely than the clinician-segmented models. If 3D models are needed for surgical drilling

  11. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  12. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

    Full Text Available Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systemically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3 and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. The integration of the package in NewAge-JGrass uses other components, such as geographic information system tools, to manage input–output processes, and automatic calibration algorithms to estimate model parameters. The system was applied for a case study in Calabria (Italy along the Salerno–Reggio Calabria highway, between Cosenza and Altilia. The area is extensively subject to rainfall-induced shallow landslides mainly because of its complex geology and climatology. The analysis was carried out considering all the combinations of the eight optimized indices and the three models. Parameter calibration, verification, and model performance assessment were performed by a comparison with a detailed landslide
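Goodness-of-fit indices of the pixel-by-pixel kind mentioned above reduce to a confusion matrix between the predicted susceptibility map and the mapped landslides; a minimal sketch with two common indices (the study's eight indices and its data are not reproduced here, the values below are synthetic):

```python
# Pixel-by-pixel verification of a binary susceptibility map against
# observed landslide pixels: confusion matrix and two example indices.

def gof_indices(predicted, observed):
    tp = sum(p and o for p, o in zip(predicted, observed))
    tn = sum((not p) and (not o) for p, o in zip(predicted, observed))
    fp = sum(p and (not o) for p, o in zip(predicted, observed))
    fn = sum((not p) and o for p, o in zip(predicted, observed))
    accuracy = (tp + tn) / len(predicted)
    csi = tp / (tp + fp + fn)  # critical success index (threat score)
    return accuracy, csi

pred = [1, 1, 0, 0, 1, 0, 1, 0]  # model: unstable (1) / stable (0) pixels
obs = [1, 0, 0, 0, 1, 1, 1, 0]   # mapped landslide inventory
acc, csi = gof_indices(pred, obs)
```

Automatic calibration then searches the parameter space of M1-M3 for the values that optimize whichever index is chosen.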

  13. Ga doping to significantly improve the performance of all-electrochemically fabricated Cu2O-ZnO nanowire solar cells.

    Science.gov (United States)

    Xie, Jiale; Guo, Chunxian; Li, Chang Ming

    2013-10-14

    Cu2O-ZnO nanowire solar cells have the advantages of light weight and high stability while possessing a large active material interface for potentially high power conversion efficiencies. In particular, electrochemically fabricated devices have attracted increasing attention due to their low-cost and simple fabrication process. However, most of them are "partially" electrochemically fabricated by vacuum deposition onto a preexisting ZnO layer. There are a few examples made via all-electrochemical deposition, but the power conversion efficiency (PCE) is too low (0.13%) for practical applications. Herein we use an all-electrochemical approach to directly deposit ZnO NWs onto FTO followed by electrochemical doping with Ga to produce a heterojunction solar cell. The Ga doping greatly improves light utilization while significantly suppressing charge recombination. A 2.5% molar ratio of Ga to ZnO delivers the best performance with a short circuit current density (Jsc) of 3.24 mA cm(-2) and a PCE of 0.25%, which is significantly higher than in the absence of Ga doping. Moreover, the use of electrochemically deposited ZnO powder-buffered Cu2O from a mixed Cu(2+)-ZnO powder solution and oxygen plasma treatment could reduce the density of defect sites in the heterojunction interface to further increase Jsc and PCE to 4.86 mA cm(-2) and 0.34%, respectively, resulting in the highest power conversion efficiency among all-electrochemically fabricated Cu2O-ZnO NW solar cells. This approach offers great potential for a low-cost solution-based process to mass-manufacture high-performance Cu2O-ZnO NW solar cells.

  14. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing--recommendations for bits and fluids.

  15. Imparting improvements in electrochemical sensors: evaluation of different carbon blacks that give rise to significant improvement in the performance of electroanalytical sensing platforms

    International Nuclear Information System (INIS)

    Vicentini, Fernando Campanhã; Ravanini, Amanda E.; Figueiredo-Filho, Luiz C.S.; Iniesta, Jesús; Banks, Craig E.; Fatibello-Filho, Orlando

    2015-01-01

    Three different carbon black materials were evaluated as potential modifiers; however, only one demonstrated an improvement in the electrochemical properties. The carbon black structures were characterised with SEM, XPS and Raman spectroscopy and found to be very similar to amorphous graphitic materials. The modifications were constructed by three different strategies (using ultrapure water, chitosan and dihexadecylphosphate). The fabricated sensors are electrochemically characterised using N,N,N',N'-tetramethyl-para-phenylenediamine and both inner-sphere and outer-sphere redox probes, namely potassium ferrocyanide(II) and hexaammineruthenium(III) chloride, in addition to the biologically relevant and electroactive analytes dopamine (DA) and acetaminophen (AP). Comparisons are made with edge-plane pyrolytic graphite and glassy-carbon electrodes, and the benefits of carbon black implemented as a modifier for sensors within electrochemistry are explored, as well as the characterisation of their electroanalytical performance. We reveal significant improvements in the electrochemical performance (excellent sensitivity, faster heterogeneous electron transfer (HET) rate) over that of bare glassy-carbon and edge-plane pyrolytic graphite electrodes and thus suggest that there are substantial advantages to using carbon black as a modifier in the fabrication of electrochemistry-based sensors. Such work is highly important and informative for those working in the field of electroanalysis, where electrochemistry can provide portable, rapid, reliable and accurate sensing protocols (bringing the laboratory into the field), with particular relevance to those searching for new electrode materials.

  16. URBAN MODELLING PERFORMANCE OF NEXT GENERATION SAR MISSIONS

    Directory of Open Access Journals (Sweden)

    U. G. Sefercik

    2017-09-01

    Full Text Available In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms with an absolute accuracy of 8–10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.

  17. Urban Modelling Performance of Next Generation SAR Missions

    Science.gov (United States)

    Sefercik, U. G.; Yastikli, N.; Atalay, C.

    2017-09-01

    In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms with an absolute accuracy of 8-10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.

  18. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
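The inter-pulse discharge behaviour described above can be illustrated with a first-order RC sketch; the capacitance, pulse frequency and shunt values below are hypothetical, not taken from the paper:

```python
import math

def discharge_fraction(r_shunt_ohm, c_farad, t_s):
    """Fraction of the electrode voltage removed after time t_s by a
    first-order RC discharge through the shunt resistor."""
    tau = r_shunt_ohm * c_farad          # discharge time constant (s)
    return 1.0 - math.exp(-t_s / tau)

# Hypothetical values: 1 uF electrode capacitance, 40 Hz pulsed light
c = 1e-6
period = 1.0 / 40.0                      # 25 ms between light pulses
for r in (1e6, 1e5, 1e4):                # candidate shunt resistances (ohm)
    frac = discharge_fraction(r, c, period)
    print(f"R = {r:9.0f} ohm -> {100 * frac:5.1f} % discharged between pulses")
```

A smaller shunt speeds the inter-pulse discharge at the cost of diverting photocurrent during the pulse, which is the trade-off the optimisation balances.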

  19. Modeling impact of environmental factors on photovoltaic array performance

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie; Sun, Yize; Xu, Yang [College of Mechanical Engineering, Donghua University NO.2999, North Renmin Road, Shanghai (China)

    2013-07-01

    This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance and the wind speed, upon the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8 m/s, and for irradiance values between 200 and 1000 W/m². The root mean square error (RMSE) between modelled operating temperature and measured values is 1.19% and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum-power outputs of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established, and a method defined as segmented iteration is adopted to solve the I-V curve expression and obtain the model I-V curves. The model I-V curves and P-V curves coincide well with the measured data points. The RMSE between numerically calculated maximum-power outputs and experimentally measured ones is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between numerically calculated and measured maximum-power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
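The paper does not list its correlation coefficients; the sketch below uses a hypothetical linear correlation of the same general form, together with the RMSE and MBE metrics quoted above:

```python
import math

def cell_temperature(t_amb_c, irradiance_w_m2, wind_m_s):
    """Hypothetical linear correlation of the general form used for PV
    operating temperature: rises with irradiance, falls with wind speed.
    The coefficients are illustrative, not the paper's."""
    return t_amb_c + irradiance_w_m2 * (0.032 - 0.0024 * wind_m_s)

def rmse(pred, meas):
    """Root mean square error between predictions and measurements."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred))

def mbe(pred, meas):
    """Mean bias error: positive means systematic over-prediction."""
    return sum(p - m for p, m in zip(pred, meas)) / len(pred)

measured = [46.1, 52.9, 38.7]                       # module temperatures (C)
predicted = [cell_temperature(25, 800, 3),
             cell_temperature(27, 1000, 4),
             cell_temperature(22, 600, 5)]
print(f"RMSE = {rmse(predicted, measured):.2f} C, MBE = {mbe(predicted, measured):.2f} C")
```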

  20. Performance Characteristics of CA 19-9 Radioimmunoassay and Clinical Significance of Serum CA 19-9 Assay in Patients with Malignancy

    International Nuclear Information System (INIS)

    Kim, Sang Eun; Shong, Young Kee; Cho, Bo Youn; Kim, Noe Kyeong; Koh, Chang Soon; Lee, Mun Ho; Hong, Seong Woon; Hong, Kee Suk

    1985-01-01

    To evaluate the performance characteristics of the CA 19-9 radioimmunoassay and the clinical significance of the serum CA 19-9 assay in patients with malignancy, serum CA 19-9 levels were measured by radioimmunoassay using a monoclonal antibody in 135 normal controls, 81 patients with various untreated malignancies, 9 patients with postoperative colon cancer without recurrence and 20 patients with benign gastrointestinal diseases, who visited Seoul National University Hospital from June 1984 to March 1985. The results were as follows: 1) The CA 19-9 radioimmunoassay was simple to perform and could be completed in one working day, and the between-assay reproducibility and the assay recovery were both excellent. 2) The mean serum CA 19-9 level in 135 normal controls was 8.4±4.2 U/mL. The normal upper limit of serum CA 19-9 was defined as 21.0 U/mL; 4 out of 135 (3.0%) normal controls showed elevated CA 19-9 levels above this limit. 3) One out of 20 (5.0%) patients with benign gastrointestinal diseases showed an elevated serum CA 19-9 level above the normal upper limit. 4) Among 81 patients with various untreated malignancies, 41 (50.6%) showed elevated serum CA 19-9 levels: 66.7% of 18 patients with colorectal cancer, 100% of 2 patients with pancreatic cancer, 100% of 3 patients with common bile duct cancer, 47.1% of 17 patients with stomach cancer, 28.6% of 28 patients with hepatoma and 60.0% of 5 gastrointestinal tract cancers showed elevated serum CA 19-9 levels. 5) The sensitivities of serum CA 19-9 in relation to resectability were 33.3% in resectable colorectal cancer, 83.3% in unresectable colorectal cancer, 41.7% in resectable stomach cancer and 60.0% in unresectable stomach cancer, respectively. 6) The sensitivity of serum CA 19-9 in 9 patients with postoperative colorectal cancer without recurrence was 33.3%, significantly decreased compared with that in untreated colorectal cancer, 66.7% (p<0.05). 
7) In patients with colorectal cancer

  1. Thermal modelling of PV module performance under high ambient temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Diarra, D.C.; Harrison, S.J. [Queen' s Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering

    2005-07-01

    The actual performance of photovoltaic (PV) generators is typically lower than test results obtained under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cells and the other multilayer components, and the increase in temperature translates into a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a Tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that the module temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.

  2. Water desalination price from recent performances: Modelling, simulation and analysis

    International Nuclear Information System (INIS)

    Metaiche, M.; Kettab, A.

    2005-01-01

    The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. The objectives are to update the cost price using recent membrane and permeator performances, to apply new means of simulation and modelling of desalination parameters, and to show the main parameters influencing the cost price. The seawater desalination centre of Djannet (Boumerdes, Algeria) is taken as the simulation example. Present performances allow water desalting at a price of 0.5 $/m³, an interesting and promising price, with a very acceptable product water quality in the order of 269 ppm. It is important to run reverse osmosis desalting systems under high pressure, which further decreases the desalting cost and produces good quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased by relaxing the requirements on product quality. The seawater temperature has an effect on both the cost price and the quality, and the installation of large desalting centres contributes to a decrease in prices. The underlying calculation is very long and tedious, and impossible to conduct without programming and informatics tools; the simulation model proved very efficient in the design of desalination centres that can perform at much improved prices. (author)
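The article's simulation equations are not reproduced in the abstract; below is a minimal sketch of the standard solution-diffusion flux relation and a pump-energy estimate often used in reverse osmosis design. All coefficients and operating values are hypothetical:

```python
def permeate_flux(a_w, dp_bar, dpi_bar):
    """Water flux Jw = A * (dP - dPi) from the solution-diffusion model;
    a_w folds the membrane permeability into m3/(m2.h.bar)."""
    return a_w * (dp_bar - dpi_bar)

def specific_energy_kwh_per_m3(dp_bar, recovery, pump_eff=0.8):
    """High-pressure pump energy per m3 of permeate (no energy recovery
    device); 1 bar = 1e5 Pa and 1 kWh = 3.6e6 J."""
    return dp_bar * 1e5 / (recovery * pump_eff * 3.6e6)

# Hypothetical seawater case: 60 bar feed pressure, 28 bar osmotic gradient
flux = permeate_flux(a_w=3e-3, dp_bar=60.0, dpi_bar=28.0)
energy = specific_energy_kwh_per_m3(dp_bar=60.0, recovery=0.45)
print(f"flux = {flux:.3f} m3/(m2.h), energy = {energy:.2f} kWh/m3")
```

This shows how operating pressure, recovery and membrane permeability enter the kind of cost calculation the article's simulation automates.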

  3. A Hybrid Fuzzy Model for Lean Product Development Performance Measurement

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    As manufacturing companies strive to meet emerging consumer demands for mass-customized products, many are turning to the application of lean in their product development process, and this is gradually moving from being a competitive advantage to a necessity. However, due to a lack of clear understanding of lean performance measurement, many of these companies are unable to implement and fully integrate the lean principle into their product development process. The extensive literature shows that only a few studies have focused systematically on lean product development performance (LPDP) evaluation. In order to fill this gap, the study proposes a novel hybrid model based on the Fuzzy Reasoning Approach (FRA) and extensions of the Fuzzy-AHP and Fuzzy-TOPSIS methods for the assessment of LPDP. Unlike existing methods, the model considers the importance weight of each of the decision makers (experts), since the performance criteria/attributes must be rated by experts with different levels of expertise. The rating is done using a new fuzzy Likert rating scale (membership-scale), designed to address problems of information loss/distortion arising from the closed-form scaling and the ordinal nature of the existing Likert scale.
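The fuzzy extension itself is not reproduced here; the sketch below shows only the crisp TOPSIS core (relative closeness to the ideal solution) that the paper's Fuzzy-TOPSIS method generalises, with illustrative ratings and weights:

```python
import math

def topsis(matrix, weights):
    """Crisp TOPSIS core: vector-normalise each criterion column, weight
    it, and score each alternative by its relative closeness to the
    ideal solution. Benefit criteria only; the paper's method replaces
    these crisp ratings with fuzzy numbers and expert weights."""
    ncrit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    anti = [min(col) for col in zip(*v)]    # negative ideal solution
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]

# three alternatives rated on three benefit criteria (illustrative data)
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 7]], weights=[0.5, 0.3, 0.2])
print([round(s, 3) for s in scores])
```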

  4. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  5. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.
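The calibration step the paper discusses can be sketched as a log-space least-squares fit of a power-law effort model; the project data below are synthetic, not from the paper:

```python
import math

def fit_power_law(sizes, efforts):
    """Least-squares fit of effort = a * size**b in log space, a
    stand-in for the regression-based calibration the paper critiques."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# synthetic project data: effort grows roughly as size**1.1
sizes = [10, 25, 60, 120, 300]          # KLOC
efforts = [28, 78, 205, 440, 1220]      # person-months
a, b = fit_power_law(sizes, efforts)
print(f"effort ~ {a:.2f} * KLOC^{b:.2f}")
```

With few data points and noisy effort figures, the fitted exponent varies widely between samples, which is one face of the large variance problem the paper documents.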

  6. Behavioral Model of High Performance Camera for NIF Optics Inspection

    International Nuclear Information System (INIS)

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that will model the behavior of the high performance Spectral Instruments 1000 series Charge-Coupled Device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time; thus increasing the total number of experiments NIF can conduct during its 30 year lifetime. Development of these rapid processes necessitates extensive offline software automation -- especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows for many instances to be running concurrently; this allows multiple developers to test their software at the same time. Thus it is beneficial to construct separate software that will exactly mimic the behavior and response of the real SI-1000 camera

  7. On the computational assessment of white matter hyperintensity progression: difficulties in method selection and bias field correction performance on images with significant white matter pathology

    Energy Technology Data Exchange (ETDEWEB)

    Valdes Hernandez, Maria del C.; Gonzalez-Castro, Victor; Wang, Xin; Doubal, Fergus; Munoz Maniega, Susana; Wardlaw, Joanna M. [Centre for Clinical Brain Sciences, Department of Neuroimaging Sciences, Edinburgh (United Kingdom); Ghandour, Dina T. [University of Edinburgh, College of Medicine and Veterinary Medicine, Edinburgh (United Kingdom); Armitage, Paul A. [University of Sheffield, Department of Cardiovascular Sciences, Sheffield (United Kingdom)

    2016-05-15

    Subtle inhomogeneities in the scanner's magnetic fields (B0 and B1) alter the intensity levels of structural magnetic resonance imaging (MRI), affecting the volumetric assessment of white matter hyperintensity (WMH) changes. Here, we investigate the influence that (1) correcting the images for B1 inhomogeneities (i.e. bias field correction (BFC)) and (2) the selection of the WMH change assessment method can have on longitudinal analyses of WMH progression, and discuss possible solutions. We used brain structural MRI from 46 mild stroke patients scanned at stroke onset and 3 years later. We tested three BFC approaches: FSL-FAST, N4 and exponentially entropy-driven homomorphic unsharp masking (E2D-HUM), and analysed their effect on the measured WMH change. Separately, we tested two methods to assess WMH changes: measuring WMH volumes independently at both time points semi-automatically (MCMxxxVI) and subtracting intensity-normalised FLAIR images at both time points following image gamma correction. We then combined the BFC with the computational method that performed best across the whole sample to assess WMH changes. Analysis of the difference in the variance-to-mean intensity ratio in normal tissue between BFC and uncorrected images, together with visual inspection, showed that all BFC methods altered the WMH appearance and distribution, but FSL-FAST in general performed more consistently across the sample and MRI modalities. The WMH volume change over 3 years obtained with MCMxxxVI with vs. without FSL-FAST BFC did not differ significantly (medians (IQR): 3.2 (6.3) ml with BFC vs. 2.9 (7.4) ml without, p = 0.5), but both differed significantly from the WMH volume change obtained by subtracting post-processed FLAIR images without BFC (7.6 (8.2) ml, p < 0.001). This latter method considerably inflated the WMH volume change, as subtle WMH at baseline that became more intense at follow-up were counted as an increase in the volumetric change. Measurement of WMH volume change remains

  8. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-09-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error associated with advanced control room equipment and configurations, (ii) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (iii) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms.

  9. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    Science.gov (United States)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    simulations, the LHS-based meta-model yields a more robust predictive model, as verified by a k-fold cross-validation approach. In the third category (RMM), we use a reduced-order modeling procedure that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) for extrapolating system response at new control points from a limited number of trial runs ("snapshots"). We observe significant savings in computational time with very good accuracy from the POD-TPWL reduced order model - which could be important in the context of history matching, uncertainty quantification and optimization problems. The paper will present results from our ongoing investigations, and also discuss future research directions and likely outcomes. This work was supported by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0009051 and Ohio Department of Development grant D-13-02.
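The POD step of the POD-TPWL procedure described above can be sketched with a plain SVD of a snapshot matrix; the snapshot data below are synthetic:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of an (n_state x n_snapshots)
    matrix: keep the leading left singular vectors capturing the given
    fraction of the squared singular values."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return u[:, :k]

# toy snapshot matrix whose columns live in a 2-D subspace of R^50
x = np.arange(50)
t = np.linspace(0.0, 2.0 * np.pi, 20)
snaps = np.outer(np.sin(0.3 * x), np.sin(t)) + np.outer(np.cos(0.7 * x), np.cos(t))
phi = pod_basis(snaps)
print(phi.shape)
```

Projecting the full-order state onto the few retained modes is what yields the computational savings reported for the reduced-order model.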

  10. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

    Full Text Available Current performance-prediction analytical models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors such as memory hierarchies and network latency. A natural approach is to associate a different proportionality constant with each basic block and, analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, this approach implies that the parameters must be evaluated for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms, so software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
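Per-block parameter evaluation of this kind reduces, in the simplest case, to fitting a latency-bandwidth model t(n) = alpha + beta*n to measured timings; the ping-pong figures below are synthetic:

```python
def fit_latency_bandwidth(sizes_bytes, times_s):
    """Least-squares fit of the linear cost model t(n) = alpha + beta*n,
    the kind of per-"communication block" parameters the tool estimates."""
    n = len(sizes_bytes)
    sbar = sum(sizes_bytes) / n
    tbar = sum(times_s) / n
    beta = sum((s - sbar) * (t - tbar) for s, t in zip(sizes_bytes, times_s)) / \
           sum((s - sbar) ** 2 for s in sizes_bytes)
    alpha = tbar - beta * sbar
    return alpha, beta   # startup latency (s), per-byte cost (s/byte)

# synthetic ping-pong timings: 50 us latency, 100 MB/s bandwidth
sizes = [1_000, 10_000, 100_000, 1_000_000]
times = [50e-6 + s / 100e6 for s in sizes]
alpha, beta = fit_latency_bandwidth(sizes, times)
print(f"latency ~ {alpha * 1e6:.1f} us, bandwidth ~ {1 / beta / 1e6:.0f} MB/s")
```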

  11. Quality Circles: Determination of Significant Factors for Success an a General Model for Implementing a Quality Circle Process.

    Science.gov (United States)

    1981-06-01

    Quality Circles?" First Annual IAQC Transactions, 1979, pp 59-65. 11. Beckhard, Richard. Organization Development: Strategies and Models. Reading...improve task accomplishment /57. Beckhard /T17 identifies three models that are commonly used in attempting to deal with a client's problems. The...Japanese Challenge. Addison-Wesley Publishing Co., Reading, Massachusetts, 1981. 84. Pascale, Richard T., Anthony G. Athos. The Art of Japanese

  12. A human performance modelling approach to intelligent decision support systems

    Science.gov (United States)

    Mccoy, Michael S.; Boys, Randy M.

    1987-01-01

    Manned space operations require that the many automated subsystems of a space platform be controllable by a limited number of personnel. To minimize the interaction required of these operators, artificial intelligence techniques may be applied to embed a human performance model within the automated, or semi-automated, systems, thereby allowing the derivation of operator intent. A similar application has previously been proposed in the domain of fighter piloting, where the demand for pilot intent derivation is primarily a function of limited time and high workload rather than limited operators. The derivation and propagation of pilot intent is presented as it might be applied to some programs.

  13. Use of total plant models for plant performance optimisation

    International Nuclear Information System (INIS)

    Ardron, K.H.

    2004-01-01

    Consideration is given to the mathematical techniques used by Nuclear Electric for steady state power plant analysis and performance optimisation. A quasi-Newton method is deployed to calculate the steady state followed by a model fitting procedure based on Lagrange's method to yield a fit to measured plant data. An optimising algorithm is used to establish maximum achievable power and efficiency. An example is described in which the techniques are applied to identify the plant constraints preventing output increases at a Nuclear Electric Advanced Gas Cooled Reactor. (author)
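The Lagrange-based model-fitting step can be sketched, for a linear toy model, as classical data reconciliation: the smallest weighted adjustment of the measured plant data that satisfies the model constraints exactly. The real plant model is nonlinear, and all numbers below are illustrative:

```python
import numpy as np

def reconcile(x_meas, weights, a, b):
    """Lagrange-multiplier data reconciliation: minimise the weighted
    adjustment to x_meas subject to the linear constraints A x = b.
    Closed form: x = x_meas - W A^T (A W A^T)^-1 (A x_meas - b)."""
    w = np.diag(1.0 / np.asarray(weights))
    lam = np.linalg.solve(a @ w @ a.T, a @ x_meas - b)
    return x_meas - w @ a.T @ lam

# toy mass balance: flow1 + flow2 - flow3 = 0, measurements inconsistent
a = np.array([[1.0, 1.0, -1.0]])
b = np.array([0.0])
x_meas = np.array([10.2, 5.1, 14.9])       # 10.2 + 5.1 != 14.9
x_hat = reconcile(x_meas, weights=[1.0, 1.0, 1.0], a=a, b=b)
print(x_hat, a @ x_hat)
```

The reconciled values then provide a consistent steady state from which an optimiser can search for the maximum achievable power and efficiency.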

  14. Analytical and numerical performance models of a Heisenberg Vortex Tube

    Science.gov (United States)

    Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.

    2017-12-01

    Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of 1st order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.

  15. Performance of the demonstration model of DIOS FXT

    Science.gov (United States)

    Tawara, Yuzuru; Sakurai, Ikuya; Masuda, Tadashi; Torii, Tatsuharu; Matsushita, Kohji; Ramsey, Brian D.

    2009-08-01

    To search for the warm-hot intergalactic medium (WHIM), a small satellite mission DIOS (Diffuse Intergalactic Oxygen Surveyor) is planned, and a specially designed four-stage X-ray telescope (FXT) has been developed as the best-fit optics to provide a wide field of view and a large effective area. Based on previous design work and the mirror fabrication technology used in the Suzaku mission, we made a small demonstration model of the DIOS FXT. This model has a focal length of 700 mm and consists of a quadrant housing and four-stage mirror sets with different radii of 150-180 mm, each stage having a mirror height of 40 mm. We performed X-ray measurements for one set of four-stage mirrors with a radius of 180 mm. From the results of the optical and X-ray measurements, it was found that tighter control was required over the positioning and fabrication process of each mirror even to achieve an angular resolution of several arcmin.

  16. BISON and MARMOT Development for Modeling Fast Reactor Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gamble, Kyle Allan Lawrence [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Novascone, Stephen Rhead [Idaho National Lab. (INL), Idaho Falls, ID (United States); Medvedev, Pavel G. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    BISON and MARMOT are two codes under development at the Idaho National Laboratory for engineering scale and lower length scale fuel performance modeling. It is desired to add capabilities for fast reactor applications to these codes. The fast reactor fuel types under consideration are metal (U-Pu-Zr) and oxide (MOX). The cladding types of interest include 316SS, D9, and HT9. The purpose of this report is to outline the proposed plans for code development and provide an overview of the models added to the BISON and MARMOT codes for fast reactor fuel behavior. A brief overview of preliminary discussions on the formation of a bilateral agreement between the Idaho National Laboratory and the National Nuclear Laboratory in the United Kingdom is presented.

  17. Quantitative renal perfusion measurements in a rat model of acute kidney injury at 3T: testing inter- and intramethodical significance of ASL and DCE-MRI.

    Directory of Open Access Journals (Sweden)

    Fabian Zimmer

    Full Text Available OBJECTIVES: To establish arterial spin labelling (ASL) for quantitative renal perfusion measurements in a rat model at 3 Tesla and to test the diagnostic significance of ASL and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a model of acute kidney injury (AKI). MATERIAL AND METHODS: ASL and DCE-MRI were consecutively employed on six Lewis rats, five of which had a unilateral ischaemic AKI. All measurements in this study were performed on a 3 Tesla MR scanner using a FAIR True-FISP approach and a TWIST sequence for ASL and DCE-MRI, respectively. Perfusion maps were calculated for both methods, and the cortical perfusion of healthy and diseased kidneys was inter- and intramethodically compared using a region-of-interest based analysis. RESULTS/SIGNIFICANCE: Both methods produce significantly different values for the healthy and the diseased kidneys (P<0.01). The mean difference was 147±47 ml/100 g/min and 141±46 ml/100 g/min for ASL and DCE-MRI, respectively. ASL measurements yielded a mean cortical perfusion of 416±124 ml/100 g/min for the healthy and 316±102 ml/100 g/min for the diseased kidneys. The DCE-MRI values were systematically higher, and the mean cortical renal blood flow (RBF) was found to be 542±85 ml/100 g/min (healthy) and 407±119 ml/100 g/min (AKI). CONCLUSION: Both methods are equally able to detect abnormal perfusion in diseased (AKI) kidneys. This shows that ASL is a capable alternative to DCE-MRI regarding the detection of abnormal renal blood flow. Regarding absolute perfusion values, nontrivial differences and variations remain when comparing the two methods.
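The region-of-interest based comparison can be sketched as a paired analysis of cortical perfusion values; the per-animal numbers below are hypothetical, chosen only to resemble the reported magnitudes:

```python
def mean_sd(values):
    """Sample mean and standard deviation (n-1 denominator)."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return m, var ** 0.5

# hypothetical per-animal cortical RBF (ml/100 g/min), healthy vs AKI kidney
healthy = [420, 395, 510, 360, 455]
aki = [310, 290, 385, 255, 340]
diffs = [h - a for h, a in zip(healthy, aki)]
m, sd = mean_sd(diffs)
print(f"mean paired difference {m:.0f} +/- {sd:.0f} ml/100 g/min")
```

Pairing the kidneys within each animal is what lets a small cohort like this reach significance despite large between-animal variation.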

  18. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distracter tasks with driving and to mitigate the safety risks.

  19. Moderated Mediation Model of Interrelations between Workplace Romance, Wellbeing, and Employee Performance.

    Science.gov (United States)

    Khan, Muhammad Aamir Shafique; Jianguo, Du; Usman, Muhammad; Ahmad, Malik I

    2017-01-01

    In this study, first we examined the effect of workplace romance on employee job performance, and the mediatory role of psychological wellbeing in the relationship between workplace romance and employee performance. Then we tested the moderating effects of gender and workplace romance type - lateral or hierarchical - on the indirect effect of workplace romance on employee performance. Based on a survey of 311 doctors from five government teaching hospitals in Pakistan, we used structural equation modeling and bootstrapping to test these relationships. This study reveals that psychological wellbeing significantly fully mediates the positive relationship between workplace romance and job performance. Moreover, multi-group analysis shows that gender moderates the indirect effect of workplace romance on employee performance, where the indirect effect of workplace romance on employee performance is stronger for male participants. This study carries important implications, particularly for the policy makers and managers of healthcare sector organizations.

  20. Moderated Mediation Model of Interrelations between Workplace Romance, Wellbeing, and Employee Performance

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Shafique Khan

    2017-12-01

    Full Text Available In this study, first we examined the effect of workplace romance on employee job performance, and the mediatory role of psychological wellbeing in the relationship between workplace romance and employee performance. Then we tested the moderating effects of gender and workplace romance type – lateral or hierarchical – on the indirect effect of workplace romance on employee performance. Based on a survey of 311 doctors from five government teaching hospitals in Pakistan, we used structural equation modeling and bootstrapping to test these relationships. This study reveals that psychological wellbeing significantly fully mediates the positive relationship between workplace romance and job performance. Moreover, multi-group analysis shows that gender moderates the indirect effect of workplace romance on employee performance, where the indirect effect of workplace romance on employee performance is stronger for male participants. This study carries important implications, particularly for the policy makers and managers of healthcare sector organizations.

  1. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-degree polynomial fitted to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of the NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC, as well as the Galileo broadcast NeQuickG models, is evaluated over continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, and both perform somewhat better than the GPS broadcast Klobuchar model.
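The "percentage of ionospheric delay mitigated" reported for these models is typically an RMS-residual ratio against reference TEC. A minimal sketch, with made-up TEC values in TEC units (TECU):

```python
import numpy as np

# Hypothetical sketch: percentage of ionospheric delay corrected by a model,
# judged against reference TEC (e.g. from global ionospheric maps).
tec_ref = np.array([25.0, 30.0, 18.0, 40.0, 22.0])    # TECU, reference
tec_model = np.array([22.0, 27.5, 16.0, 33.0, 20.0])  # TECU, model prediction

# Residual delay after correction vs. uncorrected delay, as an RMS ratio.
rms_uncorrected = np.sqrt(np.mean(tec_ref ** 2))
rms_residual = np.sqrt(np.mean((tec_ref - tec_model) ** 2))
mitigation = 100.0 * (1.0 - rms_residual / rms_uncorrected)
print(f"ionospheric delay mitigated: {mitigation:.1f}%")
```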

  2. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
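The table look-up with linear interpolation described for the aerodynamic database can be sketched in one dimension (the real database is multi-dimensional, and the breakpoints and coefficient values below are invented):

```python
import numpy as np

# Sketch of a wind-tunnel-style table look-up with linear interpolation,
# as used for aerodynamic coefficients (the values below are made up).
alpha_breakpoints = np.array([-10.0, 0.0, 10.0, 20.0, 40.0, 90.0])  # deg
cl_table = np.array([-0.5, 0.2, 1.0, 1.4, 1.1, 0.0])                # lift coeff

def lookup_cl(alpha_deg):
    # np.interp clamps outside the table range, mirroring how simulation
    # databases typically hold the boundary value.
    return np.interp(alpha_deg, alpha_breakpoints, cl_table)

print(lookup_cl(5.0))   # halfway between 0.2 and 1.0 -> 0.6
```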

  3. Developing a theory of the strategic core of teams: a role composition model of team performance.

    Science.gov (United States)

    Humphrey, Stephen E; Morgeson, Frederick P; Mannor, Michael J

    2009-01-01

    Although numerous models of team performance have been articulated over the past 20 years, these models have primarily focused on the individual attribute approach to team composition. The authors utilized a role composition approach, which investigates how the characteristics of a set of role holders impact team effectiveness, to develop a theory of the strategic core of teams. Their theory suggests that certain team roles are most important for team performance and that the characteristics of the role holders in the "core" of the team are more important for overall team performance. This theory was tested in 778 teams drawn from 29 years of major league baseball (1974-2002). Results demonstrate that although high levels of experience and job-related skill are important predictors of team performance, the relationships between these constructs and team performance are significantly stronger when the characteristics are possessed by core role holders (as opposed to non-core role holders). Further, teams that invest more of their financial resources in these core roles are able to leverage such investments into significantly improved performance. These results have implications for team composition models, as they suggest a new method for considering individual contributions to a team's success that shifts the focus onto core roles. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
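The core-role moderation finding is the kind of effect a regression with an interaction term can detect. A sketch on simulated data (not the baseball dataset; all effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: skill predicts team performance more strongly when held
# by "core" role players. Data are simulated, not the paper's.
n = 778
skill = rng.normal(size=n)
core = rng.integers(0, 2, n)                  # 1 = skill held by core roles
perf = (0.2 + 0.4 * core) * skill + rng.normal(size=n)

# Fit perf ~ skill + core + skill*core; a positive interaction coefficient
# means the skill-performance slope is stronger for core role holders.
Xmat = np.column_stack([np.ones(n), skill, core, skill * core])
beta, *_ = np.linalg.lstsq(Xmat, perf, rcond=None)
print(f"interaction coefficient: {beta[3]:.2f}")  # expect ~0.4
```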

  4. Using Performance Assessment Model in Physics Laboratory to Increase Students’ Critical Thinking Disposition

    Science.gov (United States)

    Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.

    2017-09-01

    A Performance Assessment Model (PAM) has been developed to represent physics concepts that can be divided into five experiments: 1) acceleration due to gravity; 2) Hooke's law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of PAM in the physics laboratory to increasing students' Critical Thinking Disposition (CTD) at senior high school. The subjects were 32 11th-grade students of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD, and analyzed quantitatively using the N-gain value. This study concluded that the performance assessment model effectively increased the N-gain to the medium category, meaning that students' critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.
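The N-gain used to categorize improvement is Hake's normalized gain, g = (post − pre) / (max − pre). A sketch with illustrative scores (the cutoffs shown are the commonly used ones):

```python
# Hake's normalized gain, the N-gain used to gauge improvement between
# pretest and posttest scores (the scores here are illustrative, in percent).
def n_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

g = n_gain(pre=45.0, post=70.0)
# Common interpretation: g < 0.3 low, 0.3 <= g < 0.7 medium, g >= 0.7 high.
category = "medium" if 0.3 <= g < 0.7 else ("high" if g >= 0.7 else "low")
print(f"N-gain = {g:.2f} ({category})")
```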

  5. Investigation of the chromosome regions with significant affinity for the nuclear envelope in fruit fly--a model based approach.

    Directory of Open Access Journals (Sweden)

    Nicholas Allen Kinney

    Full Text Available Three dimensional nuclear architecture is important for genome function, but is still poorly understood. In particular, little is known about the role of the "boundary conditions"--points of attachment between chromosomes and the nuclear envelope. We describe a method for modeling the 3D organization of the interphase nucleus, and its application to analysis of chromosome-nuclear envelope (Chr-NE) attachments of polytene (giant) chromosomes in Drosophila melanogaster salivary glands. The model represents chromosomes as self-avoiding polymer chains confined within the nucleus; parameters of the model are taken directly from experiment, and no fitting parameters are introduced. Methods are developed to objectively quantify chromosome territories and intertwining, which are discussed in the context of corresponding experimental observations. In particular, a mathematically rigorous definition of a territory based on the convex hull is proposed. The self-avoiding polymer model is used to re-analyze previous experimental data; the analysis suggests 33 Chr-NE attachments in addition to the 15 already explored. Most of these new Chr-NE attachments correspond to intercalary heterochromatin--gene-poor, dark-staining, late-replicating regions of the genome; however, three correspond to euchromatin--gene-rich, light-staining, early-replicating regions of the genome. The analysis also suggests 5 regions of anti-contact, characterized by aversion for the NE; only two of these correspond to euchromatin. This composition of chromatin suggests that heterochromatin may not be necessary or sufficient for the formation of a Chr-NE attachment. To the extent that the proposed model represents reality, the confinement of the polytene chromosomes in a spherical nucleus alone does not favor the positioning of specific chromosome regions at the NE as seen in experiment; consequently, the 15 experimentally known Chr-NE attachment positions do not
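The convex-hull definition of a chromosome territory can be sketched as follows. This 2D toy uses SciPy's ConvexHull on a random-walk "chain", whereas the paper's model is a 3D self-avoiding polymer with experimentally derived parameters:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)

# Sketch of a convex-hull territory measure: the territory of a chromosome
# is taken as the convex hull of its monomer positions (2D toy example).
chain = np.cumsum(rng.normal(scale=0.1, size=(500, 2)), axis=0)  # random walk
hull = ConvexHull(chain)

# In 2D, hull.volume is the enclosed area; the overlap of two chains' hulls
# would be one way to quantify intertwining of territories.
print(f"territory area: {hull.volume:.3f}")
```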

  6. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.
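The time-reversal prefiltering idea (convolving the transmitted pulse with the time-reversed channel impulse response, so that the channel itself acts as a matched filter and focuses the received energy) can be sketched with a toy CIR:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multipath channel impulse response (CIR) with exponential decay.
h = rng.normal(size=20) * np.exp(-0.3 * np.arange(20))
pulse = np.zeros(10)
pulse[0] = 1.0                                # unit impulse as the "symbol"

prefiltered = np.convolve(pulse, h[::-1])     # transmit side: s * h(-t)
received = np.convolve(prefiltered, h)        # channel: (...) * h(t)

# The received waveform is the autocorrelation of the CIR: it peaks at the
# focusing instant, with peak value equal to the CIR energy.
peak = np.max(np.abs(received))
print(f"focusing peak: {peak:.3f}, CIR energy: {np.sum(h**2):.3f}")
```

This focusing is what lets a time-reversed system shift complexity from the receiver to a prefilter at the transmitter.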

  7. The performance effect of the Lean package – a survey study using a structural equation model

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup; Israelsen, Poul

    Purpose - Our aim is to test and validate a system-wide approach using mediating relationships in a structural equation model in order to understand how the practices of Lean affect performance. Design/methodology/approach - A cross-sectional survey with 200 responding companies indicating that they use Lean, analyzed in a structural equation model setting. Findings - Previous quantitative research has shown mixed results for the performance of Lean because it has not addressed the system-wide mediating relations between Lean practices. We find that companies using a system-wide approach to Lean practices perform significantly better than those using a scattered array of these practices. This paper shows that the effect of Lean's flow production practices on performance is mediated by analytical continuous improvement empowerment practices, and by delegation of decision rights...

  8. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G. Saulnier and W. Statham

    2006-04-16

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics analogous to the Yucca Mountain repository site: (1) Analogous source--the UO{sub 2} uranium ore deposit corresponds to spent nuclear fuel in the repository; (2) Analogous geology--fractured, welded, and altered rhyolitic ash-flow tuffs; (3) Analogous climate--semiarid to arid; (4) Analogous setting--volcanic tuffs overlie carbonate rocks; (5) Analogous geochemistry--oxidizing conditions; and (6) Analogous hydrogeology--the ore deposit lies in the unsaturated zone above the water table.
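A probabilistic performance-assessment step of the kind described, sampling uncertain hydrogeologic parameters and propagating them to an output distribution, might look like the following. All parameter ranges, distributions, and the path length are invented for illustration, not taken from the Pena Blanca model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sketch: sample uncertain hydrogeologic parameters and
# propagate them to a groundwater travel-time distribution.
n = 10_000
conductivity = 10 ** rng.uniform(-7, -5, n)   # m/s, log-uniform (assumed)
gradient = rng.uniform(0.01, 0.05, n)         # hydraulic gradient (assumed)
porosity = rng.uniform(0.05, 0.25, n)         # effective porosity (assumed)
path_length = 100.0                           # m, flow-path length (assumed)

velocity = conductivity * gradient / porosity        # seepage velocity, m/s
travel_time_yr = path_length / velocity / (3600 * 24 * 365)

# A probabilistic assessment reports percentiles, not one deterministic value.
p5, p50, p95 = np.percentile(travel_time_yr, [5, 50, 95])
print(f"travel time (yr): p5={p5:.0f}, p50={p50:.0f}, p95={p95:.0f}")
```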

  9. Exogenous and Endogenous Learning Resources in the Actiotope Model of Giftedness and Its Significance for Gifted Education

    Science.gov (United States)

    Ziegler, Albert; Chandler, Kimberley L.; Vialle, Wilma; Stoeger, Heidrun

    2017-01-01

    Based on the Actiotope Model of Giftedness, this article introduces a learning-resource-oriented approach for gifted education. It provides a comprehensive categorization of learning resources, including five exogenous learning resources termed "educational capital" and five endogenous learning resources termed "learning…

  10. Segmentation process significantly influences the accuracy of 3D surface models derived from cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Schepers, Rutger H; Gerrits, Pieter; Ren, Yijin

    AIMS: To assess the accuracy of surface models derived from 3D cone beam computed tomography (CBCT) with two different segmentation protocols. MATERIALS AND METHODS: Seven fresh-frozen cadaver heads were used. There was no conflict of interests in this study. CBCT scans were made of the heads and 3D

  11. The significant effect of the thickness of Ni film on the performance of the Ni/Au Ohmic contact to p-GaN

    Energy Technology Data Exchange (ETDEWEB)

    Li, X. J.; Zhao, D. G., E-mail: dgzhao@red.semi.ac.cn; Jiang, D. S.; Liu, Z. S.; Chen, P.; Zhu, J. J.; Le, L. C.; Yang, J.; He, X. G. [State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Science, P.O. Box 912, Beijing 100083 (China); Zhang, S. M.; Zhang, B. S.; Liu, J. P. [Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123 (China); Yang, H. [State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Science, P.O. Box 912, Beijing 100083 (China); Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123 (China)

    2014-10-28

    The significant effect of the thickness of the Ni film on the performance of the Ni/Au Ohmic contact to p-GaN is studied. Ni/Au metal films with thicknesses of 15/50 nm on p-GaN led to better electrical characteristics, showing a lower specific contact resistivity after annealing in the presence of oxygen. Both the formation of a NiO layer and the evolution of the metal structure on the sample surface and at the interface with p-GaN were checked by transmission electron microscopy and energy-dispersive x-ray spectroscopy. The experimental results indicate that an overly thin Ni film cannot form enough NiO to decrease the barrier height and achieve Ohmic contact to p-GaN, while an overly thick Ni film transforms into an overly thick NiO cover on the sample surface, which also deteriorates the electrical conductivity of the sample.

  12. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    Science.gov (United States)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides a striking speed-up with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi-Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks theorem may or may not apply because its regularity conditions are not satisfied.
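The toy Monte Carlo significance estimate can be sketched without GPUs: generate background-only pseudo-experiments, compare the observed test statistic against their tail, and contrast with the Wilks approximation. The observed value below is assumed for illustration, and the toys are drawn directly from chi-square(1), i.e. the case where Wilks' theorem does hold:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed observed test statistic, e.g. q = -2*ln(L0/L1) from a fit.
q_obs = 9.0

# Under Wilks' theorem with one extra parameter, q ~ chi-square(1).
# Toys let us measure the tail directly when regularity may not hold;
# here we simply draw from chi-square(1) as the background-only ensemble.
q_toys = rng.chisquare(df=1, size=1_000_000)
p_value = np.mean(q_toys >= q_obs)
z = np.sqrt(q_obs)   # Wilks approximation: Z = sqrt(q)

print(f"toy p-value: {p_value:.2e}, Wilks Z ~ {z:.1f} sigma")
```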

  13. The Origin of MoS2 Significantly Influences Its Performance for the Hydrogen Evolution Reaction due to Differences in Phase Purity.

    Science.gov (United States)

    Chua, Xing Juan; Tan, Shu Min; Chia, Xinyi; Sofer, Zdenek; Luxa, Jan; Pumera, Martin

    2017-03-02

    Molybdenum disulfide (MoS2) is at the forefront of materials research. It shows great promise for electrochemical applications, especially for hydrogen evolution reaction (HER) catalysis. There is a significant discrepancy in the literature on the reported catalytic activity for HER catalysis on MoS2. Here we test the electrochemical performance of MoS2 obtained from seven sources and we show that these sources provide MoS2 of various phase purity (2H and 3R, and their mixtures) and composition, which is responsible for their different electrochemical properties. The overpotentials for HER at -10 mA cm-2 for MoS2 from the seven sources range from -0.59 V to -0.78 V vs. the reversible hydrogen electrode (RHE). This is of high importance because, with much interest in 2D-MoS2, top-down approaches usually start from commercially available MoS2, which is rarely characterized for composition and phase purity. These key parameters are responsible for the large variance of reported catalytic properties of MoS2. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Development of an analytical model to assess fuel property effects on combustor performance

    Science.gov (United States)

    Sutton, R. D.; Troth, D. L.; Miles, G. A.; Riddlebaugh, S. M.

    1987-01-01

    A generalized first-order computer model has been developed to analytically evaluate the potential effects of alternative fuels on gas turbine combustors. The model assesses the size, configuration, combustion reliability, and durability of the combustors required to meet performance and emission standards while operating on a broad range of fuels. Predictions predicated on combustor flow-field determinations by the model indicate that fuel chemistry, as defined by hydrogen content, exerts a significant influence on flame retardation, liner wall temperature, and smoke emission.

  15. Graphical User Interface for Simulink Integrated Performance Analysis Model

    Science.gov (United States)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  16. A Critical Analysis of Measurement Models of Export Performance

    Directory of Open Access Journals (Sweden)

    Jorge Carneiro

    2007-05-01

    Full Text Available Poor conceptualization of the export performance construct may undermine theory development efforts and may be one of the reasons behind the often conflicting findings in empirical research on the export performance phenomenon. This article reviews the conceptual and empirical literature and proposes a new analytical scheme that may serve as a standard for judging content validity and a guiding yardstick for drawing operational representations of the construct. A critical assessment of some of the most frequently cited measurement frameworks, followed by an analysis of recent (1999-2004 empirical research, leaves no doubt that there are flaws in the conceptualization and operationalization of the performance construct that ought to be addressed. A new measurement model is advanced along with some guidelines which are suggested for its future empirical validation. The new measurement framework allegedly improves on other past efforts in terms of breadth of coverage of the construct’s domain (content validity. It also offers a measurement perspective (with the simultaneous use of both formative and reflective approaches that appears to reflect better the nature of the construct.

  17. Modeling illumination performance of plastic optical fiber passive daylighting system

    International Nuclear Information System (INIS)

    Sulaiman, F.; Ahmad, A.; Ahmed, A.Z.

    2006-01-01

    One of the most direct methods of utilizing solar energy for energy conservation is to bring natural light indoors to light up an area. This paper reports on the investigation of the feasibility of utilizing large-core optical fibers to convey and distribute solar light passively throughout residential or commercial structures. The focus of this study is on the mathematical modeling of the illumination performance and the light transmission efficiency of solid-core end-light fiber for optical daylighting systems. The MATLAB simulations feature the optical fiber transmittance for glass and plastic fibers, illumination performance over lengths of plastic end-lit fiber, spectral transmission, light intensity loss through the large-diameter solid-core optical fibers, as well as the transmission efficiency of the optical fiber itself. It was found that plastic optical fiber has lower transmission loss over the distance of the fiber run, which clearly shows that plastic optical fiber should be optimized for emitting visible light. The findings from the analysis of the performance of large-diameter optical fibers for daylighting systems suggest that they are feasible for energy-efficient lighting systems in commercial or residential buildings.
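Transmission loss over a fiber run is commonly modeled with an attenuation coefficient in dB per metre. A minimal sketch; the coefficient below is an assumed round figure for plastic optical fiber in the visible band, not a measured value from this study:

```python
# Fiber transmission from a dB/m attenuation coefficient (Beer-Lambert law
# in decibel form). The 0.2 dB/m figure is an assumed illustrative value.
def transmitted_fraction(length_m, atten_db_per_m=0.2):
    return 10 ** (-atten_db_per_m * length_m / 10)

for L in (5, 10, 20):
    print(f"{L} m run: {100 * transmitted_fraction(L):.1f}% of input light")
```

The exponential form makes clear why fiber run length dominates the illumination performance of a passive daylighting system.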

  18. Job Demands-Control-Support model and employee safety performance.

    Science.gov (United States)

    Turner, Nick; Stride, Chris B; Carter, Angela J; McCaughey, Deirdre; Carroll, Anthony E

    2012-03-01

    The aim of this study was to explore whether work characteristics (job demands, job control, social support) comprising Karasek and Theorell's (1990) Job Demands-Control-Support framework predict employee safety performance (safety compliance and safety participation; Neal and Griffin, 2006). We used cross-sectional data of self-reported work characteristics and employee safety performance from 280 healthcare staff (doctors, nurses, and administrative staff) from Emergency Departments of seven hospitals in the United Kingdom. We analyzed these data using a structural equation model that simultaneously regressed safety compliance and safety participation on the main effects of each of the aforementioned work characteristics, their two-way interactions, and the three-way interaction among them, while controlling for demographic, occupational, and organizational characteristics. Social support was positively related to safety compliance, and both job control and the two-way interaction between job control and social support were positively related to safety participation. How work design is related to employee safety performance remains an important area for research and provides insight into how organizations can improve workplace safety. The current findings emphasize the importance of the co-worker in promoting both safety compliance and safety participation. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  19. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available
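Fitting a parametric model to data by maximum likelihood, one of the approaches discussed, can be sketched for a lognormal input parameter, where the MLE has a closed form via the log-transformed data. The data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic positive-valued "data" for an uncertain input parameter.
data = rng.lognormal(mean=1.0, sigma=0.5, size=2000)

# For the lognormal, the MLE reduces to fitting a normal to log(data):
# mu-hat is the sample mean and sigma-hat the (1/n) standard deviation.
log_data = np.log(data)
mu_hat = log_data.mean()
sigma_hat = log_data.std()   # numpy default ddof=0 matches the MLE

print(f"lognormal MLE: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```

A goodness-of-fit check (e.g. probability plotting, also covered in the report) would normally follow the fit before the distribution is used in an assessment.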

  20. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.