WorldWideScience

Sample records for evaluation model based

  1. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The varied uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  2. Agent-based modeling as a tool for program design and evaluation.

    Science.gov (United States)

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Evaluation-Function-based Model-free Adaptive Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Agus Naba

    2016-12-01

    Full Text Available Designs of adaptive fuzzy controllers (AFC) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. They need to consider a Lyapunov function candidate as an evaluation function to be minimized. In this study these drawbacks were handled by designing a model-free adaptive fuzzy controller (MFAFC) using an approximate evaluation function defined in terms of the current state, the next state, and the control action. MFAFC considers the approximate evaluation function as an evaluative control performance measure similar to the state-action value function in reinforcement learning. The simulation results of applying MFAFC to the inverted pendulum benchmark verified the proposed scheme’s efficacy.

  4. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming correlative to national competitiveness, the issue of port performance evaluation has gained prominence. Port performance can be indicated simply by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.) as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port’s performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.

  5. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods for calculating the quality of each individual model, for assigning scores based on the distribution of the results for each target and for computing the statistical significance of the differences in scores between prediction methods. These data are made available to the assessor of the template-based modeling category, who uses them as a starting point for further evaluations and analyses. This article describes the detailed workflow of the procedure, provides justifications for a number of choices that are customarily made for CASP data evaluation, and reports the results of the analysis of template-based predictions at CASP8.

  6. Fuzzy comprehensive evaluation model of interuniversity collaborative learning based on network

    Science.gov (United States)

    Wenhui, Ma; Yu, Wang

    2017-06-01

    Learning evaluation is an effective method which plays an important role in the network education evaluation system. However, most current network learning evaluation methods still use the traditional university education evaluation system, which does not take web-based learning characteristics into account and is ill-suited to the rapid development of network-based interuniversity collaborative learning. A fuzzy comprehensive evaluation method is used to evaluate interuniversity collaborative learning based on the combination of fuzzy theory and the analytic hierarchy process. The analytic hierarchy process is used to determine the weight of the evaluation factors of each layer and to carry out the consistency check. According to the fuzzy comprehensive evaluation method, we establish an interuniversity collaborative learning evaluation mathematical model. The proposed scheme provides a new approach to network-based interuniversity collaborative learning evaluation.
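
    The record above outlines the usual AHP plus fuzzy comprehensive evaluation pipeline: layer weights from a pairwise comparison matrix, a consistency check, then weighted composition over a membership matrix. A minimal Python sketch of that pipeline follows; the comparison matrix, membership ratings, and grade labels are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for three factors (Saaty 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # AHP weights: principal eigenvector of the comparison matrix, normalised to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency check: CR = CI / RI, acceptable when CR < 0.1 (RI = 0.58 for n = 3).
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    CR = CI / 0.58
    print("weights:", w.round(3), "CR:", round(CR, 3))

    # Fuzzy comprehensive evaluation: membership of each factor in four grades
    # (excellent/good/fair/poor); in practice the rows come from expert ratings.
    R = np.array([[0.5, 0.3, 0.2, 0.0],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.4, 0.4, 0.1]])

    B = w @ R                                    # weighted composition over the factor layer
    grade = ["excellent", "good", "fair", "poor"][int(np.argmax(B))]
    print("grade memberships:", B.round(3), "->", grade)
    ```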

  7. Function-based payment model for inpatient medical rehabilitation: an evaluation.

    Science.gov (United States)

    Sutton, J P; DeJong, G; Wilkerson, D

    1996-07-01

    To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for providers to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linking reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.

  8. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

    Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of forty of these analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country

  9. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.

  10. Fuzzy comprehensive evaluation model of interuniversity collaborative learning based on network

    Directory of Open Access Journals (Sweden)

    Wenhui Ma

    2017-06-01

    Full Text Available Learning evaluation is an effective method which plays an important role in the network education evaluation system. However, most current network learning evaluation methods still use the traditional university education evaluation system, which does not take web-based learning characteristics into account and is ill-suited to the rapid development of network-based interuniversity collaborative learning. A fuzzy comprehensive evaluation method is used to evaluate interuniversity collaborative learning based on the combination of fuzzy theory and the analytic hierarchy process. The analytic hierarchy process is used to determine the weight of the evaluation factors of each layer and to carry out the consistency check. According to the fuzzy comprehensive evaluation method, we establish an interuniversity collaborative learning evaluation mathematical model. The proposed scheme provides a new approach to network-based interuniversity collaborative learning evaluation.

  11. A framework for performance evaluation of model-based optical trackers

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.

    2008-01-01

    We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera

  12. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    Science.gov (United States)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  13. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall induced shallow landslides cause significant damage involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness of fit indices (GOF) by comparing pixel-by-pixel model results and measured data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
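
    The record describes pixel-by-pixel comparison of model output with observations via goodness-of-fit indices and placement in the ROC plane. A minimal, hedged sketch is given below; the binary maps are synthetic and the indices shown are common examples, not necessarily the paper's eight GOF indices.

    ```python
    import numpy as np

    # Synthetic 0/1 maps standing in for a binary susceptibility map and a landslide inventory.
    rng = np.random.default_rng(3)
    model = rng.integers(0, 2, size=(50, 50))
    observed = rng.integers(0, 2, size=(50, 50))

    tp = int(((model == 1) & (observed == 1)).sum())
    fp = int(((model == 1) & (observed == 0)).sum())
    fn = int(((model == 0) & (observed == 1)).sum())
    tn = int(((model == 0) & (observed == 0)).sum())

    # Example goodness-of-fit indices used for ROC-plane evaluation.
    tpr = tp / (tp + fn)            # true positive rate (y-axis of the ROC plane)
    fpr = fp / (fp + tn)            # false positive rate (x-axis of the ROC plane)
    csi = tp / (tp + fp + fn)       # critical success index
    print(round(tpr, 3), round(fpr, 3), round(csi, 3))
    ```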

  14. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the drive situations of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects in order to evaluate the four model-based SOC estimation methods.
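
    The "equivalent model of the battery" whose state function the record mentions is commonly a first-order RC (Thevenin) circuit; a minimal discrete-time sketch is shown below. The parameter values, OCV curve, and current profile are placeholders (in the paper a genetic algorithm identifies the parameters from UDDS tests), not values from the study.

    ```python
    import numpy as np

    dt, Cn = 1.0, 2.3 * 3600          # time step (s) and nominal capacity (As) of a 2.3 Ah cell
    R0, Rp, Cp = 0.01, 0.015, 2000.0  # assumed ohmic and polarisation parameters

    def ocv(soc):
        """Placeholder open-circuit-voltage curve (fitted from test data in practice)."""
        return 3.0 + 1.2 * soc - 0.3 * soc ** 2

    def step(soc, up, current):
        """One discrete step of the state function (discharge current positive)."""
        soc_next = soc - current * dt / Cn                    # coulomb-counting SOC update
        tau = Rp * Cp
        up_next = np.exp(-dt / tau) * up + Rp * (1 - np.exp(-dt / tau)) * current
        v_terminal = ocv(soc_next) - up_next - R0 * current   # measurable terminal voltage
        return soc_next, up_next, v_terminal

    soc, up = 0.9, 0.0
    for current in [2.3, 2.3, 0.0, 4.6]:                      # tiny synthetic current profile (A)
        soc, up, v = step(soc, up, current)
    print(round(soc, 4), round(v, 3))
    ```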

  15. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out ergonomic evaluation of the layout design schemes of the action station in a multitasking operational room. The model was constructed to calculate and compare the theoretical value of team performance across multiple layout schemes by considering such substantial influencing factors as frequency of communication, distance, angle, importance, human cognitive characteristics, and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes of the action station during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection, and optimization of layout design schemes.

  16. Evaluation model based on FAHP for nuclear power project contract performance

    International Nuclear Information System (INIS)

    Liu Bohang; Cheng Jing

    2012-01-01

    Fuzzy comprehensive evaluation is a common tool for comprehensive integrated analysis, and the fuzzy analytic hierarchy process (FAHP) is an improvement on the analytic hierarchy process. The paper first introduces the concept of FAHP and then uses FAHP to set up an evaluation system model for nuclear power project contract performance. Based on this model, each evaluation factor is assigned a different weight. By weighting the score of each factor, the output is a result that evaluates the contract performance. On the basis of this research, the paper gives the principles for evaluating the contract performance of nuclear power suppliers, which can help assure the procurement process. (authors)

  17. Research on evaluation of enterprise project culture based on Denison model

    Directory of Open Access Journals (Sweden)

    Yucheng Zeng

    2015-05-01

    Full Text Available Purpose: The purpose of this paper is to build an enterprise project culture evaluation model and search for the best evaluation method for Chinese enterprise project culture on the basis of studying and drawing lessons from enterprise culture evaluation theory and methods at home and abroad. Design/methodology/approach: Referring to the Denison enterprise culture evaluation model, this paper optimizes it according to the particularities of enterprise project culture, designs the enterprise project culture evaluation model, and proves the practicability of the model through empirical research. Findings: Through comparative analysis of domestic and foreign enterprise culture evaluation theories and methods, this paper finds that the Denison model is more applicable for enterprise project culture evaluation; empirical research shows that a systematic project culture management framework has not yet formed in Chinese enterprises; and four factors in enterprise project culture have an important influence on improving project operation performance. Research limitations/implications: The research on evaluation of enterprise project culture based on the Denison model is a preliminary attempt; the design of the evaluation index system, evaluation model, and scale structure still needs to be improved, but the thinking of this paper in this field provides a valuable reference for future research. Practical implications: This paper provides theoretical and practical support for evaluating the present situation of enterprise project culture construction and analyzing the advantages and disadvantages of project culture, which contributes to the "dialectical therapy" of enterprise project management, enterprise management and enterprise project culture construction. Originality/value: The main contribution of this paper is the introduction of the Denison enterprise culture model. Combining with the actual situation of enterprises, this paper also builds the evaluation model for

  18. A Course Evaluation Tool Based on SPICES Model, and its Application to Evaluation of Medical Pharmacology Course

    Directory of Open Access Journals (Sweden)

    Tahereh Changiz

    2009-02-01

    Full Text Available Background and purpose: The SPICES model has been proposed for use both as a framework for quality improvement in medical education and as a guide for the evaluation of curricula. The six strategies of SPICES represent innovative approaches to medical education, and each one is considered as a continuum. The present study develops a theory-based questionnaire, based on SPICES, to be used as a course evaluation tool, through developing a conceptual model for each continuum of the six. Methods: In the first step, operational definition and questionnaire development were performed through an extensive literature review and consensus building in a focus group of experts. The content and face validity of the questionnaire was confirmed. In the second phase, as a pilot, the questionnaire was used for evaluation of the Medical Pharmacology course at Isfahan University of Medical Sciences. Results: The results showed that the Medical Pharmacology course was located at the traditional end of the SPICES continua with respect to most aspects of the course. Conclusion: The pilot study showed that the questionnaire scale should be changed. It may also be more feasible and valid if an item bank is prepared based on the proposed matrix and appropriate items are selected according to the general situation of the curriculum. Keywords: SPICES MODEL, EVALUATION

  19. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    Science.gov (United States)

    Jiang, Yingni

    2018-03-01

    Due to the high energy consumption of communication, energy saving in data centers must be enforced. But the lack of evaluation mechanisms has restrained progress on the energy-saving construction of data centers. In this paper, an energy saving evaluation index system for data centers was constructed on the basis of clarifying the influencing factors. Based on the evaluation index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.

  20. A QFD-Based Evaluation Method for Business Models of Product Service Systems

    Directory of Open Access Journals (Sweden)

    Tianyang Li

    2016-01-01

    Full Text Available Organizations have been approaching Product Service Systems (PSS) with unoptimized business models. This is partially because the evaluation of PSS business models has been ineffective. Therefore, a more sufficient evaluation method might advance the evaluation of PSS business models and assist organizations that are considering a servitisation strategy. In this paper, we develop a value-oriented method that uses the Quality Function Deployment (QFD) technique to employ correlations derived from the design information of PSS business models in order to evaluate those business models. We describe the steps for applying the method and its practical application in a real-life case study. This method improves the formulation of an evaluation step within a design process of PSS business models based on correlations between different dimensions of PSS value and PSS business models; it allows the balance between the customer value and organization value delivered by PSS business models to be made quantitatively. Finally, it fosters the effective utilization of the design information accumulated in the earlier part of the design process to quantitatively evaluate whether a PSS business model is optimized for providing appropriate values for customers and organizations.

  1. Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models

    Science.gov (United States)

    Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.

    2005-09-01

    Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km².
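
    For reference, the standard SCS-CN runoff relation that the modified Mishra and Singh model builds on is Q = (P - Ia)² / (P - Ia + S), with Ia = λS and S = 25400/CN - 254 (mm). A small sketch follows, using the conventional λ = 0.2 rather than the paper's antecedent-moisture modification; the storm depth and curve number are illustrative.

    ```python
    def scs_cn_runoff(P_mm: float, CN: float, lam: float = 0.2) -> float:
        """Direct surface runoff Q (mm) from the standard SCS-CN method.

        P_mm: storm rainfall depth (mm); CN: curve number (0-100];
        lam: initial-abstraction ratio, conventionally 0.2.
        """
        S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
        Ia = lam * S                    # initial abstraction (mm)
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    # Example: an 80 mm storm on a CN = 75 catchment yields roughly 27 mm of direct runoff.
    print(round(scs_cn_runoff(80.0, 75.0), 1), "mm of direct runoff")
    ```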

  2. Surface Water Quality Evaluation Based on a Game Theory-Based Cloud Model

    Directory of Open Access Journals (Sweden)

    Bing Yang

    2018-04-01

    Full Text Available Water quality evaluation is an essential measure for analyzing water quality. However, excessive randomness and fuzziness affect the evaluation process, reducing the accuracy of evaluation. Therefore, this study proposed a cloud model for evaluating water quality to alleviate this problem. The analytic hierarchy process and entropy theory were used to calculate the subjective weight and objective weight, respectively, and then they were coupled into a combination weight (CW) via game theory. The proposed game theory-based cloud model (GCM) was then applied to the Qixinggang section of the Beijiang River. The results show that the CW ranks fecal coliform as the most important factor, followed by total nitrogen and total phosphorus, while biochemical oxygen demand and fluoride were considered least important. There were 19 months (31.67%) at grade I, 39 months (65.00%) at grade II, and one month each at grade IV and grade V during 2010–2014. A total of 52 months (86.6%) of the GCM results were identical to the comprehensive evaluation result (CER). The water quality grades obtained by the GCM are close to the grades of the analytic hierarchy process weight (AHPW) because the weight coefficient of the AHPW was set to 0.7487. Generally, one or two grade gaps exist among the results of the three groups of weights, suggesting that the index weight is not particularly sensitive to the cloud model. The accuracy of the water quality evaluation can be improved by modifying the quantitative boundaries. This study could provide a reference for water quality evaluation, prevention, and improvement of water quality assessment and other applications.
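
    The weighting scheme in the record (subjective AHP weights and objective entropy weights coupled by game theory) follows a standard recipe; a minimal sketch is below. The index matrix and AHP weights are illustrative assumptions, and the game-theory step shown is the usual small linear system, not necessarily the exact formulation used in the paper.

    ```python
    import numpy as np

    # Assumed subjective AHP weights for five indices (illustrative only).
    w_ahp = np.array([0.35, 0.25, 0.20, 0.12, 0.08])

    # Entropy weights from a hypothetical normalised index matrix X: 12 samples x 5 indices.
    X = np.random.default_rng(0).random((12, 5))
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])   # entropy per index
    w_ent = (1 - E) / (1 - E).sum()

    # Game-theory combination: find coefficients a minimising the deviation between the
    # combined weight and each basic weight set; this reduces to a 2x2 linear system.
    W = np.vstack([w_ahp, w_ent])
    A = W @ W.T
    b = np.diag(A)
    a = np.linalg.solve(A, b)
    a = np.abs(a) / np.abs(a).sum()

    w_comb = a @ W                      # combination weight fed into the cloud model
    print("combination coefficients:", a.round(3))
    print("combined weights:", w_comb.round(3))
    ```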

  3. Damage evaluation by a guided wave-hidden Markov model based method

    Science.gov (United States)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.

  4. Evaluating Computer-Based Assessment in a Risk-Based Model

    Science.gov (United States)

    Zakrzewski, Stan; Steven, Christine; Ricketts, Chris

    2009-01-01

    There are three purposes for evaluation: evaluation for action to aid the decision making process, evaluation for understanding to further enhance enlightenment and evaluation for control to ensure compliance to standards. This article argues that the primary function of evaluation in the "Catherine Wheel" computer-based assessment (CBA)…

  5. Impact on house staff evaluation scores when changing from a Dreyfus- to a Milestone-based evaluation model: one internal medicine residency program's findings

    Directory of Open Access Journals (Sweden)

    Karen A. Friedman

    2014-11-01

    Full Text Available Purpose: As graduate medical education (GME) moves into the Next Accreditation System (NAS), programs must take a critical look at their current models of evaluation and assess how well they align with reporting outcomes. Our objective was to assess the impact on house staff evaluation scores when transitioning from a Dreyfus-based model of evaluation to a Milestone-based model of evaluation. Milestones are a key component of the NAS. Method: We analyzed all end of rotation evaluations of house staff completed by faculty for academic years 2010–2011 (pre-Dreyfus model) and 2011–2012 (post-Milestone model) in one large university-based internal medicine residency training program. Main measures included change in PGY-level average score; slope, range, and separation of average scores across all six Accreditation Council for Graduate Medical Education (ACGME) competencies. Results: Transitioning from a Dreyfus-based model to a Milestone-based model resulted in a larger separation in the scores between our three post-graduate year classes, a steeper progression of scores in the PGY-1 class, a wider use of the 5-point scale on our global end of rotation evaluation form, and a downward shift in the PGY-1 scores and an upward shift in the PGY-3 scores. Conclusions: For faculty trained in both models of assessment, the Milestone-based model had greater discriminatory ability as evidenced by the larger separation in the scores for all the classes, in particular the PGY-1 class.

  6. Impact on house staff evaluation scores when changing from a Dreyfus- to a Milestone-based evaluation model: one internal medicine residency program's findings.

    Science.gov (United States)

    Friedman, Karen A; Balwan, Sandy; Cacace, Frank; Katona, Kyle; Sunday, Suzanne; Chaudhry, Saima

    2014-01-01

    As graduate medical education (GME) moves into the Next Accreditation System (NAS), programs must take a critical look at their current models of evaluation and assess how well they align with reporting outcomes. Our objective was to assess the impact on house staff evaluation scores when transitioning from a Dreyfus-based model of evaluation to a Milestone-based model of evaluation. Milestones are a key component of the NAS. We analyzed all end of rotation evaluations of house staff completed by faculty for academic years 2010-2011 (pre-Dreyfus model) and 2011-2012 (post-Milestone model) in one large university-based internal medicine residency training program. Main measures included change in PGY-level average score; slope, range, and separation of average scores across all six Accreditation Council for Graduate Medical Education (ACGME) competencies. Transitioning from a Dreyfus-based model to a Milestone-based model resulted in a larger separation in the scores between our three post-graduate year classes, a steeper progression of scores in the PGY-1 class, a wider use of the 5-point scale on our global end of rotation evaluation form, and a downward shift in the PGY-1 scores and an upward shift in the PGY-3 scores. For faculty trained in both models of assessment, the Milestone-based model had greater discriminatory ability as evidenced by the larger separation in the scores for all the classes, in particular the PGY-1 class.

  7. Model-Based Approach to the Evaluation of Task Complexity in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ham, Dong Han

    2007-02-01

    This study developed a model-based method for evaluating task complexity and examined ways of evaluating the complexity of tasks designed for abnormal situations and daily task situations in NPPs. The main results of this study can be summarised as follows. First, this study developed a conceptual framework for studying complexity factors and a model of complexity factors that classifies them according to the types of knowledge that human operators use. Second, this study developed a more practical model of task complexity factors and identified twenty-one complexity factors based on the model. The model emphasizes that a task is a system to be designed and that its complexity has several dimensions. Third, we developed a method of identifying task complexity factors and evaluating task complexity qualitatively based on the developed model of task complexity factors. This method can be widely used in various task situations. Fourth, this study examined the applicability of TACOM to abnormal situations and daily task situations, such as maintenance, and confirmed that it can reasonably be used in those situations. Fifth, we developed application examples to demonstrate the use of the theoretical results of this study. Lastly, this study reinterpreted well-known principles for designing information displays in NPPs in terms of task complexity and suggested a way of evaluating the conceptual design of displays analytically by using the concept of task complexity. All of the results of this study will be used as a basis when evaluating the complexity of tasks designed in procedures or information displays and when designing ways of improving human performance in NPPs

  8. Research on Evaluation Model for Secondary Task Driving Safety Based on Driver Eye Movements

    Directory of Open Access Journals (Sweden)

    Lisheng Jin

    2014-01-01

    Full Text Available This study was designed to gain insight into the influence of performing different types of secondary task while driving on driver eye movements and to build a safety evaluation model for secondary task driving. Eighteen young drivers were selected and completed the driving experiment on a driving simulator. Measures of fixations, saccades, and blinks were analyzed. Based on the measures that showed significant differences between the baseline and secondary-task driving conditions, the evaluation index system was built. The method of principal component analysis (PCA) was applied to analyze the evaluation index data in order to obtain the coefficient weights of the indexes and build the safety evaluation model. Based on the evaluation scores, driving safety was grouped into five levels (very high, high, average, low, and very low) using the K-means clustering algorithm. Results showed that secondary task driving severely distracts the driver and that the evaluation model built in this study can estimate driving safety effectively under different driving conditions.
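
    A compact sketch of the evaluation-model construction the record describes: PCA to derive index weights from eye-movement measures, a composite score, then K-means clustering into five safety levels. The data are synthetic and the variance-weighted loading recipe is one common choice, not necessarily the paper's exact scheme.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical eye-movement index matrix: rows = trials, columns = indices such as
    # fixation duration, saccade amplitude, blink rate, etc. Values are synthetic.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(90, 6))

    Z = StandardScaler().fit_transform(X)
    pca = PCA().fit(Z)

    # Index weights from PCA loadings weighted by explained variance (one common recipe).
    weights = np.abs(pca.components_.T @ pca.explained_variance_ratio_)
    weights /= weights.sum()

    scores = Z @ weights                              # composite safety evaluation score
    levels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores.reshape(-1, 1))
    print("first 10 safety levels:", levels[:10])
    ```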

  9. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Fidelis, Krzysztof; Moult, John; Rost, Burkhard; Tramontano, Anna

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods

  10. Model-based efficiency evaluation of combine harvester traction drives

    Directory of Open Access Journals (Sweden)

    Steffen Häberle

    2015-08-01

    Full Text Available As part of this research, the drive trains of combine harvesters are investigated in detail. Load and power distribution, energy consumption, and usage distribution are explicitly explored on two test machines. Based on the lessons learned during field operations, model-based studies of the energy saving potential in the traction drive train of combine harvesters can now be quantified. Beyond that, the virtual machine trial provides an opportunity to compare innovative drivetrain architectures and control solutions under reproducible conditions. As a result, an evaluation method is presented and generically used to draw comparisons under locally representative operating conditions.

  11. Evaluation of weather-based rice yield models in India

    Science.gov (United States)

    Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.

    2013-01-01

    The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated against actual observed values. As the correlation coefficients of the SIMRIW simulation values were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.

  12. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality reviewed database, demonstrate that the models perform well when compared to another commonly used biodegradability model, against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data for biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include such things as surface interface impacts on biodegradability for example.

  13. Risk Evaluation of Railway Coal Transportation Network Based on Multi Level Grey Evaluation Model

    Science.gov (United States)

    Niu, Wei; Wang, Xifu

    2018-01-01

    The railway transport mode is currently the most important means of coal transportation, and China's railway coal transportation network has become increasingly complete, but issues remain, such as insufficient capacity and some lines operating close to saturation. In this paper, the theory and methods of risk assessment, the analytic hierarchy process, and a multi-level grey evaluation model are applied to the risk evaluation of the coal railway transportation network in China. An example analysis of the Shanxi railway coal transportation network is presented in order to improve the internal structure of the network and its market competitiveness.

  14. Comprehensive evaluation of Shahid Motahari Educational Festival during 2008-2013 based on CIPP Evaluation Model

    Directory of Open Access Journals (Sweden)

    SN Hosseini

    2014-09-01

    Full Text Available Introduction: Education quality improvement is one of the main goals of higher education. In this regard, various solutions have been provided, such as holding the annual Shahid Motahari educational festival, in order to appreciate the educational process, develop and innovate educational processes and procedures, and prepare calibration standards and processes for educational accreditation. The aim of this study was a comprehensive evaluation of the Shahid Motahari educational festival during six periods (2008-2013) based on the CIPP evaluation model. Method: This cross-sectional study was conducted among 473 faculty members, including educational deputies and administrators, administrators and faculty members of medical education development centers, members of the scientific committee, and faculty members participating in the Shahid Motahari festival from 42 universities of medical sciences of Iran. Data were collected with self-report written questionnaires. Data were analyzed by SPSS version 20 at the α=0.05 significance level. Results: The subjects reported 75.13%, 65.33%, 64.5%, and 59.21% of the attainable scores for the process, context, input, and product domains, respectively. In addition, there was a direct and significant correlation between all domains. Conclusion: According to the study findings and the correlations among the model domains, the way the festivals were held was appropriate, and the main reason for the poor evaluation in the product domain is related to the problems in the input and product domains.

  15. An Evaluation of Mesoscale Model Based Model Output Statistics (MOS) During the 2002 Olympic and Paralympic Winter Games

    National Research Council Canada - National Science Library

    Hart, Kenneth

    2003-01-01

    The skill of a mesoscale model based Model Output Statistics (MOS) system that provided hourly forecasts for 18 sites over northern Utah during the 2002 Winter Olympic and Paralympic Games is evaluated...

  16. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    Science.gov (United States)

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  17. A Category Based Threat Evaluation Model Using Platform Kinematics Data

    Directory of Open Access Journals (Sweden)

    Mustafa Çöçelli

    2017-08-01

    Full Text Available Command and control (C2) systems direct operators to make accurate decisions as early as possible in the stressful atmosphere of the battlefield. Powerful tools fuse various pieces of instantaneous information and present summaries to operators. Threat evaluation is one of the important fusion methods that provide such assistance to military personnel. However, C2 systems can be deprived of valuable data sources due to the absence of capable equipment, which has an unfavorable influence on the quality of the tactical picture in front of C2 operators. In this paper, we study a threat evaluation model that takes these deficiencies into account. Our method extracts the threat level of various targets mostly from their kinematics in two-dimensional space. Meanwhile, classification of entities around the battlefield is unavailable; only the category of each target is determined as a result of the sensor process, i.e., whether the entity belongs to the air or surface environment. Hence, the threat evaluation model consists of three fundamental steps that run separately on entities belonging to different environments: the extraction of threat assessment cues, threat selection based on Bayesian inference, and the calculation of a threat assessment rating. We have evaluated the performance of the proposed model by simulating a set of synthetic scenarios.
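
    Step two of the model, threat selection based on Bayesian inference over kinematic cues, can be illustrated with a toy posterior computation. The cue set, likelihood models, and prior below are purely illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def threat_posterior(closing_speed, bearing_dev, prior=0.3):
        """Posterior probability that a track is a threat given two kinematic cues.

        closing_speed: closing speed toward the defended asset (m/s);
        bearing_dev: deviation of the track's course from the asset bearing (degrees).
        Likelihoods are modelled as Gaussians over each cue (illustrative values).
        """
        def gauss(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        l_threat = gauss(closing_speed, 250.0, 80.0) * gauss(bearing_dev, 0.0, 15.0)
        l_clear = gauss(closing_speed, 50.0, 80.0) * gauss(bearing_dev, 60.0, 40.0)

        # Bayes' rule over the two hypotheses (threat vs. non-threat).
        num = l_threat * prior
        return num / (num + l_clear * (1 - prior))

    print(round(threat_posterior(closing_speed=220.0, bearing_dev=5.0), 3))
    ```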

  18. Nuclear safety culture evaluation model based on SSE-CMM

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Peng Guojian

    2012-01-01

    Safety culture, which is of great significance for establishing safety objectives, characterizes the level of enterprise safety production and development. Traditional safety culture evaluation models emphasize the thinking and behavior of individuals and organizations, and pay attention to evaluation results while ignoring process. Moreover, the determination of evaluation indicators lacks objective evidence. A novel multidimensional safety culture evaluation model, which is scientific and complete, is addressed by building a preliminary mapping between safety culture and the SSE-CMM's (Systems Security Engineering Capability Maturity Model) process areas and generic practices. The model focuses on evaluation of the enterprise system security engineering process and provides new ideas and scientific evidence for the study of safety culture. (authors)

  19. How Feedback Can Improve Managerial Evaluations of Model-based Marketing Decision Support Systems

    NARCIS (Netherlands)

    U. Kayande (Ujwal); A. de Bruyn (Arnoud); G.L. Lilien (Gary); A. Rangaswamy (Arvind); G.H. van Bruggen (Gerrit)

    2006-01-01

    Marketing managers often provide much poorer evaluations of model-based marketing decision support systems (MDSSs) than are warranted by the objective performance of those systems. We show that a reason for this discrepant evaluation may be that MDSSs are often not designed to help users

  20. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    Science.gov (United States)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument constructed on the CIPP model for the implementation of portfolio assessment in science learning. This study used the research and development (R & D) method, adapting the 4-D model for the development of a non-test instrument, with the evaluation instrument constructed on the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to assess the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to elicit responses to the portfolio assessment instrument. The data obtained were quantitative data from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper shows the results of content validity obtained from the validators and the analysis of the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is appropriate for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that it is valid and can be used in the limited trial and operational field trial.
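
    Aiken's V, used here for content validity, is V = Σ(r_i − lo) / (n·(hi − lo)) over the n validators' ratings on an lo..hi scale. A small sketch with hypothetical ratings (the actual ratings in the study are not reported in the record):

    ```python
    def aikens_v(ratings, lo=1, hi=5):
        """Aiken's V content-validity coefficient for one item.

        ratings: ratings from the validators on an lo..hi scale.
        Values near 1 indicate strong agreement that the item is valid.
        """
        s = sum(r - lo for r in ratings)
        return s / (len(ratings) * (hi - lo))

    # Example: seven validators (two experts, two practitioners, three colleagues)
    # rating an item 5, 4, 5, 4, 5, 5, 4 on a 1-5 scale.
    print(round(aikens_v([5, 4, 5, 4, 5, 5, 4]), 2))   # -> 0.89
    ```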

  1. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    Science.gov (United States)

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.
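
    Building the first-order Markov chain from navigation paths amounts to estimating page-to-page transition probabilities from click counts; a minimal sketch with hypothetical clickstreams (the page names are made up, not the course's actual pages):

    ```python
    from collections import defaultdict

    # Hypothetical clickstreams: each path is the ordered list of pages one learner visited.
    paths = [
        ["intro", "module1", "module2", "quiz", "exit"],
        ["intro", "module1", "exit"],
        ["intro", "module1", "module2", "module3", "quiz", "exit"],
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for src, dst in zip(path, path[1:]):
            counts[src][dst] += 1

    # First-order Markov chain: P(next page | current page) estimated from counts.
    transition = {
        src: {dst: c / sum(dsts.values()) for dst, c in dsts.items()}
        for src, dsts in counts.items()
    }
    for src, dsts in transition.items():
        print(src, "->", {k: round(v, 2) for k, v in dsts.items()})
    ```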

  2. Description and evaluation of a mechanistically based conceptual model for spall

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, F.D.; Knowles, M.K.; Thompson, T.W. [and others

    1997-08-01

    A mechanistically based model for a possible spall event at the WIPP site is developed and evaluated in this report. Release of waste material to the surface during an inadvertent borehole intrusion is possible if future states of the repository include high gas pressure and waste material consisting of fine particulates having low mechanical strength. The conceptual model incorporates the physics of wellbore hydraulics coupled to transient gas flow to the intrusion borehole, and the mechanical response of the waste. Degraded waste properties are used in the evaluations of the model. The evaluations include both numerical and analytical implementations of the conceptual model. A tensile failure criterion is assumed appropriate for calculation of the volumes of waste experiencing fragmentation. Calculations show that for repository gas pressures less than 12 MPa, no tensile failure occurs. Minimal volumes of material experience failure below a gas pressure of 14 MPa. Repository conditions dictate that the probability of gas pressures exceeding 14 MPa is approximately 1%. For these conditions, a maximum failed volume of 0.25 m³ is calculated.

  3. Description and evaluation of a mechanistically based conceptual model for spall

    International Nuclear Information System (INIS)

    Hansen, F.D.; Knowles, M.K.; Thompson, T.W.

    1997-08-01

    A mechanistically based model for a possible spall event at the WIPP site is developed and evaluated in this report. Release of waste material to the surface during an inadvertent borehole intrusion is possible if future states of the repository include high gas pressure and waste material consisting of fine particulates having low mechanical strength. The conceptual model incorporates the physics of wellbore hydraulics coupled to transient gas flow to the intrusion borehole, and the mechanical response of the waste. Degraded waste properties are used in the evaluations of the model. The evaluations include both numerical and analytical implementations of the conceptual model. A tensile failure criterion is assumed appropriate for calculation of the volumes of waste experiencing fragmentation. Calculations show that for repository gas pressures less than 12 MPa, no tensile failure occurs. Minimal volumes of material experience failure below a gas pressure of 14 MPa. Repository conditions dictate that the probability of gas pressures exceeding 14 MPa is approximately 1%. For these conditions, a maximum failed volume of 0.25 m³ is calculated.

  4. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    Science.gov (United States)

    Hu, Zhe

    2018-03-01

    In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of the evaluation results of individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function which can reflect the water quality condition is constructed, the chicken swarm algorithm (CSA) is introduced to optimize the projection index function and seek its best projection direction, and the best projection values are obtained to realize the water quality evaluation. Comparison between this method and other methods shows that it is reasonable and feasible and can provide a decision-making basis for water pollution control in the basin.
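
    A minimal sketch of the projection pursuit evaluation the record describes: a classical projection index Q(a) = S_z · D_z is maximised over unit directions. For brevity, the chicken swarm algorithm is replaced here by plain random search, and the data and density window are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical normalised water-quality data: rows = samples, columns = indices in [0, 1].
    rng = np.random.default_rng(2)
    X = rng.random((40, 4))

    def projection_index(a, X, R=0.1):
        """Classic projection-pursuit index Q(a) = S_z * D_z for direction a."""
        a = a / np.linalg.norm(a)
        z = X @ a                                   # projected values
        S = z.std(ddof=1)                           # spread of the projection
        d = np.abs(z[:, None] - z[None, :])         # pairwise distances between projections
        D = ((R - d) * (d < R)).sum()               # local density within window radius R
        return S * D

    # Stand-in for the chicken swarm optimizer: simple random search over directions.
    best_a, best_q = None, -np.inf
    for _ in range(2000):
        a = rng.random(X.shape[1])
        q = projection_index(a, X)
        if q > best_q:
            best_a, best_q = a / np.linalg.norm(a), q

    z_best = X @ best_a                             # best projection values used for grading
    print("best direction:", best_a.round(3), "index:", round(best_q, 3))
    ```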

  5. Theory-Based Stakeholder Evaluation

    Science.gov (United States)

    Hansen, Morten Balle; Vedung, Evert

    2010-01-01

    This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…

  6. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as expected value, entropy and hyper-entropy. A cloud model adapted to reliability evaluation of the surface-to-air missile weapon is put forward. The cloud scale for the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. The practical calculation results show that analysing the reliability of the surface-to-air missile weapon in this way is more effective, and that the model expressed by cloud theory is more consistent with the human style of thinking under uncertainty.
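    A minimal sketch of the forward normal cloud generator that underlies this kind of cloud-model evaluation, with purely illustrative values for the three digital characteristics (Ex, En, He):

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, n=1000, seed=0):
    """Generate n cloud drops (x_i, mu_i) from the digital characteristics:
    Ex = expected value, En = entropy, He = hyper-entropy."""
    rng = np.random.default_rng(seed)
    En_i = rng.normal(En, He, n)                      # randomized entropy per drop
    x = rng.normal(Ex, np.abs(En_i))                  # drop position
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))     # certainty degree of each drop
    return x, mu

# Hypothetical cloud for the qualitative grade "high reliability".
x, mu = forward_normal_cloud(Ex=0.9, En=0.03, He=0.005)
print("mean drop position:", round(x.mean(), 3))
print("mean certainty degree:", round(mu.mean(), 3))
```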

  7. The development of macros program-based cognitive evaluation model via e-learning course mathematics in senior high school based on curriculum 2013

    Directory of Open Access Journals (Sweden)

    Djoko Purnomo

    2017-02-01

    Full Text Available The specific purpose of this research is: The implementation of the application of the learning tool with a form cognitive learning evaluation model based macros program via E-learning at High School grade X at july-december based on 2013 curriculum. The method used in this research followed the procedures is research and development by Borg and Gall [2]. In second year, population analysis has conducted at several universities in Semarang. The results of the research and application development of macro program-based cognitive evaluation model is effective which can be seen from (1 the student learning result is over KKM, (2 The student independency affects learning result positively, (3 the student learning a result by using macros program-based cognitive evaluation model is better than students class control. Based on the results above, the development of macros program-based cognitive evaluation model that have been tested have met quality standards according to Akker (1999. Large-scale testing includes operational phase of field testing and final product revision, i.e trials in the wider class that includes students in mathematics education major in several universities, they are the Universitas PGRI Semarang, Universitas Islam Sultan Agung and the Universitas Islam NegeriWalisongo Semarang. The positive responses is given by students at the Universitas PGRI Semarang, Universitas Islam Sultan Agung and the Universitas Islam NegeriWalisongo Semarang.

  8. Evaluation of Student Models on Current Socio-Scientific Topics Based on System Dynamics

    Science.gov (United States)

    Nuhoglu, Hasret

    2014-01-01

    This study aims to 1) enable primary school students to develop models that will help them understand and analyze a system, through a learning process based on system dynamics approach, 2) examine and evaluate students' models related to socio-scientific issues using certain criteria. The research method used is a case study. The study sample…

  9. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
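    A toy illustration (not the report's formulation) of why a compliance-type functional makes first-order derivative information available from a single forward solve, assuming a small symmetric linear model A(p)u = f with A(p) = A0 + p·A1, for which dJ/dp = -uᵀA1u when J(p) = fᵀu(p):

```python
import numpy as np

# Toy symmetric forward model A(p) u = f with A(p) = A0 + p * A1.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
f = np.array([1.0, 2.0])

def forward(p):
    """Single forward solve of the toy model."""
    return np.linalg.solve(A0 + p * A1, f)

def compliance_and_gradient(p):
    """Compliance J(p) = f.u(p); because A is symmetric, dJ/dp = -u.A1.u,
    so the derivative comes from the one forward solve (no adjoint solve)."""
    u = forward(p)
    return f @ u, -u @ A1 @ u

# Check the analytic derivative against a central finite difference.
p, h = 0.7, 1e-6
J, dJ = compliance_and_gradient(p)
dJ_fd = (compliance_and_gradient(p + h)[0] - compliance_and_gradient(p - h)[0]) / (2 * h)
print(f"J = {J:.6f}, analytic dJ/dp = {dJ:.6f}, finite difference = {dJ_fd:.6f}")
```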

  10. A real options-based CCS investment evaluation model: Case study of China's power generation sector

    International Nuclear Information System (INIS)

    Zhu, Lei; Fan, Ying

    2011-01-01

    Highlights: → This paper establishes a carbon capture and storage (CCS) investment evaluation model. → The model is based on real options theory and solved by the Least Squares Monte Carlo (LSM) method. → China is taken as a case study to evaluate the effects of regulations on CCS investment. → The findings show that the current investment risk of CCS is high, with climate policy having the greatest impact on CCS development. -- Abstract: This paper establishes a carbon capture and storage (CCS) investment evaluation model based on real options theory, considering uncertainties from the existing thermal power generating cost, the carbon price, the generating cost of thermal power with CCS, and the investment in CCS technology deployment. The model aims to evaluate the value of the cost-saving effect and the amount of CO₂ emission reduction achieved by investing in newly built thermal power with CCS technology to replace existing thermal power in a given period, from the perspective of power generation enterprises. The model is solved by the Least Squares Monte Carlo (LSM) method. Since the model could be used as a policy analysis tool, China is taken as a case study to evaluate the effects of regulations on CCS investment through scenario analysis. The findings show that the current investment risk of CCS is high, with climate policy having the greatest impact on CCS development. Thus, there is an important trade-off for policy makers between reducing greenhouse gas emissions and protecting the interests of power generation enterprises. The research presented would be useful for CCS technology evaluation and related policy-making.
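    A compact sketch of the Least Squares Monte Carlo (Longstaff-Schwartz) idea used to value a deferrable investment option; the price dynamics, payoff function and all parameter values below are illustrative and are not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative carbon-price paths (geometric Brownian motion).
n_paths, n_steps, T = 5000, 20, 10.0
dt, r = T / n_steps, 0.05
mu, sigma, S0 = 0.03, 0.25, 20.0
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

def payoff(price):
    """Hypothetical net value of investing in a CCS retrofit at a given carbon price."""
    return np.maximum(5.0 * price - 120.0, 0.0)

# Longstaff-Schwarz backward induction: regress continuation value on the price.
value = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    value *= np.exp(-r * dt)                       # discount one step
    itm = payoff(S[:, t]) > 0                      # paths where exercise has value
    if itm.sum() > 10:
        coeff = np.polyfit(S[itm, t], value[itm], 2)
        cont = np.polyval(coeff, S[itm, t])        # estimated continuation value
        exercise = payoff(S[itm, t]) > cont
        value[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t][exercise])

option_value = np.exp(-r * dt) * value.mean()
print(f"estimated value of the option to defer CCS investment: {option_value:.2f}")
```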

  11. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of an operating team at a nuclear power plant under dynamic and tactical conditions such as a radiological accident. The thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, which stems from Endsley's situation awareness model and comprises four elements: state, information, organization, and orientation; its quantification methods use a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure of time-dependent team effectiveness or performance as affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for a 1,000 MWe pressurized water reactor, with four on-the-job operating groups and one expert group familiar with the accident sequences. The simulated team dynamic task performance, together with the behavior of key reference plant parameters, the team-specific organizational center of gravity and the cue-and-response matrix, showed good agreement with the observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, for example when recruiting new operating teams for a new plant in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  12. Study on dynamic team performance evaluation methodology based on team situation awareness model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk Chul

    2005-02-15

    The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of an operating team at a nuclear power plant under dynamic and tactical conditions such as a radiological accident. The thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, which stems from Endsley's situation awareness model and comprises four elements: state, information, organization, and orientation; its quantification methods use a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure of time-dependent team effectiveness or performance as affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for a 1,000 MWe pressurized water reactor, with four on-the-job operating groups and one expert group familiar with the accident sequences. The simulated team dynamic task performance, together with the behavior of key reference plant parameters, the team-specific organizational center of gravity and the cue-and-response matrix, showed good agreement with the observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, for example when recruiting new operating teams for a new plant in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  13. A Self-adaptive Dynamic Evaluation Model for Diabetes Mellitus, Based on Evolutionary Strategies

    Directory of Open Access Journals (Sweden)

    An-Jiang Lu

    2016-03-01

    Full Text Available In order to evaluate diabetes mellitus objectively and accurately, this paper builds a self-adaptive dynamic evaluation model for diabetes mellitus based on evolutionary strategies. First, on the basis of a formalized description of the evolutionary process of diabetes syndromes using a state transition function, it judges whether a disease is evolving by means of an excitation parameter, which provides evidence for rebuilding the evaluation index system. After that, by abstracting and rebuilding the composition of the evaluation indexes, it makes use of a heuristic algorithm to determine the composition of the evolved evaluation index set of diabetes mellitus. It then calculates the weight of each index in the evolved evaluation index set by building a dependency matrix, and realizes the self-adaptive dynamic evaluation of diabetes mellitus in an evolutionary environment. Using this evaluation model, it is possible to quantify all kinds of diagnosis and treatment experience for diabetes and, finally, to adopt suitable diagnosis and treatment measures for different diabetic patients.

  14. Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review

    DEFF Research Database (Denmark)

    Kolovos, Spyros; Bosmans, Judith E.; Riper, Heleen

    2017-01-01

    eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined …, and DES models in seven. Conclusion: There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation

  15. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    Science.gov (United States)

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
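    A minimal sketch of the final step, assuming a log-logistic survivor function with purely illustrative parameters (not the BOLERO-2 estimates) and applying the commonly used relation tp(t) = 1 - S(t)/S(t - u) for a model cycle of length u:

```python
import numpy as np

# Hypothetical log-logistic parameters (alpha = scale, kappa = shape);
# in practice these come from fitting the reconstructed IPD.
alpha, kappa = 11.0, 1.8            # months, dimensionless

def surv(t):
    """Log-logistic survivor function S(t) = 1 / (1 + (t/alpha)**kappa)."""
    return 1.0 / (1.0 + (t / alpha) ** kappa)

def transition_prob(t, cycle=1.0):
    """Time-dependent probability of the event during the cycle ending at t:
    tp(t) = 1 - S(t) / S(t - cycle)."""
    return 1.0 - surv(t) / surv(t - cycle)

for t in (3, 6, 12, 24):            # months since model start
    print(f"t = {t:>2} months, transition probability = {transition_prob(t):.3f}")
```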

  16. Occupant feedback based model predictive control for thermal comfort and energy optimization: A chamber experimental evaluation

    International Nuclear Information System (INIS)

    Chen, Xiao; Wang, Qian; Srebric, Jelena

    2016-01-01

    Highlights: • This study evaluates an occupant-feedback driven Model Predictive Controller (MPC). • The MPC adjusts indoor temperature based on a dynamic thermal sensation (DTS) model. • A chamber model for predicting chamber air temperature is developed and validated. • Experiments show that MPC using DTS performs better than using Predicted Mean Vote. - Abstract: In current centralized building climate control, occupants do not have much opportunity to intervene in the automated control system. This study explores the benefit of using thermal comfort feedback from occupants in the model predictive control (MPC) design, based on a novel dynamic thermal sensation (DTS) model. This DTS-based MPC was evaluated in chamber experiments, in which a hierarchical structure for thermal control was adopted. At the high level, an MPC controller calculates the optimal supply air temperature of the chamber heating, ventilation, and air conditioning (HVAC) system, using the feedback of occupants' votes on thermal sensation. At the low level, the actual supply air temperature is controlled by the chiller/heater using PI control to achieve the optimal set point. This DTS-based MPC was also compared to an MPC designed based on the Predicted Mean Vote (PMV) model for thermal sensation. The experimental results demonstrated that the DTS-based MPC using occupant feedback allows significant energy saving while maintaining occupant thermal comfort, compared to the PMV-based MPC.
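    A minimal sketch of the high-level MPC step under stated assumptions: a toy first-order chamber model and a hypothetical linear DTS surrogate replace the validated chamber and DTS models described in the record:

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order chamber model: T[k+1] = a*T[k] + (1-a)*T_supply[k] + d
a, d, T0 = 0.8, 0.5, 26.0           # dynamics, heat-gain disturbance (deg C), initial temp
horizon, T_neutral = 6, 24.0        # MPC horizon (steps), occupant-neutral temperature

def dts(T):
    """Hypothetical dynamic-thermal-sensation surrogate built from occupant votes;
    0 = neutral, positive = warm, negative = cool."""
    return 0.5 * (T - T_neutral)

def cost(u, T):
    """Weighted sum of predicted discomfort (DTS^2) and supply-air conditioning effort."""
    J = 0.0
    for uk in u:
        T = a * T + (1 - a) * uk + d
        J += dts(T) ** 2 + 0.05 * (uk - T) ** 2
    return J

# High-level MPC step: optimize the supply-air temperature sequence, apply the first value.
res = minimize(cost, x0=np.full(horizon, T_neutral), args=(T0,),
               bounds=[(16.0, 30.0)] * horizon)
print("optimal supply-air set point for the next step:", round(res.x[0], 2), "deg C")
```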

  17. Evaluating Water Demand Using Agent-Based Modeling

    Science.gov (United States)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage
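    A minimal sketch of the agent-based idea with the three agent groups named in the record; the demand rules, agent counts and parameter values are hypothetical:

```python
import random

random.seed(0)

class WaterUser:
    """One agent; demand follows a simple probabilistic rule responding to price and drought."""
    def __init__(self, kind, base_demand, elasticity):
        self.kind = kind
        self.base = base_demand * random.uniform(0.8, 1.2)   # heterogeneous agents
        self.elasticity = elasticity

    def demand(self, price, drought):
        # Hypothetical rule: cut back with price, and cut a further 20% under drought.
        d = self.base * (1.0 - self.elasticity * (price - 1.0))
        return max(d * (0.8 if drought else 1.0), 0.0)

# Hypothetical agent population (numbers and parameters are illustrative only).
agents = ([WaterUser("residential", 0.3, 0.2) for _ in range(500)]
          + [WaterUser("agricultural", 5.0, 0.5) for _ in range(40)]
          + [WaterUser("industrial", 2.0, 0.1) for _ in range(20)])

# Aggregate (macro) demand emerges from the micro-behaviour of the agents.
for year, (price, drought) in enumerate([(1.0, False), (1.2, False), (1.2, True)]):
    total = sum(a.demand(price, drought) for a in agents)
    print(f"year {year}: price={price}, drought={drought}, total demand={total:,.0f} units")
```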

  18. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    International Nuclear Information System (INIS)

    Kumar, Prashant; Bansod, Baban K.S.; Debnath, Sanjit K.; Thakur, Praveen Kumar; Ghanshyam, C.

    2015-01-01

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration given to the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  19. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Prashant, E-mail: prashantkumar@csio.res.in [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India)]; Bansod, Baban K.S.; Debnath, Sanjit K. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India)]; Thakur, Praveen Kumar [Indian Institute of Remote Sensing (ISRO), Dehradun 248001 (India)]; Ghanshyam, C. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India)]

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration given to the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  20. Interactive model evaluation tool based on IPython notebook

    Science.gov (United States)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion used to measure the goodness of fit (a likelihood or any objective function) is an essential step in all of these methodologies and will affect the finally selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process a growing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised, together with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created, which is essentially a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points: it provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the
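    A minimal notebook-style sketch of such a parameter response surface with a behavioural threshold slider, using hypothetical pre-computed simulation results (ipywidgets and matplotlib stand in here for the tool described in the record):

```python
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

rng = np.random.default_rng(2)

# Hypothetical pre-computed results: parameter sets and an objective value
# (e.g. RMSE on discharge) for each corresponding simulation run.
params = rng.random((400, 2))                       # two parameters, scaled to [0, 1]
rmse = 1.0 + 3.0 * np.abs(params[:, 0] - 0.4) + 2.0 * np.abs(params[:, 1] - 0.6)

def response_surface(threshold=3.0):
    """Scatter the parameter combinations, colouring only the behavioural sets."""
    behavioural = rmse <= threshold
    plt.figure(figsize=(5, 4))
    plt.scatter(*params[~behavioural].T, c="lightgrey", s=15, label="non-behavioural")
    sc = plt.scatter(*params[behavioural].T, c=rmse[behavioural], s=25, cmap="viridis_r")
    plt.colorbar(sc, label="objective function (RMSE)")
    plt.xlabel("parameter 1"); plt.ylabel("parameter 2"); plt.legend()
    plt.show()

# The slider plays the role of the behavioural threshold described in the record.
interact(response_surface, threshold=FloatSlider(min=1.5, max=4.5, step=0.1, value=3.0))
```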

  1. Model-based economic evaluations in smoking cessation and their transferability to new contexts: a systematic review.

    Science.gov (United States)

    Berg, Marrit L; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia; de Kinderen, Reina J A; Kulchaitanaroaj, Puttarin; Pokhrel, Subhash

    2017-06-01

    To identify different types of models used in economic evaluations of smoking cessation, analyse the quality of the included models by examining their attributes, and ascertain their transferability to a new context. A systematic review of the literature on the economic evaluation of smoking cessation interventions published between 1996 and April 2015, identified via Medline, EMBASE, the National Health Service (NHS) Economic Evaluation Database (NHS EED) and Health Technology Assessment (HTA). The checklist-based quality of the included studies and the transferability scores were based on the European Network of Health Economic Evaluation Databases (EURONHEED) criteria. Studies were excluded if they were not in smoking cessation, were not original research, were not model-based economic evaluations, did not consider an adult population, or were not from a high-income country. Among the 64 economic evaluations included in the review, the state-transition Markov model was the most frequently used method (n = 30/64), with quality-adjusted life years (QALY) being the most frequently used outcome measure in a lifetime horizon. A small number of the included studies (13 of 64) were eligible for the EURONHEED transferability checklist. The overall transferability scores ranged from 0.50 to 0.97, with an average score of 0.75. The average score per section was 0.69 (range = 0.35-0.92). The relative transferability of the studies could not be established due to a limitation present in the EURONHEED method. All existing economic evaluations in smoking cessation lack one or more key study attributes necessary to be fully transferable to a new context. © 2017 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  2. Health Research Governance: Introduction of a New Web-based Research Evaluation Model in Iran: One-decade Experience

    Science.gov (United States)

    MALEKZADEH, Reza; AKHONDZADEH, Shahin; EBADIFAR, Asghar; BARADARAN EFTEKHARI, Monir; OWLIA, Parviz; GHANEI, Mostafa; FALAHAT, Katayoun; HABIBI, Elham; SOBHANI, Zahra; DJALALINIA, Shirin; PAYKARI, Niloofar; MOJARRAB, Shahnaz; ELTEMASI, Masoumeh; LAALI, Reza

    2016-01-01

    Background: Governance is one of the main functions of a Health Research System (HRS) and consists of four essential elements, one of which is setting up an evaluation system. The goal of this study was to introduce a new web-based research evaluation model in Iran. Methods: Based on the main elements of governance, research indicators were clarified and, with the cooperation of a technical team, appropriate software was designed. The three main steps of this study consisted of developing a mission-oriented program, creating an enabling environment, and setting up the Iran Research Medical Portal as a center for research evaluation. Results: Fifty-two universities of medical sciences, in three types, participated. After training the evaluation focal points at all medical universities, access for data entry and uploading of all documents was provided. Regarding the mission-based program, the contribution of medical universities to knowledge production was 60% for type one, 31% for type two and 9% for type three. Research priorities, based on the Essential National Health Research (ENHR) approach and the mosaic model, were gathered from the universities of medical sciences and aggregated into nine main areas as national health research priorities. Ethical committees were established at all medical universities. Conclusion: The web-based research evaluation model is a comprehensive and integrated system for data collection in research. This system is an appropriate tool for national health research ranking. PMID:27957437

  3. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    Science.gov (United States)

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, the methods available to model progression of AD over time have serious limitations. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  4. Evidence used in model-based economic evaluations for evaluating pharmacogenetic and pharmacogenomic tests: a systematic review protocol.

    Science.gov (United States)

    Peters, Jaime L; Cooper, Chris; Buchanan, James

    2015-11-11

    Decision models can be used to conduct economic evaluations of new pharmacogenetic and pharmacogenomic tests to ensure they offer value for money to healthcare systems. These models require a great deal of evidence, yet research suggests the evidence used is diverse and of uncertain quality. By conducting a systematic review, we aim to investigate the test-related evidence used to inform decision models developed for the economic evaluation of genetic tests. We will search electronic databases including MEDLINE, EMBASE and NHS EEDs to identify model-based economic evaluations of pharmacogenetic and pharmacogenomic tests. The search will not be limited by language or date. Title and abstract screening will be conducted independently by 2 reviewers, with screening of full texts and data extraction conducted by 1 reviewer, and checked by another. Characteristics of the decision problem, the decision model and the test evidence used to inform the model will be extracted. Specifically, we will identify the reported evidence sources for the test-related evidence used, describe the study design and how the evidence was identified. A checklist developed specifically for decision analytic models will be used to critically appraise the models described in these studies. Variations in the test evidence used in the decision models will be explored across the included studies, and we will identify gaps in the evidence in terms of both quantity and quality. The findings of this work will be disseminated via a peer-reviewed journal publication and at national and international conferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. Study on evaluation method for heterogeneous sedimentary rocks based on forward model

    International Nuclear Information System (INIS)

    Masui, Yasuhiro; Kawada, Koji; Katoh, Arata; Tsuji, Takashi; Suwabe, Mizue

    2004-02-01

    It is very important to estimate the facies distribution of heterogeneous sedimentary rocks for the geological disposal of high-level radioactive waste. The heterogeneity of sedimentary rocks is due to the variable distribution of grain size and mineral composition. The objective of this study is to establish an evaluation method for heterogeneous sedimentary rocks based on a forward model. The study consisted of a geological study of the Horonobe area and the development of software for a sedimentary model. The geological study comprised the following items. 1. The sedimentary system of the Koetoi and Wakkanai formations in the Horonobe area was compiled from published papers. 2. The cores of HDB-1 were observed, mainly from a sedimentological point of view. 3. The facies and compaction properties of argillaceous rocks were studied based on physical logs and core analysis data from wells. 4. Structure maps, isochrone maps, isopach maps and restored geological sections were made. The software for the sedimentary model, which represents the sedimentary system on a basin scale, was developed; it estimates the facies distribution and hydraulic conductivity of sedimentary rocks in three dimensions by numerical simulation. (author)

  6. Nuclear models relevant to evaluation

    International Nuclear Information System (INIS)

    Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.

    1991-01-01

    The widespread use of nuclear models continues in the creation of data evaluations. The reasons include the extension of data evaluations to higher energies, the creation of data libraries for isotopic components of natural materials, and the production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for the calculation of prompt radiation from fission. 84 refs., 8 figs

  7. Comparison and Analysis of Evaluation System of Child Care Program in Primary Health Care System in the East Azerbaijan Province Based on Comprehensive Evaluation Model (CIPP

    Directory of Open Access Journals (Sweden)

    Yalda Mousa Zadeh

    2015-08-01

    Full Text Available Background and objectives: Planning plays an important role in improving children's health, and an evaluation system is necessary for planning. The purpose of this study was to compare and analyze the evaluation system of the child care program in the primary health care system of East Azerbaijan province based on the comprehensive evaluation model (CIPP). Material and Methods: This is a cross-sectional study. The process included a review of the current evaluation system, comparison and analysis of the system based on the comprehensive model, and assessment and identification of the strengths and weaknesses of the current system. Qualitative methods (brainstorming, observation and interviews) and the consensus of the study group about the results were used for analyzing and interpreting the findings. Results: The findings showed that not enough attention was paid to context and performance, and that the content of the evaluation included only inputs and processes. Specific criteria and impact were considered in four areas (context, input, process and outcome) and at all levels, according to the proposal and based on the CIPP model. Conclusion: Evaluation is a control tool used by higher levels, and self-assessment by the service provider is not considered in the current system. Evaluation covers quantitative aspects but not the quality of the process. It is therefore recommended that a forecasting system that utilizes evaluation be developed to improve future planning, and that a scientific model be used in designing the evaluation program.

  8. Nitrous Oxide Production in a Granule-based Partial Nitritation Reactor: A Model-based Evaluation.

    Science.gov (United States)

    Peng, Lai; Sun, Jing; Liu, Yiwen; Dai, Xiaohu; Ni, Bing-Jie

    2017-04-03

    Sustainable wastewater treatment has been attracting increasing attention over the past decades. However, the production of nitrous oxide (N₂O), a potent GHG, from energy-efficient granule-based autotrophic nitrogen removal is largely unknown. This study applied a previously established N₂O model, which incorporates two N₂O production pathways by ammonia-oxidizing bacteria (AOB): AOB denitrification and hydroxylamine (NH₂OH) oxidation. The two-pathway model was used to describe N₂O production from a granule-based partial nitritation (PN) reactor and to provide insights into the N₂O distribution inside granules. The model was evaluated by comparing simulation results with N₂O monitoring profiles as well as isotopic measurement data from the PN reactor. The model demonstrated good predictive ability for the N₂O dynamics and, for the first time, provided useful information about the shift of N₂O production pathways inside granules. The simulation results indicated that increases in oxygen concentration and granule size would significantly enhance N₂O production. The results further revealed a linear relationship between N₂O production and the ammonia oxidation rate (AOR) (R² = 0.99) under varying oxygen levels and granule diameters, suggesting that bulk oxygen and granule size may exert an indirect effect on N₂O production by causing a change in the AOR.

  9. Fuzzy Comprehensive Evaluation of Ecological Risk Based on Cloud Model: Taking Chengchao Iron Mine as Example

    Science.gov (United States)

    Ruan, Jinghua; Chen, Yong; Xiao, Xiao; Yong, Gan; Huang, Ranran; Miao, Zuohua

    2018-01-01

    To address the fuzziness and randomness of the evaluation process, this paper constructs a fuzzy comprehensive evaluation method based on the cloud model. The evaluation index system was established on the basis of inherent risk, present level and control situation, which has been shown to convey the main contradictions of ecological risk in a mine at the macro level and to be advantageous for comparison among mines. The comment sets and membership functions improved by the cloud model can reflect the uniformity of ambiguity and randomness effectively. In addition, the concept of fuzzy entropy is introduced to further characterize the fuzziness of the assessment results and the complexity of the ecological problems in the target mine. A practical example at the Chengchao Iron Mine shows that the assessment results reflect the actual situation appropriately and provide new theoretical guidance for the comprehensive ecological risk evaluation of underground iron mines.

  10. Protein structure modelling and evaluation based on a 4-distance description of side-chain interactions

    Directory of Open Access Journals (Sweden)

    Inbar Yuval

    2010-07-01

    Full Text Available Abstract Background: Accurate evaluation and modelling of residue-residue interactions within and between proteins are a key aspect of computational structure prediction, including homology modelling, protein-protein docking, refinement of low-resolution structures, and computational protein design. Results: Here we introduce a method for accurate protein structure modelling and evaluation based on a novel 4-distance description of residue-residue interaction geometry. Statistical 4-distance preferences were extracted from high-resolution protein structures and were used as a basis for a knowledge-based potential, called Hunter. We demonstrate that the 4-distance description of side chain interactions can be used reliably to discriminate the native structure from a set of decoys. Hunter ranked the native structure as the top one in 217 out of 220 high-resolution decoy sets, in 25 out of 28 "Decoys 'R' Us" decoy sets and in 24 out of 27 high-resolution CASP7/8 decoy sets. The same concept was applied to side chain modelling in protein structures. On a set of very high-resolution protein structures the average RMSD was 1.47 Å for all residues and 0.73 Å for buried residues, which is in the range of attainable accuracy for a model. Finally, we show that Hunter performs as well as or better than other top methods in homology modelling, based on results from the CASP7 experiment. The supporting web site http://bioinfo.weizmann.ac.il/hunter/ was developed to enable the use of Hunter and for visualization and interactive exploration of 4-distance distributions. Conclusions: Our results suggest that Hunter can be used as a tool for evaluation and for accurate modelling of residue-residue interactions in protein structures. The same methodology is applicable to other areas involving high-resolution modelling of biomolecules.
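    A minimal sketch of the general knowledge-based potential idea (Boltzmann inversion of observed distance statistics); the one-dimensional binning and reference state below are illustrative and are not Hunter's 4-distance formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observed side-chain/side-chain distances (Angstrom) pooled from
# high-resolution structures, and a reference distance sample.
observed = rng.normal(5.0, 0.8, 20000)
reference = rng.uniform(2.0, 10.0, 20000)

bins = np.linspace(2.0, 10.0, 41)
p_obs, _ = np.histogram(observed, bins=bins, density=True)
p_ref, _ = np.histogram(reference, bins=bins, density=True)

# Knowledge-based potential: E(d) = -ln(P_obs(d) / P_ref(d)), in kT units.
eps = 1e-9
energy = -np.log((p_obs + eps) / (p_ref + eps))

def score(distances):
    """Score a model by summing the potential over its residue-residue distances."""
    idx = np.clip(np.digitize(distances, bins) - 1, 0, len(energy) - 1)
    return energy[idx].sum()

native_like = rng.normal(5.0, 0.8, 300)     # distances that resemble the statistics
decoy_like = rng.uniform(2.0, 10.0, 300)    # distances that ignore them
print("native-like score (lower is better):", round(score(native_like), 1))
print("decoy-like score:", round(score(decoy_like), 1))
```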

  11. A Comprehensive Decision-Making Approach Based on Hierarchical Attribute Model for Information Fusion Algorithms’ Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Lianhui Li

    2014-01-01

    Full Text Available Aiming at the problem of fusion algorithm performance evaluation in a multiradar information fusion system, firstly the hierarchical attribute model of the track relevance performance evaluation model is established based on the structural model and the functional model, and quantization methods for the evaluation indicators are given; secondly, a combination weighting method is proposed to determine the weights of the evaluation indicators, in which the objective and subjective weights are determined separately by criteria importance through intercriteria correlation (CRITIC) and the trapezoidal fuzzy scale analytic hierarchy process (AHP), and an experience factor is then introduced to obtain the combination weight; finally, the improved technique for order preference by similarity to ideal solution (TOPSIS), replacing Euclidean distance with the Kullback-Leibler divergence (KLD), is used to rank the weighted indicator values of the evaluation objects. An example is given to illustrate the correctness and feasibility of the proposed method.
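    A minimal sketch of the objective (CRITIC) weighting combined with a conventional TOPSIS ranking on a hypothetical decision matrix; the subjective fuzzy-AHP weights, the experience factor and the KLD-based distance from the record are not reproduced here:

```python
import numpy as np

# Hypothetical decision matrix: rows = fusion algorithms, columns = benefit-type
# performance indicators (already quantized to comparable scales).
X = np.array([[0.82, 0.75, 0.90],
              [0.78, 0.88, 0.70],
              [0.90, 0.65, 0.85]])

# Normalize each indicator column to [0, 1].
N = (X - X.min(0)) / (X.max(0) - X.min(0))

# CRITIC objective weights: contrast (std) times conflict (1 - correlation).
std = N.std(0, ddof=1)
conflict = (1.0 - np.corrcoef(N, rowvar=False)).sum(0)
w = std * conflict
w /= w.sum()

# Classical TOPSIS ranking of the weighted indicator values (the record replaces
# this Euclidean distance with a Kullback-Leibler divergence).
V = N * w
d_best = np.linalg.norm(V - V.max(0), axis=1)
d_worst = np.linalg.norm(V - V.min(0), axis=1)
closeness = d_worst / (d_best + d_worst)
print("CRITIC weights:", np.round(w, 3))
print("closeness coefficients:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```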

  12. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC-based MPPT methods have been recently introduced in the literature with very promising performance; however …, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper has set out to provide an in-depth analysis and evaluation of MPC-based MPPT methods applied to various common power converter topologies. The performance of MPC-based MPPT is directly linked … with the converter topology, and it is also affected by the accurate determination of the converter parameters; sensitivity to converter parameter variations is also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models

  13. The evaluation model of the enterprise energy efficiency based on DPSR.

    Science.gov (United States)

    Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan

    2017-05-08

    The reasonable evaluation of enterprise energy efficiency is important work for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), with consideration of the actual situation of enterprises. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and the cloud model are used to calculate the weight of each index and to evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and provides an effective way to improve energy efficiency.

  14. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Science.gov (United States)

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  15. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of independent nodes that can move randomly around the area of deployment, making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs, because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, this paper proposes evaluating the network reliability measures (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation, using a propagation-based link reliability model rather than a binary model, with nodes following a known failure distribution. The method is illustrated with an application and some important results are also presented.
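    A minimal sketch of the Monte Carlo estimation of two-terminal reliability (2TRm) with imperfect nodes and a propagation-based link model; the deployment parameters and the distance-decay form of the link reliability are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

n_nodes, area, tx_range = 20, 100.0, 30.0   # illustrative deployment parameters
node_rel = 0.95                             # per-node (transceiver) reliability

def link_reliability(d):
    """Propagation-based link model: success probability decays with distance,
    instead of the binary in-range/out-of-range assumption."""
    return np.exp(-(d / tx_range) ** 2) * (d <= tx_range)

def source_reaches_sink(adj):
    """Two-terminal check: can node 0 reach node n-1 over up links and up nodes?"""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in np.flatnonzero(adj[u]):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return (n_nodes - 1) in seen

trials, successes = 2000, 0
for _ in range(trials):
    pos = rng.uniform(0, area, (n_nodes, 2))                 # random node placement
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)  # pairwise distances
    node_up = rng.random(n_nodes) < node_rel                 # imperfect transceivers
    link_up = rng.random((n_nodes, n_nodes)) < link_reliability(d)
    link_up = np.triu(link_up, 1)
    link_up = link_up | link_up.T                            # undirected links
    adj = link_up & node_up[:, None] & node_up[None, :]
    successes += source_reaches_sink(adj)

print(f"estimated two-terminal reliability (2TRm): {successes / trials:.3f}")
```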

  16. Energy Sustainability Evaluation Model Based on the Matter-Element Extension Method: A Case Study of Shandong Province, China

    Directory of Open Access Journals (Sweden)

    Siqi Li

    2017-11-01

    Full Text Available Energy sustainability is of vital importance to regional sustainability, because energy sustainability is closely related to both regional economic growth and social stability. The existing energy sustainability evaluation methods lack a unified system to determine the relevant influencing factors, are relatively weak in quantitative analysis, and do not fully describe the ‘paradoxical’ characteristics of energy sustainability. To solve those problems and to reasonably and objectively evaluate energy sustainability, we propose an energy sustainability evaluation model based on the matter-element extension method. We first select energy sustainability evaluation indexes based on previous research and experience. Then, a variation coefficient method is used to determine the weights of these indexes. Finally, the study establishes the classical domain, joint domain, and the matter-element relationship to evaluate energy sustainability through matter-element extension. Data from Shandong Province is used as a case study to evaluate the region’s energy sustainability. The case study shows that the proposed energy sustainability evaluation model, based on the matter-element extension method, can effectively evaluate regional energy sustainability.

  17. A systematic and critical review of model-based economic evaluations of pharmacotherapeutics in patients with bipolar disorder.

    Science.gov (United States)

    Mohiuddin, Syed

    2014-08-01

    Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of the use of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies were identified using model-based economic evaluations of pharmacotherapeutics in BD patients. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of the modelling techniques in the future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility of depicting the disease progression over time. However, depending on the research question

  18. Evaluation of soy-based surface active copolymers as surfactant ingredients in model shampoo formulations.

    Science.gov (United States)

    Popadyuk, A; Kalita, H; Chisholm, B J; Voronov, A

    2014-12-01

    A new non-toxic soybean oil-based polymeric surfactant (SBPS) for personal-care products was developed and extensively characterized, including an evaluation of the polymeric surfactant performance in model shampoo formulations. To experimentally assure applicability of the soy-based macromolecules in shampoos, either in combination with common anionic surfactants (in this study, sodium lauryl sulfate, SLS) or as a single surface-active ingredient, the testing of SBPS physicochemical properties, performance and visual assessment of SBPS-based model shampoos was carried out. The results obtained, including foaming and cleaning ability of model formulations, were compared to those with only SLS as a surfactant as well as to SLS-free shampoos. Overall, the results show that the presence of SBPS improves cleaning, foaming, and conditioning of model formulations. SBPS-based formulations meet major requirements of multifunctional shampoos - mild detergency, foaming, good conditioning, and aesthetic appeal, which are comparable to commercially available shampoos. In addition, examination of SBPS/SLS mixtures in model shampoos showed that the presence of the SBPS enables the concentration of SLS to be significantly reduced without sacrificing shampoo performance. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  19. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    Science.gov (United States)

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  20. Evaluation of three physiologically based pharmacokinetic (PBPK) modeling tools for emergency risk assessment after acute dichloromethane exposure

    NARCIS (Netherlands)

    Boerleider, R. Z.; Olie, J. D N; van Eijkeren, J. C H; Bos, P. M J; Hof, B. G H; de Vries, I.; Bessems, J. G M; Meulenbelt, J.; Hunault, C. C.

    2015-01-01

    Introduction: Physiologically based pharmacokinetic (PBPK) models may be useful in emergency risk assessment, after acute exposure to chemicals, such as dichloromethane (DCM). We evaluated the applicability of three PBPK models for human risk assessment following a single exposure to DCM: one model

  1. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds
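
    The spatial-pattern evaluation hinges on empirical orthogonal function (EOF) analysis; the sketch below shows the standard EOF decomposition of a stack of ET maps via SVD. The array name, shape and data are hypothetical, not the study's DK-model or TSEB outputs.

      # Illustrative EOF sketch: `et_maps` is a (n_months, n_rows, n_cols) array of ET fields.
      import numpy as np

      def leading_eofs(et_maps, n_modes=3):
          n_t = et_maps.shape[0]
          field = et_maps.reshape(n_t, -1)                 # time x space matrix
          anomalies = field - field.mean(axis=0)           # remove the temporal mean at each pixel
          u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
          explained = (s ** 2) / np.sum(s ** 2)            # variance fraction per mode
          eofs = vt[:n_modes].reshape((n_modes,) + et_maps.shape[1:])  # spatial patterns
          pcs = u[:, :n_modes] * s[:n_modes]               # principal-component time series
          return eofs, pcs, explained[:n_modes]

      eofs, pcs, var_frac = leading_eofs(np.random.rand(120, 50, 60))
      print(var_frac)                                      # fraction of variance in the leading modes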

  2. An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.

    Science.gov (United States)

    Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G

    1988-01-01

    An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery presented by Bobath have been extended to six, to better follow the progression of the patient. Six parameters have also been identified. These are the elements to be quantified so that the progress of the patient through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach, that is: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.

  3. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
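
    A minimal sketch of what such operating-speed-based consistency indicators can look like, assuming only a sampled operating-speed profile and a posted speed; the function and values are illustrative and are not the calibrated model from the paper.

      # Illustrative consistency indicators from an operating-speed profile (km/h).
      import numpy as np

      def consistency_indicators(v85_profile, posted_speed):
          v = np.asarray(v85_profile, dtype=float)
          global_dispersion = v.std()                      # global dispersion of operating speed
          decelerations = np.clip(-np.diff(v), 0, None)    # local speed drops between stations
          over_posted = np.mean(np.clip(v - posted_speed, 0, None))  # mean exceedance of posted speed
          return global_dispersion, decelerations.max(), over_posted

      print(consistency_indicators([92, 95, 88, 70, 74, 90], posted_speed=80))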

  4. Evaluation model for safety capacity of chemical industrial park based on acceptable regional risk

    Institute of Scientific and Technical Information of China (English)

    Guohua Chen; Shukun Wang; Xiaoqun Tan

    2015-01-01

    The paper defines the Safety Capacity of Chemical Industrial Park (SCCIP) from the perspective of acceptable regional risk. To explore an evaluation model for the SCCIP, a method based on quantitative risk assessment was adopted to evaluate transport risk and confirm a reasonable safety transport capacity of the chemical industrial park; combining this with the safety storage capacity, a SCCIP evaluation model was put forward. The SCCIP is the smaller of the largest safety storage capacity and the maximum safety transport capacity; otherwise, the regional risk of the park will exceed the acceptable level. The developed method was applied to a chemical industrial park in Guangdong province to obtain the maximum safety transport capacity and the SCCIP. The results can be applied effectively to regional risk control of the park.
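
    The stated decision rule can be written as a one-line sketch; the capacity values below are hypothetical.

      # SCCIP = min(largest safe storage capacity, maximum safe transport capacity)
      def safety_capacity(max_safe_storage, max_safe_transport):
          return min(max_safe_storage, max_safe_transport)

      print(safety_capacity(max_safe_storage=120_000, max_safe_transport=95_000))  # tonnes, illustrative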

  5. Computational electromagnetics and model-based inversion a modern paradigm for eddy-current nondestructive evaluation

    CERN Document Server

    Sabbagh, Harold A; Sabbagh, Elias H; Aldrin, John C; Knopp, Jeremy S

    2013-01-01

    Computational Electromagnetics and Model-Based Inversion: A Modern Paradigm for Eddy Current Nondestructive Evaluation describes the natural marriage of the computer to eddy-current NDE. Three distinct topics are emphasized in the book: (a) fundamental mathematical principles of volume-integral equations as a subset of computational electromagnetics, (b) mathematical algorithms applied to signal-processing and inverse scattering problems, and (c) applications of these two topics to problems in which real and model data are used. By showing how mathematics and the computer can solve problems more effectively than current analog practices, this book defines the modern technology of eddy-current NDE. This book will be useful to advanced students and practitioners in the fields of computational electromagnetics, electromagnetic inverse-scattering theory, nondestructive evaluation, materials evaluation and biomedical imaging. Users of eddy-current NDE technology in industries as varied as nuclear power, aerospace,...

  6. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
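
    A small sketch of the consistency metric described (does the model's uncertainty range overlap the study confidence interval?), using the two 30-year outcomes quoted in the abstract; the helper function is illustrative.

      # Consistency check: model range overlapping the study's confidence interval.
      def ranges_overlap(model_range, study_ci):
          return model_range[0] <= study_ci[1] and study_ci[0] <= model_range[1]

      outcomes = {
          "30-yr cancer risk, inadequate treatment": ((30.9, 49.7), (28.4, 48.3)),
          "30-yr cancer risk, appropriate treatment": ((0.7, 1.3), (0.4, 3.3)),
      }
      for name, (model_range, study_ci) in outcomes.items():
          print(name, "->", "consistent" if ranges_overlap(model_range, study_ci) else "inconsistent")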

  7. Carbon emission analysis and evaluation of industrial departments in China: An improved environmental DEA cross model based on information entropy.

    Science.gov (United States)

    Han, Yongming; Long, Chang; Geng, Zhiqiang; Zhang, Keyu

    2018-01-01

    Environmental protection and carbon emission reduction play a crucial role in the sustainable development procedure. However, the environmental efficiency analysis and evaluation based on the traditional data envelopment analysis (DEA) cross model is subjective and inaccurate, because all elements in a column or a row of the cross evaluation matrix (CEM) in the traditional DEA cross model are given the same weight. Therefore, this paper proposes an improved environmental DEA cross model based on the information entropy to analyze and evaluate the carbon emission of industrial departments in China. The information entropy is applied to build the entropy distance based on the turbulence of the whole system, and calculate the weights in the CEM of the environmental DEA cross model in a dynamic way. The theoretical results show that the new weight constructed based on the information entropy is unique and optimal globally by using the Monte Carlo simulation. Finally, compared with the traditional environmental DEA and DEA cross model, the improved environmental DEA cross model has a better efficiency discrimination ability based on the data of industrial departments in China. Moreover, the proposed model can obtain the potential of carbon emission reduction of industrial departments to improve the energy efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
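
    For orientation, the sketch below shows the standard entropy-weight calculation that this kind of improved cross model draws on; the paper's exact entropy-distance formulation may differ, and the cross-evaluation matrix is hypothetical.

      # Entropy weights for the columns of a cross-evaluation matrix (CEM).
      import numpy as np

      def entropy_weights(cem):
          p = cem / cem.sum(axis=0, keepdims=True)              # column-wise normalization
          k = 1.0 / np.log(cem.shape[0])
          entropy = -k * np.sum(p * np.log(p + 1e-12), axis=0)  # information entropy per column
          d = 1.0 - entropy                                     # degree of diversification
          return d / d.sum()                                    # weights for aggregating cross scores

      cem = np.array([[0.9, 0.7, 0.8], [0.6, 0.8, 0.7], [0.7, 0.6, 0.9]])
      print(entropy_weights(cem))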

  8. Spatial Planning and Policy Evaluation in an Urban Conurbation: a Regional Agent-Based Economic Model

    Directory of Open Access Journals (Sweden)

    Luzius Stricker

    2017-03-01

    Full Text Available This paper studies different functions and relations between 45 agglomerated municipalities in southern Switzerland (Ticino), using a territorial agent-based model (ABM). Our research adopts a bottom-up approach to urban systems, considering the agglomeration mechanism and the effects of different regional and urban policies. By simulating the individual actions of diverse agents on a real city and measuring the resulting system behaviour and outcomes over time, such models provide a good test bed for evaluating the impact of different policies. The database is created by merging the Swiss official secondary data for one reference year (2011) with Eurostat and OECD-Regpat. The results highlight that understanding municipalities’ functions on the territory is essential for designing a solid institutional agglomeration (or city). From a methodological point of view, we contribute to improving the application of territorial ABMs. Finally, our results provide a robust base for evaluating various political interventions in a dynamic way, in order to ensure sustainable development of the agglomeration and the surrounding territories. Applying the analyses and the model on a larger scale, including further regions and conurbations and more indicators and variables, to obtain a more detailed and characteristic model, will constitute a further step of the research.

  9. Persuasion Model and Its Evaluation Based on Positive Change Degree of Agent Emotion

    Science.gov (United States)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Because it can accommodate negotiations among organizations that take place at different times and in different places, and because it can make the negotiation process more rational and its outcome closer to the ideal, agent-based persuasion can substantially improve cooperation among organizations. Integrating emotion change into agent persuasion further brings the artificial-intelligence advantages of agents into play. The emotions involved in agent persuasion are classified, and the concept of positive change degree is defined. On this basis, a persuasion model based on the positive change degree of agent emotion is constructed and explained through an example. Finally, a relative evaluation method is given and verified through a calculation example.

  10. Performance evaluation of RANS-based turbulence models in simulating a honeycomb heat sink

    Science.gov (United States)

    Subasi, Abdussamet; Ozsipahi, Mustafa; Sahin, Bayram; Gunes, Hasan

    2017-07-01

    As is well known, there is no universal turbulence model that can be used for all engineering problems. Each turbulence model has specific applications for which it is appropriate, and it is vital to select a model and wall-function combination that matches the physics of the problem considered. Therefore, in this study, the performance of six well-known Reynolds-Averaged Navier-Stokes (RANS) turbulence models - the standard k-ε, the renormalization group (RNG) k-ε, the realizable k-ε, the Reynolds stress model, the k-ω and the shear stress transport (SST) k-ω - with the standard, non-equilibrium and enhanced wall functions is evaluated via 3D simulation of a honeycomb heat sink. The CutCell method is used to generate the grid for the part containing the heat sink (the test section), while a hexahedral mesh is employed to discretize the inlet and outlet sections. A grid convergence study is conducted for verification, while experimental data and well-known correlations are used to validate the numerical results. Prediction of the pressure drop along the test section, the mean base-plate temperature of the heat sink and the temperature at the test-section outlet are regarded as measures of the performance of the employed models and wall functions. The results indicate that the choice of turbulence model and wall function has a great influence on the results and therefore needs to be made carefully. Hydraulic and thermal characteristics of the honeycomb heat sink can be determined with reasonable accuracy using RANS-based turbulence models provided that a suitable turbulence model and wall function combination is selected.

  11. Reviewing the Concept of Brand Equity and Evaluating Consumer-Based Brand Equity (CBBE) Models

    OpenAIRE

    Sanaz Farjam; Xu Hongyi

    2015-01-01

    The purpose of this paper is to explore the concept of brand equity and discuss its different perspectives. We review the existing literature on brand equity and evaluate various consumer-based brand equity models to provide a collection from well-known databases for further research in this area. Classification-JEL: M00

  12. Evaluation of three energy balance-based evaporation models for estimating monthly evaporation for five lakes using derived heat storage changes from a hysteresis model

    Science.gov (United States)

    Duan, Zheng; Bastiaanssen, W. G. M.

    2017-02-01

    The heat storage changes (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if the energy balance-based evaporation models are used. However, Qt has often been neglected in many studies due to the lack of required water temperature data. A simple hysteresis model (Qt = a*Rn + b + c*dRn/dt) has been demonstrated to reasonably estimate Qt from the readily available net all wave radiation (Rn) and three locally calibrated coefficients (a-c) for lakes and reservoirs. As a follow-up study, we evaluated whether this hysteresis model could enable energy balance-based evaporation models to yield good evaporation estimates. The representative monthly evaporation data were compiled from published literature and used as ground-truth to evaluate three energy balance-based evaporation models for five lakes. The three models, of different complexity, are De Bruin-Keijman (DK), Penman, and a new model referred to as Duan-Bastiaanssen (DB). All three models require Qt as input. Each model was run in three scenarios differing in the input Qt (S1: measured Qt; S2: modelled Qt from the hysteresis model; S3: neglecting Qt) to evaluate the impact of Qt on the modelled evaporation. Evaluation showed that the modelled Qt agreed well with measured counterparts for all five lakes. It was confirmed that the hysteresis model with locally calibrated coefficients can predict Qt with good accuracy for the same lake. Using modelled Qt as input, all three evaporation models yielded monthly evaporation comparable to that obtained using measured Qt as input, and significantly better than that obtained neglecting Qt, for the five lakes. The DK model requiring minimum data generally performed the best, followed by the Penman and DB model. This study demonstrated that once three coefficients are locally calibrated using historical data the simple hysteresis model can offer
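
    A minimal sketch, assuming a monthly net-radiation series, of how the three coefficients of the hysteresis model Qt = a*Rn + b + c*dRn/dt can be calibrated by least squares; the data are synthetic and this is not the authors' calibration code.

      # Fit a, b, c in Qt = a*Rn + b + c*dRn/dt by ordinary least squares.
      import numpy as np

      def fit_hysteresis(rn, qt, dt=1.0):
          drn_dt = np.gradient(rn, dt)                  # finite-difference approximation of dRn/dt
          design = np.column_stack([rn, np.ones_like(rn), drn_dt])
          (a, b, c), *_ = np.linalg.lstsq(design, qt, rcond=None)
          return a, b, c

      rn = np.array([40., 70., 110., 150., 170., 160., 120., 80., 50., 35., 30., 33.])  # W m-2, illustrative
      qt = 0.5 * rn - 20 + 25 * np.gradient(rn)                                         # synthetic Qt series
      print(fit_hysteresis(rn, qt))                                                     # ~ (0.5, -20, 25)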

  13. Evaluation of animal models of neurobehavioral disorders

    Directory of Open Access Journals (Sweden)

    Nordquist Rebecca E

    2009-02-01

    Full Text Available Abstract Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considerations as to whether action is indicated to reduce the discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to

  14. Traffic Congestion Evaluation and Signal Control Optimization Based on Wireless Sensor Networks: Model and Algorithms

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2012-01-01

    Full Text Available This paper presents the model and algorithms for traffic flow data monitoring and optimal traffic light control based on wireless sensor networks. Given the scenario that sensor nodes are sparsely deployed along the segments between signalized intersections, an analytical model is built using the continuum traffic equation, and a method is developed to estimate traffic parameters from the scattered sensor data. Based on the traffic data and the principle of traffic congestion formation, we introduce a congestion factor that can be used to evaluate the real-time traffic congestion status along the segment and to predict the subcritical state of traffic jams. The result is expected to support the timing-phase optimization of traffic light control, with the purpose of avoiding traffic congestion before it forms. We simulate the traffic monitoring based on the Mobile Century dataset and analyze the performance of traffic light control on the VISSIM platform when the congestion factor is introduced into the signal timing optimization model. The simulation results show that this method can improve the spatial-temporal resolution of traffic data monitoring and evaluate traffic congestion status with high precision. It can thereby markedly alleviate urban traffic congestion and decrease average traffic delays and maximum queue length.

  15. Multi-criteria evaluation of hydrological models

    Science.gov (United States)

    Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko

    2013-04-01

    Over recent years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), which both enable running model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of 4 existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distribution. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (final model output), to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analysis techniques, making use of both robust global methods (e.g. Sobol') as well as several
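
    A hedged sketch of the kind of global (Sobol') sensitivity analysis the study refers to, using the SALib package; the parameter names, bounds and toy model are hypothetical stand-ins for a real hydrological model run.

      # Sobol' sensitivity analysis with SALib on a toy stand-in model.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["storage_coeff", "hydraulic_cond", "root_depth"],
          "bounds": [[0.01, 0.5], [1e-6, 1e-4], [0.1, 2.0]],
      }

      def toy_hydro_model(params):                  # stand-in for a lengthy model run
          s, k, r = params
          return 10 * s + 1e5 * k + 0.5 * r ** 2

      x = saltelli.sample(problem, 256)             # Saltelli sampling scheme
      y = np.array([toy_hydro_model(row) for row in x])
      si = sobol.analyze(problem, y)
      print(si["S1"], si["ST"])                     # first-order and total-order indices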

  16. Data modeling and evaluation

    International Nuclear Information System (INIS)

    Bauge, E.; Hilaire, S.

    2006-01-01

    This lecture is devoted to the nuclear data evaluation process, during which the current knowledge (experimental or theoretical) of nuclear reactions is condensed and synthesised into a computer file (the evaluated data file) that application codes can process and use for simulation calculations. After an overview of the content of evaluated nuclear data files, we describe the different methods used for evaluating nuclear data. We specifically focus on the model based approach which we use to evaluate data in the continuum region. A few examples, coming from the day to day practice of data evaluation will illustrate this lecture. Finally, we will discuss the most likely perspectives for improvement of the evaluation process in the next decade. (author)

  17. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    Science.gov (United States)

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
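
    For orientation, the unweighted Harrell c-statistic for censored data can be computed with the lifelines package as below; this illustrative snippet omits the inverse probability of censoring weighting (IPCW) analyzed in the paper, and the data are invented.

      # Unweighted concordance (c) statistic for right-censored survival data.
      import numpy as np
      from lifelines.utils import concordance_index

      times = np.array([5.0, 8.2, 3.1, 12.4, 6.7])    # observed follow-up times (hypothetical)
      events = np.array([1, 0, 1, 1, 0])              # 1 = recurrence observed, 0 = censored
      risk_score = np.array([2.3, 0.8, 3.1, 0.4, 1.2])

      # concordance_index expects predictions that increase with survival time, so pass -risk_score
      print(concordance_index(times, -risk_score, events))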

  18. A smart growth evaluation model based on data envelopment analysis

    Science.gov (United States)

    Zhang, Xiaokun; Guan, Yongyi

    2018-04-01

    With the rapid spread of urbanization, smart growth (SG) has attracted plenty of attention from all over the world. In this paper, by the establishment of index system for smart growth, data envelopment analysis (DEA) model was suggested to evaluate the SG level of the current growth situation in cities. In order to further improve the information of both radial direction and non-radial detection, we introduced the non-Archimedean infinitesimal to form C2GS2 control model. Finally, we evaluated the SG level in Canberra and identified a series of problems, which can verify the applicability of the model and provide us more improvement information.
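
    As background, a basic input-oriented CCR DEA efficiency score can be computed with a small linear program; the C2GS2 model with a non-Archimedean infinitesimal refines this formulation, and the city input/output data below are hypothetical.

      # Input-oriented CCR DEA efficiency via linear programming.
      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, j0):
          """X: (m inputs x n DMUs), Y: (s outputs x n DMUs); returns theta for DMU j0."""
          m, n = X.shape
          s = Y.shape[0]
          c = np.zeros(n + 1)
          c[0] = 1.0                                    # minimize theta
          a_in = np.hstack([-X[:, [j0]], X])            # sum_j lam_j*x_ij - theta*x_i,j0 <= 0
          a_out = np.hstack([np.zeros((s, 1)), -Y])     # -sum_j lam_j*y_rj <= -y_r,j0
          a_ub = np.vstack([a_in, a_out])
          b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
          bounds = [(None, None)] + [(0, None)] * n     # theta free, lambdas >= 0
          res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=bounds, method="highs")
          return res.x[0]

      X = np.array([[4.0, 7.0, 8.0, 4.0], [3.0, 3.0, 1.0, 2.0]])   # two inputs, four cities
      Y = np.array([[1.0, 1.0, 1.0, 1.0]])                         # single output
      print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])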

  19. The EU model evaluation group

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1999-01-01

    The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted focused on developing a generic model evaluation protocol and subsequently targeting it onto specific areas of application. Five such developments have been initiated, on heavy gas dispersion, liquid pool fires, gas explosions, human factors and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities comprise an assessment of the extent and severity of the consequences of identified major accidents. Further, the quality of models becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  20. Neural network ensemble based supplier evaluation model in line with nuclear safety conditions

    International Nuclear Information System (INIS)

    Wang Yonggang; Chang Baosheng

    2006-01-01

    Nuclear safety is the most critical target for nuclear power plant operation. Besides the rigid operation procedures established, evaluation of the suppliers working with plants is another important aspect. Supplier selection and evaluation involves both qualitative analysis and quantitative management. The indicators involved are coupled with each other in a very complicated manner; the relevant data therefore show strongly non-linear characteristics. The article is based on research and analysis of the real conditions of operation management at the Daya Bay nuclear power plant. Drawing on information from home and abroad and with reference to neural network ensemble technology, the supplier evaluation system and model are established as illustrated in the paper, thereby increasing the objectivity of supplier selection. (authors)

  1. Rock mechanics models evaluation report

    International Nuclear Information System (INIS)

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs

  2. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While virtual reality-based simulators have introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
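
    A hedged sketch of the expert-model idea (train a Gaussian HMM on expert kinematics, then score a trainee trial by log-likelihood) using the hmmlearn package; the feature layout, state count and data are assumptions, not the paper's settings.

      # Expert HMM trained on kinematic sequences; trainee trials scored by log-likelihood.
      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      # Kinematic features per time step, e.g. tool-tip position and velocity (synthetic data).
      expert_trials = [np.random.rand(200, 6) for _ in range(10)]
      lengths = [len(t) for t in expert_trials]

      expert_model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50, random_state=0)
      expert_model.fit(np.vstack(expert_trials), lengths)              # train the expert reference model

      trainee_trial = np.random.rand(180, 6)
      score = expert_model.score(trainee_trial) / len(trainee_trial)   # per-sample log-likelihood
      print(score)   # higher values indicate gestures closer to the expert model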

  3. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Through a series of explorations based on the PSR framework model, and taking into account the specific agro-environmental issues present in Chongqing, we build an index system framework suitable for evaluating agricultural non-point source pollution in Chongqing. The resulting assessment index system covers agricultural system pressure, agro-environmental state and human response as 3 major categories, giving an agricultural non-point source pollution evaluation index consisting of 3 criterion-level indicators and 19 indicators. The analysis shows that pressure and response tend to increase and decrease roughly linearly, whereas state and the composite index show large and similar fluctuations, mainly associated with the elimination of pressures and impacts and an increasing impact of agricultural non-point source pollution.

  4. A Fuzzy Comprehensive Evaluation Model for Sustainability Risk Evaluation of PPP Projects

    Directory of Open Access Journals (Sweden)

    Libiao Bai

    2017-10-01

    Full Text Available Evaluating the sustainability risk level of public–private partnership (PPP projects can reduce project risk incidents and achieve the sustainable development of the organization. However, the existing studies about PPP projects risk management mainly focus on exploring the impact of financial and revenue risks but ignore the sustainability risks, causing the concept of “sustainability” to be missing while evaluating the risk level of PPP projects. To evaluate the sustainability risk level and achieve the most important objective of providing a reference for the public and private sectors when making decisions on PPP project management, this paper constructs a factor system of sustainability risk of PPP projects based on an extensive literature review and develops a mathematical model based on the methods of fuzzy comprehensive evaluation model (FCEM and failure mode, effects and criticality analysis (FMECA for evaluating the sustainability risk level of PPP projects. In addition, this paper conducts computational experiment based on a questionnaire survey to verify the effectiveness and feasibility of this proposed model. The results suggest that this model is reasonable for evaluating the sustainability risk level of PPP projects. To our knowledge, this paper is the first study to evaluate the sustainability risk of PPP projects, which would not only enrich the theories of project risk management, but also serve as a reference for the public and private sectors for the sustainable planning and development. Keywords: sustainability risk eva
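
    A minimal sketch of the fuzzy comprehensive evaluation step assumed here (a weight vector composed with a fuzzy membership matrix); the factors, weights and membership degrees are hypothetical, not the paper's questionnaire results.

      # Fuzzy comprehensive evaluation: compose factor weights with a membership matrix.
      import numpy as np

      weights = np.array([0.30, 0.25, 0.25, 0.20])     # importance of four sustainability risk factors
      # Rows: factors; columns: membership in risk grades (low, medium, high, very high).
      membership = np.array([
          [0.1, 0.4, 0.4, 0.1],
          [0.3, 0.4, 0.2, 0.1],
          [0.2, 0.3, 0.3, 0.2],
          [0.4, 0.3, 0.2, 0.1],
      ])

      grade_vector = weights @ membership              # weighted-average composition operator
      grade_vector /= grade_vector.sum()
      grades = ["low", "medium", "high", "very high"]
      print(grade_vector, "-> risk grade:", grades[int(grade_vector.argmax())])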

  5. Development, implementation, and evaluation of a community pharmacy-based asthma care model.

    Science.gov (United States)

    Saini, Bandana; Krass, Ines; Armour, Carol

    2004-11-01

    Pharmacists are uniquely placed in the healthcare system to address critical issues in asthma management in the community. Various programs have shown the benefits of a pharmacist-led asthma care program; however, no such programs have previously been evaluated in Australia. To measure the impact of a specialized asthma service provided through community pharmacies in terms of objective patient clinical, humanistic, and economic outcomes. A parallel controlled design, where 52 intervention patients and 50 control patients with asthma were recruited in 2 distinct locations, was used. In the intervention area, pharmacists were trained and delivered an asthma care model, with 3 follow-up visits over 6 months. This model was evaluated based on clinical, humanistic, and economic outcomes compared between and within groups. There was a significant reduction in asthma severity in the intervention group, 2.6 ± 0.5 to 1.6 ± 0.7 (mean ± SD; p < 0.001) versus the control group, 2.3 ± 0.7 to 2.4 ± 0.5. In the intervention group, peak flow indices improved from 82.7% ± 8.2% at baseline to 87.4% ± 8.9% (p < 0.001) at the final visit, and there was a significant reduction in the defined daily dose of albuterol used by patients, from 374.8 ± 314.8 µg at baseline to 198.4 ± 196.9 µg at the final visit (p < 0.015). There was also a statistically significant improvement in perceived control of asthma and asthma-related knowledge scores in the intervention group compared with the control group between baseline and the final visit. Annual savings of $132.84 (AU) in medication costs per patient and $100,801.20 for the whole group, based on overall severity reduction, were demonstrated. Based on the results of this study, it appears that a specialized asthma care model offers community pharmacists an opportunity to contribute toward improving asthma management in the Australian community.

  6. Development and Evaluation of a Simple, Multifactorial Model Based on Landing Performance to Indicate Injury Risk in Surfing Athletes.

    Science.gov (United States)

    Lundgren, Lina E; Tran, Tai T; Nimphius, Sophia; Raymond, Ellen; Secomb, Josh L; Farley, Oliver R L; Newton, Robert U; Steele, Julie R; Sheppard, Jeremy M

    2015-11-01

    To develop and evaluate a multifactorial model based on landing performance to estimate injury risk for surfing athletes. Five measures were collected from 78 competitive surfing athletes and used to create a model to serve as a screening tool for landing tasks and potential injury risk. In the second part of the study, the model was evaluated using junior surfing athletes (n = 32) with a longitudinal follow-up of their injuries over 26 wk. Two models were compared based on the collected data, and magnitude-based inferences were applied to determine the likelihood of differences between injured and noninjured groups. The study resulted in a model based on 5 measures--ankle-dorsiflexion range of motion, isometric midthigh-pull lower-body strength, time to stabilization during a drop-and-stick (DS) landing, relative peak force during a DS landing, and frontal-plane DS-landing video analysis--for male and female professional surfers and male and female junior surfers. Evaluation of the model showed that a scaled probability score was more likely to detect injuries in junior surfing athletes and reported a correlation of r = .66, P = .001, with a model of equal variable importance. The injured (n = 7) surfers had a lower probability score (0.18 ± 0.16) than the noninjured group (n = 25, 0.36 ± 0.15), with 98% likelihood, Cohen d = 1.04. The proposed model seems sensitive and easy to implement and interpret. Further research is recommended to show full validity for potential adaptations for other sports.

  7. Evaluation of Loss Due to Storm Surge Disasters in China Based on Econometric Model Groups.

    Science.gov (United States)

    Jin, Xue; Shi, Xiaoxia; Gao, Jintian; Xu, Tongbin; Yin, Kedong

    2018-03-27

    Storm surge has become an important factor restricting the economic and social development of China's coastal regions. In order to improve the scientific judgment of future storm surge damage, a method of model groups is proposed to refine the evaluation of the loss due to storm surges. Due to the relative dispersion and poor regularity of the natural property data (login center air pressure, maximum wind speed, maximum storm water, super warning water level, etc.), storm surge disaster is divided based on eight kinds of storm surge disaster grade division methods combined with storm surge water, hypervigilance tide level, and disaster loss. The storm surge disaster loss measurement model groups consist of eight equations, and six major modules are constructed: storm surge disaster in agricultural loss, fishery loss, human resource loss, engineering facility loss, living facility loss, and direct economic loss. Finally, the support vector machine (SVM) model is used to evaluate the loss and the intra-sample prediction. It is indicated that the equations of the model groups can reflect in detail the relationship between the damage of storm surges and other related variables. Based on a comparison of the original value and the predicted value error, the model groups pass the test, providing scientific support and a decision basis for the early layout of disaster prevention and mitigation.
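
    An illustrative sketch of the SVM-based loss evaluation step with scikit-learn's support vector regression; the features and loss values are synthetic placeholders, not the study's data.

      # Support vector regression for loss evaluation and in-sample prediction.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      X = rng.random((60, 4))          # e.g. central pressure, max wind speed, surge height, tide level
      y = 5.0 * X[:, 1] + 3.0 * X[:, 2] + rng.normal(0, 0.1, 60)   # synthetic direct economic loss

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
      model.fit(X, y)
      print(model.predict(X[:5]))      # in-sample prediction, as in the study's evaluation step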

  8. Evaluation of Loss Due to Storm Surge Disasters in China Based on Econometric Model Groups

    Science.gov (United States)

    Shi, Xiaoxia; Xu, Tongbin; Yin, Kedong

    2018-01-01

    Storm surge has become an important factor restricting the economic and social development of China’s coastal regions. In order to improve the scientific judgment of future storm surge damage, a method of model groups is proposed to refine the evaluation of the loss due to storm surges. Due to the relative dispersion and poor regularity of the natural property data (login center air pressure, maximum wind speed, maximum storm water, super warning water level, etc.), storm surge disaster is divided based on eight kinds of storm surge disaster grade division methods combined with storm surge water, hypervigilance tide level, and disaster loss. The storm surge disaster loss measurement model groups consist of eight equations, and six major modules are constructed: storm surge disaster in agricultural loss, fishery loss, human resource loss, engineering facility loss, living facility loss, and direct economic loss. Finally, the support vector machine (SVM) model is used to evaluate the loss and the intra-sample prediction. It is indicated that the equations of the model groups can reflect in detail the relationship between the damage of storm surges and other related variables. Based on a comparison of the original value and the predicted value error, the model groups pass the test, providing scientific support and a decision basis for the early layout of disaster prevention and mitigation. PMID:29584628

  9. Evaluation of Loss Due to Storm Surge Disasters in China Based on Econometric Model Groups

    Directory of Open Access Journals (Sweden)

    Xue Jin

    2018-03-01

    Full Text Available Storm surge has become an important factor restricting the economic and social development of China’s coastal regions. In order to improve the scientific judgment of future storm surge damage, a method of model groups is proposed to refine the evaluation of the loss due to storm surges. Due to the relative dispersion and poor regularity of the natural property data (login center air pressure, maximum wind speed, maximum storm water, super warning water level, etc.), storm surge disaster is divided based on eight kinds of storm surge disaster grade division methods combined with storm surge water, hypervigilance tide level, and disaster loss. The storm surge disaster loss measurement model groups consist of eight equations, and six major modules are constructed: storm surge disaster in agricultural loss, fishery loss, human resource loss, engineering facility loss, living facility loss, and direct economic loss. Finally, the support vector machine (SVM) model is used to evaluate the loss and the intra-sample prediction. It is indicated that the equations of the model groups can reflect in detail the relationship between the damage of storm surges and other related variables. Based on a comparison of the original value and the predicted value error, the model groups pass the test, providing scientific support and a decision basis for the early layout of disaster prevention and mitigation.

  10. A resilience-based model for performance evaluation of information systems: the case of a gas company

    Science.gov (United States)

    Azadeh, A.; Salehi, V.; Salehi, R.

    2017-10-01

    Information systems (IS) are strongly influenced by changes in new technology and should react swiftly in response to external conditions. Resilience engineering is a new method that can enable these systems to absorb changes. In this study, a new framework is presented for performance evaluation of IS that includes DeLone and McLean's factors of success in addition to resilience. Hence, this study attempts to evaluate the impact of resilience on IS with the proposed model in the Iranian Gas Engineering and Development Company, using data obtained from questionnaires and a Fuzzy Data Envelopment Analysis (FDEA) approach. First, the FDEA model with α-cut = 0.05 was identified as the most suitable model for this application by running both the Banker, Charnes and Cooper (BCC) and the Charnes, Cooper and Rhodes (CCR) forms of FDEA and selecting the appropriate model based on maximum mean efficiency. Then, the factors were ranked based on the results of a sensitivity analysis, which showed that resilience had a significantly higher impact on the proposed model than the other factors. The results of this study were then verified by conducting the related ANOVA test. This is the first study that examines the impact of resilience on IS by statistical and mathematical approaches.

  11. Exxon Nuclear WREM-based NJP-BWR ECCS evaluation model and example application to the Oyster Creek Plant

    International Nuclear Information System (INIS)

    Krysinski, T.L.; Bjornard, T.A.; Steves, L.H.

    1975-01-01

    A proposed integrated ECCS model for non-jet pump boiling water reactors is presented, using the RELAP4-EM/BLOWDOWN and RELAP4-EM/SMALL BREAK portions of the Exxon Nuclear WREM-based Generic PWR Evaluation Model coupled with the ENC NJP-BWR Fuel Heatup Model. The results of the application of the proposed model to Oyster Creek are summarized. The results of the break size sensitivity study using the proposed model for the Oyster Creek Plant are presented. The application of the above results yielded the MAPLHGR curves. Included are a description of the proposed non-jet pump boiling water reactor evaluation model, justification of its conformance with 10CFR50, Appendix K, the adopted Oyster Creek plant model, and results of the analysis and sensitivity studies. (auth)

  12. A real option-based simulation model to evaluate investments in pump storage plants

    International Nuclear Information System (INIS)

    Muche, Thomas

    2009-01-01

    Investments in pump storage plants are expected to grow, especially because of their ability to store excess supply from wind power plants. In order to evaluate these investments correctly, the peculiarities of pump storage plants and the characteristics of liberalized power markets have to be considered. The main characteristics of power markets are strong power price volatility and the occurrence of price spikes. In this article a valuation model is developed that captures these aspects using power price simulation, optimization of unit commitment and capital market theory. This valuation model is able to value a future price-based unit commitment plan that corresponds to future scopes of action, also called real options. The resulting real option value for the pump storage plant is compared with the traditional net present value approach. Because the latter approach cannot value such scopes of action correctly, it yields substantially smaller investment values and can force wrong investment decisions.

  13. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

    Full Text Available In this paper we introduce a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects how well the quality characteristics fulfil the customers’ requirements. To demonstrate the viability of the SQ model, a software tool was developed and applied to the evaluation of a health care services provider.

  14. Evaluation of absorbed doses in voxel-based and simplified models for small animals

    International Nuclear Information System (INIS)

    Mohammadi, A.; Kinase, S.; Saito, K.

    2008-01-01

    Internal dosimetry in non-human biota is desirable from the viewpoint of radiation protection of the environment. The International Commission on Radiological Protection (ICRP) proposed Reference Animals and Plants using simplified models, such as ellipsoids and spheres, and calculated absorbed fractions (AFs) for whole bodies. In this study, photon and electron AFs in whole bodies of voxel-based rat and frog models have been calculated and compared with AFs in the reference models. It was found that the voxel-based and the reference frog (or rat) models can be consistent for the whole-body AFs within a discrepancy of 25 %, as the source was uniformly distributed in the whole body. The specific absorbed fractions (SAFs) and S values were also evaluated in whole bodies and all organs of the voxel-based frog and rat models as the source was distributed in the whole body or skeleton. The results demonstrated that the whole-body SAFs reflect SAFs of all individual organs, as the source was uniformly distributed per mass within the whole body, within about 30 % uncertainty, with exceptions for the body contour (up to -40 %) for both electrons and photons due to enhanced radiation leakages, and for the skeleton for photons only (up to +185 %) due to differences in the mass attenuation coefficients. For nuclides such as 90Y and 90Sr, which were concentrated in the skeleton, there were large differences between S values in the whole body and those in individual organs; however, the whole-body S values for the reference models with the whole body as the source were remarkably similar to those for the voxel-based models with the skeleton as the source, within about 4 and 0.3 %, respectively. It can be stated that whole-body SAFs or S values in simplified models without internal organs are not sufficient for accurate internal dosimetry because they do not reflect SAFs or S values of all individual organs when the source is not distributed uniformly in the whole body. Thus, voxel-based

  15. Evaluation of long-term creep-fatigue life of stainless steel weldment based on a microstructure degradation model

    International Nuclear Information System (INIS)

    Asayama, Tai; Hasebe, Shinichi

    1997-01-01

    This paper describes a newly developed analytical method for evaluating the creep-fatigue strength of stainless steel weld metals. Based on the observation that creep-fatigue cracks initiate adjacent to the interface between sigma-phase/delta-ferrite and the matrix, a mechanistic model that allows the evaluation of micro stress/strain concentration adjacent to the interface was developed. Fatigue and creep damage were evaluated using the model, which describes the microstructure after exposure to high temperatures for a long time. It is thus possible to predict analytically the long-term creep-fatigue life of stainless steel weld metals whose microstructure has degraded as a result of high-temperature service. (author)

  16. Evaluation of Intensive Construction Land Use in the Emerging City Based on PSR-Entropy model

    Science.gov (United States)

    Jia, Yuanyuan; Lei, Guangyu

    2018-01-01

    A comprehensive understanding of land utilization in emerging cities and the evaluation of intensive land use there provide a comprehensive and reliable technical basis for planning and management; such an evaluation is an important node in this process. Based on land-use data for Hancheng from 2008 to 2016, a PSR-Entropy land use evaluation system was built: the entropy method was used to determine the index weights, and a comprehensive index method was introduced to evaluate the degree of land use. The results show that the comprehensive evaluation index of intensive land use in Hancheng increased from 2008 to 2015, but intensive land use has not yet reached the required standard; the potential for further improvement is relatively large.

  17. Model-based evaluation of the short-circuited tripolar cuff configuration.

    Science.gov (United States)

    Andreasen, Lotte N S; Struijk, Johannes J

    2006-05-01

    Recordings of neural information for use as feedback in functional electrical stimulation are often contaminated with interfering signals from muscles and from stimulus pulses. The cuff electrode used for the neural recording can be optimized to improve the S/I ratio. In this work, we evaluate a model of both the nerve signal and the interfering signals recorded by a cuff, and subsequently use this model to study the signal to interference ratio of different cuff designs and to evaluate a recently introduced short-circuited tripolar cuff configuration. The results of the model showed good agreement with results from measurements in rabbits and confirmed the superior performance of the short-circuited tripolar configuration as compared with the traditionally used tripolar configuration.

  18. Summary report on the evaluation of a 1977--1985 edited sorption data base for isotherm modeling

    International Nuclear Information System (INIS)

    Polzer, W.L.; Beckman, R.J.; Fuentes, H.R.; Yong, C.; Chan, P.; Rao, M.G.

    1993-01-01

    Sorption data bases collected by Los Alamos National Laboratory (LANL) from 1977 to 1985 for the Yucca Mountain Project (YMP) have been inventoried and fitted with isotherm expressions. Effects of variables (e.g., particle size) on the isotherm were also evaluated. The sorption data are from laboratory batch measurements which were not designed specifically for isotherm modeling. However, a limited number of data sets permitted such modeling. The analysis of those isotherm data can aid in the design of future sorption experiments and can provide expressions to be used in radionuclide transport modeling. Over 1200 experimental observations were inventoried for their adequacy to be modeled by isotherms and to evaluate the effects of variables on isotherms. About 15% of the observations provided suitable data sets for modeling. The data sets were obtained under conditions that include ambient temperature and two atmospheres, air and CO2.
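
    As an example of fitting an isotherm expression to batch sorption data, the sketch below fits a Langmuir isotherm with SciPy; the report does not state which isotherm forms were used, and the concentration data are invented.

      # Fit a Langmuir isotherm q = q_max*K_L*c / (1 + K_L*c) to batch sorption data.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c_eq, q_max, k_l):
          """Sorbed amount as a function of equilibrium solution concentration."""
          return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

      c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])    # mg/L, hypothetical
      q_obs = np.array([1.8, 3.1, 4.9, 7.4, 8.8, 9.6])     # mg/g, hypothetical

      (q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=[10.0, 0.5])
      print(f"q_max = {q_max:.2f} mg/g, K_L = {k_l:.3f} L/mg")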

  19. Residential-energy-demand modeling and the NIECS data base: an evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Cowing, T.G.; Dubin, J.A.; McFadden, D.

    1982-01-01

    The purpose of this report is to evaluate the 1978-1979 National Interim Energy Consumption Survey (NIECS) data base in terms of its usefulness for estimating residential energy demand models based on household appliance choice and utilization decisions. The NIECS contains detailed energy usage information at the household level for 4081 households during the April 1978 to March 1979 period. Among the data included are information on the structural and thermal characteristics of the housing unit, demographic characteristics of the household, fuel usage, appliance characteristics, and actual energy consumption. The survey covers the four primary residential fuels - electricity, natural gas, fuel oil, and liquefied petroleum gas - and includes detailed information on recent household conservation and retrofit activities. Section II contains brief descriptions of the major components of the NIECS data set. Discussions are included on the sample frame and the imputation procedures used in NIECS. There are also two extensive tables, giving detailed statistical and other information on most of the non-vehicle NIECS variables. Section III contains an assessment of the NIECS data, focusing on four areas: measurement error, sample design, imputation problems, and additional data needed to estimate appliance choice/use models. Section IV summarizes and concludes the report.

  20. Initial draft of CSE-UCLA evaluation model based on weighted product in order to optimize digital library services in computer college in Bali

    Science.gov (United States)

    Divayana, D. G. H.; Adiarta, A.; Abadi, I. B. G. S.

    2018-01-01

    The aim of this research was to create an initial design of the CSE-UCLA evaluation model modified with Weighted Product for evaluating digital library services at Computer Colleges in Bali. The research followed a developmental research method built on the Borg and Gall design model. The result obtained from the research conducted earlier this month is a rough sketch of the Weighted Product based CSE-UCLA evaluation model; this design provides a general overview of the stages of the weighted product based CSE-UCLA evaluation model used to optimize digital library services at the Computer Colleges in Bali.
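
    The weighted product calculation itself is not detailed in this record. A minimal sketch of the generic weighted product method on an illustrative decision matrix; the criterion scores, weights, and variable names are placeholders, not values from the study:

```python
import numpy as np

# Hypothetical decision matrix: rows = evaluated service aspects (alternatives),
# columns = evaluation criterion scores (illustrative values only).
scores = np.array([
    [4.0, 3.5, 4.2, 3.8],
    [3.6, 4.1, 3.9, 4.0],
    [4.4, 3.2, 3.5, 4.3],
])
# Criterion weights (benefit criteria assumed), normalized to sum to 1.
weights = np.array([0.3, 0.2, 0.25, 0.25])
weights = weights / weights.sum()

# Weighted product method: S_i = prod_j x_ij ** w_j, then normalize to V_i.
S = np.prod(scores ** weights, axis=1)
V = S / S.sum()

ranking = np.argsort(-V)                 # best alternative first
print("Preference values:", np.round(V, 4))
print("Ranking (best first):", ranking)
```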

  1. Applying an Individual-Based Model to Simultaneously Evaluate Net Ecosystem Production and Tree Diameter Increment

    Science.gov (United States)

    Fang, F. J.

    2017-12-01

    Reconciling observations at fundamentally different scales is central in understanding the global carbon cycle. This study investigates a model-based melding of forest inventory data, remote-sensing data and micrometeorological-station data ("flux towers" estimating forest heat, CO2 and H2O fluxes). The individual tree-based model FORCCHN was used to evaluate the tree DBH increment and forest carbon fluxes. These are the first simultaneous simulations of forest carbon budgets from flux towers and of individual-tree growth using continuous forest inventory data, under circumstances in which both predictions can be tested. Along with the global implications of such findings, this also improves the capacity for sustainable forest management and the comprehensive understanding of forest ecosystems. In forest ecology, the diameter at breast height (DBH) of a tree significantly determines an individual tree's cross-sectional sapwood area, its biomass and carbon storage. Evaluating the annual DBH increment (ΔDBH) of an individual tree is central to understanding tree growth and forest ecology. Ecosystem carbon flux is a consequence of key ecosystem processes in the forest-ecosystem carbon cycle: Gross and Net Primary Production (GPP and NPP, respectively) and Net Ecosystem Production (NEP). All of these closely relate to tree DBH changes and tree death. Despite advances in evaluating forest carbon fluxes with flux towers and forest inventories for individual tree ΔDBH, few current ecological models can simultaneously quantify and predict both the tree ΔDBH and forest carbon flux.

  2. The evaluation of distributed damage in concrete based on sinusoidal modeling of the ultrasonic response.

    Science.gov (United States)

    Sepehrinezhad, Alireza; Toufigh, Vahab

    2018-05-25

    Ultrasonic wave attenuation is an effective descriptor of distributed damage in inhomogeneous materials. Methods developed to measure wave attenuation have the potential to provide an in-situ evaluation of existing concrete structures insofar as they are accurate and time-efficient. In this study, material classification and distributed damage evaluation were investigated based on the sinusoidal modeling of the response from through-transmission ultrasonic tests on polymer concrete specimens. The response signal was modeled as a single damping sinusoid or a sum of damping sinusoids. Due to the inhomogeneous nature of concrete materials, model parameters may vary from one specimen to another. Therefore, these parameters are not known in advance and should be estimated while the response signal is being received. The modeling procedure used in this study involves a data-adaptive algorithm to estimate the parameters online. Data-adaptive algorithms are used due to a lack of knowledge of the model parameters. The damping factor was estimated as a descriptor of the distributed damage. The results were compared in two different cases as follows: (1) constant excitation frequency with varying concrete mixtures and (2) constant mixture with varying excitation frequencies. The specimens were also loaded up to their ultimate compressive strength to investigate the effect of distributed damage on the response signal. The results of the estimation indicated that the damping was highly sensitive to changes in material inhomogeneity, even in comparable mixtures. In addition to the proposed method, three methods were employed to compare the results based on their accuracy in the classification of materials and the evaluation of the distributed damage. It is shown that the estimated damping factor is not only sensitive to damage in the final stages of loading, but is also applicable in evaluating micro-damage in the earlier stages, providing a reliable descriptor of damage. In addition

  3. A PRODUCTIVITY EVALUATION MODEL BASED ON INPUT AND OUTPUT ORIENTATIONS

    Directory of Open Access Journals (Sweden)

    C.O. Anyaeche

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Many productivity models evaluate either the input or the output performance using standalone techniques. This sometimes gives divergent views of the same system's results. The work reported in this article, which simultaneously evaluated productivity from both orientations, was applied to real-life data. The results showed losses in productivity (–2%) and price recovery (–8%) for the outputs; the inputs showed a productivity gain (145%) but a price recovery loss (–63%). These imply losses in product performance but a productivity gain in inputs. The loss in the price recovery of inputs indicates a problem in the pricing policy. This model is applicable in product diversification.

    AFRIKAANSE OPSOMMING (translated): Most productivity models evaluate either the input or the output performance by making use of isolated techniques. This sometimes leads to divergent perspectives on the same system's performance. This article evaluates performance from both perspectives and uses real data. The results show a decline in productivity (–2%) and price recovery (–8%) for the outputs. The inputs show an increase in productivity (145%) but a decline in price recovery (–63%). This implies a decline in product performance, but a productivity gain in inputs. The decline in the price recovery of inputs points to a problem in the pricing policy. This model is suitable for product diversification.

  4. Damage Analysis and Evaluation of High Strength Concrete Frame Based on Deformation-Energy Damage Model

    Directory of Open Access Journals (Sweden)

    Huang-bin Lin

    2015-01-01

    Full Text Available A new method of characterizing the damage of high strength concrete structures is presented, which is based on a two-parameter deformation-energy damage model and incorporates both of the main forms of earthquake damage: first-time damage from exceeding the deformation capacity and damage from energy consumption. Firstly, test data of high strength reinforced concrete (RC) columns were evaluated. Then, the relationship between stiffness degradation, strength degradation, and ductility performance was obtained, and an expression for damage in terms of the model parameters was determined, as well as the critical input data for the restoring force model to be used in analytical damage evaluation. Experimentally, the unloading stiffness was found to be related to the cycle number, and a correction for this change was applied to better describe the unloading phenomenon and compensate for the shortcomings of structural elastic-plastic time history analysis. The above algorithm was embedded into an IDARC program. Finally, a case study of high strength RC multistory frames was presented. Under various seismic wave inputs, the structural damage was predicted. The damage model and the correction algorithm for unloading stiffness were shown to be suitable and applicable in engineering design and damage evaluation of high strength concrete structures.

  5. Evaluation of liquefaction potential of soil based on standard penetration test using multi-gene genetic programming model

    Science.gov (United States)

    Muduli, Pradyut; Das, Sarat

    2014-06-01

    This paper discusses the evaluation of the liquefaction potential of soil based on a standard penetration test (SPT) dataset using an evolutionary artificial intelligence technique, multi-gene genetic programming (MGGP). The liquefaction classification accuracy (94.19%) of the developed liquefaction index (LI) model is found to be better than that of the available artificial neural network (ANN) model (88.37%) and at par with the available support vector machine (SVM) model (94.19%) on the basis of the testing data. Further, an empirical equation is presented using MGGP to approximate the unknown limit state function representing the cyclic resistance ratio (CRR) of soil based on the developed LI model. Using an independent database of 227 cases, the overall rates of successful prediction of the occurrence of liquefaction and non-liquefaction are found to be 87, 86, and 84% for the developed MGGP-based model, the available ANN model, and the statistical model, respectively, on the basis of the calculated factor of safety (Fs) against liquefaction occurrence.
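
    The record reports prediction rates based on a calculated factor of safety against liquefaction. A minimal sketch of that final classification step, assuming CRR values come from a limit state equation (such as the MGGP-derived one) and CSR values from a simplified procedure; all numbers and the helper name are illustrative:

```python
import numpy as np

def classify_liquefaction(crr, csr):
    """Classify cases as liquefied (1) or non-liquefied (0) from the
    factor of safety Fs = CRR / CSR; Fs <= 1 indicates liquefaction."""
    fs = np.asarray(crr) / np.asarray(csr)
    return (fs <= 1.0).astype(int), fs

# Illustrative CRR values (e.g., from a limit state function) and CSR values
# (e.g., from a simplified cyclic stress ratio procedure).
crr = np.array([0.18, 0.32, 0.25, 0.40])
csr = np.array([0.22, 0.28, 0.30, 0.35])

labels, fs = classify_liquefaction(crr, csr)
print("Fs:", np.round(fs, 2))
print("Predicted liquefaction:", labels)
```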

  6. Constructing a Travel Risks’ Evaluation Model for Tour Freelancers Based on the ANP Approach

    Directory of Open Access Journals (Sweden)

    Chin-Tsai Lin

    2016-01-01

    Full Text Available This study constructs a new travel risk evaluation model for freelancers to evaluate and select tour groups by considering the interdependencies of the evaluation criteria used. First of all, the proposed model adopts the Nominal Group Technique (NGT) to identify suitable evaluation criteria for evaluating travel risks. Six evaluation criteria and 18 subcriteria are obtained. The six evaluation criteria are financial risk, transportation risk, social risk, hygiene risk, sightseeing spot risk, and general risk for freelancer tour groups. Secondly, the model uses the analytic network process (ANP) to determine the relative weights of the criteria. Finally, examples of group package tours (GPTs) are used to demonstrate the travel risk evaluation process for this model. The results show that the Tokyo GPT is the best group tour. The proposed model helps freelancers to evaluate travel risks effectively and supports their decision-making, making it highly applicable to academia and tour groups.
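
    The ANP computations behind the criterion weights are not shown in this record. A compact sketch of the core ANP step, raising a column-stochastic (weighted) supermatrix to successive powers until it converges to limiting priorities; the matrix entries are illustrative only:

```python
import numpy as np

# Illustrative weighted supermatrix for 4 interdependent criteria
# (each column sums to 1, i.e., the matrix is column-stochastic).
W = np.array([
    [0.10, 0.30, 0.25, 0.20],
    [0.40, 0.10, 0.25, 0.30],
    [0.30, 0.40, 0.20, 0.10],
    [0.20, 0.20, 0.30, 0.40],
])

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise W to successive powers until the columns stabilize;
    the stabilized columns give the limiting (global) priorities."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M

limits = limit_supermatrix(W)
print("Limiting criterion priorities:", np.round(limits[:, 0], 4))
```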

  7. Evaluation models and evaluation use

    Science.gov (United States)

    Contandriopoulos, Damien; Brousselle, Astrid

    2012-01-01

    The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460

  8. Model-Based Description of Human Body Motions for Ergonomics Evaluation

    Science.gov (United States)

    Imai, Sayaka

    This paper presents the modeling of working processes and a working simulation of factory work. The focus is on an example work (motion), its actual work (motion), and the reference between them. An example work and its actual work can be analyzed and described as a sequence of atomic actions. In order to describe workers' motions, the concepts of Atomic Unit, Model Events, and Mediator are introduced. By using these concepts, we can analyze a worker's actions and evaluate their work. We also consider this a possible way to unify all the data used in various applications (CAD/CAM, etc.) during the design process and to evaluate all subsystems in a virtual factory.

  9. Research on evaluating water resource resilience based on projection pursuit classification model

    Science.gov (United States)

    Liu, Dong; Zhao, Dan; Liang, Xu; Wu, Qiuchen

    2016-03-01

    Water is a fundamental natural resource, and agricultural water use underpins grain output, so the utilization and management of water resources have significant practical meaning. Regional agricultural water resource systems are unpredictable, self-organizing, and non-linear, which makes the evaluation of regional agricultural water resource resilience difficult. Current research on water resource resilience remains focused on qualitative analysis, and quantitative analysis is still at a primary stage; to address these issues, a projection pursuit classification model is put forward. With the help of the artificial fish-swarm algorithm (AFSA), the model optimizes the projection index function and seeks the optimal projection direction, and AFSA is improved through a self-adaptive artificial fish step and crowding factor. Taking the Hongxinglong Administration of Heilongjiang as the research base, and building on the improved AFSA, a projection pursuit classification model was established to evaluate the resilience of the agricultural water resource system, in addition to an analysis of a projection pursuit classification model based on an accelerating genetic algorithm. The research shows that the water resource resilience of Hongxinglong is the best, followed by Raohe Farm, with 597 Farm last. Further analysis shows that the key driving factors influencing agricultural water resource resilience are precipitation and agricultural water consumption. The results reveal the restoring capacity of the local water resource system and provide a foundation for agricultural water resource management.

  10. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance.

    Science.gov (United States)

    Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies need a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.
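
    The record describes training a skip-gram word embedding model on review text but does not show the pipeline. A minimal sketch using gensim's Word2Vec in skip-gram mode; the tokenized reviews and query words are placeholders, and gensim 4.x parameter names are assumed:

```python
from gensim.models import Word2Vec

# Placeholder tokenized reviews collected from SNS (illustrative only).
reviews = [
    ["ramen", "broth", "spicy", "seafood", "taste"],
    ["noodle", "smell", "fresh", "seafood", "rich"],
    ["ramen", "salty", "smell", "mild", "broth"],
]

# Skip-gram model (sg=1); vector_size/window/min_count/epochs are gensim 4.x names.
model = Word2Vec(sentences=reviews, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50, seed=1)

# Compare a sensory descriptor against a product-related token via cosine similarity.
print(model.wv.similarity("ramen", "spicy"))
print(model.wv.most_similar("broth", topn=3))
```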

  11. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance

    Directory of Open Access Journals (Sweden)

    Augustine Yongwhi Kim

    2018-01-01

    Full Text Available The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies need a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.

  12. Presenting an Evaluation Model for the Cancer Registry Software.

    Science.gov (United States)

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer incidence is increasing, cancer registries are of great importance as the main core of cancer control programs, and many different software packages have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the documentation and two functional software packages in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study contains an evaluation tool and an evaluation method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. Based on the findings, a criteria-based evaluation method was chosen as the evaluation method of this study. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between the general criteria and the specific ones, while maintaining the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.

  13. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both a weakly learned but goal-relevant response tendency and a strongly learned but goal-irrelevant one, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in the service of conflict resolution. Their complex, dynamic interaction led to the replication of 18 experimental effects, the largest data set explained to date by a single Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are reported in the extant Stroop literature. (c) 2016 APA, all rights reserved.

  14. A Linguistic Multigranular Sensory Evaluation Model for Olive Oil

    Directory of Open Access Journals (Sweden)

    Luis Martinez

    2008-06-01

    Full Text Available Evaluation is a process that analyzes elements in order to achieve different objectives such as quality inspection, marketing, and other fields in industrial companies. This paper focuses on sensory evaluation, where the evaluated items are assessed by a panel of experts according to knowledge acquired via the human senses. In these evaluation processes, the information provided by the experts implies uncertainty, vagueness, and imprecision. The use of the Fuzzy Linguistic Approach [32] has provided successful results in modelling such information. In sensory evaluation it may happen that the panel of experts have differing degrees of knowledge about the evaluated items or indicators, so it seems suitable that each expert could express their preferences in different linguistic term sets based on their own knowledge. In this paper, we present a sensory evaluation model that manages a multigranular linguistic evaluation framework based on a decision analysis scheme. This model is applied to the sensory evaluation process of olive oil.

  15. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  16. Towards a more objective evaluation of modelled land-carbon trends using atmospheric CO2 and satellite-based vegetation activity observations

    Directory of Open Access Journals (Sweden)

    D. Dalmonech

    2013-06-01

    Full Text Available Terrestrial ecosystem models used for Earth system modelling show a significant divergence in future patterns of ecosystem processes, in particular the net land–atmosphere carbon exchanges, despite a seemingly common behaviour for the contemporary period. An in-depth evaluation of these models is hence of high importance to better understand the reasons for this disagreement. Here, we develop an extension for existing benchmarking systems by making use of the complementary information contained in the observational records of atmospheric CO2 and remotely sensed vegetation activity to provide a novel set of diagnostics of ecosystem responses to climate variability in the last 30 yr at different temporal and spatial scales. The selection of observational characteristics (traits) specifically considers the robustness of information given that the uncertainty of both data and evaluation methodology is largely unknown or difficult to quantify. Based on these considerations, we introduce a baseline benchmark – a minimum test that any model has to pass – to provide a more objective, quantitative evaluation framework. The benchmarking strategy can be used for any land surface model, either driven by observed meteorology or coupled to a climate model. We apply this framework to evaluate the offline version of the MPI Earth System Model's land surface scheme JSBACH. We demonstrate that the complementary use of atmospheric CO2 and satellite-based vegetation activity data allows pinpointing of specific model deficiencies that would not be possible by the sole use of atmospheric CO2 observations.

  17. A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN)

    Science.gov (United States)

    Hayes, Holly; Parchman, Michael L.; Howard, Ray

    2012-01-01

    Evaluating effective growth and development of a Practice-Based Research Network (PBRN) can be challenging. The purpose of this article is to describe the development of a logic model and how the framework has been used for planning and evaluation in a primary care PBRN. An evaluation team was formed consisting of the PBRN directors, staff and its board members. After the mission and the target audience were determined, facilitated meetings and discussions were held with stakeholders to identify the assumptions, inputs, activities, outputs, outcomes and outcome indicators. The long-term outcomes outlined in the final logic model are two-fold: 1.) Improved health outcomes of patients served by PBRN community clinicians; and 2.) Community clinicians are recognized leaders of quality research projects. The Logic Model proved useful in identifying stakeholder interests and dissemination activities as an area that required more attention in the PBRN. The logic model approach is a useful planning tool and project management resource that increases the probability that the PBRN mission will be successfully implemented. PMID:21900441

  18. Spatial Modeling of a New Technological Typification in Forestry Based on Multicriteria Evaluation of Skidding Technologies

    Directory of Open Access Journals (Sweden)

    Michal Synek

    2015-01-01

    Full Text Available The study describes a new system of technological typification in forestry based on a multicriteria evaluation of the environmentally friendly use of common skidding technologies. A farm tractor, skidder, cable system, forwarder, and forwarder in combination with a harvester were selected as model skidding technologies. The proposed model assigns one of four categories of environmentally friendly use: (1) Fully suitable, (2) Suitable, (3) Unsuitable – not excluded, and (4) Unsuitable, for every forest stand and individual skidding technology. The Saaty matrix was used to define the weights of the input parameters for the multicriteria evaluation. The selected input parameters included slope inclination, ground bearing capacity, risk of logging-transportation erosion hazard, presence and size of obstacles, skidding distance, terrain shape, and age of stands. Stocking and areal representation of selected tree species were added to the evaluation of the forwarder-harvester combination. Different equipment (standard tires, low-pressure tires, wheel tracks) and climatic conditions (dry, wet) were also taken into account in the evaluation of the model. The multicriteria evaluation was carried out by means of GIS tools in SW ESRI ArcGIS Desktop. The model was applied to the selected experimental territory in the upper part of the basin of the Oskava river and was verified in different forest stands and terrain conditions in the northern part of the Mendel University Training Forest Enterprise Křtiny. Verification of the model results was carried out in randomly selected stands with an overall area representing more than 10% of the total forest area in the experimental territory and more than 8% of the total forest area in the verification territory.

  19. Evaluation of a novel scoring and grading model for VP-based exams in postgraduate nurse education.

    Science.gov (United States)

    Forsberg, Elenita; Ziegert, Kristina; Hult, Håkan; Fors, Uno

    2015-12-01

    For Virtual Patient-based exams, several scoring and grading methods have been proposed, but none have yet been validated. The aim of this study was to evaluate a new scoring and grading model for VP-based exams in postgraduate paediatric nurse education. The same group of 19 students performed a VP-based exam in three consecutive courses. When using the scoring and grading assessment model, which contains a deduction system for unnecessary or unwanted actions, a progression was found across the three courses: 53% of the students passed the first exam, 63% the second, and 84% the final exam. The most common reason for deduction of points was that students asked too many interview questions or ordered too many laboratory tests. The results showed that the new scoring model made it possible to judge the students' clinical reasoning process as well as their progress. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. How to Evaluate Smart Cities’ Construction? A Comparison of Chinese Smart City Evaluation Methods Based on PSF

    Directory of Open Access Journals (Sweden)

    Hongbo Shi

    2017-12-01

    Full Text Available With the rapid development of smart cities around the world, research relating to smart city evaluation has become a new research hotspot in academia. However, there are general problems of cognitive deprivation, lack of planning experience, and a low level of coordination in smart city construction. It is necessary to develop a set of scientific, reasonable, and effective evaluation index systems and evaluation models to analyze the degree of smart urban development. Based on the theory of the urban system, we established a comprehensive evaluation index system for urban intelligent development based on the people-oriented, city-system, and resources-flow (PSF) evaluation model. According to the characteristics of this comprehensive evaluation index system, the analytic hierarchy process (AHP), combined with the experts' opinions, determines the index weights of the system. We adopted a neural network model to construct the corresponding comprehensive evaluation model to characterize the non-linear characteristics of the comprehensive evaluation indexes, and thus to quantify the comprehensive evaluation indexes of urban intelligent development. Finally, we used the AHP, AHP-BP (Back Propagation), and AHP-ELM (Extreme Learning Machine) models to evaluate the intelligent development level of 151 cities in China, and compared them from the perspective of model accuracy and time cost. The final simulation results show that the AHP-ELM model is the best evaluation model.
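
    The AHP weighting step is only named in this record. A short sketch of deriving index weights from a pairwise comparison matrix via the principal eigenvector, with Saaty's consistency ratio as a sanity check; the comparison values are illustrative, not those elicited in the study:

```python
import numpy as np

# Illustrative pairwise comparison matrix for 4 indexes (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0, 1.0],
    [1/3, 1.0, 3.0, 1/3],
    [1/5, 1/3, 1.0, 1/5],
    [1.0, 3.0, 5.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)         # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty random index
CR = CI / RI                         # consistency ratio; < 0.1 is usually acceptable

print("weights:", np.round(w, 3), "CR:", round(CR, 3))
```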

  1. Evaluation of temperature-based global solar radiation models in China

    DEFF Research Database (Denmark)

    Liu, Xiaoying; Mei, Xurong; Li, Yuzhong

    2009-01-01

    empirical equations to estimate these parameters. Two schemes for calculating ΔT were employed: ΔT1 (based on single-day Tmin), used in the Harg model, and ΔT2 (based on a 2-day average of Tmin), used in the B-C model. Results showed that the original B-C model performed similarly to the best performing modified Harg...
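
    The Harg (Hargreaves) and B-C (Bristow-Campbell) formulations are only named in this truncated record. A hedged sketch of both temperature-based estimates of daily global solar radiation, assuming extraterrestrial radiation Ra is already known; the coefficient values are typical defaults, not the coefficients calibrated in the study:

```python
import numpy as np

def hargreaves_rs(ra, tmax, tmin, krs=0.16):
    """Hargreaves-type estimate: Rs = krs * sqrt(Tmax - Tmin) * Ra
    (krs ~0.16 for interior sites, ~0.19 for coastal sites)."""
    return krs * np.sqrt(tmax - tmin) * ra

def bristow_campbell_rs(ra, dT, a=0.7, b=0.01, c=2.0):
    """Bristow-Campbell estimate: Rs = Ra * a * (1 - exp(-b * dT**c)),
    where dT is the diurnal temperature range (single-day or 2-day scheme)."""
    return ra * a * (1.0 - np.exp(-b * dT**c))

# Illustrative daily values: Ra in MJ m-2 d-1, temperatures in deg C.
ra, tmax, tmin = 35.0, 28.0, 15.0
print(hargreaves_rs(ra, tmax, tmin))
print(bristow_campbell_rs(ra, tmax - tmin))
```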

  2. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
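
    The specific information theory-based metrics are not listed in this record excerpt. One hedged example of such a complementary metric is the Kullback-Leibler divergence between the discretized distributions of measured and simulated streamflow; the flow series below are synthetic placeholders:

```python
import numpy as np

def kl_divergence(observed, simulated, bins=20):
    """KL divergence between histogram distributions of observed and
    simulated streamflow (a small constant avoids division by zero)."""
    lo = min(observed.min(), simulated.min())
    hi = max(observed.max(), simulated.max())
    p, edges = np.histogram(observed, bins=bins, range=(lo, hi))
    q, _ = np.histogram(simulated, bins=edges)
    p = p / p.sum() + 1e-12
    q = q / q.sum() + 1e-12
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 365)          # illustrative observed daily flows
sim = obs * rng.normal(1.0, 0.2, 365)   # illustrative simulated daily flows
print(kl_divergence(obs, sim))
```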

  3. Safety evaluation model of urban cross-river tunnel based on driving simulation.

    Science.gov (United States)

    Ma, Yingqi; Lu, Linjun; Lu, Jian John

    2017-09-01

    Currently, Shanghai's urban cross-river tunnels have three principal characteristics: increased traffic, a high accident rate, and rapidly developing construction. Because of their complex geographic and hydrological characteristics, the alignment conditions in urban cross-river tunnels are more complicated than in highway tunnels, so a safety evaluation of urban cross-river tunnels is necessary to guide follow-up construction and changes in operational management. A driving risk index (DRI) for urban cross-river tunnels was proposed in this study. An index system was also constructed, combining eight factors derived from the output of a driving simulator and covering three aspects of risk: following risk, lateral accident risk, and driver workload. Analytic hierarchy process methods, expert scoring, and normalization processing were applied to construct a mathematical model for the DRI. The driving simulator was used to simulate 12 Shanghai urban cross-river tunnels, and a relationship was obtained between the DRI for the tunnels and the corresponding accident rate (AR) via a regression analysis. The regression analysis showed that the relationship between the DRI and the AR maps to an exponential function with a high degree of fit. In the absence of detailed accident data, a safety evaluation model based on factors derived from a driving simulation can effectively assess the driving risk in urban cross-river tunnels that are constructed or in design.
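
    The record states that the DRI is built from AHP-weighted simulator factors and that the DRI relates to the accident rate through an exponential function obtained by regression. A sketch of both steps; the factor values, weights, and tunnel data are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative normalized simulator factors for one tunnel and AHP weights.
factors = np.array([0.62, 0.48, 0.71, 0.55, 0.40, 0.66, 0.52, 0.58])
weights = np.array([0.20, 0.15, 0.15, 0.10, 0.10, 0.10, 0.10, 0.10])
dri = float(factors @ weights)          # driving risk index as a weighted sum

# Illustrative DRI and accident-rate pairs for several tunnels.
dri_all = np.array([0.35, 0.42, 0.50, 0.57, 0.63, 0.70])
ar_all = np.array([0.8, 1.1, 1.7, 2.6, 3.8, 5.9])

def exp_model(x, a, b):
    """Exponential relation: AR = a * exp(b * DRI)."""
    return a * np.exp(b * x)

(a, b), _ = curve_fit(exp_model, dri_all, ar_all, p0=(0.1, 5.0))
print(f"DRI = {dri:.2f}, fitted AR model: {a:.3f} * exp({b:.3f} * DRI)")
```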

  4. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    Bolck, A.; Ni, H.; Lopatka, M.

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are

  5. A Dynamic Model for the Evaluation of Aircraft Engine Icing Detection and Control-Based Mitigation Strategies

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan W.; Jones, Scott M.

    2017-01-01

    Aircraft flying in regions of high ice crystal concentrations are susceptible to the buildup of ice within the compression system of their gas turbine engines. This ice buildup can restrict engine airflow and cause an uncommanded loss of thrust, also known as engine rollback, which poses a potential safety hazard. The aviation community is conducting research to understand this phenomenon and to identify avoidance and mitigation strategies to address the concern. To support this research, a dynamic turbofan engine model has been created to enable the development and evaluation of engine icing detection and control-based mitigation strategies. This model captures the dynamic engine response due to high ice water ingestion and the buildup of ice blockage in the engine's low pressure compressor. It includes a fuel control system allowing engine closed-loop control effects during engine icing events to be emulated. The model also includes bleed air valve and horsepower extraction actuators that, when modulated, change overall engine operating performance. This system-level model has been developed and compared against test data acquired from an aircraft turbofan engine undergoing engine icing studies in an altitude test facility and also against outputs from the manufacturer's customer deck. This paper will describe the model and show results of its dynamic response under open-loop and closed-loop control operating scenarios in the presence of ice blockage buildup compared against engine test cell data. Planned follow-on use of the model for the development and evaluation of icing detection and control-based mitigation strategies will also be discussed. The intent is to combine the model and control mitigation logic with an engine icing risk calculation tool capable of predicting the risk of engine icing based on current operating conditions. Upon detection of an operating region of risk for engine icing events, the control mitigation logic will seek to change the

  6. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of ‘decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well known Beta trust model with the decay principle in terms of estimation precision.
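
    The HMM machinery behind the trust model is not reproduced in this record. A minimal sketch of the forward algorithm that scores how likely an observed interaction sequence is under a principal's HMM; all probabilities and the state/outcome labels are illustrative:

```python
import numpy as np

# Illustrative 2-state HMM for a principal: states = {cooperative, defective}.
pi = np.array([0.8, 0.2])                 # initial state distribution
A = np.array([[0.9, 0.1],                 # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.95, 0.05],               # emission probs: P(outcome | state),
              [0.20, 0.80]])              # outcomes = {success, failure}

def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: probability of an observation sequence under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

observations = [0, 0, 1, 0, 0]            # observed interaction outcomes
print(forward_likelihood(observations, pi, A, B))
```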

  7. Implementation of a Web-Based Organ Donation Educational Intervention: Development and Use of a Refined Process Evaluation Model

    Science.gov (United States)

    Harker, Laura; Bamps, Yvan; Flemming, Shauna St. Clair; Perryman, Jennie P; Thompson, Nancy J; Patzer, Rachel E; Williams, Nancy S DeSousa; Arriola, Kimberly R Jacob

    2017-01-01

    Background The lack of available organs is often considered to be the single greatest problem in transplantation today. Internet use is at an all-time high, creating an opportunity to increase public commitment to organ donation through the broad reach of Web-based behavioral interventions. Implementing Internet interventions, however, presents challenges including preventing fraudulent respondents and ensuring intervention uptake. Although Web-based organ donation interventions have increased in recent years, process evaluation models appropriate for Web-based interventions are lacking. Objective The aim of this study was to describe a refined process evaluation model adapted for Web-based settings and used to assess the implementation of a Web-based intervention aimed to increase organ donation among African Americans. Methods We used a randomized pretest-posttest control design to assess the effectiveness of the intervention website that addressed barriers to organ donation through corresponding videos. Eligible participants were African American adult residents of Georgia who were not registered on the state donor registry. Drawing from previously developed process evaluation constructs, we adapted reach (the extent to which individuals were found eligible, and participated in the study), recruitment (online recruitment mechanism), dose received (intervention uptake), and context (how the Web-based setting influenced study implementation) for Internet settings and used the adapted model to assess the implementation of our Web-based intervention. Results With regard to reach, 1415 individuals completed the eligibility screener; 948 (67.00%) were determined eligible, of whom 918 (96.8%) completed the study. After eliminating duplicate entries (n=17), those who did not initiate the posttest (n=21) and those with an invalid ZIP code (n=108), 772 valid entries remained. Per the Internet protocol (IP) address analysis, only 23 of the 772 valid entries (3.0%) were

  8. Conceptual modelling of human resource evaluation process

    Directory of Open Access Journals (Sweden)

    Negoiţă Doina Olivia

    2017-01-01

    Full Text Available Taking into account the highly diverse tasks which employees have to fulfil due to the complex requirements of today's consumers, the human resource within an enterprise has become a strategic element for developing and exploiting products which meet market expectations. Therefore, organizations encounter difficulties when approaching the human resource evaluation process. Hence, the aim of the current paper is to design a conceptual model of the aforementioned process, which allows enterprises to develop a specific methodology. In order to design the conceptual model, Business Process Modelling instruments were employed: the Adonis Community Edition Business Process Management Toolkit using the ADONIS BPMS Notation. The conceptual model was developed based on in-depth secondary research regarding the human resource evaluation process. The proposed conceptual model represents a generic workflow (sequential and/or simultaneous activities), which can be extended considering the enterprise's needs and requirements when conducting a human resource evaluation process. Enterprises can benefit from using software instruments for business process modelling as they enable process analysis and evaluation (predefined/specific queries) and also model optimization (simulations).

  9. BALANCED SCORECARDS EVALUATION MODEL THAT INCLUDES ELEMENTS OF ENVIRONMENTAL MANAGEMENT SYSTEM USING AHP MODEL

    Directory of Open Access Journals (Sweden)

    Jelena Jovanović

    2010-03-01

    Full Text Available The research is oriented towards the improvement of the environmental management system (EMS) using the BSC (Balanced Scorecard) model, a strategic model for the measurement and improvement of organisational performance. The research presents an approach for involving objectives and environmental management metrics (proposed by a literature review) in the conventional BSC of the "AD Barska plovidba" organisation. Further, the creation of an ECO-BSC model based on the business activities of non-profit organisations is tested in order to improve the environmental management system in parallel with other management systems. Using this approach, four BSC models that include elements of the environmental management system can be obtained for AD "Barska plovidba". Taking into account that implementation and evaluation need a long period of time in AD "Barska plovidba", the final choice will be based on ISO/IEC 14598 (Information technology - Software product evaluation) and ISO 9126 (Software engineering - Product quality) using the AHP method. Those standards are usually used for the evaluation of software product quality and of computer programs that serve in an organisation as support and factors for development. The AHP model will therefore be based on evaluation criteria suggested by the ISO 9126 standard and on types of evaluation from two evaluation teams. Members of team 1 will be experts in BSC and environmental management systems who are not employed in the AD "Barska plovidba" organisation. The members of team 2 will be managers of the AD "Barska plovidba" organisation (including managers from the environmental department). By merging the results of the two previously created AHP models, one can obtain the most appropriate BSC that includes elements of the environmental management system. The chosen model will at the same time present a suggested approach for including ecological metrics in the conventional BSC model for a firm that has at least one ECO strategic orientation.

  10. Rock mechanics models evaluation report: Draft report

    International Nuclear Information System (INIS)

    1985-10-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The end result of the KT analysis is a balanced, documented recommendation of the codes and models which are best suited to conceptual subsurface design for the salt repository. The various laws for modeling the creep of rock salt are also reviewed in this report. 37 refs., 1 fig., 7 tabs

  11. The EMEFS model evaluation

    International Nuclear Information System (INIS)

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs
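
    The difference statistics and correlations called for by the EMEFS evaluation protocol are not itemized in this record. A small sketch of typical paired predicted/observed comparisons (mean bias, RMSE, Pearson correlation) on illustrative deposition values:

```python
import numpy as np

observed = np.array([12.1, 8.4, 15.3, 6.7, 10.2, 9.8])   # illustrative observations
predicted = np.array([10.9, 9.1, 13.8, 7.5, 11.4, 8.6])  # illustrative model values

bias = float(np.mean(predicted - observed))               # mean bias
rmse = float(np.sqrt(np.mean((predicted - observed) ** 2)))
corr = float(np.corrcoef(predicted, observed)[0, 1])      # Pearson correlation

print(f"bias = {bias:.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")
```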

  12. The EMEFS model evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. (Pacific Northwest Lab., Richland, WA (United States)); Dennis, R.L. (Environmental Protection Agency, Research Triangle Park, NC (United States)); Seilkop, S.K. (Analytical Sciences, Inc., Durham, NC (United States)); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. (Atmospheric Environment Service, Downsview, ON (Canada)); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  13. A human visual model-based approach of the visual attention and performance evaluation

    Science.gov (United States)

    Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique

    2005-03-01

    In this paper, a coherent computational model of visual selective attention for color pictures is described and its performance is precisely evaluated. The model, based on important behaviours of the human visual system, is composed of four parts: visibility, perception, perceptual grouping, and saliency map construction. This paper focuses mainly on performance assessment through extended subjective and objective comparisons with real fixation points captured by an eye-tracking system used by observers in a task-free viewing mode. From the knowledge of this ground truth, qualitative and quantitative comparisons have been made in terms of the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL). On a set of 10 natural color images, the results show that the linear correlation coefficient and the Kullback-Leibler divergence are about 0.71 and 0.46, respectively. The CC and KL measures obtained with this model are improved by about 4% and 7%, respectively, compared to the best model proposed by L. Itti. Moreover, by comparing the ability of our model to predict eye movements produced by an average observer, we can conclude that our model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
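
    The CC and KL computations between the model saliency map and the eye-tracking fixation map are only summarized here. A hedged sketch of both measures on two normalized maps; the maps below are random placeholders, not real saliency or fixation data:

```python
import numpy as np

def correlation_coefficient(saliency, fixation):
    """Linear correlation coefficient between two maps."""
    return float(np.corrcoef(saliency.ravel(), fixation.ravel())[0, 1])

def kl_divergence(saliency, fixation, eps=1e-12):
    """KL divergence between the fixation and saliency maps,
    each normalized to a probability distribution."""
    p = fixation.ravel() / (fixation.sum() + eps) + eps
    q = saliency.ravel() / (saliency.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
saliency_map = rng.random((48, 64))     # placeholder model saliency map
fixation_map = rng.random((48, 64))     # placeholder fixation density map

print(correlation_coefficient(saliency_map, fixation_map))
print(kl_divergence(saliency_map, fixation_map))
```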

  14. Modelling in Evaluating a Working Life Project in Higher Education

    Science.gov (United States)

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  15. Modeling of the Dissolution Kinetics of Arbutus Wild Berries-Based Tablets as Evaluated by Electric Conductivity

    International Nuclear Information System (INIS)

    Abbas-Aksil, T.; Benamara, S.

    2015-01-01

    A lyophilized powder (LP) from Algerian arbutus wild berries (Arbutus unedo L.) was obtained. The present paper reports on the dissolution (release) properties of LP-based tablets, evaluated through the electric conductivity (EC) of distilled water employed as the surrounding medium, at three different temperatures (291, 298 and 309 K). In addition, secondary physicochemical characteristics such as elemental analysis, color and compressibility were evaluated. Regarding the modeling of ionic transfer, among the three tested models, namely Peleg, Singh et al., and Singh and Kulshestha, the latter seems to be the most appropriate (R2 = 0.99), particularly in the case of tablets compacted under 2000 Pa. The temperature dependence of the dissolution process was also studied by applying the Arrhenius equation (R2 > 0.8), which allowed the activation energy to be deduced, ranging from 18.7 to 21.4 kJ·mol-1 according to the model and compression force employed. (author)
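
    The Arrhenius treatment of the dissolution rate constants is only summarized in this record. A sketch of extracting an activation energy from rate constants at the three study temperatures via a linear fit of ln k against 1/T; the rate constants below are illustrative, not the study's values:

```python
import numpy as np

R = 8.314                                  # gas constant, J mol-1 K-1
T = np.array([291.0, 298.0, 309.0])        # temperatures used in the study (K)
k = np.array([0.012, 0.018, 0.031])        # illustrative dissolution rate constants

# Arrhenius: k = A * exp(-Ea / (R * T))  =>  ln k = ln A - (Ea / R) * (1 / T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R / 1000.0                   # activation energy in kJ/mol
A = np.exp(intercept)                      # pre-exponential factor

print(f"Ea = {Ea:.1f} kJ/mol, A = {A:.3e}")
```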

  16. External Economies Evaluation of Wind Power Engineering Project Based on Analytic Hierarchy Process and Matter-Element Extension Model

    Directory of Open Access Journals (Sweden)

    Hong-ze Li

    2013-01-01

    Full Text Available The external economies of a wind power engineering project may affect the operational efficiency of wind power enterprises and the sustainable development of the wind power industry. In order to ensure that wind power engineering projects are constructed and developed in a scientific manner, a reasonable evaluation of external economies needs to be performed. Considering the interaction relationships among the evaluation indices and the inherent ambiguity and uncertainty, a hybrid model for the external economies evaluation of wind power engineering projects is put forward in this paper, based on the analytic hierarchy process (AHP) and the matter-element extension model. The AHP is used to determine the weights of the indices, and the matter-element extension model is used to deduce the final ranking. Taking a wind power engineering project in an Inner Mongolia city as an example, the external economies evaluation is performed by employing this hybrid model. The result shows that the external economies of this wind power engineering project belong to the “strongest” level, and that “the degree of increasing regional GDP,” “the degree of reducing pollution gas emissions,” and “the degree of energy conservation” are the sensitive indices.

  17. A review of Agent Based Modeling for agricultural policy evaluation

    NARCIS (Netherlands)

    Kremmydas, Dimitris; Athanasiadis, I.N.; Rozakis, Stelios

    2018-01-01

    Farm-level policy analysis is receiving increased attention due to a changing agricultural policy orientation. Agent-based models (ABM) are farm-level models that appeared at the end of the 1990s, having several differences from traditional farm-level models, like the consideration of

  18. The creation and evaluation of a model predicting the probability of conception in seasonal-calving, pasture-based dairy cows.

    Science.gov (United States)

    Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John; Shalloo, Laurence; Butler, Stephen T

    2017-07-01

    Reproductive performance in pasture-based production systems has a fundamentally important effect on economic efficiency. The individual factors affecting the probability of submission and conception are multifaceted and have been extensively researched. The present study analyzed some of these factors in relation to service-level probability of conception in seasonal-calving pasture-based dairy cows to develop a predictive model of conception. Data relating to 2,966 services from 737 cows on 2 research farms were used for model development and data from 9 commercial dairy farms were used for model testing, comprising 4,212 services from 1,471 cows. The data spanned a 15-yr period and originated from seasonal-calving pasture-based dairy herds in Ireland. The calving season for the study herds extended from January to June, with peak calving in February and March. A base mixed-effects logistic regression model was created using a stepwise model-building strategy and incorporated parity, days in milk, interservice interval, calving difficulty, and predicted transmitting abilities for calving interval and milk production traits. To attempt to further improve the predictive capability of the model, the addition of effects that were not statistically significant was considered, resulting in a final model composed of the base model with the inclusion of BCS at service. The models' predictions were evaluated using discrimination to measure their ability to correctly classify positive and negative cases. Precision, recall, F-score, and area under the receiver operating characteristic curve (AUC) were calculated. Calibration tests measured the accuracy of the predicted probabilities. These included tests of overall goodness-of-fit, bias, and calibration error. Both models performed better than using the population average probability of conception. Neither of the models showed high levels of discrimination (base model AUC 0.61, final model AUC 0.62), possibly because of the
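
    The mixed-effects logistic regression itself is not reproduced in this record. A simplified sketch of fitting a plain logistic regression for conception probability and checking discrimination with AUC, using scikit-learn on placeholder data (no random effects, unlike the study's model; covariate names and coefficients are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
# Placeholder covariates: parity, days in milk, BCS at service.
X = np.column_stack([
    rng.integers(1, 6, n),
    rng.normal(70, 20, n),
    rng.normal(3.0, 0.3, n),
])
# Placeholder conception outcome (1 = conceived), generated from an assumed logit.
logit = -2.0 + 0.1 * X[:, 0] - 0.01 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])   # discrimination check
print(f"training AUC = {auc:.2f}")
```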

  19. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions.

    Science.gov (United States)

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

    Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating the compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models.

  20. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    Science.gov (United States)

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

    Purpose Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212

  1. Model-Based Design and Evaluation of a Brachiating Monkey Robot with an Active Waist

    Directory of Open Access Journals (Sweden)

    Alex Kai-Yuan Lo

    2017-09-01

    Full Text Available We report on the model-based development of a monkey robot that is capable of performing continuous brachiation locomotion on a swingable rod, as an intermediate step toward studying brachiation on a soft rope or on horizontal ropes with both ends fixed. The work differs from previous works, in which the model or the robot swings on fixed bars. The model, which is composed of two rigid links, was inspired by the dynamic motion of primates. The model further served as the design guideline for a robot that has five degrees of freedom: two on each arm for rod changing and one on the waist to initiate a swing motion. The model was quantitatively formulated, and its dynamic behavior was analyzed in simulation. Further, a two-stage controller was developed within the simulation environment, where the first stage used the natural dynamics of a two-link pendulum-like model, and the second stage used angular velocity feedback to regulate the waist motion. Finally, the robot was empirically built and evaluated. The experimental results confirm that the robot can perform model-like swing behavior and continuous brachiation locomotion on rods.

  2. Evaluation of CO2-based cold sterilization of a model hydrogel.

    Science.gov (United States)

    Jiménez, A; Zhang, J; Matthews, M A

    2008-12-15

    The purpose of the present work is to evaluate a novel CO(2)-based cold sterilization process in terms of both its killing efficiency and its effects on the physical properties of a model hydrogel, poly(acrylic acid-co-acrylamide) potassium salt. Suspensions of Staphylococcus aureus and Escherichia coli were prepared for hydration and inoculation of the gel. The hydrogels were treated with supercritical CO(2) (40 degrees C, 27.6 MPa). The amount of bacteria was quantified before and after treatment. With pure CO(2), complete killing of S. aureus and E. coli was achieved for treatment times as low as 60 min. After treatment with CO(2) plus trace amounts of H(2)O(2) at the same experimental conditions, complete bacteria kill was also achieved. For times less than 30 min, incomplete kill was noted. Several physical properties of the gel were evaluated before and after SC-CO(2) treatment. These were largely unaffected by the CO(2) process. Drying curves showed no significant change between treated (pure CO(2) and CO(2) plus 30% H(2)O(2)) and untreated samples. The average equilibrium swelling ratios were also very similar. No changes in the dry hydrogel particle structure were evident from SEM micrographs.

  3. Popularity Evaluation Model for Microbloggers Online Social Network

    Directory of Open Access Journals (Sweden)

    Xia Zhang

    2014-01-01

    Full Text Available Recently, microblogging has been widely studied by researchers in the domain of online social networks (OSN). How to evaluate the popularities of microblogging users is an important research field, which can be applied to commercial advertising, user behavior analysis, information dissemination, and so forth. Previously proposed evaluation methods cannot effectively and accurately evaluate the popularities of microbloggers. In this paper, we propose an electromagnetic field theory-based model to analyze the popularities of microbloggers. The concept of the source in the microblogging field is first put forward, based on the concept of the source in the electromagnetic field; then, one’s microblogging flux is calculated according to his/her behaviors (sending or receiving feedback) on the microblogging platform; finally, we use three methods to calculate one’s microblogging flux density, which can represent one’s popularity on the microblogging platform. In the experimental work, we evaluated our model using real microblogging data and selected the best of the three popularity measures. We also compared our model with the classic PageRank algorithm, and the results show that our model is more effective and accurate in evaluating the popularities of microbloggers.

  4. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data

    Science.gov (United States)

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; Miralles, D. G.; Michel, D.; Wood, E. F.

    2016-01-01

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman-Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m-2; 0.65), followed closely by GLEAM (0.68; 64 W m-2

  5. Technology Evaluation of Process Configurations for Second Generation Bioethanol Production using Dynamic Model-based Simulations

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist

    2011-01-01

    An assessment of a number of different process flowsheets for bioethanol production was performed using dynamic model-based simulations. The evaluation employed diverse operational scenarios, such as fed-batch, continuous, and continuous-with-recycle configurations. Each configuration was evaluated against the following benchmark criteria: yield (kg ethanol/kg dry biomass), final product concentration, and number of unit operations required in the different process configurations. The results show that the process configuration for simultaneous saccharification and co-fermentation (SSCF) operating in continuous mode with a recycle of the SSCF reactor effluent gives the best bioethanol productivity among the proposed process configurations, with a yield of 0.18 kg ethanol/kg dry biomass.

  6. Evaluation of AirGIS: a GIS-based air pollution and human exposure modelling system

    DEFF Research Database (Denmark)

    Ketzel, Matthias; Berkowicz, Ruwim; Hvidberg, Martin

    2011-01-01

    This study describes in brief the latest extensions of the Danish Geographic Information System (GIS)-based air pollution and human exposure modelling system (AirGIS), which has been developed in Denmark since 2001, and gives results of an evaluation against measured air pollution data. The system shows, in general, good performance for long-term averages (annual and monthly), for short-term averages (hourly and daily), and when reproducing spatial variation in air pollution concentrations. Some shortcomings and future perspectives of the system are also discussed.

  7. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent drought in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (i.e., Community Water Model, CWatM) in order to have a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. In this study, we focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, and assume that their corresponding objectives are to maximize the net crop profit and to maintain sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales in terms of daily irrigation amount, seasonal/annual decisions on crop types and irrigated area as well as the long-term investment of irrigation infrastructure. This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness

  8. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally-gridded forcing data

    Science.gov (United States)

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; Miralles, D. G.; Michel, D.; Wood, E. F.

    2015-08-01

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the GEWEX LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman-Monteith based Mu model (PM-Mu) and the Global Land Evaporation: the Amsterdam Methodology (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from forty-five globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m-2; 0.65), followed closely by GLEAM (0.68; 64 W m-2; 0.62), with values in

  9. Research on efficiency evaluation model of integrated energy system based on hybrid multi-attribute decision-making.

    Science.gov (United States)

    Li, Yan

    2017-05-25

    Efficiency evaluation of an integrated energy system involves many influencing factors, and the attribute values are heterogeneous and non-deterministic: they usually cannot be given as specific numerical values or accurate probability distributions, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In evaluating the efficiency of an integrated energy system, the evaluation values of some indexes are linguistic, or the judgments of the evaluation experts are inconsistent; this leads to ambiguity in the decision information, usually in the form of uncertain linguistic values and numerical interval values. An interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are therefore proposed, with the decision maker's risk preference considered when constructing the evaluation model. Finally, the mathematical model for efficiency evaluation of an integrated energy system is constructed.

  10. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model presents an even worse combinatorial explosion of node states than the calculation of WSNs on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our research on OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that the OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing’s OBDD (ordered binary decision diagram)-based algorithm.

  11. Evaluation Model for Sentient Cities

    Directory of Open Access Journals (Sweden)

    Mª Florencia Fergnani Brion

    2016-11-01

    Full Text Available In this article we present research on Sentient Cities and propose an assessment model for analysing whether a city is, or could potentially be considered, one. It can be used to evaluate the current situation of a city before introducing urban policies based on citizen participation in hybrid environments (physical and digital). To that effect, we've developed evaluation grids with the main elements that form a Sentient City and their measurement values. The Sentient City is a variation of the Smart City, also based on technological progress and innovation, but where the citizens are the principal agent. In this model, governments aim to have a participatory and sustainable system for achieving the Knowledge Society and Collective Intelligence development, as well as the city’s efficiency. Also, they increase communication channels between the Administration and citizens. In this new context, citizens are empowered because they have the opportunity to create a Local Identity and transform their surroundings through open and horizontal initiatives.

  12. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.
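
    To illustrate the general idea of a broadband clear-sky irradiance model, the sketch below uses a deliberately simplified formulation (extraterrestrial irradiance attenuated by a constant broadband transmittance raised to the relative air mass); it is not the Bird clear sky model evaluated in the report, and the transmittance value is an assumed placeholder.

```python
# Deliberately simplified clear-sky GHI sketch (NOT the Bird model): extraterrestrial
# irradiance attenuated by a constant broadband transmittance raised to the air mass.
import numpy as np

I0 = 1361.0          # solar constant, W/m^2
TAU = 0.75           # assumed broadband atmospheric transmittance (illustrative)

def clear_sky_ghi(zenith_deg):
    cosz = np.cos(np.radians(zenith_deg))
    if cosz <= 0:
        return 0.0                       # sun below horizon
    air_mass = 1.0 / cosz                # simple secant approximation (no refraction)
    return I0 * cosz * TAU ** air_mass   # global horizontal irradiance, W/m^2

print(clear_sky_ghi(30.0))               # clear-sky estimate for a 30 degree zenith angle
```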

  13. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (psearching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
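
    The following sketch illustrates the general idea of a predicate-based vector space model: documents and a query are represented as bags of subject-relation-object triples, weighted with a plain tf-idf scheme and ranked by cosine similarity. The triples, the binary query weights and the weighting details are illustrative assumptions, not the adjusted tf-idf and boost function developed in the paper.

```python
# Toy predicate-based vector space model: triples instead of keywords, plain tf-idf weights,
# cosine similarity for ranking. Triple strings and weights are invented for illustration.
import math
from collections import Counter

docs = {
    "d1": [("EGFR", "activates", "MAPK"), ("gefitinib", "inhibits", "EGFR")],
    "d2": [("p53", "regulates", "apoptosis"), ("EGFR", "activates", "MAPK")],
}
query = [("gefitinib", "inhibits", "EGFR")]

df = Counter(t for triples in docs.values() for t in set(triples))   # document frequency per triple

def tfidf_vector(triples, df, n_docs):
    tf = Counter(triples)
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf if t in df}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = {d: tfidf_vector(t, df, len(docs)) for d, t in docs.items()}
q_vec = {t: 1.0 for t in query}                                       # simple binary query weights
print(sorted(((cosine(q_vec, v), d) for d, v in vectors.items()), reverse=True))
```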

  14. A cloud model based multi-attribute decision making approach for selection and evaluation of groundwater management schemes

    Science.gov (United States)

    Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia

    2017-12-01

    Due to the uncertainty (i.e., fuzziness, stochasticity and imprecision) that exists simultaneously in the groundwater remediation process, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities, which can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by employing the weight cloud module. A case study on the remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes was used for evaluating each alternative through the developed method under uncertainty, including daily total pumping rate, total cost and cloud model based health risk. Results indicated that A14 was evaluated to be the most preferred alternative for the 5-year, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year remediation.
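
    A backward cloud generator, as commonly described in the cloud-model literature, estimates the numerical characteristics (Ex, En, He) of a cloud from sample data; the sketch below shows one standard variant, and whether it matches the exact generator used in the paper is an assumption.

```python
# One common form of the backward cloud generator: estimate expectation (Ex),
# entropy (En) and hyper-entropy (He) from samples of an attribute.
import numpy as np

def backward_cloud(samples):
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                          # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))    # entropy En
    s2 = x.var(ddof=1)                                     # sample variance
    he = np.sqrt(max(s2 - en ** 2, 0.0))                   # hyper-entropy He
    return ex, en, he

# Example: hypothetical contaminant concentrations observed for one attribute
print(backward_cloud([4.2, 3.9, 4.5, 4.1, 4.4, 3.8]))
```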

  15. A simulator-based approach to evaluating optical trackers

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.

    2009-01-01

    We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera

  16. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Science.gov (United States)

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flooding is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach to reveal the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system, assessment of exposure, sensitivity and adaptive capacity, and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatiality of flood vulnerability in Hainan's eastern area, China. Based on the results of multiple flood vulnerability, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and provide decision makers with more comprehensive information for their decisions. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision-making. Copyright © 2018 Elsevier Ltd. All rights reserved.
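
    The fuzzy comprehensive evaluation step can be illustrated with a minimal weighted-aggregation sketch: a weight vector is combined with a membership matrix over vulnerability grades. Weights, grades and membership values below are placeholders rather than the study's numbers.

```python
# Minimal fuzzy comprehensive evaluation: B = W o R with a weighted-average operator.
import numpy as np

weights = np.array([0.4, 0.35, 0.25])   # illustrative indicator weights
R = np.array([                           # membership matrix: rows = indicators, cols = grades
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.2, 0.5, 0.3],
    [0.3, 0.4, 0.2, 0.1],
])

B = weights @ R                          # aggregated grade memberships
B /= B.sum()                             # normalise
grade = ["low", "moderate", "high", "very high"][int(B.argmax())]
print(B, "->", grade)
```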

  17. Evaluation of the performance and limitations of empirical partition-relations and process based multisurface models to predict trace element solubility in soils

    Energy Technology Data Exchange (ETDEWEB)

    Groenenberg, J.E.; Bonten, L.T.C. [Alterra, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands); Dijkstra, J.J. [Energy research Centre of the Netherlands ECN, P.O. Box 1, 1755 ZG Petten (Netherlands); De Vries, W. [Department of Environmental Systems Analysis, Wageningen University, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands); Comans, R.N.J. [Department of Soil Quality, Wageningen University, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2012-07-15

    Here we evaluate the performance and limitations of two frequently used model-types to predict trace element solubility in soils: regression based 'partition-relations' and thermodynamically based 'multisurface models', for a large set of elements. For this purpose partition-relations were derived for As, Ba, Cd, Co, Cr, Cu, Mo, Ni, Pb, Sb, Se, V, Zn. The multi-surface model included aqueous speciation, mineral equilibria, sorption to organic matter, Fe/Al-(hydr)oxides and clay. Both approaches were evaluated by their application to independent data for a wide variety of conditions. We conclude that Freundlich-based partition-relations are robust predictors for most cations and can be used for independent soils, but within the environmental conditions of the data used for their derivation. The multisurface model is shown to be able to successfully predict solution concentrations over a wide range of conditions. Predicted trends for oxy-anions agree well for both approaches but with larger (random) deviations than for cations.
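
    A regression-based partition-relation of the extended Freundlich type can be sketched as a log-linear fit of dissolved concentration on reactive content and soil properties; the variables, data and functional form below are generic illustrations and not the relations actually derived in the study.

```python
# Generic log-linear partition-relation sketch fitted by ordinary least squares;
# predictors and response values are invented placeholders.
import numpy as np

# columns: log10(reactive metal content), log10(organic matter %), pH
X = np.array([
    [1.2, 0.8, 5.5],
    [0.9, 1.1, 6.2],
    [1.5, 0.6, 4.8],
    [1.1, 0.9, 7.0],
    [1.3, 1.0, 5.9],
])
y = np.array([-5.1, -6.0, -4.4, -6.6, -5.5])     # log10(dissolved concentration)

A = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares regression coefficients
print("intercept and slopes:", coef)
print("fitted values:", A @ coef)
```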

  18. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies makes electroencephalogram (EEG) signals, eye movements, and visualization logs available in user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, which was based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
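
    As a rough illustration of a regularized regression fusing several sensing streams, the sketch below fits a ridge model to synthetic EEG, eye-movement and log features; the feature set, data and choice of ridge penalty are assumptions, not the study's fitted model.

```python
# Ridge regression as a stand-in for a regularized data-fusion model predicting task
# complexity from multiple sensing features (synthetic data for illustration only).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))   # e.g. EEG band power, fixation count, dwell time, clicks, ...
y = X @ np.array([0.8, 0.0, 0.5, 0.3, 0.0, 0.2]) + rng.normal(scale=0.3, size=60)

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
model.fit(X, y)
print("CV-selected alpha:", model[-1].alpha_)
print("coefficients:", model[-1].coef_)
```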

  19. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data

    KAUST Repository

    McCabe, Matthew; Ershadi, Ali; Jimenez, C.; Miralles, Diego G.; Michel, D.; Wood, E. F.

    2016-01-01

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance.

    Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m−2; 0.65), followed

  20. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data

    KAUST Repository

    McCabe, Matthew

    2016-01-26

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance.

    Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m−2; 0.65), followed

  1. Graph configuration model based evaluation of the education-occupation match.

    Science.gov (United States)

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

    To study education-occupation matchings we developed a bipartite network model of the education-to-work transition and a graph configuration model-based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education.
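
    A configuration-model style comparison for a bipartite education-to-occupation graph can be sketched by contrasting observed transition counts with the count expected from degrees alone (d_i d_j / m); the edge list below is invented for illustration.

```python
# Configuration-model style match score: observed program-to-occupation transitions
# versus the count expected from node degrees alone (toy edge list).
from collections import Counter

edges = [("econ", "analyst"), ("econ", "analyst"), ("econ", "teacher"),
         ("cs", "developer"), ("cs", "analyst"), ("cs", "developer")]

obs = Counter(edges)
deg_prog = Counter(p for p, _ in edges)    # degrees of education programs
deg_occ = Counter(o for _, o in edges)     # degrees of occupations
m = len(edges)                             # total number of transitions

for (p, o), n_obs in obs.items():
    expected = deg_prog[p] * deg_occ[o] / m               # configuration-model expectation
    print(f"{p} -> {o}: observed {n_obs}, expected {expected:.2f}, ratio {n_obs/expected:.2f}")
```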

  2. The IIR evaluation model

    DEFF Research Database (Denmark)

    Borlund, Pia

    2003-01-01

    An alternative approach to evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation ...

  3. Implementation of a Web-Based Organ Donation Educational Intervention: Development and Use of a Refined Process Evaluation Model.

    Science.gov (United States)

    Redmond, Nakeva; Harker, Laura; Bamps, Yvan; Flemming, Shauna St Clair; Perryman, Jennie P; Thompson, Nancy J; Patzer, Rachel E; Williams, Nancy S DeSousa; Arriola, Kimberly R Jacob

    2017-11-30

    The lack of available organs is often considered to be the single greatest problem in transplantation today. Internet use is at an all-time high, creating an opportunity to increase public commitment to organ donation through the broad reach of Web-based behavioral interventions. Implementing Internet interventions, however, presents challenges including preventing fraudulent respondents and ensuring intervention uptake. Although Web-based organ donation interventions have increased in recent years, process evaluation models appropriate for Web-based interventions are lacking. The aim of this study was to describe a refined process evaluation model adapted for Web-based settings and used to assess the implementation of a Web-based intervention aimed to increase organ donation among African Americans. We used a randomized pretest-posttest control design to assess the effectiveness of the intervention website that addressed barriers to organ donation through corresponding videos. Eligible participants were African American adult residents of Georgia who were not registered on the state donor registry. Drawing from previously developed process evaluation constructs, we adapted reach (the extent to which individuals were found eligible, and participated in the study), recruitment (online recruitment mechanism), dose received (intervention uptake), and context (how the Web-based setting influenced study implementation) for Internet settings and used the adapted model to assess the implementation of our Web-based intervention. With regard to reach, 1415 individuals completed the eligibility screener; 948 (67.00%) were determined eligible, of whom 918 (96.8%) completed the study. After eliminating duplicate entries (n=17), those who did not initiate the posttest (n=21) and those with an invalid ZIP code (n=108), 772 valid entries remained. Per the Internet protocol (IP) address analysis, only 23 of the 772 valid entries (3.0%) were within Georgia, and only 17 of those

  4. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    International Nuclear Information System (INIS)

    Shapiro, Carl R.; Meneveau, Charles; Gayme, Dennice F.; Meyers, Johan

    2016-01-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time. (paper)

  5. Re-evaluation of model-based light-scattering spectroscopy for tissue spectroscopy

    Science.gov (United States)

    Lau, Condon; Šćepanović, Obrad; Mirkovic, Jelena; McGee, Sasha; Yu, Chung-Chieh; Fulghum, Stephen; Wallace, Michael; Tunnell, James; Bechtel, Kate; Feld, Michael

    2009-01-01

    Model-based light scattering spectroscopy (LSS) seemed a promising technique for in-vivo diagnosis of dysplasia in multiple organs. In the studies, the residual spectrum, the difference between the observed and modeled diffuse reflectance spectra, was attributed to single elastic light scattering from epithelial nuclei, and diagnostic information due to nuclear changes was extracted from it. We show that this picture is incorrect. The actual single scattering signal arising from epithelial nuclei is much smaller than the previously computed residual spectrum, and does not have the wavelength dependence characteristic of Mie scattering. Rather, the residual spectrum largely arises from assuming a uniform hemoglobin distribution. In fact, hemoglobin is packaged in blood vessels, which alters the reflectance. When we include vessel packaging, which accounts for an inhomogeneous hemoglobin distribution, in the diffuse reflectance model, the reflectance is modeled more accurately, greatly reducing the amplitude of the residual spectrum. These findings are verified via numerical estimates based on light propagation and Mie theory, tissue phantom experiments, and analysis of published data measured from Barrett’s esophagus. In future studies, vessel packaging should be included in the model of diffuse reflectance and use of model-based LSS should be discontinued. PMID:19405760

  6. Educational Program Evaluation Model, From the Perspective of the New Theories

    Directory of Open Access Journals (Sweden)

    Soleiman Ahmady

    2014-05-01

    Full Text Available Introduction: This study focuses on common theories that influenced the history of program evaluation and introduces an educational program evaluation proposal format based on updated theory. Methods: Literature searches were carried out in March-December 2010 with a combination of key words, MeSH terms and other free text terms as suitable for the purpose. A comprehensive search strategy was developed to search Medline via the PubMed interface, ERIC (Education Resources Information Center), and the main journal of medical education regarding current evaluation models and theories. All study designs were included. We found 810 articles related to our topic, of which 63 full-text articles were finally included. We compared documents and used expert consensus for selecting the best model. Results: We found that complexity theory, using the logic model, suggests compatible evaluation proposal formats, especially for new medical education programs. Common components of a logic model are situation, inputs, outputs, and outcomes, on which our proposal format is based. Its contents are: title page, cover letter, situation and background, introduction and rationale, project description, evaluation design, evaluation methodology, reporting, program evaluation management, timeline, evaluation budget based on the best evidence, and supporting documents. Conclusion: We found that the logic model is used for evaluation program planning in many places, but more research is needed to see if it is suitable for our context.

  7. Module-based quality system functionality evaluation in production logistics

    Energy Technology Data Exchange (ETDEWEB)

    Khabbazi, M.R.; Wikander, J.; Onori, M.; Maffei, A.; Chen, D.

    2016-07-01

    This paper addresses a comprehensive modeling and functionality evaluation of a module-based quality system in production logistics at the highest domain abstraction level of business processes. All domain quality business processes and quality data transactions are modeled using BPMN and UML tools and standards at the business process and data modeling levels. A modular web-based prototype is developed to evaluate the models, addressing the quality information system functionality requirements and modularity in production logistics through data scenarios and data queries. Using the object-oriented technique in design at the highest domain level, the proposed models are subject to further development at lower levels for the implementing case. The models are specifically able to manipulate all quality operations, including remedy and control, in a lot-based make-to-order production logistics system as an individual module. Because the system is specified as a domain design structure, all proposed BPMs, data models, and the actual database prototype can be seen as a reference, if not a solution, for a practical “to-be” quality business process re-engineering template. This paper sets out to provide an explanatory approach using different practical techniques at the modeling steps as well as in the prototype implementation. (Author)

  8. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally-gridded forcing data

    KAUST Repository

    McCabe, Matthew

    2015-08-24

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the GEWEX LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman-Monteith based Mu model (PM-Mu) and the Global Land Evaporation: the Amsterdam Methodology (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance.

    Using surface flux observations from forty-five globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m−2; 0.65), followed closely by GLEAM (0.68; 64 W m

  9. Modeling and evaluation of location-based forwarding in vehicular networks

    NARCIS (Netherlands)

    Heijenk, Geert; Klein Wolterink, W.; van den Berg, Hans Leo; Karagiannis, Georgios; Chen, Wai

    2015-01-01

    Location-based forwarding plays an important role in vehicular networks to disseminate messages in a certain region beyond the immediate transmission range of the originator. In this chapter, we introduce an analytical performance model that captures the behaviour of location-based forwarding in

  10. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Full Text Available Energy saving is a crucial concern in embedded real time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluate RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithms evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics including utilization bound, energy efficiency, battery awareness, and temperature awareness are expressed as statistical queries. Evaluation of these metrics is performed by verifying the corresponding queries using UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.
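
    The core idea behind statistical model checking, estimating the probability that a simulated run satisfies a property and attaching a confidence interval, can be sketched independently of UPPAAL-SMC as follows; the toy "system" and deadline property here are placeholders, not the stochastic timed automata used in the paper.

```python
# Monte Carlo estimate of P(property) with a normal-approximation confidence interval,
# the statistical core of statistical model checking (toy system, illustrative only).
import math
import random

def simulate_run():
    """Toy stochastic run: True if a hypothetical task meets its deadline."""
    execution_time = random.gauss(8.0, 1.5)   # placeholder execution-time distribution
    return execution_time <= 10.0             # placeholder deadline property

N = 5000
hits = sum(simulate_run() for _ in range(N))
p_hat = hits / N
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)   # ~95% confidence half-width
print(f"P(property) ~ {p_hat:.3f} +/- {half_width:.3f}")
```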

  11. Gold-standard evaluation of a folksonomy-based ontology learning model

    Science.gov (United States)

    Djuana, E.

    2018-03-01

    Folksonomy, as one result of the collaborative tagging process, has been acknowledged for its potential in improving categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as a generality problem arising from different levels of abstraction. To maximize its potential, some methods for associating folksonomy tags with semantics and structural relationships have been proposed, such as ontology learning methods. This paper evaluates our previous work in ontology learning according to a gold-standard evaluation approach, in comparison with a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, previously validated using a task-based evaluation approach.

  12. Channel Models for Capacity Evaluation of MIMO Handsets in Data Mode

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Yanakiev, Boyan; Barrio, Samantha Caporal Del

    2017-01-01

    This work investigates different correlation-based models useful for evaluation of outage capacity (OC) of mobile multiple-input multiple-output (MIMO) handsets. The work is based on a large measurement campaign in a micro-cellular setup involving two dual-band base stations and 10 different handsets in an indoor environment for different use cases and test users. Several models are evaluated statistically, comparing the OC values estimated from the model and measurement data, respectively, for about 2,700 measurement routes. The models are based on either estimates of the full correlation matrices or simplifications. Among other results, it is shown that the OC can be predicted accurately (median error typically within 2.6%) with a model assuming knowledge only of the Tx-correlation coefficient and the mean power gain.
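
    A correlation-based MIMO channel model of the Kronecker form can be used to estimate outage capacity by Monte Carlo simulation, as in the hedged sketch below; the correlation matrices, SNR and outage level are assumed values, not those estimated from the measurement campaign.

```python
# Outage capacity from a Kronecker correlation model H = Rr^(1/2) Hw Rt^(1/2);
# all correlation values and the SNR are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
nt = nr = 2
snr = 10.0                                     # linear SNR
Rt = np.array([[1.0, 0.5], [0.5, 1.0]])        # assumed Tx correlation matrix
Rr = np.array([[1.0, 0.3], [0.3, 1.0]])        # assumed Rx correlation matrix
Lt, Lr = np.linalg.cholesky(Rt), np.linalg.cholesky(Rr)

caps = []
for _ in range(10000):
    Hw = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    H = Lr @ Hw @ Lt.conj().T                  # correlated channel realization
    C = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
    caps.append(C)

print("1% outage capacity [bit/s/Hz]:", np.percentile(caps, 1))
```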

  13. Cellular Automata Based Modeling for Evaluating Different Bus Stop Designs in China

    Directory of Open Access Journals (Sweden)

    Haoyang Ding

    2015-01-01

    Full Text Available A cellular automaton model is proposed to simulate mixed traffic flow composed of motor vehicles and bicycles near bus stops. Three typical types of bus stops that are common in China are considered in the model, including two types of curbside bus stops and one type of bus bay stop. The passenger transport capacity of the three types of bus stops, which is used to evaluate bus stop design, is calculated based on the corresponding traffic flow rate. According to the simulation results, the flow rates of both motor vehicles and bicycles exhibit a phase transition from free flow to saturated flow at the critical point. The results also show that the larger the interaction between motor vehicle and bicycle flows near curbside bus stops, the more the saturated flow values drop. Curbside bus stops are more suitable when the conflicts between the two flows are small and the inflow rate of motor vehicles is low. On the contrary, bus bay stops should be applied due to their ability to reduce traffic conflicts. Findings of this study can provide useful suggestions on bus stop selection considering different inflow rates of motor vehicles and bicycles simultaneously.
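
    For readers unfamiliar with traffic cellular automata, the sketch below implements the classic single-lane Nagel-Schreckenberg update rules (accelerate, brake, randomize, move) that mixed-traffic CA models of this kind build on; it does not reproduce the paper's bicycle and bus-stop interactions.

```python
# Single-lane Nagel-Schreckenberg cellular automaton on a ring of L cells (illustrative).
import random

L, N, VMAX, P_SLOW, STEPS = 100, 20, 5, 0.3, 50
speeds = {x: 0 for x in random.sample(range(L), N)}   # position -> speed

for _ in range(STEPS):
    occupied = sorted(speeds)
    new_speeds = {}
    for i, x in enumerate(occupied):
        gap = (occupied[(i + 1) % N] - x - 1) % L      # free cells to the vehicle ahead
        v = min(speeds[x] + 1, VMAX)                   # 1) accelerate
        v = min(v, gap)                                # 2) brake to avoid collision
        if v > 0 and random.random() < P_SLOW:         # 3) random slowdown
            v -= 1
        new_speeds[(x + v) % L] = v                    # 4) move (parallel update)
    speeds = new_speeds

print("density:", N / L, "approx. flow:", sum(speeds.values()) / L)
```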

  14. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
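
    A minimal Markov-reward sketch of the modelling idea: a small continuous-time Markov chain over normal, error and recovery states is solved for its steady state and an expected reward; the rates and rewards are invented, and the abstract's point is precisely that a semi-Markov process may be needed when holding times are not exponential.

```python
# Steady-state and expected reward of a toy CTMC over normal/error/recovery states.
import numpy as np

# Generator matrix Q (rows sum to zero); rates are illustrative placeholders.
Q = np.array([
    [-0.02,  0.02,  0.00],    # normal   -> error
    [ 0.00, -1.00,  1.00],    # error    -> recovery
    [ 0.90,  0.00, -0.90],    # recovery -> normal
])
reward = np.array([1.0, 0.0, 0.3])   # relative useful work delivered in each state

# Solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state probabilities:", pi)
print("expected steady-state reward:", pi @ reward)
```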

  15. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  16. Does the Model of Evaluation Based on Fair Value Answer the Requests of Financial Information Users?

    OpenAIRE

    Mitea Neluta; Sarac Aldea Laura

    2010-01-01

    Does the model of evaluation based on fair value answer the requests of financial information users? The financial statements have as their purpose the presentation of information concerning the enterprise's financial position and the performance and modifications of this position, which, according to the IASB and FASB, must be credible and useful. Both referentials maintain the existence of several conventions regarding measurement, like historical cost, current cost, the realizable value or act...

  17. Estimation of Transmittance of Solar Radiation in the Visible Domain Based on Remote Sensing: Evaluation of Models Using In Situ Data

    Science.gov (United States)

    Zoffoli, M. Laura; Lee, Zhongping; Ondrusek, Michael; Lin, Junfang; Kovach, Charles; Wei, Jianwei; Lewis, Marlon

    2017-11-01

    The transmittance of solar radiation in the oceanic water column plays an important role in heat transfer and photosynthesis, with implications for the global carbon cycle, global circulation, and climate. Globally, the transmittance of solar radiation in the visible domain (˜400-700 nm) (TRVIS) through the water column, which determines the vertical distribution of visible light, has to be based on remote sensing products. There are models centered on chlorophyll-a (Chl) concentration or Inherent Optical Properties (IOPs) as both can be derived from ocean color measurements. We present evaluations of both schemes with field data from clear oceanic and from coastal waters. Here five models were evaluated: (1) Morel and Antoine (1994) (MA94), (2) Ohlmann and Siegel (2000) (OS00), (3) Murtugudde et al. (2002) (MU02), (4) Manizza et al. (2005) (MA05), and (5) Lee et al. ([Lee, Z., 2005]) (IOPs05), where the first four are Chl-based and the last one is IOPs-based, with all inputs derived from remote sensing reflectance. It is found that the best performing model is the IOPs05, with Unbiased Absolute Percent Difference (UAPD) ˜23%, while Chl-based models show higher uncertainties (UAPD for MA94: ˜54%, OS00: ˜133%, MU02: ˜56%, and MA05: ˜39%). The IOPs-based model was insensitive to the type of water, allowing it to be applied in most marine environments; whereas some of the Chl-based models (MU02 and MA05) show much higher sensitivities in coastal turbid waters (higher Chl waters). These results highlight the applicability of using IOPs products for such applications.

  18. Evaluation of kriging based surrogate models constructed from mesoscale computations of shock interaction with particles

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Oishik, E-mail: oishik-sen@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com [RAMDO Solutions, LLC, Iowa City, IA 52240 (United States); Choi, K.K., E-mail: kyung-choi@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Jacobs, Gustaaf, E-mail: gjacobs@sdsu.edu [Aerospace Engineering, San Diego State University, San Diego, CA 92115 (United States); Udaykumar, H.S., E-mail: hs-kumar@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2017-05-01

    Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG) are evaluated for their ability to construct surrogate models with sparse data; i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.
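
    The record does not reproduce the DKG or MBKG formulations, but the general idea of a kriging surrogate (a Gaussian-process fit of a drag quantity versus Mach number and particle volume fraction, with a noise term to absorb scatter in the mesoscale data) can be sketched with scikit-learn. All training points and values below are illustrative assumptions, not results from the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical design points: (Mach number, particle volume fraction) -> drag coefficient
        X = np.array([[1.5, 0.05], [1.5, 0.20], [2.5, 0.05], [2.5, 0.20], [3.5, 0.10]])
        y = np.array([2.1, 3.4, 2.8, 4.6, 3.9])          # invented values

        # The WhiteKernel lets the fit tolerate noisy "numerical experiments",
        # loosely analogous to the robustness goal of the MBKG variant.
        kernel = 1.0 * RBF(length_scale=[1.0, 0.1]) + WhiteKernel(noise_level=1e-2)
        surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        mean, std = surrogate.predict(np.array([[2.0, 0.15]]), return_std=True)
        print(f"predicted drag = {mean[0]:.2f} +/- {std[0]:.2f}")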

  19. A Fuzzy Rule-Based Expert System for Evaluating Intellectual Capital

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Fazel Zarandi

    2012-01-01

    Full Text Available A fuzzy rule-based expert system is developed for evaluating intellectual capital. A fuzzy linguistic approach assists managers to understand and evaluate the level of each intellectual capital item. The proposed fuzzy rule-based expert system applies fuzzy linguistic variables to express the level of qualitative evaluation and criteria of experts. Feasibility of the proposed model is demonstrated by the result of intellectual capital performance evaluation for a sample company.

  20. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e.g., geothermal drilling) a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g., fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic also because of different data rights, data policies and modelling software between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive; hence, geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  1. Validating and Determining the Weight of Items Used for Evaluating Clinical Governance Implementation Based on Analytic Hierarchy Process Model

    Directory of Open Access Journals (Sweden)

    Elaheh Hooshmand

    2015-10-01

    Full Text Available Background The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professional toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. Methods The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. Results The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients’ non-medical needs, complaints and patients’ participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients’ non-medical needs, patients’ participation in the treatment process and research and development. Conclusion The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety.

  2. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    Science.gov (United States)

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professional toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
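
    The ranking step relies on the Analytic Hierarchy Process. The pairwise comparison matrices are not given in the abstract, but the standard AHP weight derivation (principal eigenvector of a reciprocal comparison matrix plus a consistency ratio) can be sketched as follows; the 3x3 matrix is invented purely to illustrate the mechanics for three of the items mentioned.

        import numpy as np

        # Hypothetical pairwise comparisons for three items (training and development,
        # performance evaluation, risk management); values are NOT from the study.
        A = np.array([[1.0, 2.0, 3.0],
                      [1/2., 1.0, 2.0],
                      [1/3., 1/2., 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                 # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                    # normalised priority weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
        print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))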

  3. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We will compare CNN MO to alternative MO currently being proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of the CNN, and other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in medical imaging systems task-performance evaluation, we will evaluate CNN performance on both large and small training data sets.

  4. Biomechanical Analysis and Evaluation Technology Using Human Multi-Body Dynamic Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yoon Hyuk; Shin, June Ho; Khurelbaatar, Tsolmonbaatar [Kyung Hee University, Yongin (Korea, Republic of)

    2011-10-15

    This paper presents biomechanical analysis and evaluation technology for the musculoskeletal system using a multi-body human dynamic model and 3-D motion capture data. First, a medical-image-based geometric model and tissue material properties were used to develop the human dynamic model, and motion analysis techniques based on 3-D motion capture data were developed to quantify the in-vivo joint kinematics, joint moment, joint force, and muscle force. Walking and push-up motions were investigated using the developed model. The present model and technologies would be useful for the biomechanical analysis and evaluation of human activities.

  5. Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological review of health technology assessments

    Directory of Open Access Journals (Sweden)

    Bethany Shinkins

    2017-04-01

    Full Text Available Abstract Background Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: (1) what evidence aside from test accuracy was searched for and synthesised, (2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, (3) how/whether threshold effects were explored, (4) how the potential dependency between multiple tests in a pathway was accounted for, and (5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings

  6. A mathematical modeling framework to evaluate the performance of single diode and double diode based SPV systems

    Directory of Open Access Journals (Sweden)

    Sangram Bana

    2016-11-01

    Full Text Available In order to predict the performance of a PV system, a reliable and accurate simulation design of PV systems before installation is a necessity. The present study concerns the development of single and double diode models of a solar PV system and identifies the best-suited model under specific environmental conditions for accurate performance prediction. The information provided in the manufacturers’ data sheet is not sufficient for developing Simulink-based single and double diode models of a PV module. These parameters are crucial to predict the accurate performance of a PV module. These parameters of the proposed solar PV models have been calculated using an efficient iterative technique. This paper compares the simulation results of both models with the manufacturer’s data sheet to investigate their accuracy and validity. A MATLAB/Simulink-based comparative performance analysis of these models under inconsistent atmospheric conditions and the effect of variations in model parameters has been carried out. Despite their simplicity, these models are highly sensitive and respond to slight variations in temperature and insolation. It is observed that the double diode PV model is more accurate under low-intensity insolation or shading conditions. The performance evaluation of the models under the present study will be helpful to understand the I-V curves, which will enable prediction of solar PV system power production under variable input conditions.
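
    As a rough illustration of the single-diode case referred to above, the module current at a given voltage follows from an implicit diode equation that has to be solved iteratively; a simple Newton iteration is sketched below. All parameter values are placeholders, not the extracted parameters from the paper.

        import numpy as np

        def single_diode_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, a=1.3,
                                 Ns=60, T=298.15):
            """Solve I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh for I
            with Newton iterations; Vt lumps the ideality factor and cell count."""
            k, q = 1.380649e-23, 1.602176634e-19
            Vt = Ns * a * k * T / q
            I = Iph                                   # initial guess
            for _ in range(50):
                f = Iph - I0 * (np.exp((V + I * Rs) / Vt) - 1) - (V + I * Rs) / Rsh - I
                df = -I0 * (Rs / Vt) * np.exp((V + I * Rs) / Vt) - Rs / Rsh - 1
                I -= f / df
            return I

        for V in (0.0, 20.0, 30.0):
            print(f"V = {V:5.1f} V  ->  I = {single_diode_current(V):6.3f} A")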

  7. Multi-master profibus dp modelling and worst case analysis-based evaluation

    OpenAIRE

    Salvatore Monforte; Eduardo Tovar; Francisco Vasques; Salvatore Cavalieri

    2002-01-01

    This paper provides an analysis of the real-time behaviour of the multi-master Profibus DP network. The analysis is based on the evaluation of the worst-case message response time and the results obtained are compared with those present in literature, pointing out its capability to perform a more accurate evaluation of the performance of the Profibus network. Copyright © 2002 IFAC.

  8. Performance Evaluation of Sadoghi Hospital Based on «EFQM» Organizational Excellence Model

    Directory of Open Access Journals (Sweden)

    A Sanayeei

    2013-04-01

    Full Text Available Introduction: The health care environment that organizations have faced in recent years is characterized by a high level of dynamism and development. To survive in such conditions, performance evaluation can play an effective role in ensuring proper service quality. This study aimed to evaluate the performance of Shahid Sadoghi Yazd hospital through the EFQM approach. Methods: This was a descriptive cross-sectional study. The data collection instrument was the EFQM Organizational Excellence Model questionnaire, which was completed by all the managers. The research data were gathered from a stratified random sample of 302 patients, administrative personnel, and medical staff working in different parts of the hospital, and descriptive statistics were used to analyze the data. Results: Shahid Sadoughi hospital acquired 185.41 points out of the total 500 points considered in the EFQM model. In other words, the rating reflects the fact that the hospital has not reached the defined desired position. Conclusion: Since the hospital's performance lies in the low-to-middle range, much more attention to therapeutic management is required in this hospital. Therefore, developing an efficient and effective program to improve hospital performance is necessary. Furthermore, the EFQM model can be considered a comprehensive model for performance evaluation in hospitals.

  9. Evaluation of two ozone air quality modelling systems

    Directory of Open Access Journals (Sweden)

    S. Ortega

    2004-01-01

    Full Text Available The aim of this paper is to compare two different modelling systems and to evaluate their ability to simulate high values of ozone concentration in typical summer episodes which take place in the north of Spain near the metropolitan area of Barcelona. As the focus of the paper is the comparison of the two systems, we do not attempt to improve the agreement by adjusting the emission inventory or model parameters. The first model, or forecasting system, is made up of three modules. The first module is a mesoscale model (MASS). This provides the initial condition for the second module, which is a nonlocal boundary layer model based on the transilient turbulence scheme. The third module is a photochemical box model (OZIPR), which is applied in Eulerian and Lagrangian modes and receives suitable information from the two previous modules. The model forecast is evaluated against ground-based stations during summer 2001. The second model is the MM5/UAM-V. This is a grid model designed to predict the hourly three-dimensional ozone concentration fields. The model is applied during an ozone episode that occurred between 21 and 23 June 2001. Our results reflect the good performance of the two modelling systems when they are used in a specific episode.

  10. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva’s model, a radial basis function neural networks (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  11. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hardware

  12. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
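
    The paper's exact queuing formulation is not reproduced in the record, but the flavour of an Erlang-type layer model can be illustrated with the standard Erlang-C (M/M/c) expressions for waiting probability and mean delay under different resource allocations; the arrival rate, service rate and server counts below are made-up numbers.

        import math

        def erlang_c(c, lam, mu):
            """M/M/c queue: probability of waiting, mean sojourn time, utilisation.
            c = service-desk resources, lam = arrival rate, mu = service rate."""
            a = lam / mu                                   # offered load (Erlangs)
            rho = a / c                                    # utilisation (must be < 1)
            tail = a**c / (math.factorial(c) * (1 - rho))
            p_wait = tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)
            w = p_wait / (c * mu - lam) + 1 / mu           # mean time in system
            return p_wait, w, rho

        # Illustrative resource-allocation comparison:
        for servers in (2, 3, 4):
            p, w, rho = erlang_c(servers, lam=180.0, mu=100.0)   # packets per second
            print(f"c={servers}: utilisation={rho:.2f}, P(wait)={p:.3f}, mean delay={w*1000:.2f} ms")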

  13. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  14. Reliability evaluation of microgrid considering incentive-based demand response

    Science.gov (United States)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer’s comprehensive assessment and the customer response model are developed. Thirdly, the reliability evaluation method considering IBDR based on Monte Carlo simulation is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrated that IBDR can improve the reliability of the microgrid.
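
    The record does not detail the simulation itself; as a generic illustration of the Monte Carlo step (not the paper's model), a crude sequential two-state simulation estimating expected energy not supplied for a feeder backed by a distributed generator might look like the sketch below, with all rates and capacities invented.

        import random

        def mc_eens(failure_rate=4.0, repair_rate=87.6, load_mw=5.0,
                    dg_capacity_mw=3.0, years=2000, seed=1):
            """Sequential Monte Carlo: one supply component fails and is repaired
            (exponential up/down times, rates per year); during outages a DG covers
            part of the load. Returns expected energy not supplied in MWh/year."""
            random.seed(seed)
            eens = 0.0
            for _ in range(years):
                t = 0.0
                while t < 1.0:                                  # one simulated year
                    t += random.expovariate(failure_rate)       # time to next failure
                    if t >= 1.0:
                        break
                    outage = random.expovariate(repair_rate)    # repair duration (years)
                    shortfall = max(load_mw - dg_capacity_mw, 0.0)
                    eens += shortfall * min(outage, 1.0 - t) * 8760.0
                    t += outage
            return eens / years

        print(f"EENS = {mc_eens():.1f} MWh/year (illustrative)")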

  15. Evaluation of a task-based community oriented teaching model in family medicine for undergraduate medical students in Iraq

    Directory of Open Access Journals (Sweden)

    Al-Taee Waleed G

    2005-08-01

    Full Text Available Abstract Background The inclusion of family medicine in medical school curricula is essential for producing competent general practitioners. The aim of this study is to evaluate a task-based, community oriented teaching model of family medicine for undergraduate students in Iraqi medical schools. Methods An innovative training model in family medicine was developed based upon tasks regularly performed by family physicians providing health care services at the Primary Health Care Centre (PHCC) in Mosul, Iraq. Participants were medical students enrolled in their final clinical year. Students were assigned to one of two groups. The implementation group (28 students) was exposed to the experimental model and the control group (56 students) received the standard teaching curriculum. The study took place at the Mosul College of Medicine and at the Al-Hadba PHCC in Mosul, Iraq, during the academic year 1999–2000. Pre- and post-exposure evaluations comparing the intervention group with the control group were conducted using a variety of assessment tools. Results The primary endpoints were improvement in knowledge of family medicine and development of essential performance skills. Results showed that the implementation group experienced a significant increase in knowledge and performance skills after exposure to the model and in comparison with the control group. Assessment of the model by participating students revealed a high degree of satisfaction with the planning, organization, and implementation of the intervention activities. Students also highly rated the relevancy of the intervention for future work. Conclusion A model on PHCC training in family medicine is essential for all Iraqi medical schools. The model is to be implemented by various relevant departments until Departments of Family medicine are established.

  16. A SIL quantification approach based on an operating situation model for safety evaluation in complex guided transportation systems

    International Nuclear Information System (INIS)

    Beugin, J.; Renaux, D.; Cauffriez, L.

    2007-01-01

    Safety analysis in guided transportation systems is essential to avoid rare but potentially catastrophic accidents. This article presents a quantitative probabilistic model that integrates Safety Integrity Levels (SIL) for evaluating the safety of such systems. The standardized SIL indicator allows the safety requirements of each safety subsystem, function and/or piece of equipment to be specified, making SILs pivotal parameters in safety evaluation. However, different interpretations of SIL exist, and faced with the complexity of guided transportation systems, the current SIL allocation methods are inadequate for the task of safety assessment. To remedy these problems, the model developed in this paper seeks to verify, during the design phase of a guided transportation system, whether or not the safety specifications established by the transport authorities allow the overall safety target to be attained (i.e., if the SILs allocated to the different safety functions are sufficient to ensure the required level of safety). To meet this objective, the model is based both on the operating situation concept and on Monte Carlo simulation. The former allows safety systems to be formalized and their dynamics to be analyzed in order to show the evolution of the system in time and space, and the latter makes it possible to perform probabilistic calculations based on the scenario structure obtained

  17. Evaluation of alternative age-based methods for estimating relative abundance from survey data in relation to assessment models

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper

    2014-01-01

    Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area-stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model … the different indices produced. The stratified mean method is found much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact

  18. An Integrated Model Based on a Hierarchical Indices System for Monitoring and Evaluating Urban Sustainability

    Directory of Open Access Journals (Sweden)

    Xulin Guo

    2013-02-01

    Full Text Available Over 50% of the world’s population presently resides in cities, and this number is expected to rise to ~70% by 2050. Increasing urbanization problems, including population growth, urban sprawl, land use change, unemployment, and environmental degradation, have markedly impacted urban residents’ Quality of Life (QOL). Therefore, urban sustainability and its measurement have gained increasing attention from administrators, urban planners, and scientific communities throughout the world with respect to improving urban development and human well-being. The widely accepted definition of urban sustainability emphasizes the balanced development of three primary domains (urban economy, society, and environment). This article attempts to improve the aforementioned definition of urban sustainability by incorporating a human well-being dimension. Major problems identified in existing urban sustainability indicator (USI) models include a weak integration of potential indicators, poor measurement and quantification, and insufficient spatial-temporal analysis. To tackle these challenges an integrated USI model based on a hierarchical indices system was established for monitoring and evaluating urban sustainability. This model can be performed by quantifying indicators using both traditional statistical approaches and advanced geomatic techniques based on satellite imagery and census data, which aims to provide a theoretical basis for a comprehensive assessment of urban sustainability from a spatial-temporal perspective.

  19. An Evaluation of PET Based on Longitudinal Data.

    Science.gov (United States)

    Mandeville, Garrett K.

    Although teacher inservice programs based on Madeline Hunter's Program for Effective Teaching (PET) have become very popular in U.S. schools, there is little evidence that the Hunter model ultimately results in increased student achievement. This longitudinal study attempts to evaluate the effects of Hunter-based staff development programs on…

  20. A systematic approach to the modelling of measurements for uncertainty evaluation

    International Nuclear Information System (INIS)

    Sommer, K D; Weckenmann, A; Siebert, B R L

    2005-01-01

    The evaluation of measurement uncertainty is based both on knowledge about the measuring process and on the quantities which influence the measurement result. The knowledge about the measuring process is represented by the model equation, which expresses the interrelation between the measurand and the input quantities. Therefore, the modelling of the measurement is a key element of modern uncertainty evaluation. A modelling concept has been developed that is based on the idea of the measuring chain. It requires only a few generic model structures. From this concept, a practical stepwise procedure has been derived
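
    A compact way to evaluate the uncertainty implied by such a model equation is Monte Carlo propagation of the input distributions through it (in the spirit of GUM Supplement 1). The model equation below, a resistance derived from voltage, current and a temperature correction, is only an illustrative stand-in, and the assigned uncertainties are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Input quantities with assumed distributions (illustrative values):
        V = rng.normal(5.000, 0.002, N)          # voltage [V], normal
        I = rng.normal(0.01999, 0.00001, N)      # current [A], normal
        dT = rng.uniform(-1.0, 1.0, N)           # temperature deviation [K], rectangular
        alpha = 3.9e-3                           # temperature coefficient [1/K]

        # Model equation of the measuring chain: R = V / I * (1 + alpha * dT)
        R = V / I * (1 + alpha * dT)

        print(f"R = {R.mean():.3f} ohm, u(R) = {R.std(ddof=1):.3f} ohm")
        print("95% coverage interval:", np.percentile(R, [2.5, 97.5]).round(3))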

  1. Models based on multichannel R-matrix theory for evaluating light element reactions

    International Nuclear Information System (INIS)

    Dodder, D.C.; Hale, G.M.; Nisley, R.A.; Witte, K.; Young, P.G.

    1975-01-01

    Multichannel R-matrix theory has been used as a basis for models for analysis and evaluation of light nuclear systems. These models have the characteristic that data predictions can be made utilizing information derived from other reactions related to the one of primary interest. Several examples are given where such an approach is valid and appropriate. (auth.)

  2. Landslide Susceptibility Evaluation on agricultural terraces of DOURO VALLEY (PORTUGAL), using physically based mathematical models.

    Science.gov (United States)

    Faria, Ana; Bateira, Carlos; Laura, Soares; Fernandes, Joana; Gonçalves, José; Marques, Fernando

    2016-04-01

    The work focuses on the evaluation of landslide susceptibility on agricultural terraces of the Douro Region, supported by dry stone walls and earth embankments, using two physically based models. The applied models, SHALSTAB (Montgomery et al., 1994; Dietrich et al., 1995) and SINMAP (Pack et al., 2005), combine an infinite slope stability model with a steady state hydrological model, and both use the following geophysical parameters: cohesion, friction angle, specific weight and soil thickness. The definition of the contributing areas is different in both models. The D∞ methodology used by the SINMAP model suggests a great influence of the terrace morphology, providing a much more diffuse flow in the internal flow modelling. The MD8 method used in SHALSTAB promotes an important degree of flow concentration, representing internal flow along preferential runoff paths as the areas more susceptible to saturation processes. Model validation is carried out with the contingency matrix method (Fawcett, 2006; Raia et al., 2014), which compares the predictions with the inventory of past landslides. The True Positive Rate shows that SHALSTAB classifies 77% of the landslides in the high susceptibility areas, while SINMAP reaches 90%. SINMAP has a False Positive Rate (the percentage of area classified as unstable but without landslides) of 83% and SHALSTAB has 67%. The reliability (the proportion of the total area that is correctly classified) of SHALSTAB is better (33% against 18% for SINMAP). Regarding Precision (the ratio of the slipped area correctly classified over the whole area classified as unstable), SHALSTAB has better results (0.00298 against 0.00283 for SINMAP). The TPR/FPR index was also computed, with better results obtained by SHALSTAB (1.14 against 1.09 for SINMAP). SHALSTAB shows better performance in delineating the areas most prone to instability processes. One of the reasons for the difference of
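
    The validation figures quoted (TPR, FPR, reliability, precision and the TPR/FPR ratio) all follow from a pixel-level contingency matrix; the helper below shows how they are derived. The cell counts are invented, chosen only to roughly echo the rates mentioned in the abstract, and are not the study's actual pixel counts.

        def contingency_metrics(tp, fp, fn, tn):
            """Validation indices for a susceptibility map from a pixel contingency matrix."""
            tpr = tp / (tp + fn)                         # true positive rate
            fpr = fp / (fp + tn)                         # false positive rate
            accuracy = (tp + tn) / (tp + fp + fn + tn)   # "reliability" in the abstract's sense
            precision = tp / (tp + fp)
            return tpr, fpr, accuracy, precision, tpr / fpr

        # Invented pixel counts (tp, fp, fn, tn), one row per model:
        for name, cells in {"SHALSTAB": (77, 6700, 23, 3300),
                            "SINMAP": (90, 8300, 10, 1700)}.items():
            tpr, fpr, acc, prec, ratio = contingency_metrics(*cells)
            print(f"{name}: TPR={tpr:.2f} FPR={fpr:.2f} acc={acc:.2f} "
                  f"precision={prec:.4f} TPR/FPR={ratio:.2f}")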

  3. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that provides the most reliable peak detection performance in a particular application. A fair measure of performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52 %. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model.

  4. Topological Vulnerability Evaluation Model Based on Fractal Dimension of Complex Networks.

    Directory of Open Access Journals (Sweden)

    Li Gou

    Full Text Available With an increasing emphasis on network security, much more attention has been drawn to the vulnerability of complex networks. In this paper, the fractal dimension, which can reflect the space-filling capacity of networks, is redefined as the origin moment of the edge betweenness to obtain a more reasonable evaluation of vulnerability. The proposed model combining multiple evaluation indexes not only overcomes the shortcoming that average edge betweenness fails to evaluate the vulnerability of some special networks, but also characterizes the topological structure and highlights the space-filling capacity of networks. The applications to six US airline networks illustrate the practicality and effectiveness of our proposed method, and the comparisons with three other commonly used methods further validate the superiority of our proposed method.
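
    The abstract does not state which origin moment of the edge betweenness distribution is used, but computing edge betweenness and its k-th origin moment for a network is straightforward with networkx; the toy graphs and the moment order k = 2 below are assumptions for illustration only.

        import networkx as nx
        import numpy as np

        def edge_betweenness_moment(G, k=2):
            """k-th origin moment of the normalised edge betweenness values,
            used here as a stand-in for the paper's redefined fractal-dimension index."""
            eb = np.array(list(nx.edge_betweenness_centrality(G, normalized=True).values()))
            return float(np.mean(eb ** k))

        # Two toy "airline" topologies for comparison (illustrative only):
        networks = {"hub-and-spoke": nx.star_graph(9), "ring": nx.cycle_graph(10)}
        for name, G in networks.items():
            print(f"{name}: moment = {edge_betweenness_moment(G):.4f}")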

  5. Evaluation and hydrological modelization in the natural hazard prevention

    International Nuclear Information System (INIS)

    Pla Sentis, Ildefonso

    2011-01-01

    Soil degradation negatively affects the soil's functions as a base for producing food, regulating the hydrological cycle and maintaining environmental quality. All over the world soil degradation is increasing, partly due to gaps or deficiencies in the evaluation of the processes and causes of this degradation in each specific situation. The processes of soil physical degradation are manifested through several problems such as compaction, runoff, water and wind erosion, and landslides, with collateral effects both in situ and at a distance, often with disastrous consequences such as floods, landslides, sedimentation, droughts, etc. These processes are frequently associated with unfavourable changes in the hydrological processes responsible for the water balance and soil water regimes, mainly derived from land use changes, different management practices and climatic changes. The evaluation of these processes using simple simulation models, under several scenarios of climatic change, soil properties, land use and management, would allow the occurrence of these disastrous processes to be predicted and, consequently, appropriate soil conservation practices to be selected and applied to eliminate or reduce their effects. Such simulation models require, as a basis, detailed climatic information and hydrological soil property data. Despite the existence of methodologies and commercial equipment (increasingly sophisticated and precise) to measure the different physical and hydrological soil properties related to degradation processes, most of them are only applicable under very specific or laboratory conditions. Often indirect methodologies are used, based on relations or empirical indexes without adequate validation, which frequently lead to expensive mistakes in the evaluation of soil degradation processes and their effects on natural disasters. Simple field methodologies would be preferable, direct and adaptable to different soil types and climates and to the sample size and the spatial variability of the

  6. The creation and evaluation of a model to simulate the probability of conception in seasonal-calving pasture-based dairy heifers.

    Science.gov (United States)

    Fenlon, Caroline; O'Grady, Luke; Butler, Stephen; Doherty, Michael L; Dunnion, John

    2017-01-01

    Herd fertility in pasture-based dairy farms is a key driver of farm economics. Models for predicting nulliparous reproductive outcomes are rare, but age, genetics, weight, and BCS have been identified as factors influencing heifer conception. The aim of this study was to create a simulation model of heifer conception to service with thorough evaluation. Artificial Insemination service records from two research herds and ten commercial herds were provided to build and evaluate the models. All were managed as spring-calving pasture-based systems. The factors studied were related to age, genetics, and time of service. The data were split into training and testing sets and bootstrapping was used to train the models. Logistic regression (with and without random effects) and generalised additive modelling were selected as the model-building techniques. Two types of evaluation were used to test the predictive ability of the models: discrimination and calibration. Discrimination, which includes sensitivity, specificity, accuracy and ROC analysis, measures a model's ability to distinguish between positive and negative outcomes. Calibration measures the accuracy of the predicted probabilities with the Hosmer-Lemeshow goodness-of-fit, calibration plot and calibration error. After data cleaning and the removal of services with missing values, 1396 services remained to train the models and 597 were left for testing. Age, breed, genetic predicted transmitting ability for calving interval, month and year were significant in the multivariate models. The regression models also included an interaction between age and month. Year within herd was a random effect in the mixed regression model. Overall prediction accuracy was between 77.1% and 78.9%. All three models had very high sensitivity, but low specificity. The two regression models were very well-calibrated. The mean absolute calibration errors were all below 4%. Because the models were not adept at identifying unsuccessful
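
    The evaluation pipeline described (train/test split, logistic regression, discrimination via ROC analysis, calibration via grouped observed-versus-predicted comparison) can be sketched with scikit-learn on synthetic data; the features, effect sizes and sample sizes below are invented and are not the herds' data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 2000
        age = rng.normal(15.0, 1.5, n)                   # age at service (months), invented
        month = rng.integers(4, 8, n)                    # service month
        logit = -4.0 + 0.25 * age + 0.1 * (month == 5)   # invented effects
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        X = np.column_stack([age, month])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        p = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

        # Discrimination
        print("accuracy:", round(accuracy_score(y_te, (p > 0.5).astype(int)), 3),
              " AUC:", round(roc_auc_score(y_te, p), 3))

        # Calibration: observed vs. mean predicted conception rate in probability deciles
        edges = np.quantile(p, np.linspace(0, 1, 11))
        group = np.clip(np.digitize(p, edges[1:-1]), 0, 9)
        for g in range(10):
            m = group == g
            if m.any():
                print(f"decile {g}: predicted {p[m].mean():.2f}  observed {y_te[m].mean():.2f}")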

  7. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Science.gov (United States)

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. Data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
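
    The Wiener-based gain rule shared by the compared algorithms can, in its simplest spectral form, be written as G = xi / (1 + xi) per frequency bin, with xi the a priori SNR (often estimated by decision-directed smoothing). The sketch below is a generic textbook formulation applied to one STFT frame, not the prototype algorithms from the study.

        import numpy as np

        def wiener_gain_frame(noisy_spectrum, noise_psd, xi_prev=None, alpha=0.98, floor=0.1):
            """Wiener gain for one STFT frame with a decision-directed a priori SNR estimate."""
            gamma = np.abs(noisy_spectrum) ** 2 / noise_psd            # a posteriori SNR
            if xi_prev is None:
                xi = np.maximum(gamma - 1.0, 0.0)                      # first frame: ML estimate
            else:
                xi = alpha * xi_prev + (1 - alpha) * np.maximum(gamma - 1.0, 0.0)
            gain = np.maximum(xi / (1.0 + xi), floor)                  # Wiener rule with gain floor
            return gain * noisy_spectrum, xi

        # Illustrative frame: complex white noise plus one strong "speech" component in bin 5
        rng = np.random.default_rng(0)
        frame = rng.normal(size=16) + 1j * rng.normal(size=16)
        frame[5] += 8.0
        enhanced, _ = wiener_gain_frame(frame, np.full(16, 2.0))
        print(np.round(np.abs(enhanced), 2))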

  8. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated

  9. Integrated Assessment Model Evaluation

    Science.gov (United States)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside

  10. Review evaluation indicators of health information technology course of master's degree in medical sciences universities' based on CIPP Model.

    Science.gov (United States)

    Yarmohammadian, Mohammad Hossein; Mohebbi, Nooshin

    2015-01-01

    Sensitivity of teaching and learning processes in universities emphasizes the necessity of assessing the quality of education, which improves the efficiency and effectiveness of the country. This study was conducted with the aim of reviewing and developing the evaluation criteria of the health information technology course at Master of Science level in Tehran, Shahid Beheshti, Isfahan, Shiraz, and Kashan medical universities in 2012, using the CIPP model. This was an applied, descriptive study with a statistical population of faculty members (23), students (97), directorates (5), and library staff (5), a total of 130 people, and sampling was done as a census. In order to collect data, four questionnaires were used, based on a Likert scale with scores ranging from 1 to 5. The questionnaires' validity was confirmed by consulting health information technology and educational evaluation experts, and the reliability of the directorate, faculty, student, and library staff questionnaires was tested using Cronbach's alpha coefficient, which gave r = 0.74, r = 0.93, r = 0.98, and r = 0.80, respectively. SPSS software was used for data analysis, with both descriptive and inferential statistics including mean, frequency percentage, standard deviation, Pearson correlation, and Spearman correlation. Drawing on studies from various sources, expert commentary, and the CIPP evaluation model, 139 indicators associated with this course were determined and then evaluated, based on the three factors of context, input, and process in the areas of professional human resources, academic services, students, directors, faculty, curriculum, budget, facilities, teaching-learning activities, scientific research activities of students and faculty, and the activities of the library staff. This study showed that in total, the health information technology course at the Master of Science level is relatively good, but trying to improve and correct it in some areas and

  11. Study on process evaluation model of students' learning in practical course

    Science.gov (United States)

    Huang, Jie; Liang, Pei; Shen, Wei-min; Ye, Youxiang

    2017-08-01

    In practical course teaching based on the project object method, the traditional evaluation methods, which include class attendance, assignments and exams, fail to give undergraduate students incentives to learn innovatively and autonomously. In this paper, elements such as creative innovation, teamwork, and documentation and reporting were incorporated into process evaluation methods, and a process evaluation model was set up. Educational practice shows that the evaluation model makes process evaluation of students' learning more comprehensive, accurate, and fair.

  12. Evaluation of Water Resource Security Based on an MIV-BP Model in a Karst Area

    Directory of Open Access Journals (Sweden)

    Liying Liu

    2018-06-01

    Full Text Available Evaluation of water resource security deserves particular attention in water resource planning and management. A typical karst area in Guizhou Province, China, was used as the research area in this paper. First, based on data from Guizhou Province for the past 10 years, the mean impact value–back propagation (MIV-BP) model was used to analyze the factors influencing water resource security in the karst area. Second, 18 indices involving five aspects, water environment subsystem, social subsystem, economic subsystem, ecological subsystem, and human subsystem, were selected to establish an evaluation index of water resource security. Finally, a BP artificial neural network model was constructed to evaluate the water resource security of Guizhou Province from 2005 to 2014. The results show that water resource security in Guizhou, which was at a moderate warning level from 2005 to 2009 and a critical safety level from 2010 to 2014, has generally improved. Groundwater supply ratio, industrial water utilization rate, water use efficiency, per capita grain production, and water yield modulus were the obstacles to water resource security. Driving factors were comprehensive utilization rate of industrial solid waste, qualifying rate of industrial wastewater, above moderate rocky desertification area ratio, water requirement per unit gross domestic product (GDP), and degree of development and utilization of groundwater. Our results provide useful suggestions on the management of water resource security in Guizhou Province and a valuable reference for water resource research.

  13. Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-12-20

    This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).

  14. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering ...

  15. Individualized Positron Emission Tomography–Based Isotoxic Accelerated Radiation Therapy Is Cost-Effective Compared With Conventional Radiation Therapy: A Model-Based Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Bongers, Mathilda L., E-mail: ml.bongers@vumc.nl [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); Coupé, Veerle M.H. [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); De Ruysscher, Dirk [Radiation Oncology University Hospitals Leuven/KU Leuven, Leuven (Belgium); Department of Radiation Oncology, GROW Research Institute, Maastricht University Medical Center, Maastricht (Netherlands); Oberije, Cary; Lambin, Philippe [Department of Radiation Oncology, GROW Research Institute, Maastricht University Medical Center, Maastricht (Netherlands); Uyl-de Groot, Cornelia A. [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); Institute for Medical Technology Assessment, Erasmus University Rotterdam, Rotterdam (Netherlands)

    2015-03-15

    Purpose: To evaluate long-term health effects, costs, and cost-effectiveness of positron emission tomography (PET)-based isotoxic accelerated radiation therapy treatment (PET-ART) compared with conventional fixed-dose CT-based radiation therapy treatment (CRT) in non-small cell lung cancer (NSCLC). Methods and Materials: Our analysis uses a validated decision model, based on data of 200 NSCLC patients with inoperable stage I-IIIB. Clinical outcomes, resource use, costs, and utilities were obtained from the Maastro Clinic and the literature. Primary model outcomes were the difference in life-years (LYs), quality-adjusted life-years (QALYs), costs, and the incremental cost-effectiveness and cost/utility ratio (ICER and ICUR) of PET-ART versus CRT. Model outcomes were obtained from averaging the predictions for 50,000 simulated patients. A probabilistic sensitivity analysis and scenario analyses were carried out. Results: The average incremental costs per patient of PET-ART were €569 (95% confidence interval [CI] €−5327-€6936) for 0.42 incremental LYs (95% CI 0.19-0.61) and 0.33 QALYs gained (95% CI 0.13-0.49). The base-case scenario resulted in an ICER of €1360 per LY gained and an ICUR of €1744 per QALY gained. The probabilistic analysis gave a 36% probability that PET-ART improves health outcomes at reduced costs and a 64% probability that PET-ART is more effective at slightly higher costs. Conclusion: On the basis of the available data, individualized PET-ART for NSCLC seems to be cost-effective compared with CRT.
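
    The ICER and ICUR follow directly from the reported incremental cost and effects; the two-line check below shows the arithmetic (the small discrepancy versus the published €1360 and €1744 is consistent with rounding of the incremental values quoted in the abstract).

        delta_cost = 569.0      # incremental cost per patient (EUR)
        delta_ly = 0.42         # incremental life-years gained
        delta_qaly = 0.33       # incremental QALYs gained

        print(f"ICER ~ {delta_cost / delta_ly:,.0f} EUR per life-year gained")
        print(f"ICUR ~ {delta_cost / delta_qaly:,.0f} EUR per QALY gained")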

  16. Analysis on evaluation ability of nonlinear safety assessment model of coal mines based on artificial neural network

    Institute of Scientific and Technical Information of China (English)

    SHI Shi-liang; LIU Hai-bo; LIU Ai-hua

    2004-01-01

    Based on an integrated analysis of the merits and shortcomings of various methods used in the safety assessment of coal mines, and taking into account the nonlinear features of mine safety sub-systems, this paper establishes a neural network assessment model of mine safety, analyzes the ability of an artificial neural network to evaluate the mine safety state, and lays the theoretical foundation for using artificial neural networks in the systematic optimization of mine safety assessment and for obtaining reasonably accurate safety assessment results.

  17. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    Science.gov (United States)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  18. Risk Evaluation of a UHV Power Transmission Construction Project Based on a Cloud Model and FCE Method for Sustainability

    Directory of Open Access Journals (Sweden)

    Huiru Zhao

    2015-03-01

    Full Text Available In order to achieve the sustainable development of energy, Ultra High Voltage (UHV) power transmission construction projects are currently being established in China. Their high-tech nature, the massive amount of money involved, and the need for multi-agent collaboration as well as complex construction environments bring many challenges and risks. Risk management, therefore, is critical to reduce the risks and realize the sustainable development of projects. Unfortunately, many traditional risk assessment methods may not perform well due to the great uncertainty and randomness inherent in UHV power construction projects. This paper, therefore, proposes a risk evaluation index system and a hybrid risk evaluation model to evaluate the risk of UHV projects and identify the key risk factors. This model, based on a cloud model and the fuzzy comprehensive evaluation (FCE) method, combines the superiority of the cloud model for reflecting randomness and discreteness with the advantages of the fuzzy comprehensive evaluation method in handling uncertain and vague issues. To demonstrate the framework, an empirical study of the “Zhejiang-Fuzhou” UHV power transmission construction project is presented. As key contributions, we find that the risk of this project lies at a “middle” to “high” level and closer to a “middle” level; the “management risk” and “social risk” are identified as the most important risk factors requiring more attention; and some risk control recommendations are proposed. This article demonstrates the value of our approach in risk identification, which seeks to improve the risk control level and the sustainable development of UHV power transmission construction projects.
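
    The randomness-handling part of this hybrid approach rests on the normal cloud model, which describes a qualitative grade by an expectation (Ex), entropy (En), and hyper-entropy (He) and generates cloud drops from them. The sketch below is a minimal forward normal cloud generator with hypothetical parameters for a "middle" risk grade; it is not the paper's index system, weights, or FCE step.

```python
import random
import math

def normal_cloud_drops(ex, en, he, n=1000):
    """Forward normal cloud generator: returns (x, membership) cloud drops
    for a qualitative concept described by (Ex, En, He)."""
    drops = []
    for _ in range(n):
        en_prime = abs(random.gauss(en, he))     # randomized entropy; He adds second-order fuzziness
        x = random.gauss(ex, en_prime)           # a cloud drop on the universe of discourse
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2 + 1e-12))  # certainty degree of the drop
        drops.append((x, mu))
    return drops

# Hypothetical "middle" risk grade on a 0-10 scale (illustrative numbers only).
drops = normal_cloud_drops(ex=5.0, en=1.0, he=0.1)
print(len(drops), "cloud drops; first:", drops[0])
```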

  19. Evaluation of a whole-farm model for pasture-based dairy systems.

    Science.gov (United States)

    Beukes, P C; Palliser, C C; Macdonald, K A; Lancaster, J A S; Levy, G; Thorrold, B S; Wastney, M E

    2008-06-01

    In the temperate climate of New Zealand, animals can be grazed outdoors all year round. The pasture is supplemented with conserved feed, with the amount being determined by seasonal pasture growth, genetics of the herd, and stocking rate. The large number of factors that affect production makes it impractical and expensive to use field trials to explore all the farm system options. A model of an in situ-grazed pasture system has been developed to provide a tool for developing and testing novel farm systems; for example, different levels of bought-in supplements and different levels of nitrogen fertilizer application, to maintain sustainability or environmental integrity and profitability. It consists of a software framework that links climate information, on a daily basis, with dynamic, mechanistic component-models for pasture growth and animal metabolism, as well as management policies. A unique feature is that the component models were developed and published by other groups, and are retained in their original software language. The aim of this study was to compare the model, called the whole-farm model (WFM), with a farm trial that was conducted over 3 yr and in which data were collected specifically for evaluating the WFM. Data from the first year were used to develop the WFM and data from the second and third year to evaluate the model. The model predicted annual pasture production, end-of-season cow liveweight, cow body condition score, and pasture cover across the season with acceptable relative prediction error. Pasture and supplement intake were also predicted with acceptable accuracy, whereas milk production was predicted less accurately, suggesting that the metabolic conversion of feed to fat, protein, and lactose in the mammary gland needs to be refined. Because feed growth and intake predictions were acceptable, economic predictions can be made using the WFM, with an adjustment for milk yield, to test different management policies, alterations in climate, or the use of genetically improved animals, pastures, or crops.

  20. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
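
    Temperature-based estimators of this kind generally relate the daily temperature range to the fraction of extraterrestrial radiation reaching the surface. The sketch below shows the classic Hargreaves-type relation underlying models such as Allen's and Annandale's; the coefficient and sample inputs are illustrative assumptions, not the newly fitted models of this record.

```python
import math

def hargreaves_global_radiation(ra, t_max, t_min, k_rs=0.17):
    """Estimate daily global solar radiation (same units as Ra) from the daily
    temperature range, using the generic Hargreaves-type relation
    Rs = k_rs * sqrt(Tmax - Tmin) * Ra. k_rs is an empirical coefficient
    (often quoted around 0.16 inland and 0.19 coastal); 0.17 here is illustrative."""
    return k_rs * math.sqrt(max(t_max - t_min, 0.0)) * ra

# Hypothetical example: Ra = 35 MJ/m^2/day, Tmax = 32 C, Tmin = 21 C.
print(round(hargreaves_global_radiation(35.0, 32.0, 21.0), 2), "MJ/m^2/day")
```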

  1. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    Directory of Open Access Journals (Sweden)

    Yong-Nyuo Shin

    2008-01-01

    Full Text Available Due to its usability features, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be admitted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.

  2. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.
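
    Of the decision-analysis methods listed, TOPSIS is the most compact to illustrate: alternatives are ranked by their relative closeness to an ideal and an anti-ideal solution. The sketch below applies it to a hypothetical score matrix (rows are models, columns are benefit-type criteria with assumed weights); it is not the actual criteria set or data used for the 17 spallation models.

```python
import numpy as np

def topsis(scores, weights):
    """Rank alternatives with TOPSIS. `scores` is an (alternatives x criteria)
    matrix where larger values are better; `weights` sum to 1."""
    norm = scores / np.sqrt((scores ** 2).sum(axis=0))   # vector-normalize each criterion
    weighted = norm * weights
    ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
    d_plus = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))       # distance to ideal solution
    d_minus = np.sqrt(((weighted - anti_ideal) ** 2).sum(axis=1)) # distance to anti-ideal solution
    return d_minus / (d_plus + d_minus)                  # closeness coefficient, higher is better

# Hypothetical predictive-ability scores for three models against three criteria.
scores = np.array([[0.8, 0.7, 0.9],
                   [0.6, 0.9, 0.7],
                   [0.9, 0.5, 0.6]])
weights = np.array([0.5, 0.3, 0.2])
print(topsis(scores, weights))  # closeness of each model to the ideal solution
```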

  3. Intelligent Evaluation Method of Tank Bottom Corrosion Status Based on Improved BP Artificial Neural Network

    Science.gov (United States)

    Qiu, Feng; Dai, Guang; Zhang, Ying

    Based on the acoustic emission information and the appearance inspection information from tank bottom online testing, the external factors associated with tank bottom corrosion status are identified. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models are established, based on appearance inspection information, acoustic emission information, and online testing information, respectively. Compared with the results of acoustic emission online testing in the evaluation of a test sample, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and enables intelligent evaluation of acoustic emission online testing of the tank bottom.

  4. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities is dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows to explicitly estimate the magnitude of model deficiency. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the

  5. IFIS Model-Plus: A Web-Based GUI for Visualization, Comparison and Evaluation of Distributed Flood Forecasts and Hindcasts

    Science.gov (United States)

    Krajewski, W. F.; Della Libera Zanchetta, A.; Mantilla, R.; Demir, I.

    2017-12-01

    This work explores the use of hydroinformatics tools to provide a user-friendly and accessible interface for executing and assessing the output of real-time flood forecasts using distributed hydrological models. The main result is the implementation of a web system that uses an Iowa Flood Information System (IFIS)-based environment for graphical displays of rainfall-runoff simulation results for both real-time and past storm events. It communicates with the ASYNCH ODE solver to perform large-scale distributed hydrological modeling based on segmentation of the terrain into hillslope-link hydrologic units. The cyber-platform also allows hindcasting of model performance by testing multiple model configurations and assumptions about vertical flows in the soils. The scope of the currently implemented system is the entire set of contributing watersheds for the territory of the state of Iowa. The interface provides resources for visualization of animated maps of different water-related modeled states of the environment, including flood-wave propagation with classification of flood magnitude, runoff generation, surface soil moisture, and total water column in the soil. Additional tools for comparing different model configurations and performing model evaluation by comparison with observed variables at monitored sites are also available. The user-friendly interface has been published to the web under the URL http://ifis.iowafloodcenter.org/ifis/sc/modelplus/.

  6. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Full Text Available Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, in order to identify and deal with unwanted program effects in time, as well as to encourage the positive elements of the project's impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  7. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    Directory of Open Access Journals (Sweden)

    Gifford W

    2017-03-01

    Full Text Available Wendy Gifford,1 Ian D Graham,2,3 Mark G Ehrhart,4 Barbara L Davies,5,6 Gregory A Aarons7 1School of Nursing, Faculty of Health Sciences, University of Ottawa, ON, Canada; 2Centre for Practice-Changing Research, Ottawa Hospital Research Institute, 3School of Epidemiology, Public Health and Preventive Medicine, Facility of Medicine, University of Ottawa, Ottawa, ON, Canada; 4Department of Psychology, San Diego State University, San Diego, CA, USA; 5Nursing Best Practice Research Center, University of Ottawa, Ottawa, ON, Canada; 6Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA; 7Child and Adolescent Services Research Center, University of California, San Diego, CA, USA Purpose: Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe, a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS, an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions.Methods: Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5 appraised the relevance, conceptual clarity, and fit of each ILS items with the O-MILe concepts through individual feedback and group discussions until consensus was reached.Results: All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs.Conclusion: The O

  8. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  9. On the development and performance evaluation of a multiobjective GA-based RBF adaptive model for the prediction of stock indices

    Directory of Open Access Journals (Sweden)

    Babita Majhi

    2014-09-01

    Full Text Available This paper develops and assesses the performance of a hybrid prediction model using a radial basis function neural network and the non-dominated sorting multiobjective genetic algorithm-II (NSGA-II) for various stock market forecasts. The proposed technique simultaneously optimizes two mutually conflicting objectives: the structure (the number of centers in the hidden layer) and the output mean square error (MSE) of the model. The best compromise non-dominated solution-based model was determined from the optimal Pareto front using fuzzy set theory. The performance of this model was evaluated in terms of four different measures using Standard and Poor's 500 (S&P500) and Dow Jones Industrial Average (DJIA) stock data. The results of the simulation of the new model demonstrate a prediction performance superior to that of the conventional radial basis function (RBF)-based forecasting model in terms of the mean average percentage error (MAPE), directional accuracy (DA), Theil's U, and average relative variance (ARV) values.
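
    The four performance measures named in this record are all simple functions of the actual and predicted series. The sketch below shows one common way to compute MAPE, directional accuracy (DA), Theil's U, and average relative variance (ARV) on toy data; the exact variants used by the authors may differ, and the series here are hypothetical rather than the S&P500/DJIA data.

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Compute common forecast-error measures for a pair of series."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mape = 100.0 * np.mean(np.abs((actual - predicted) / actual))
    # Directional accuracy: fraction of steps where the predicted change has the right sign.
    da = 100.0 * np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))
    theils_u = (np.sqrt(np.mean((actual - predicted) ** 2)) /
                (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(predicted ** 2))))
    arv = np.sum((actual - predicted) ** 2) / np.sum((actual - np.mean(actual)) ** 2)
    return {"MAPE(%)": mape, "DA(%)": da, "Theil's U": theils_u, "ARV": arv}

# Toy closing-price series (illustrative only).
actual = [100, 102, 101, 105, 107]
predicted = [100, 101.5, 102, 104, 108]
print(forecast_metrics(actual, predicted))
```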

  10. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    Science.gov (United States)

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure a long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on a HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that comparison of the precision (SDdiff) in marker-based RSA analysis was more precise than model-based RSA analysis in TT (p CAD RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analysis regarding the TR (p = 0.91). Comparing the mean signed values regarding the TT and the TR at the 5-year follow-up in 13 patients, the TT was lower (p = 0.03) and the TR higher (p = 0.04) in the marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than model-based RSA. However, problems with occluded markers lead to exclusion of many patients which was not a problem with model-based RSA. HRA were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.

  11. Do participation and personalization matter? A model-driven evaluation of an Internet-based patient education intervention for fibromyalgia patients.

    Science.gov (United States)

    Camerini, Luca; Camerini, Anne-Linda; Schulz, Peter J

    2013-08-01

    To evaluate the effectiveness of an Internet-based patient education intervention, which was designed upon principles of personalization and participatory design. Fifteen months after the first release of the website, 209 fibromyalgia patients recruited through health professionals completed an online questionnaire to assess patients' use of the website, health knowledge, self-management behavior, and health outcomes. These constructs were combined into an a priori model that was tested using a structural equation modeling approach. Results show that the usage of certain tools of the website - designed and personalized involving the end users - impacts patients' health knowledge, which in turn impacts self-management. Improvements in self-management ultimately lower the impact of Fibromyalgia Syndrome, leading to better health outcomes. This study empirically confirmed that the adoption of a participatory approach to the design of eHealth interventions and the use of personalized contents enhance the overall effectiveness of systems. More time and effort should be invested in involving patients in the preliminary phases of the development of Internet-based patient education interventions and in the definition of models that can guide the systems' evaluation beyond technology-related variables such as usability, accessibility or adoption. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Decision-relevant evaluation of climate models: A case study of chill hours in California

    Science.gov (United States)

    Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.

    2017-12-01

    The past decade has seen a proliferation of different climate datasets, with over 60 climate models currently in use. Comparative evaluation and validation of models can help practitioners choose the most appropriate models for adaptation planning. However, such assessments are usually conducted for `climate metrics' such as seasonal temperature, while sectoral decisions are often based on `decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for `outcome-based' metrics. Using chill hours (the number of hours in winter months where the temperature is less than 45 °F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region, to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting chill hour slope is dependent on its skill in predicting mean chill hours, which results from the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large ensemble mean results may not always be most appropriate for this
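
    The outcome metric in this case study is straightforward to compute from an hourly temperature series, which is also what makes model skill for it behave non-linearly in the underlying temperatures. Below is a minimal sketch of the chill-hour count using the 45 °F threshold stated above; the hourly values are hypothetical.

```python
def chill_hours(hourly_temps_f, threshold_f=45.0):
    """Count hours with temperature below the threshold (45 deg F by default),
    following the chill-hour definition quoted in the record above.
    `hourly_temps_f` should cover the winter months of interest."""
    return sum(1 for t in hourly_temps_f if t < threshold_f)

# Hypothetical December-February hourly temperatures (deg F) for one location.
winter_hours = [38.0, 41.5, 44.9, 45.0, 47.2, 52.1, 43.0, 39.8]
print(chill_hours(winter_hours), "chill hours in this sample")  # -> 5
```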

  13. Maintenance evaluation using risk based criteria

    International Nuclear Information System (INIS)

    Torres Valle, A.

    1996-01-01

    Maintenance evaluation is currently performed using economic and, in some cases, technical equipment failure criteria; however, this is done at the level of specific equipment. In general, when statistics are used, the analyses for maintenance optimization are carried out in isolation and with a post mortem character. The integration provided by Probabilistic Safety Assessment (PSA), together with the possibilities of its applications, allows maintenance to be evaluated on the basis of broader criteria than those traditionally used. To evaluate maintenance using risk-based criteria, it is necessary to follow a dynamic and systematic approach to studying the maintenance strategy, allowing the initial probabilistic models to be updated to include the operational changes that often take place during the operation of complex facilities. This paper proposes a dynamic evaluation system for maintenance tasks. The system is illustrated by means of a practical example.

  14. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    Science.gov (United States)

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  15. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

    Full Text Available Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. The integration of the package in NewAge-JGrass uses other components, such as geographic information system tools, to manage input–output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno–Reggio Calabria highway, between Cosenza and Altilia. The area is extensively subject to rainfall-induced shallow landslides mainly because of its complex geology and climatology. The analysis was carried out considering all the combinations of the eight optimized indices and the three models. Parameter calibration, verification, and model performance assessment were performed by a comparison with a detailed landslide
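
    Pixel-by-pixel comparison of a binary susceptibility map with mapped landslide and non-landslide cells reduces to a confusion matrix, from which goodness-of-fit indices of the kind mentioned here are derived. The sketch below computes a few common ones (accuracy, true positive rate, critical success index) on toy arrays; it is not the specific set of eight indices implemented in the NewAge-JGrass package.

```python
import numpy as np

def binary_fit_indices(observed, predicted):
    """Goodness-of-fit indices from a pixel-by-pixel comparison of binary maps
    (1 = landslide / unstable, 0 = stable)."""
    observed, predicted = np.asarray(observed, bool), np.asarray(predicted, bool)
    tp = np.sum(observed & predicted)    # unstable pixels correctly predicted
    tn = np.sum(~observed & ~predicted)  # stable pixels correctly predicted
    fp = np.sum(~observed & predicted)   # false alarms
    fn = np.sum(observed & ~predicted)   # missed landslides
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "true_positive_rate": tp / (tp + fn) if (tp + fn) else float("nan"),
        "critical_success_index": tp / (tp + fp + fn) if (tp + fp + fn) else float("nan"),
    }

# Toy 1-D "maps" standing in for flattened raster pixels.
observed  = [1, 0, 0, 1, 1, 0, 0, 1]
predicted = [1, 0, 1, 1, 0, 0, 0, 1]
print(binary_fit_indices(observed, predicted))
```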

  16. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Based on research on the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct a performance evaluation model intended to safeguard food safety, logistics efficiency, price stability, and so on. Practical implications: In order to evolve an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate its performance.

  17. A rough set-based association rule approach implemented on a brand trust evaluation model

    Science.gov (United States)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. It also offers one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. The analysis of the algorithms is used to examine brand trust recall for alcoholic beverages. Finally, discussion and conclusions are presented together with further managerial implications.

  18. A Novel Evaluation Model for Hybrid Power System Based on Vague Set and Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2012-01-01

    Full Text Available Because clean energy and traditional energy have different advantages and disadvantages, it is of great significance to evaluate the comprehensive benefits of hybrid power systems. Based on a thorough analysis of the important characteristics of hybrid power systems, an index system including security, economic benefit, environmental benefit, and social benefit is established in this paper. Owing to its advantages in processing abundant uncertain and fuzzy information, the vague set is used to determine the decision matrix. The vague decision matrix is converted to a real one by the vague combination rule, the uncertainty degrees of the different indexes are determined by grey incidence analysis, and the mass functions of the different comment sets for the different indexes are then obtained. The information can be fused in accordance with the Dempster-Shafer (D-S) combination rule, and the evaluation result is obtained from the vague set and D-S evidence theory. A simulation of a hybrid power system including thermal power, wind power, and photovoltaic power in China is provided to demonstrate the effectiveness and potential of the proposed design scheme. It can be clearly seen that the uncertainties in decision making can be dramatically decreased compared with existing methods in the literature. The results illustrate that the proposed index system and the evaluation model based on the vague set and D-S evidence theory are effective and practical for evaluating the comprehensive benefit of a hybrid power system.
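
    The fusion step of this scheme is Dempster's rule for combining the mass functions obtained from the different indexes. Below is a minimal sketch of the rule for two mass functions over a small frame of comment grades; the grades and numbers are hypothetical and do not reproduce the paper's index system or vague-set processing.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to masses)
    with Dempster's rule, normalizing out the conflict assigned to the empty set."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass falling on the empty intersection
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two indexes over the comment grades {good, medium, poor}.
m_security = {frozenset({"good"}): 0.6,
              frozenset({"good", "medium"}): 0.3,
              frozenset({"good", "medium", "poor"}): 0.1}
m_economic = {frozenset({"medium"}): 0.5,
              frozenset({"good"}): 0.3,
              frozenset({"good", "medium", "poor"}): 0.2}
print(dempster_combine(m_security, m_economic))
```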

  19. Comparison between a sire model and an animal model for genetic evaluation of fertility traits in Danish Holstein population

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Nielsen, U S

    2009-01-01

    Comparisons between a sire model, a sire-dam model, and an animal model were carried out to evaluate the ability of the models to predict breeding values of fertility traits, based on data including 471,742 records from the first lactation of Danish Holstein cows, covering insemination years from 1995 to 2004. The traits in the analysis were days from calving to first insemination, calving interval, days open, days from first to last insemination, number of inseminations per conception, and nonreturn rate within 56 d after first service. The results suggest that the animal model, rather than the sire model, should be used for genetic evaluation of fertility traits. The correlations between sire estimated breeding value

  20. Knowledge-Based Energy Damage Model for Evaluating Industrialised Building Systems (IBS Occupational Health and Safety (OHS Risk

    Directory of Open Access Journals (Sweden)

    Abas Nor Haslinda

    2016-01-01

    Full Text Available Malaysia’s construction industry has long been considered hazardous, owing to its poor health and safety record. It is proposed that one of the ways to improve safety and health in the construction industry is through the implementation of ‘off-site’ systems, commonly termed ‘industrialised building systems (IBS)’ in Malaysia. This is deemed safer based on the risk concept of reduced exposure, brought about by the reduction in onsite workers; however, no method yet exists for determining the relative safety of various construction methods, including IBS. This study presents a comparative evaluation of the occupational health and safety (OHS) risk presented by different construction approaches, namely IBS and traditional methods. The evaluation involved developing a model based on the concept of ‘argumentation theory’, which helps construction designers integrate the management of OHS risk into the design process. In addition, an ‘energy damage model’ was used as an underpinning framework. Development of the model was achieved through three phases, namely Phase I – knowledge acquisition, Phase II – argument tree mapping, and Phase III – validation of the model. The research revealed that different approaches/methods of construction projects carried a different level of energy damage, depending on how the activities were carried out. A study of the way in which the risks change from one construction process to another shows that there is a difference in the profile of OHS risk between IBS construction and traditional methods. Therefore, whether the option is an IBS or traditional approach, the fundamental idea of the model is to motivate construction designers or decision-makers to address safety in the design process and encourage them to examine carefully the probable OHS risk variables surrounding an action, thus preventing accidents in construction.

  1. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de

  2. Association of Trans-theoretical Model (TTM based Exercise Behavior Change with Body Image Evaluation among Female Iranian Students

    Directory of Open Access Journals (Sweden)

    Sahar Rostami

    2017-03-01

    Full Text Available Background: Body image is a determinant of individual attractiveness and physical activity among young people. This study aimed to assess the association of Trans-theoretical model (TTM)-based exercise behavior change with body image evaluation among female Iranian students. Materials and Methods: This cross-sectional study was conducted in Sanandaj city, Iran, in 2016. Using a multistage sampling method, a total of 816 female high school students were included in the study. They completed a three-section questionnaire covering demographic information, Trans-theoretical model constructs, and body image evaluation. The obtained data were analyzed in SPSS version 21.0. Results: The results showed that more than 60% of participants were in the pre-contemplation and contemplation stages of exercise behavior. The means of perceived self-efficacy, barriers, and benefits were found to differ significantly across the stages of exercise behavior change (P

  3. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that exhibits light modeling effort. Created models are used to evaluate by simulation expected processing and memory resources according to various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach. In this example, a simulation speed-up by a factor of 7,62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is led in order to improve the allocation of processing functions.

  4. A Multi Criteria Group Decision-Making Model for Teacher Evaluation in Higher Education Based on Cloud Model and Decision Tree

    Science.gov (United States)

    Chang, Ting-Cheng; Wang, Hui

    2016-01-01

    This paper proposes a cloud multi-criteria group decision-making model for teacher evaluation in higher education, a task that involves subjectivity, imprecision, and fuzziness. First, the appropriate evaluation indexes are selected depending on the evaluation objectives, indicating a clear structural relationship between the evaluation index and…

  5. Development of Weeds Density Evaluation System Based on RGB Sensor

    Science.gov (United States)

    Solahudin, M.; Slamet, W.; Wahyu, W.

    2018-05-01

    Weeds are plant competitors which can reduce yields through competition for sunlight, water, and soil nutrients. For chemical-based weed control, site-specific weed management, which accommodates the spatial and temporal diversity of weed attack when determining the appropriate herbicide dose based on Variable Rate Technology (VRT), is now preferred over the traditional approach of single-dose herbicide application. In such applications, determining the level of weed density is an important task. Several methods have been studied to evaluate the density of weed attack. The objective of this study is to develop a system able to evaluate weed density based on RGB (Red, Green, and Blue) sensors. An RGB sensor was used to acquire the RGB values of the field surface, and an artificial neural network (ANN) model was then used to determine the weed density. In this study the ANN model was trained with 280 training data (70%), 60 validation data (15%), and 60 testing data (15%). Based on the field test, the proposed method evaluated weed density with an accuracy of 83.75%.

  6. Attribute Synthetic Evaluation Model for the CBM Recoverability and Its Application

    Directory of Open Access Journals (Sweden)

    Xiao-gang Xia

    2015-01-01

    Full Text Available Coal-bed methane (CBM) recoverability is the basic premise of CBM development practice. In order to evaluate CBM recoverability effectively, an attribute synthetic evaluation model is established based on the theory and methods of attribute mathematics. Firstly, five indexes are chosen to evaluate recoverability by analyzing the factors that influence CBM: seam thickness, gas saturation, permeability, reservoir pressure gradient, and hydrogeological conditions. Secondly, the attribute measurement functions of each index are constructed based on attribute mathematics theory, and calculation methods for the single-index attribute measure and the synthetic attribute measure are provided. Meanwhile, the weight of each index is determined with the similar-number and similar-weight method, and the evaluation result is determined by the confidence criterion. Finally, application of the model to coal target areas of the Fuxin and Hancheng mines shows that the evaluation results are basically consistent with the actual situation, which demonstrates that the model can be used for CBM recoverability prediction and provides an effective method for CBM recoverability evaluation.
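
    In attribute mathematics, each index first receives a single-index attribute measure for every grade, the weighted sum of these gives the synthetic attribute measure, and a confidence criterion then assigns the grade. The sketch below illustrates the weighted-synthesis and confidence-criterion steps with hypothetical measures, weights, and a commonly used confidence level of 0.6; the paper's actual measurement functions and values are not reproduced.

```python
def synthetic_attribute_measure(single_index_measures, weights):
    """Weighted sum of single-index attribute measures.
    `single_index_measures[i][k]` is the measure of index i for grade k."""
    n_grades = len(single_index_measures[0])
    return [sum(w * row[k] for w, row in zip(weights, single_index_measures))
            for k in range(n_grades)]

def confidence_criterion_grade(mu, grades, lam=0.6):
    """Pick the first grade whose cumulative synthetic measure reaches the
    confidence level lam (grades ordered from best to worst)."""
    cumulative = 0.0
    for grade, m in zip(grades, mu):
        cumulative += m
        if cumulative >= lam:
            return grade
    return grades[-1]

# Hypothetical example: 3 indexes, 3 recoverability grades (good, fair, poor).
measures = [[0.7, 0.2, 0.1],   # e.g. seam thickness
            [0.4, 0.4, 0.2],   # e.g. permeability
            [0.2, 0.5, 0.3]]   # e.g. gas saturation
weights = [0.5, 0.3, 0.2]
mu = synthetic_attribute_measure(measures, weights)
print(mu, confidence_criterion_grade(mu, ["good", "fair", "poor"]))
```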

  7. SU-F-T-355: Evaluation of Knowledge-Based Planning Model for the Cervical Cancer Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X; Wang, J; Hu, W [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)

    2016-06-15

    Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: In total, 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. All patients had previously been treated in our institution, where two prescription levels are usually used: 45 Gy/25 fractions and 50.4 Gy/28 fractions. Fifty of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After training, the model was validated with 10 plans from the training pool (internal validation) and a further 20 new plans (external validation). All plans used for the validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVHs lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan improved D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose to the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses to the rectum and bowel were also decreased, by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan model-based cervical cancer plans show the ability to systematically improve IMRT plan quality. It suggests that RapidPlan has

  8. SU-F-T-355: Evaluation of Knowledge-Based Planning Model for the Cervical Cancer Radiotherapy

    International Nuclear Information System (INIS)

    Chen, X; Wang, J; Hu, W

    2016-01-01

    Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: In total, 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. All patients had previously been treated in our institution, where two prescription levels are usually used: 45 Gy/25 fractions and 50.4 Gy/28 fractions. Fifty of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After training, the model was validated with 10 plans from the training pool (internal validation) and a further 20 new plans (external validation). All plans used for the validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVHs lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan improved D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose to the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses to the rectum and bowel were also decreased, by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan model-based cervical cancer plans show the ability to systematically improve IMRT plan quality. It suggests that RapidPlan has

  9. Evaluating crown fire rate of spread predictions from physics-based models

    Science.gov (United States)

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  10. The model for evaluation of the effectiveness of civil service modernization

    Directory of Open Access Journals (Sweden)

    O. A. Lyndyuk

    2016-09-01

    Full Text Available The effectiveness of civil service modernization depends on the timely implementation of control measures and on evaluating the effectiveness of the modernization processes and system components. The article analyzes the basic problems of evaluating the effectiveness of civil service modernization and the scientific literature on these issues. The basic theoretical approaches to the definitions of «assessment» and «evaluation» are studied. Existing theoretical and methodological approaches to the assessment process are analyzed and summarized, and the main methods for evaluating the effectiveness of civil service modernization, as well as the most common assessment methods, are defined. Special analytical techniques suitable for evaluating the effectiveness of civil service modernization include functional review, the Balanced Scorecard, taxonomic analysis, Key Performance Indicators, methods of multivariate analysis, and others. The methods for studying consumer expectations about the effectiveness of civil service modernization include questionnaires, surveys, interviews, testing, monitoring, and analysis of statistical sources, document contents, reports, and the regulatory framework, among others. The methods for improving efficiency include benchmarking, reengineering, performance assessment models, and more. The importance of gradually replacing cost-based evaluation methods with results-based evaluation methods is established, as is the need for a comprehensive balanced scorecard for evaluation. With a view to mutually agreeing the principles, mechanisms, and instruments for evaluating the effectiveness of civil service modernization, the expediency of systematic, targeted, synergistic, process, situational, strategic, and resource approaches is substantiated. The development of theoretical concepts and methodological principles for evaluating the effectiveness of civil service modernization should be based on the harmonious combination (integration of all

  11. Development and Application of a Life Cycle-Based Model to Evaluate Greenhouse Gas Emissions of Oil Sands Upgrading Technologies.

    Science.gov (United States)

    Pacheco, Diana M; Bergerson, Joule A; Alvarez-Majmutov, Anton; Chen, Jinwen; MacLean, Heather L

    2016-12-20

    A life cycle-based model, OSTUM (Oil Sands Technologies for Upgrading Model), which evaluates the energy intensity and greenhouse gas (GHG) emissions of current oil sands upgrading technologies, is developed. Upgrading converts oil sands bitumen into high quality synthetic crude oil (SCO), a refinery feedstock. OSTUM's novel attributes include the following: the breadth of technologies and upgrading operations options that can be analyzed, energy intensity and GHG emissions being estimated at the process unit level, it not being dependent on a proprietary process simulator, and use of publicly available data. OSTUM is applied to a hypothetical, but realistic, upgrading operation based on delayed coking, the most common upgrading technology, resulting in emissions of 328 kg CO2e/m3 SCO. The primary contributor to upgrading emissions (45%) is the use of natural gas for hydrogen production through steam methane reforming, followed by the use of natural gas as fuel in the rest of the process units' heaters (39%). OSTUM's results are in agreement with those of a process simulation model developed by CanmetENERGY, other literature, and confidential data of a commercial upgrading operation. For the application of the model, emissions are found to be most sensitive to the amount of natural gas utilized as feedstock by the steam methane reformer. OSTUM is capable of evaluating the impact of different technologies, feedstock qualities, operating conditions, and fuel mixes on upgrading emissions, and its life cycle perspective allows easy incorporation of results into well-to-wheel analyses.

  12. EVALUATION OF SOIL LOSS IN GUARAÍRA BASIN BY GIS AND REMOTE SENSING BASED MODEL

    Directory of Open Access Journals (Sweden)

    Richarde Marques da Silva

    2007-01-01

    Full Text Available Environmental degradation, and specifically erosion, is a serious and extensive problem in many areas in Brazil. Prediction of runoff and erosion in ungauged basins is one of the most challenging tasks anywhere and it is an especially difficult one in developing countries, where monitoring and continuous measurements of these quantities are carried out in very few basins, either due to the costs involved or due to the lack of trained personnel. The erosion processes and land use in the Guaraíra River Experimental Basin, located in Paraíba state, Brazil, are evaluated using remote sensing and a runoff-erosion model. WEPP is a process-based continuous simulation erosion model that can be applied to hillslope profiles and small watersheds. The WEPP erosion model has been compared in numerous studies to observed values for soil loss and sediment delivery from cropland plots, forest roads, irrigated lands and small watersheds. A number of different techniques for evaluating WEPP have been used, including one recently developed in which the ability of WEPP to accurately predict soil erosion can be compared to the accuracy of replicated plots in predicting soil erosion. WEPP was calibrated with daily rainfall data from five rain gauges for the period of 2003 to 2005. The results obtained showed the areas susceptible to erosion within the Guaraíra river basin, and that the mean sediment yield could be in the order of 3.0 ton/ha/year (in an area of 5.84 ha).

  13. EVALUATION OF SOIL LOSS IN GUARAÍRA BASIN BY GIS AND REMOTE SENSING BASED MODEL

    Directory of Open Access Journals (Sweden)

    Richarde Marques Silva

    2007-12-01

    Full Text Available Environmental degradation, and specifically erosion, is a serious and extensive problem in many areas in Brazil. Prediction of runoff and erosion in ungauged basins is one of the most challenging tasks anywhere and it is an especially difficult one in developing countries, where monitoring and continuous measurements of these quantities are carried out in very few basins, either due to the costs involved or due to the lack of trained personnel. The erosion processes and land use in the Guaraíra River Experimental Basin, located in Paraíba state, Brazil, are evaluated using remote sensing and a runoff-erosion model. WEPP is a process-based continuous simulation erosion model that can be applied to hillslope profiles and small watersheds. The WEPP erosion model has been compared in numerous studies to observed values for soil loss and sediment delivery from cropland plots, forest roads, irrigated lands and small watersheds. A number of different techniques for evaluating WEPP have been used, including one recently developed in which the ability of WEPP to accurately predict soil erosion can be compared to the accuracy of replicated plots in predicting soil erosion. WEPP was calibrated with daily rainfall data from five rain gauges for the period of 2003 to 2005. The results obtained showed the areas susceptible to erosion within the Guaraíra river basin, and that the mean sediment yield could be in the order of 3.0 ton/ha/year (in an area of 5.84 ha).

  14. Evaluation of Rule-based Modularization in Model Transformation Languages illustrated with ATL

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric

    This paper studies ways of modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of the relations between source and target metamodels and on the basis of generic transformation

  15. Evaluation of the Professional Development Program on Web Based Content Development

    Science.gov (United States)

    Yurdakul, Bünyamin; Uslu, Öner; Çakar, Esra; Yildiz, Derya G.

    2014-01-01

    The aim of this study is to evaluate the professional development program on web based content development (WBCD) designed by the Ministry of National Education (MoNE). Based on the theoretical CIPP model by Stufflebeam and Guskey's levels of evaluation, the study was carried out as a case study. The study group consisted of the courses that…

  16. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of these evaluation models makes it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model derives its validity from the 93 previous models and the systematic quantitative approach.

  17. Evaluation of Cost Models and Needs & Gaps Analysis

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

    This report 'D3.1—Evaluation of Cost Models and Needs & Gaps Analysis' provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders' needs for calculating and comparing financial information, and how they break down costs. This is followed by an in-depth analysis of stakeholders' needs for financial information derived from the 4C project stakeholder consultation. The stakeholders' needs analysis indicated that models should: • support accounting, but more importantly they should enable budgeting; • be able... Based on this evaluation, the report aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point...

  18. Airline service quality evaluation: A review on concepts and models

    Directory of Open Access Journals (Sweden)

    Navid Haghighat

    2017-12-01

    Full Text Available This paper reviews the major service quality concepts and models that have led to important developments in evaluating service quality, focusing on how the models were improved by discussing the criticisms of each model. These criticisms are examined to clarify the development steps of newer models that improved airline service quality models. Precise and accurate evaluation of service quality requires a reliable concept with comprehensive criteria and effective measurement techniques as the foundation of a valuable framework. In this paper, the improvement of service quality models is described on the basis of three major service quality concepts, the disconfirmation, performance and hierarchical concepts, which were developed in succession. Reviewing various criteria and different measurement techniques, such as statistical analysis and multi-criteria decision making, helps researchers gain a clear understanding of the development of the evaluation framework in the airline industry. This study aims to promote reliable frameworks for evaluating airline service quality in different countries and societies, taking into account the economic, cultural and social aspects of each society.

  19. Descriptive and predictive evaluation of high resolution Markov chain precipitation models

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2012-01-01

    A time series of tipping bucket recordings of very high temporal and volumetric resolution precipitation is modelled using Markov chain models. Both first and second‐order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood based techniques. The fi...
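
    For orientation, a first-order two-state (wet/dry) occurrence chain can be fitted and scored by likelihood as in the sketch below. This is a generic illustration, not the authors' high-resolution volumetric model; the occurrence series is synthetic and the parameter names are invented.

```python
import numpy as np

def fit_first_order(occurrence):
    """Estimate P(wet_t | state_{t-1}) for a binary wet/dry series."""
    occurrence = np.asarray(occurrence, dtype=int)
    prev, curr = occurrence[:-1], occurrence[1:]
    p_wet_given = np.empty(2)
    for s in (0, 1):
        mask = prev == s
        p_wet_given[s] = curr[mask].mean() if mask.any() else 0.5
    return p_wet_given  # [P(wet | dry), P(wet | wet)]

def log_likelihood(occurrence, p_wet_given):
    """Likelihood-based score used to compare fitted occurrence models."""
    occurrence = np.asarray(occurrence, dtype=int)
    prev, curr = occurrence[:-1], occurrence[1:]
    p = p_wet_given[prev]
    return np.log(np.where(curr == 1, p, 1.0 - p)).sum()

rng = np.random.default_rng(0)
series = (rng.random(10_000) < 0.3).astype(int)  # synthetic occurrence series
params = fit_first_order(series)
print("P(wet|dry), P(wet|wet):", params)
print("log-likelihood:", log_likelihood(series, params))
```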

  20. Evaluation of Performance of Investment Funds Based on Decision Models (DEA

    Directory of Open Access Journals (Sweden)

    Alireza Samet

    2016-12-01

    Full Text Available Selection of a suitable investment fund is very important from the investors' point of view and may have a significant impact on the profit or loss of the funds. Therefore, evaluating the performance of investment funds in order to choose the most suitable fund deserves special emphasis. One of the newer techniques for evaluating fund performance based on efficiency is Data Envelopment Analysis. Accordingly, the present study aims to analyze and evaluate the performance of investment funds in the capital market of Iran using efficiency evaluation through the data envelopment analysis (DEA) technique. This research is a descriptive, applied study; to analyze efficiency and effectiveness, 53 investment funds in the capital market of Iran in 2013 were considered as the sample, and their efficiency was analyzed with DEA. The findings showed that in 2013, of the 53 examined funds, 11 were efficient and the other 42 were inefficient. The reference funds and virtual composite funds of all inefficient funds were also evaluated.
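
    As a sketch of the efficiency scoring involved, the snippet below solves the textbook input-oriented CCR envelopment problem with scipy. The fund inputs and outputs are invented, and nothing here is specific to the Iranian fund data of the study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s, _ = Y.shape
    # decision variables: [theta, lambda_1 ... lambda_n]; minimize theta
    c = np.concatenate(([1.0], np.zeros(n)))
    # inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack((-X[:, [j0]], X))
    b_in = np.zeros(m)
    # outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack((np.zeros((s, 1)), -Y))
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# toy data: 2 inputs (fees, risk) and 1 output (return) for 5 hypothetical funds
X = np.array([[2.0, 3.0, 2.5, 4.0, 3.5],
              [1.0, 1.5, 1.2, 2.0, 1.1]])
Y = np.array([[5.0, 6.0, 5.5, 6.5, 4.0]])
for j in range(X.shape[1]):
    print(f"fund {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```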

  1. Theory-Based Stakeholder Evaluation – applied. Competing Stakeholder Theories in the Quality Management of Primary Education

    DEFF Research Database (Denmark)

    Hansen, Morten Balle; Heilesen, J. B.

    In the broader context of evaluation design, this paper examines and compares pros and cons of a theory-based approach to evaluation (TBE) with the Theory-Based Stakeholder evaluation (TSE) model, introduced by Morten Balle Hansen and Evert Vedung (Hansen and Vedung 2010). While most approaches...... to TBE construct one unitary theory of the program (Coryn et al. 2011), the TSE-model emphasizes the importance of keeping theories of diverse stakeholders apart. This paper applies the TSE-model to an evaluation study conducted by the Danish Evaluation Institute (EVA) of the Danish system of quality......-model, as an alternative to traditional program theory evaluation....

  2. Evaluating Outdoor Water Use Demand under Changing Climatic and Demographic Conditions: An Agent-based Modeling Approach

    Science.gov (United States)

    Kanta, L.; Berglund, E. Z.; Soh, M. H.

    2017-12-01

    Outdoor water-use for landscape and irrigation constitutes a significant end-use in total residential water demand. In periods of water shortages, utilities may reduce garden demands by implementing irrigation system audits, rebate programs, local ordinances, and voluntary or mandatory water-use restrictions. Because utilities do not typically record outdoor and indoor water-uses separately, the effects of policies for reducing garden demands cannot be readily calculated. The volume of water required to meet garden demands depends on the housing density, lawn size, type of vegetation, climatic conditions, efficiency of garden irrigation systems, and consumer water-use behaviors. Many existing outdoor demand estimation methods are deterministic and do not include consumer responses to conservation campaigns. In addition, mandatory restrictions may have a substantial impact on reducing outdoor demands, but the effectiveness of mandatory restrictions depends on the timing and the frequency of restrictions, in addition to the distribution of housing density and consumer types within a community. This research investigates a garden end-use model by coupling an agent-based modeling approach and a mechanistic-stochastic water demand model to create a methodology for estimating garden demand and evaluating demand reduction policies. The garden demand model is developed for two water utilities, using diverse data sets, including residential customer billing records, outdoor conservation programs, frequency and type of mandatory water-use restrictions, lot size distribution, population growth, and climatic data. A set of garden irrigation parameter values, which are based on the efficiency of irrigation systems and irrigation habits of consumers, are determined for a set of conservation ordinances and restrictions. The model parameters are then validated using customer water usage data from the participating water utilities. A sensitivity analysis is conducted for garden

  3. Prioritizing alarms from sensor-based detection models in livestock production

    DEFF Research Database (Denmark)

    Dominiak, Katarina Nielsen; Kristensen, Anders Ringgaard

    2017-01-01

    The objective of this review is to present, evaluate and discuss methods for reducing false alarms in sensor-based detection models developed for livestock production as described in the scientific literature. Papers included in this review are all peer-reviewed and present sensor-based detection...... models developed for modern livestock production with the purpose of optimizing animal health or managerial routines. The papers must present a performance for the model, but no criteria were specified for animal species or the condition sought to be detected. 34 papers published during the last 20 years...... (NBN) and Hidden phase-type Markov model, the NBN shows the greatest potential for future reduction of alerts from sensor-based detection models in livestock production. The included detection models are evaluated on three criteria; performance, time-window and similarity to determine whether...

  4. An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions

    Science.gov (United States)

    Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.

    2017-12-01

    Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. Thus, they can provide more

  5. A Memory-Based Model of Hick's Law

    Science.gov (United States)

    Schneider, Darryl W.; Anderson, John R.

    2011-01-01

    We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
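
    Hick's law itself can be written down and fitted in a few lines. The sketch below uses invented reaction times and is unrelated to the memory-based model proposed in the paper; it only illustrates the lawful relation RT = a + b*log2(n + 1).

```python
import numpy as np

# Set sizes (number of stimulus-response alternatives) and mean RTs in ms.
# The RT values below are invented purely for illustration.
n = np.array([1, 2, 4, 6, 8])
rt = np.array([310., 395., 480., 520., 555.])

x = np.log2(n + 1)                    # Hick's law regressor
b, a = np.polyfit(x, rt, 1)           # rt ~ a + b * log2(n + 1)
print(f"intercept a = {a:.1f} ms, slope b = {b:.1f} ms/bit")
print("predicted RT for n = 10:", a + b * np.log2(11))
```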

  6. Refined Diebold-Mariano Test Methods for the Evaluation of Wind Power Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hao Chen

    2014-07-01

    Full Text Available Scientific evaluation of the forecast accuracy of wind power forecasting models is an important issue in the domain of wind power forecasting. However, traditional forecast evaluation criteria, such as Mean Squared Error (MSE) and Mean Absolute Error (MAE), have limitations in application to some degree. In this paper, a modern evaluation criterion, the Diebold-Mariano (DM) test, is introduced. The DM test can discriminate significant differences in forecasting accuracy between different models based on a quantitative analysis scheme. Furthermore, the augmented DM test with a rolling-windows approach is proposed to give a stricter forecasting evaluation. By extending the loss function to an asymmetric structure, the asymmetric DM test is proposed. A case study indicates that evaluation criteria based on the DM test can relieve the influence of random sample disturbance. Moreover, the proposed augmented DM test can provide more evidence when the cost of changing models is high, and the proposed asymmetric DM test can incorporate the asymmetric factor and provide a practical evaluation of wind power forecasting models. It is concluded that the two refined DM tests can provide a reference for the comprehensive evaluation of wind power forecasting models.
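
    A minimal implementation of the basic Diebold-Mariano statistic under squared-error loss is sketched below. The rolling-window and asymmetric refinements proposed in the paper are not reproduced, and the forecast series are synthetic placeholders.

```python
import numpy as np
from scipy.stats import norm

def dm_test(actual, f1, f2, h=1, loss=lambda e: e**2):
    """Diebold-Mariano statistic and two-sided p-value.
    h is the forecast horizon used to truncate the autocovariance sum."""
    actual, f1, f2 = map(np.asarray, (actual, f1, f2))
    d = loss(actual - f1) - loss(actual - f2)   # loss differential series
    T = d.size
    d_bar = d.mean()
    dc = d - d_bar
    # long-run variance of d_bar: gamma_0 + 2 * sum_{k=1}^{h-1} gamma_k
    gammas = [dc @ dc / T] + [dc[k:] @ dc[:-k] / T for k in range(1, h)]
    var_dbar = (gammas[0] + 2.0 * sum(gammas[1:])) / T
    stat = d_bar / np.sqrt(var_dbar)
    return stat, 2.0 * (1.0 - norm.cdf(abs(stat)))

rng = np.random.default_rng(1)
y = rng.normal(size=500)                  # placeholder "observed" wind power
fA = y + rng.normal(scale=0.8, size=500)  # placeholder model A forecasts
fB = y + rng.normal(scale=1.0, size=500)  # placeholder model B forecasts
print(dm_test(y, fA, fB))
```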

  7. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
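
    For orientation only, the classical thin-wall yield-criterion bounds mentioned above reduce to simple closed forms; the sketch below gives the Tresca (Barlow-type) and von Mises estimates from an assumed flow stress, wall thickness and diameter. It is a generic textbook illustration, not the Zhu-Leis solution or any of the strain-hardening corrections evaluated in the paper, and the pipe values are invented.

```python
import math

def burst_pressure_thin_wall(flow_stress_mpa, thickness_mm, outer_diam_mm):
    """Thin-wall burst-pressure estimates (MPa) from two classical criteria.

    Tresca / Barlow:   P = 2 * sigma * t / D
    von Mises:         P = (2 / sqrt(3)) * 2 * sigma * t / D
    """
    p_tresca = 2.0 * flow_stress_mpa * thickness_mm / outer_diam_mm
    p_mises = (2.0 / math.sqrt(3.0)) * p_tresca
    return p_tresca, p_mises

# invented example: X70-like flow stress of 570 MPa, 10 mm wall, 610 mm OD
p_t, p_m = burst_pressure_thin_wall(570.0, 10.0, 610.0)
print(f"Tresca estimate:    {p_t:.1f} MPa")
print(f"von Mises estimate: {p_m:.1f} MPa")
```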

  8. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  9. A Usability Evaluation Model for Academic Library Websites: Efficiency, Effectiveness and Learnability

    Directory of Open Access Journals (Sweden)

    Soohyung Joo

    2011-12-01

    Full Text Available Purpose – This paper aimed to develop a usability evaluation model and an associated survey tool in the context of academic libraries. This study proposed not only a usability evaluation model but also a practical survey tool tailored to academic library websites. Design/methodology – A usability evaluation model was developed for academic library websites based on a literature review and expert consultation. The authors then verified the reliability and validity of the usability evaluation model empirically using survey data from actual users. Statistical analyses, such as descriptive statistics, an internal consistency test, and a factor analysis, were applied to ensure both the reliability and validity of the usability evaluation tool. Findings – From the document analysis and expert consultation, this study identified eighteen measurement items to survey the three constructs of usability (effectiveness, efficiency, and learnability) in academic library websites. The evaluation tool was then validated with regard to data distribution, reliability, and validity. The empirical examination based on 147 actual user responses showed that the survey evaluation tool suggested herein is acceptable for assessing academic library website usability. Originality/Value – This research is one of the few studies to produce a practical survey tool for evaluating library website usability. The usability model and corresponding survey tool would be useful for librarians and library administrators in academic libraries who plan to conduct a usability evaluation involving a large sample.
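
    The internal consistency check mentioned above is commonly done with Cronbach's alpha; a generic computation is sketched below using simulated Likert-style responses, not the study's actual 147-user data set.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(147, 1))                          # simulated usability trait
responses = latent + rng.normal(scale=0.8, size=(147, 6))   # 6 invented survey items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```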

  10. A new harvest operation cost model to evaluate forest harvest layout alternatives

    Science.gov (United States)

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald; Chao Chi Ting

    1997-01-01

    The authors develop a new model for harvest operation costs that can be used to evaluate stands for potential harvest. The model is based on felling, extraction, and access costs, and is unique in its consideration of the interaction between harvest area shapes and access roads. The scientists illustrate the model and evaluate the impact of stand size, volume, and road...

  11. Model-Based Evaluation of Urban River Restoration: Conflicts between Sensitive Fish Species and Recreational Users

    Directory of Open Access Journals (Sweden)

    Aude Zingraff-Hamed

    2018-05-01

    Full Text Available Urban rivers are socioecological systems, and restored habitats may be attractive to both sensitive species and recreationists. Understanding the potential conflicts between ecological and recreational values is a critical issue for the development of a sustainable river-management plan. Habitat models are very promising tools for the ecological evaluation of river restoration projects that are already concluded, ongoing, or even to be planned. With our paper, we make a first attempt at integrating recreational user pressure into habitat modeling. The objective of this study was to analyze whether human impact is likely to hinder the re-establishment of a target species despite the successful restoration of physical habitat structures, in the case of the restoration of the Isar River in Munich (Germany) and the target fish species Chondrostoma nasus L. Our analysis combined high-resolution 2D hydrodynamic modeling with mapping of recreational pressure and used an expert-based procedure for modeling habitat suitability. The results are twofold: (1) the restored river contains suitable physical habitats for population conservation but has low suitability for recruitment; (2) densely used areas match highly suitable habitats for C. nasus. In the future, the integrated modeling procedure presented here may allow ecological refuge for sensitive target species to be included in the design of restoration and may help in the development of visitor-management plans to safeguard biodiversity and recreational ecosystem services.

  12. Local difference measures between complex networks for dynamical system model evaluation.

    Science.gov (United States)

    Lange, Stefan; Donges, Jonathan F; Volkholz, Jan; Kurths, Jürgen

    2015-01-01

    A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [8] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, as well as graphs based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. Generalizations to directed as well as edge- and node

  13. Hybrid network defense model based on fuzzy evaluation.

    Science.gov (United States)

    Cho, Ying-Chiang; Pan, Jen-Yi

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture.

  14. Quality Risk Evaluation of the Food Supply Chain Using a Fuzzy Comprehensive Evaluation Model and Failure Mode, Effects, and Criticality Analysis

    Directory of Open Access Journals (Sweden)

    Libiao Bai

    2018-01-01

    Full Text Available Evaluating the quality risk level in the food supply chain can reduce quality information asymmetry and food quality incidents and promote nationally integrated regulations for food quality. In order to evaluate it, a quality risk evaluation indicator system for the food supply chain is constructed based on an extensive literature review in this paper. Furthermore, a mathematical model based on the fuzzy comprehensive evaluation model (FCEM and failure mode, effects, and criticality analysis (FMECA for evaluating the quality risk level in the food supply chain is developed. A computational experiment aimed at verifying the effectiveness and feasibility of this proposed model is conducted on the basis of a questionnaire survey. The results suggest that this model can be used as a general guideline to assess the quality risk level in the food supply chain and achieve the most important objective of providing a reference for the public and private sectors when making decisions on food quality management.
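
    To make the two building blocks concrete, the sketch below combines a textbook fuzzy comprehensive evaluation (a weight vector composed with a membership matrix) with FMECA-style risk priority numbers. The indicators, weights and ratings are invented and do not come from the paper's questionnaire survey.

```python
import numpy as np

# --- Fuzzy comprehensive evaluation (weighted-average composition B = W . R) ---
weights = np.array([0.4, 0.35, 0.25])        # invented weights: supplier, logistics, retail
# membership of each indicator in the risk grades (low, medium, high); rows sum to 1
R = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
B = weights @ R
grades = ["low", "medium", "high"]
print("fuzzy evaluation vector:", dict(zip(grades, B.round(3))))
print("overall quality risk grade:", grades[int(B.argmax())])

# --- FMECA-style risk priority numbers for individual failure modes ---
failure_modes = {
    # invented (severity, occurrence, detection) ratings on 1-10 scales
    "cold-chain break": (8, 4, 5),
    "mislabelled origin": (6, 3, 6),
    "pesticide residue": (9, 2, 7),
}
rpn = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
for mode, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{mode:>20s}: RPN = {score}")
```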

  15. An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application

    Science.gov (United States)

    A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...

  16. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two...

  17. BALANCED SCORE CARD MODEL EVALUATION: THE CASE OF AD BARSKA PLOVIDBA

    Directory of Open Access Journals (Sweden)

    Jelena Jovanović

    2009-06-01

    Full Text Available The paper analyses the creation of a Balanced Scorecard that includes environmental protection elements in AD Barska Plovidba. Firstly, the paper presents proposed models that include elements of the conventional Balanced Scorecard, and then the proposed models are evaluated. Since implementation and evaluation of the model in AD Barska Plovidba takes a longer period of time, its evaluation and final choice are based on ISO 14598 and ISO 9126 with use of the AHP method. These standards are usually used for quality evaluation of software products, computer programs and databases inside an organisation. They also serve as support for software development and acceptance because they provide quality evaluation during the phase when the software is not yet implemented inside the organisation, which we consider very important.
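
    Because the model choice above relies on the AHP, the sketch below shows the standard eigenvector prioritisation and consistency check for a pairwise-comparison matrix. The matrix entries are invented, and the ISO 9126 criteria named in the comment are placeholders only.

```python
import numpy as np

# invented pairwise comparisons of three evaluation criteria
# (e.g. functionality, usability, maintainability from ISO 9126)
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("priorities:", w.round(3), " CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable
```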

  18. Effects of streamflow diversion on a fish population: combining empirical data and individual-based models in a site-specific evaluation

    Science.gov (United States)

    Bret C. Harvey; Jason L. White; Rodney J. Nakamoto; Steven F. Railsback

    2014-01-01

    Resource managers commonly face the need to evaluate the ecological consequences of specific water diversions of small streams. We addressed this need by conducting 4 years of biophysical monitoring of stream reaches above and below a diversion and applying two individual-based models of salmonid fish that simulated different levels of behavioral complexity. The...

  19. Smartphone-based evaluations of clinical placements—a useful complement to web-based evaluation tools

    Directory of Open Access Journals (Sweden)

    Jesper Hessius

    2015-11-01

    Full Text Available Purpose: Web-based questionnaires are currently the standard method for course evaluations. The high rate of smartphone adoption in Sweden makes possible a range of new uses, including course evaluation. This study examines the potential advantages and disadvantages of using a smartphone app as a complement to web-based course evaluation systems. Methods: An iPhone app for course evaluations was developed and interfaced to an existing web-based tool. Evaluations submitted using the app were compared with those submitted using the web between August 2012 and June 2013, at the Faculty of Medicine at Uppsala University, Sweden. Results: At the time of the study, 49% of the students were judged to own iPhones. Over the course of the study, 3,340 evaluations were submitted, of which 22.8% were submitted using the app. The median of mean scores in the submitted evaluations was 4.50 for the app (with an interquartile range of 3.70-5.20) and 4.60 (3.70-5.20) for the web (P=0.24). The proportion of evaluations that included a free-text comment was 50.5% for the app and 49.9% for the web (P=0.80). Conclusion: An app introduced as a complement to a web-based course evaluation system met with rapid adoption. We found no difference in the frequency of free-text comments or in the evaluation scores. Apps appear to be promising tools for course evaluations.

  20. Smartphone-based evaluations of clinical placements-a useful complement to web-based evaluation tools.

    Science.gov (United States)

    Hessius, Jesper; Johansson, Jakob

    2015-01-01

    Web-based questionnaires are currently the standard method for course evaluations. The high rate of smartphone adoption in Sweden makes possible a range of new uses, including course evaluation. This study examines the potential advantages and disadvantages of using a smartphone app as a complement to web-based course evaluation systems. An iPhone app for course evaluations was developed and interfaced to an existing web-based tool. Evaluations submitted using the app were compared with those submitted using the web between August 2012 and June 2013, at the Faculty of Medicine at Uppsala University, Sweden. At the time of the study, 49% of the students were judged to own iPhones. Over the course of the study, 3,340 evaluations were submitted, of which 22.8% were submitted using the app. The median of mean scores in the submitted evaluations was 4.50 for the app (with an interquartile range of 3.70-5.20) and 4.60 (3.70-5.20) for the web (P=0.24). The proportion of evaluations that included a free-text comment was 50.5% for the app and 49.9% for the web (P=0.80). An app introduced as a complement to a web-based course evaluation system met with rapid adoption. We found no difference in the frequency of free-text comments or in the evaluation scores. Apps appear to be promising tools for course evaluations.

  1. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    Science.gov (United States)

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  2. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
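
    The contrast between the two corrections can be made concrete in a few lines; the sketch below places the ratio-based correction next to one simple model-based variant in which log creatinine enters a regression as a covariate. The data and covariates are simulated, and this regression is only one of several possible model-based formulations, not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
creatinine = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # invented UCR values
sex = rng.integers(0, 2, size=n)                           # 0 = female, 1 = male
analyte = np.exp(0.4 * sex + 0.8 * np.log(creatinine)
                 + rng.normal(scale=0.3, size=n))          # invented urinary analyte

# Ratio-based correction: analyte divided by creatinine
ratio_corrected = analyte / creatinine

# Model-based correction: regress log(analyte) on log(creatinine) and covariates,
# then remove the fitted creatinine contribution (centered) from log(analyte)
X = np.column_stack([np.ones(n), np.log(creatinine), sex])
beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)
adjusted = np.log(analyte) - beta[1] * (np.log(creatinine) - np.log(creatinine).mean())

print("GM ratio M/F (ratio-based):",
      np.exp(np.log(ratio_corrected[sex == 1]).mean()
             - np.log(ratio_corrected[sex == 0]).mean()).round(2))
print("GM ratio M/F (model-based):",
      np.exp(adjusted[sex == 1].mean() - adjusted[sex == 0].mean()).round(2))
```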

  3. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...... from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project) where the protocol has been used to evaluate more than 10 prediction systems....
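
    As a flavour of what such a protocol involves, the sketch below computes a persistence reference forecast and two normalised error scores on a synthetic power series. It is a generic illustration, not the Anemos protocol itself, and the capacity and horizon values are invented.

```python
import numpy as np

def persistence_forecast(power, horizon):
    """Reference model: the forecast equals the last observed value."""
    return power[:-horizon]

def nmae(obs, pred, capacity):
    return np.mean(np.abs(obs - pred)) / capacity

def nrmse(obs, pred, capacity):
    return np.sqrt(np.mean((obs - pred) ** 2)) / capacity

rng = np.random.default_rng(11)
capacity = 20.0                                   # MW, invented wind farm capacity
power = np.clip(np.cumsum(rng.normal(0, 0.5, 2000)) % capacity, 0, capacity)

h = 6  # forecast horizon in time steps
obs = power[h:]
ref = persistence_forecast(power, h)
print(f"persistence NMAE  = {nmae(obs, ref, capacity):.3f}")
print(f"persistence NRMSE = {nrmse(obs, ref, capacity):.3f}")
```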

  4. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and k-nearest neighbor prediction model performance on the same data set.
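
    The analogy-based idea can be illustrated with a few lines of k-nearest-neighbour estimation over normalised project features. The features, historical projects and choice of k below are invented for illustration; this is not the actual NASA Software Cost Model.

```python
import numpy as np

# invented historical projects: (size in KSLOC, team experience 1-5, reuse fraction)
features = np.array([[120, 4, 0.30],
                     [300, 3, 0.10],
                     [ 50, 5, 0.60],
                     [800, 2, 0.05],
                     [200, 4, 0.25]], dtype=float)
effort_pm = np.array([900., 2600., 280., 9000., 1500.])   # person-months, invented

def knn_estimate(new_project, k=2):
    """Analogy-based effort estimate: mean effort of the k most similar projects."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    scale = lambda f: (f - lo) / (hi - lo)                 # min-max normalisation
    dists = np.linalg.norm(scale(features) - scale(np.asarray(new_project, float)),
                           axis=1)
    nearest = np.argsort(dists)[:k]
    return effort_pm[nearest].mean(), nearest

estimate, analogues = knn_estimate([250, 3, 0.2])
print(f"estimated effort: {estimate:.0f} person-months (analogues: {analogues})")
```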

  5. Development and Evaluation of Computer-Based Laboratory Practical Learning Tool

    Science.gov (United States)

    Gandole, Y. B.

    2006-01-01

    Effective evaluation of educational software is a key issue for successful introduction of advanced tools in the curriculum. This paper details the development and evaluation of a tool for computer-assisted learning in science laboratory courses. The process was based on the generic instructional system design model. Various categories of educational…

  6. Evaluating the performance of unhealthy junk food consumption based on health belief model in elementary school girls

    Directory of Open Access Journals (Sweden)

    Azam Fathi

    2017-06-01

    Full Text Available Abstract Background and objective: Nowadays, due to changes in eating patterns, worthless junk foods have replaced useful foods among children. This study aimed to evaluate unhealthy junk food consumption based on the health belief model in elementary school girls. Methods: A cross-sectional, descriptive-analytic study with multi-stage sampling (208 samples) was carried out in 2016. The survey instrument was a valid and reliable questionnaire based on the Health Belief Model (70 items). Data were analyzed with SPSS software using statistical tests at a significance level of 0.05. Results: The students showed perceived sensitivity of 49%, relatively high self-efficacy (53.8%), and good perceived benefits (73.1%) and social protection (68.3%). All health belief model constructs were significantly correlated with the outcome (junk food intake). Significant differences were also found by parental education in sensitivity, perceived severity, self-efficacy, social support and the outcome (p<0.05). Conclusion: The results of this study showed that the students had relatively favourable sensitivity, self-efficacy, perceived benefits and social protection with regard to unhealthy snacks. The relationship between the model constructs and unhealthy snack use was also examined; given the importance of unhealthy snack consumption among school children and the value of the health belief model in predicting nutritional behaviors, we suggest that this model be used as a framework for school feeding programs. Paper Type: Research Article.

  7. Interactive Coherence-Based Façade Modeling

    KAUST Repository

    Musialski, Przemyslaw

    2012-05-01

    We propose a novel interactive framework for modeling building facades from images. Our method is based on the notion of coherence-based editing which allows exploiting partial symmetries across the facade at any level of detail. The proposed workflow mixes manual interaction with automatic splitting and grouping operations based on unsupervised cluster analysis. In contrast to previous work, our approach leads to detailed 3d geometric models with up to several thousand regions per facade. We compare our modeling scheme to others and evaluate our approach in a user study with an experienced user and several novice users.

  8. A formative evaluation of a coach-based technical assistance model for youth- and family-focused programming.

    Science.gov (United States)

    Olson, Jonathan R; McCarthy, Kimberly J; Perkins, Daniel F; Borden, Lynne M

    2018-04-01

    The Children, Youth, and Families At-Risk (CYFAR) initiative provides funding and technical support for local community-based programs designed to promote positive outcomes among vulnerable populations. In 2013, CYFAR implemented significant changes in the way it provides technical assistance (TA) to grantees. These changes included introducing a new TA model in which trained coaches provide proactive support that is tailored to individual CYFAR projects. The purpose of this paper is to describe the evolution of this TA model and present preliminary findings from a formative evaluation. CYFAR Principal Investigators (PIs) were invited to respond to online surveys in 2015 and 2016. The surveys were designed to assess PI attitudes towards the nature and quality of support that they receive from their coaches. CYFAR PIs reported that their coaches have incorporated a range of coaching skills and techniques into their work. PIs have generally positive attitudes towards their coaches, and these attitudes have become more positive over time. Results suggest that CYFAR PIs have been generally supportive of the new TA system. Factors that may have facilitated support include a strong emphasis on team-building and the provision of specific resources that support program design, implementation, and evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Multiple attribute decision making model and application to food safety risk evaluation.

    Science.gov (United States)

    Ma, Lihua; Chen, Hong; Yan, Huizhe; Yang, Lifeng; Wu, Lifeng

    2017-01-01

    Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors established an intuitive interval-valued fuzzy set evaluation model based on the characteristics of the network relationship among decision makers, and validated it with a multiple attribute decision making case study. Thus, the proposed model provides a reliable, accurate method for multiple attribute decision making.

  10. Understanding and Measuring Evaluation Capacity: A Model and Instrument Validation Study

    Science.gov (United States)

    Taylor-Ritzler, Tina; Suarez-Balcazar, Yolanda; Garcia-Iriarte, Edurne; Henry, David B.; Balcazar, Fabricio E.

    2013-01-01

    This study describes the development and validation of the Evaluation Capacity Assessment Instrument (ECAI), a measure designed to assess evaluation capacity among staff of nonprofit organizations that is based on a synthesis model of evaluation capacity. One hundred and sixty-nine staff of nonprofit organizations completed the ECAI. The 68-item…

  11. Physiologically based pharmacokinetic toolkit to evaluate environmental exposures: Applications of the dioxin model to study real life exposures

    Energy Technology Data Exchange (ETDEWEB)

    Emond, Claude, E-mail: claude.emond@biosmc.com [BioSimulation Consulting Inc, Newark, DE (United States); Ruiz, Patricia; Mumtaz, Moiz [Division of Toxicology and Human Health Sciences, Agency for Toxic Substances and Disease Registry, Atlanta, GA (United States)

    2017-01-15

    Chlorinated dibenzo-p-dioxins (CDDs) are a series of mono- to octa-chlorinated homologous chemicals commonly referred to as polychlorinated dioxins. One of the most potent, well-known, and persistent member of this family is 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). As part of translational research to make computerized models accessible to health risk assessors, we present a Berkeley Madonna recoded version of the human physiologically based pharmacokinetic (PBPK) model used by the U.S. Environmental Protection Agency (EPA) in the recent dioxin assessment. This model incorporates CYP1A2 induction, which is an important metabolic vector that drives dioxin distribution in the human body, and it uses a variable elimination half-life that is body burden dependent. To evaluate the model accuracy, the recoded model predictions were compared with those of the original published model. The simulations performed with the recoded model matched well with those of the original model. The recoded model was then applied to available data sets of real life exposure studies. The recoded model can describe acute and chronic exposures and can be useful for interpreting human biomonitoring data as part of an overall dioxin and/or dioxin-like compounds risk assessment. - Highlights: • The best available dioxin PBPK model for interpreting human biomonitoring data is presented. • The original PBPK model was recoded from acslX to the Berkeley Madonna (BM) platform. • Comparisons were made of the accuracy of the recoded model with the original model. • The model is a useful addition to the ATSDR's BM based PBPK toolkit that supports risk assessors. • The application of the model to real-life exposure data sets is illustrated.

  12. WRF-based fire risk modelling and evaluation for years 2010 and 2012 in Poland

    Science.gov (United States)

    Stec, Magdalena; Szymanowski, Mariusz; Kryza, Maciej

    2016-04-01

    The year 2010 was characterized by the smallest number of wildfires and burnt area, whereas 2012 had the largest number of fires and the largest burnt area. The data on the time, location, scale and causes of individual wildfire occurrences in the given years are taken from the National Forest Fire Information System (KSIPL), administered by the Forest Fire Protection Department of the Polish Forest Research Institute. The database is part of the European Forest Fire Information System (EFFIS). Based on these data and on the WRF-based fire risk modelling, we intend to achieve the second goal of the study, which is the evaluation of the forecasted fire risk against the occurrence of wildfires. Special attention is paid to the number, timing and spatial distribution of wildfires that occurred in cases of low predicted fire risk. The results obtained reveal the effectiveness of the new forecasting method. The outcome of our investigation allows us to conclude that some adjustments are possible to improve the efficiency of the fire-risk estimation method.

  13. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  14. Evaluation of CH4 and N2O Budget of Natural Ecosystems and Croplands in Asia with a Process-based Model

    Science.gov (United States)

    Ito, A.

    2017-12-01

    Terrestrial ecosystems are an important sink of carbon dioxide (CO2) but significant sources of other greenhouse gases such as methane (CH4) and nitrous oxide (N2O). To resolve the role of the terrestrial biosphere in the climate system, we need to quantify the total greenhouse gas budget with adequate accuracy. In addition to top-down evaluation on the basis of atmospheric measurements, a model-based approach is required for integration and up-scaling of field data and for prediction under a changing environment and different management practices. Since the early 2000s, we have developed a process-based model of terrestrial biogeochemical cycles focusing on atmosphere-ecosystem exchange of trace gases: the Vegetation Integrated SImulator for Trace gases (VISIT). The model includes simple and comprehensive schemes of carbon and nitrogen cycles in terrestrial ecosystems, allowing us to capture the dynamic nature of the greenhouse gas budget. Beginning with natural ecosystems such as temperate and tropical forests, the model is now applicable to croplands by including agricultural practices such as planting, harvest, and fertilizer input. Global simulation results have been published in several papers, but model validation and benchmarking against up-to-date observations remain as future work. The model is now applied to several practical issues, such as evaluation of N2O emissions from bio-fuel croplands, which are expected to contribute to the mitigation target of the Paris Agreement. We also present several topics in basic model development, such as revised CH4 emissions affected by a dynamic water table and refined N2O emissions from nitrification.

  15. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States); Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Collis, Jon [Colorado School of Mines, Golden, CO (United States)

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
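
    Of the four calibration methods listed, the simplest to show is the output-ratio approach; the sketch below scales simulated monthly energy so that the annual total matches synthetic utility bills. It is a schematic of that one method under invented data, not the BEopt/DOE-2.2 workflow used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
billed = np.array([900, 780, 650, 500, 420, 610,
                   820, 860, 640, 480, 560, 830], dtype=float)   # synthetic kWh/month
simulated = billed * 1.18 + rng.normal(scale=25, size=12)        # uncalibrated model output

ratio = billed.sum() / simulated.sum()          # single annual output ratio
calibrated = simulated * ratio                  # calibrated monthly predictions

print(f"output ratio applied: {ratio:.3f}")
print(f"annual error before: {abs(simulated.sum() - billed.sum()) / billed.sum():.1%}")
print(f"annual error after:  {abs(calibrated.sum() - billed.sum()) / billed.sum():.1%}")
```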

  17. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture abstract performance model of streaming data real-time embedded systems (RTESs. Unified Modeling Language version 2 (UML2 is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN model and it is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Realtime and Embedded (MARTE systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates on task response times, processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines transformation between UML activity diagrams and streaming data application workload meta models and successfully adopts it for RTES performance evaluation.

  18. An Evaluation of a Language for Paper-based Form Sketching

    DEFF Research Database (Denmark)

    Farruglia, Phillip; Borg, Johnathan; Camilleri, K.P.

    2006-01-01

    form idea. The research disclosed in this paper is aimed at combining the benefits of paper-based freehand sketches with those of three-dimensional (3D) models. More specifically, this paper reports the on-going development of a prescribed sketching language (PSL) required to create a seamless link...... between paper-based form sketching and CAGM systems. Evaluation results revealed that whilst PSL is easy to learn, it still requires improvements. Such a sketching language contributes a step towards simulating early form design solutions by the combined use of paper-based sketches and 3D models....

  19. The Spiral-Interactive Program Evaluation Model.

    Science.gov (United States)

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  20. Evaluation of potential crushed-salt constitutive models

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.; Hansen, F.D.

    1995-12-01

    Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produces three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its kind evaluation of constitutive models for reconsolidation of crushed salt. Questions remain to be answered. Deficiencies in models and databases are identified and recommendations for future work are made. 85 refs
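
    The nonlinear least-squares fitting step used to determine material parameters can be sketched generically. The densification law below is a made-up exponential form chosen only to show the fitting mechanics; it is not the Sjaardema-Krieg, Zeuch, or Spiers model, and the synthetic data stand in for a consolidation test.

        # Generic sketch of fitting material parameters by nonlinear least squares.
        # The "model" is a hypothetical exponential densification law used only to
        # illustrate the procedure; it is not one of the candidate crushed-salt models.
        import numpy as np
        from scipy.optimize import curve_fit

        def density_model(t, rho_inf, drho, k):
            """Fractional density vs. time under constant stress (illustrative form)."""
            return rho_inf - drho * np.exp(-k * t)

        # Synthetic "test data" standing in for a hydrostatic consolidation test.
        t_obs = np.linspace(0.0, 200.0, 30)                     # hours
        rho_true = density_model(t_obs, 0.95, 0.25, 0.03)
        rho_obs = rho_true + np.random.default_rng(0).normal(0.0, 0.005, t_obs.size)

        # Fit the three parameters to the data.
        p0 = (0.9, 0.2, 0.01)                                    # initial guess
        params, cov = curve_fit(density_model, t_obs, rho_obs, p0=p0)
        rho_inf, drho, k = params
        print(f"fitted parameters: rho_inf={rho_inf:.3f}, drho={drho:.3f}, k={k:.4f}")

        # Fitting statistic: root-mean-square residual, one measure of model quality.
        resid = rho_obs - density_model(t_obs, *params)
        print(f"RMS residual: {np.sqrt(np.mean(resid**2)):.4f}")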

  1. A Reusable Framework for Regional Climate Model Evaluation

    Science.gov (United States)

    Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.

    2011-12-01

    Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT), to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. This combination of tools and interfaces dramatically simplifies the process of interacting with and

  2. Use of Occupancy Models to Evaluate Expert Knowledge-based Species-Habitat Relationships

    Directory of Open Access Journals (Sweden)

    Monica N. Iglecia

    2012-12-01

    Full Text Available Expert knowledge-based species-habitat relationships are used extensively to guide conservation planning, particularly when data are scarce. Purported relationships describe the initial state of knowledge, but are rarely tested. We assessed support in the data for suitability rankings of vegetation types based on expert knowledge for three terrestrial avian species in the South Atlantic Coastal Plain of the United States. Experts used published studies, natural history, survey data, and field experience to rank vegetation types as optimal, suitable, and marginal. We used single-season occupancy models, coupled with land cover and Breeding Bird Survey data, to examine the hypothesis that patterns of occupancy conformed to species-habitat suitability rankings purported by experts. Purported habitat suitability was validated for two of three species. As predicted for the Eastern Wood-Pewee (Contopus virens and Brown-headed Nuthatch (Sitta pusilla, occupancy was strongly influenced by vegetation types classified as "optimal habitat" by the species suitability rankings for nuthatches and wood-pewees. Contrary to predictions, Red-headed Woodpecker (Melanerpes erythrocephalus models that included vegetation types as covariates received similar support by the data as models without vegetation types. For all three species, occupancy was also related to sampling latitude. Our results suggest that covariates representing other habitat requirements might be necessary to model occurrence of generalist species like the woodpecker. The modeling approach described herein provides a means to test expert knowledge-based species-habitat relationships, and hence, help guide conservation planning.

  3. A simplified model of dynamic interior cooling load evaluation for office buildings

    International Nuclear Information System (INIS)

    Ding, Yan; Zhang, Qiang; Wang, Zhaoxia; Liu, Min; He, Qing

    2016-01-01

    Highlights: • The core interior disturbance was determined by principal component analysis. • Influences of occupants on cooling load should be described using time series. • A simplified model was built to evaluate dynamic interior building cooling load. - Abstract: The predicted cooling load is a valuable tool for assessing the operation of air-conditioning systems. Compared with the exterior cooling load, the interior cooling load is more unpredictable. According to principal component analysis, occupancy was shown to be a typical factor influencing interior cooling loads in buildings. By exploring the regularity of interior disturbances in an office building, a simplified evaluation model for interior cooling load was established in this paper. The stochastic occupancy rate was represented by a Markov transition model. Equipment power, lighting power, and fresh air were all related to the occupancy rate as time series. The superposition of different types of interior cooling loads was also considered in the evaluation model. The error between the evaluation results and measurement results was found to be lower than 10%. Compared with the cooling loads calculated by the traditional design method and the area-based method in case-study office rooms, the evaluated cooling loads were suitable for operation regulation.
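
    A two-state Markov occupancy model of the kind referred to above can be sketched in a few lines. The transition probabilities and heat-gain figures below are hypothetical illustrations, not values identified in the study.

        # Sketch of a stochastic occupancy rate driven by a two-state Markov chain,
        # with equipment/lighting gains tied to occupancy (all numbers are hypothetical).
        import random

        P = {                        # hourly transition probabilities
            "absent":  {"absent": 0.8, "present": 0.2},
            "present": {"absent": 0.3, "present": 0.7},
        }
        EQUIPMENT_W = 150.0          # per occupied workstation
        LIGHTING_W  = 80.0
        PERSON_W    = 120.0          # sensible heat gain per person

        def simulate_day(n_occupants=10, hours=24, seed=1):
            rng = random.Random(seed)
            states = ["absent"] * n_occupants
            interior_gain_w = []
            for _ in range(hours):
                # Advance each occupant through the Markov chain.
                states = [
                    "present" if rng.random() < P[s]["present"] else "absent"
                    for s in states
                ]
                occupied = sum(s == "present" for s in states)
                interior_gain_w.append(occupied * (PERSON_W + EQUIPMENT_W + LIGHTING_W))
            return interior_gain_w

        gains = simulate_day()
        print("hourly interior gains [W]:", [int(g) for g in gains])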

  4. Comprehensive environment-suitability evaluation model about Carya cathayensis

    International Nuclear Information System (INIS)

    Da-Sheng, W.; Li-Juan, L.; Qin-Fen, Y.

    2013-01-01

    Existing studies on the relation between the suitable environment and the distribution areas of Carya cathayensis Sarg. are mainly committed to qualitative descriptions and do not consider quantitative models. The objective of this study was to establish an environment-suitability evaluation model for predicting potential suitable areas of C. cathayensis. Firstly, data for the three factors soil type, soil parent material, and soil thickness were obtained from a 2-class forest resource survey, and the other factor data, which included elevation, slope, aspect, surface curvature, humidity index, and solar radiation index, were extracted from a DEM (Digital Elevation Model). Additionally, the key affecting factors were identified by PCA (Principal Component Analysis), the weights of the evaluation factors were determined by AHP (Analytic Hierarchy Process), and the quantitative classification of each single factor was determined by membership functions from fuzzy mathematics. Finally, a comprehensive environment-suitability evaluation model was established and used to predict the potential suitable areas of C. cathayensis in Daoshi Town in the study region. The results showed that 85.6% of the actual distribution areas were in the most suitable and more suitable regions and 11.5% in the generally suitable regions.
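
    The combination of fuzzy single-factor scores with AHP-derived weights into one suitability index can be sketched generically. The membership functions, factor names, weights, and thresholds below are hypothetical assumptions for illustration, not the values derived in the study.

        # Generic sketch of a comprehensive suitability index: fuzzy membership per
        # factor combined with AHP weights (all functions and weights are hypothetical).

        def ramp(x, lo, hi):
            """Linear membership function rising from 0 at `lo` to 1 at `hi`."""
            if x <= lo:
                return 0.0
            if x >= hi:
                return 1.0
            return (x - lo) / (hi - lo)

        # Hypothetical AHP weights for a few factors (they sum to 1).
        weights = {"elevation": 0.30, "slope": 0.25, "soil_thickness": 0.25, "humidity": 0.20}

        def suitability(cell):
            """Weighted sum of single-factor memberships for one raster cell."""
            scores = {
                "elevation":      1.0 - ramp(cell["elevation_m"], 400, 1200),   # lower is better
                "slope":          1.0 - ramp(cell["slope_deg"], 5, 35),
                "soil_thickness": ramp(cell["soil_cm"], 20, 80),
                "humidity":       ramp(cell["humidity_index"], 0.2, 0.8),
            }
            return sum(weights[f] * scores[f] for f in weights)

        cell = {"elevation_m": 650, "slope_deg": 12, "soil_cm": 60, "humidity_index": 0.55}
        print(f"suitability index: {suitability(cell):.2f}")   # e.g. classify >0.7 as 'most suitable'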

  5. Integrate Data into Scientific Workflows for Terrestrial Biosphere Model Evaluation through Brokers

    Science.gov (United States)

    Wei, Y.; Cook, R. B.; Du, F.; Dasgupta, A.; Poco, J.; Huntzinger, D. N.; Schwalm, C. R.; Boldrini, E.; Santoro, M.; Pearlman, J.; Pearlman, F.; Nativi, S.; Khalsa, S.

    2013-12-01

    Terrestrial biosphere models (TBMs) have become integral tools for extrapolating local observations and process-level understanding of land-atmosphere carbon exchange to larger regions. Model-model and model-observation intercomparisons are critical to understand the uncertainties within model outputs, to improve model skill, and to improve our understanding of land-atmosphere carbon exchange. The DataONE Exploration, Visualization, and Analysis (EVA) working group is evaluating TBMs using scientific workflows in UV-CDAT/VisTrails. This workflow-based approach promotes collaboration and improved tracking of evaluation provenance. But challenges still remain. The multi-scale and multi-discipline nature of TBMs makes it necessary to include diverse and distributed data resources in model evaluation. These include, among others, remote sensing data from NASA, flux tower observations from various organizations including DOE, and inventory data from US Forest Service. A key challenge is to make heterogeneous data from different organizations and disciplines discoverable and readily integrated for use in scientific workflows. This presentation introduces the brokering approach taken by the DataONE EVA to fill the gap between TBMs' evaluation scientific workflows and cross-organization and cross-discipline data resources. The DataONE EVA started the development of an Integrated Model Intercomparison Framework (IMIF) that leverages standards-based discovery and access brokers to dynamically discover, access, and transform (e.g. subset and resampling) diverse data products from DataONE, Earth System Grid (ESG), and other data repositories into a format that can be readily used by scientific workflows in UV-CDAT/VisTrails. The discovery and access brokers serve as an independent middleware that bridge existing data repositories and TBMs evaluation scientific workflows but introduce little overhead to either component. In the initial work, an OpenSearch-based discovery broker

  6. Evaluation of six NEHRP B/C crustal amplification models proposed for use in western North America

    Science.gov (United States)

    Boore, David; Campbell, Kenneth W.

    2016-01-01

    We evaluate six crustal amplification models based on National Earthquake Hazards Reduction Program (NEHRP) B/C crustal profiles proposed for use in western North America (WNA) and often used in other active crustal regions where crustal properties are unknown. One of the models is based on an interpolation of generic rock velocity profiles previously proposed for WNA and central and eastern North America (CENA), in conjunction with material densities based on an updated velocity–density relationship. A second model is based on the velocity profile used to develop amplification factors for the Next Generation Attenuation (NGA)‐West2 project. A third model is based on a near‐surface velocity profile developed from the NGA‐West2 site database. A fourth model is based on velocity and density profiles originally proposed for use in CENA but recently used to represent crustal properties in California. We propose two alternatives to this latter model that more closely represent WNA crustal properties. We adopt a value of site attenuation (κ0) for each model that is either recommended by the author of the model or proposed by us. Stochastic simulation is used to evaluate the Fourier amplification factors and their impact on response spectra associated with each model. Based on this evaluation, we conclude that among the available models evaluated in this study the NEHRP B/C amplification model of Boore (2016) best represents median crustal amplification in WNA, although the amplification models based on the crustal profiles of Kamai et al. (2013, 2016, unpublished manuscript, see Data and Resources) and Yenier and Atkinson (2015), the latter adjusted to WNA crustal properties, can be used to represent epistemic uncertainty.

  7. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Science.gov (United States)

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  8. Multiple attribute decision making model and application to food safety risk evaluation.

    Directory of Open Access Journals (Sweden)

    Lihua Ma

    Full Text Available Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors establish an intuitionistic interval-valued fuzzy set evaluation model based on characteristics of the network relationship among decision makers and validate it in a multiple attribute decision making case study. Thus, the proposed model provides a reliable, accurate method for multiple attribute decision making.

  9. Evaluating urban parking policies with agent-based model of driver parking behavior

    NARCIS (Netherlands)

    Martens, C.J.C.M.; Benenson, I.

    2008-01-01

    This paper presents an explicit agent-based model of parking search in a city. In the model, “drivers” drive toward their destination, search for parking, park, remain at the parking place, and leave. The city’s infrastructure is represented by a high-resolution geographic information system (GIS)

  10. A diagnostic evaluation model for complex research partnerships with community engagement: the partnership for Native American Cancer Prevention (NACP) model.

    Science.gov (United States)

    Trotter, Robert T; Laurila, Kelly; Alberts, David; Huenneke, Laura F

    2015-02-01

    Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a "logic model plus" evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-university partnership models. These approaches support the application of a "near real time" feedback system (diagnosis and intervention) based on organizational theory, social network theory, and logic model metrics directed at partnership dynamics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Evaluation of a Digital Game-Based Learning Program for Enhancing Youth Mental Health: A Structural Equation Modeling of the Program Effectiveness.

    Science.gov (United States)

    Huen, Jenny My; Lai, Eliza Sy; Shum, Angie Ky; So, Sam Wk; Chan, Melissa Ky; Wong, Paul Wc; Law, Y W; Yip, Paul Sf

    2016-10-07

    Digital game-based learning (DGBL) makes use of the entertaining power of digital games for educational purposes. Effectiveness assessment of DGBL programs has been underexplored and no attempt has been made to simultaneously model both important components of DGBL: learning attainment (ie, educational purposes of DGBL) and engagement of users (ie, entertaining power of DGBL) in evaluating program effectiveness. This study aimed to describe and evaluate an Internet-based DGBL program, Professor Gooley and the Flame of Mind, which promotes mental health to adolescents in a positive youth development approach. In particular, we investigated whether user engagement in the DGBL program could enhance their attainment on each of the learning constructs per DGBL module and subsequently enhance their mental health as measured by psychological well-being. Users were assessed on their attainment on each learning construct, psychological well-being, and engagement in each of the modules. One structural equation model was constructed for each DGBL module to model the effect of users' engagement and attainment on the learning construct on their psychological well-being. Of the 498 secondary school students that registered and participated from the first module of the DGBL program, 192 completed all 8 modules of the program. Results from structural equation modeling suggested that a higher extent of engagement in the program activities facilitated users' attainment on the learning constructs on most of the modules and in turn enhanced their psychological well-being after controlling for users' initial psychological well-being and initial attainment on the constructs. This study provided evidence that Internet intervention for mental health, implemented with the technologies and digital innovations of DGBL, could enhance youth mental health. Structural equation modeling is a promising approach in evaluating the effectiveness of DGBL programs.

  12. Model-Based Evaluation of Strategies to Control Brucellosis in China.

    Science.gov (United States)

    Li, Ming-Tao; Sun, Gui-Quan; Zhang, Wen-Yi; Jin, Zhen

    2017-03-12

    Brucellosis, the most common zoonotic disease worldwide, represents a great threat to animal husbandry with the potential to cause enormous economic losses. Brucellosis has become a major public health problem in China, and the number of human brucellosis cases has increased dramatically in recent years. In order to evaluate different intervention strategies to curb brucellosis transmission in China, a novel mathematical model with a general indirect transmission incidence rate was presented. By comparing the results of three models using national human disease data and 11 provinces with high case numbers, the best fitted model with standard incidence was used to investigate the potential for future outbreaks. Estimated basic reproduction numbers were highly heterogeneous, varying widely among provinces. The local basic reproduction numbers of provinces with an obvious increase in incidence were much larger than the average for the country as a whole, suggesting that environment-to-individual transmission was more common than individual-to-individual transmission. We concluded that brucellosis can be controlled through increasing animal vaccination rates, environment disinfection frequency, or elimination rates of infected animals. Our finding suggests that a combination of animal vaccination, environment disinfection, and elimination of infected animals will be necessary to ensure cost-effective control for brucellosis.
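
    A minimal compartmental sketch of indirect (environment-mediated) brucellosis transmission in an animal population is given below. The structure and every parameter value are illustrative assumptions, not the model or estimates fitted in the study; they only show how vaccination, disinfection, and elimination rates enter such a model.

        # Illustrative brucellosis compartments with direct (S-I) and indirect
        # (environmental) transmission; parameters are made up for demonstration only.
        import numpy as np
        from scipy.integrate import odeint

        def deriv(y, t, A, beta, nu, mu, c, k, delta, vacc):
            S, I, W = y                        # susceptible, infected animals; environmental load
            new_inf = beta * S * I + nu * S * W
            dS = A - new_inf - (mu + vacc) * S # vacc: vaccination (removal to immunity) rate
            dI = new_inf - (mu + c) * I        # c: elimination (culling) rate of infected animals
            dW = k * I - delta * W             # delta: environmental disinfection/decay rate
            return [dS, dI, dW]

        params = dict(A=2.0e5, beta=1.0e-8, nu=5.0e-8, mu=0.22, c=0.3,
                      k=10.0, delta=3.0, vacc=0.1)
        y0 = [1.0e6, 5.0e3, 1.0e4]
        t = np.linspace(0.0, 20.0, 200)        # years

        sol = odeint(deriv, y0, t, args=tuple(params.values()))
        print(f"infected animals after 20 y: {sol[-1, 1]:.3e}")
        # Raising `vacc`, `delta`, or `c` in `params` mimics the three interventions compared.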

  13. Uranium resources evaluation model as an exploration tool

    International Nuclear Information System (INIS)

    Ruzicka, V.

    1976-01-01

    Evaluation of uranium resources, as conducted by the Uranium Resources Evaluation Section of the Geological Survey of Canada, comprises operations analogous with those performed during the preparatory stages of uranium exploration. The uranium resources evaluation model, simulating the estimation process, can be divided into four steps. The first step includes definition of major areas and ''unit subdivisions'' for which geological data are gathered, coded, computerized and retrieved. Selection of these areas and ''unit subdivisions'' is based on a preliminary appraisal of their favourability for uranium mineralization. The second step includes analyses of the data, definition of factors controlling uranium mineralization, classification of uranium occurrences into genetic types, and final delineation of favourable areas; this step corresponds to the selection of targets for uranium exploration. The third step includes geological field work; it is equivalent to geological reconnaissance in exploration. The fourth step comprises computation of resources; the preliminary evaluation techniques in the exploration are, as a rule, analogous with the simplest methods employed in the resource evaluation. The uranium resources evaluation model can be conceptually applied for decision-making during exploration or for formulation of exploration strategy using the quantified data as weighting factors. (author)

  14. An effective quality model for evaluating mobile websites

    International Nuclear Information System (INIS)

    Hassan, W.U.; Nawaz, M.T.; Syed, T.H.; Naseem, A.

    2015-01-01

    The evolution of Web development in recent years has caused the emergence of the new area of mobile computing. The mobile phone has been transformed into a high-speed processing device capable of running processes that previously could run only on a computer; modern mobile phones can now process data faster than many desktop systems, and with the inclusion of 3G and 4G networks, mobile has become the prime choice for users to send and receive data from any device. As a result, there has been a major increase in the need for and development of mobile websites, but because mobile website usage differs from desktop website usage, there is a need to focus on the quality of mobile websites. To increase and preserve that quality, a quality model designed specifically to evaluate mobile websites is required. To design such a model, a survey-based methodology is used to gather information from different users about how websites are used on mobile devices. On the basis of this information, a mobile website quality model is presented which aims to evaluate the quality of mobile websites. In the proposed model, several sub-characteristics are designed to evaluate mobile websites in particular. The result is a proposed model that aims to evaluate the features of a website which are important in the context of its deployment and usability on the mobile platform. (author)

  15. Performance measurement, modeling, and evaluation of integrated concurrency control and recovery algorithms in distributed data base systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenq, B.C.

    1986-01-01

    The performance evaluation of integrated concurrency-control and recovery mechanisms for distributed data base systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two phase locking scheme with distributed deadlock detection, a distributed version of optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.

  16. Intelligent Hydraulic Actuator and Exp-based Modelling of Losses in Pumps and .

    DEFF Research Database (Denmark)

    Zhang, Muzhi

    An intelligent fuzzy logic self-organising PD+I controller for a gearrotor hydraulic motor was developed and evaluated. Furthermore, an experiment-based modelling method with a new software tool, 'Dynamodata', for modelling losses in hydraulic motors and pumps was developed.

  17. EVALUATION OF RAINFALL-RUNOFF MODELS FOR MEDITERRANEAN SUBCATCHMENTS

    Directory of Open Access Journals (Sweden)

    A. Cilek

    2016-06-01

    Full Text Available The development and application of rainfall-runoff models have been a cornerstone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be highly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers derived from the data set, including climate, topography, land use, crop type, planting date, and soil characteristics, are required to run the model. Initial ground cover was estimated from Landsat ETM data provided by ESA. The hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agriculture, and forests. The average annual rainfall is 636.4 mm. This study is of significant importance for evaluating different model performances in a complex Mediterranean basin. The results provide comprehensive insight, including the advantages and limitations of modelling approaches in the Mediterranean environment.

  18. Development of a Logic Model to Guide Evaluations of the ASCA National Model for School Counseling Programs

    Science.gov (United States)

    Martin, Ian; Carey, John

    2014-01-01

    A logic model was developed based on an analysis of the 2012 American School Counselor Association (ASCA) National Model in order to provide direction for program evaluation initiatives. The logic model identified three outcomes (increased student achievement/gap reduction, increased school counseling program resources, and systemic change and…

  19. Evaluation model applied to TRANSPETRO's Marine Terminals Standardization Program

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de; Mueller, Gabriela [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Garcia, Luciano Maldonado [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper describes an innovative evaluation model applied to TRANSPETRO's 'Marine Terminals Standardization Program', based on current approaches to program evaluation and organizational learning. Since the program was launched in 2004, the need for an evaluation model able to track its implementation progress and to measure the degree of standards compliance and its potential economic, social and environmental impacts has become evident. Within a vision of safe and environmentally responsible operation of marine terminals, this evaluation model was jointly designed by TRANSPETRO and PUC-Rio to promote continuous improvement and learning in operational practices and in the standardization process itself. TRANSPETRO believes that standardization supports its services and management innovation capability by creating objective and internationally recognized parameters, targets and metrology for its business activities. The conceptual model and application guidelines for this important tool are presented in this paper, as well as the next steps towards its implementation. (author)

  20. [A new model for the evaluation of measurements of the neurocranium].

    Science.gov (United States)

    Seidler, H; Wilfing, H; Weber, G; Traindl-Prohazka, M; zur Nedden, D; Platzer, W

    1993-12-01

    A simple and user-friendly model for trigonometric description of the neurocranium based on newly defined points of measurement is presented. This model not only provides individual description, but also allows for an evaluation of developmental and phylogenetic aspects.

  1. Strengthening air traffic safety management by moving from outcome-based towards risk-based evaluation of runway incursions

    International Nuclear Information System (INIS)

    Stroeve, Sybert H.; Som, Pradip; Doorn, Bas A. van; Bakker, G.J.

    2016-01-01

    Current safety management of aerodrome operations uses judgements of severity categories to evaluate runway incursions. Incident data show a small minority of severe incursions and a large majority of less severe incursions. We show that these severity judgements are mainly based upon the outcomes of runway incursions, in particular on the closest distances attained. As such, the severity-based evaluation leads to coincidental safety management feedback, wherein causes and risk implications of runway incursions are not well considered. In this paper we present a new framework for the evaluation of runway incursions, which effectively uses all runway incursions, which judges same types of causes similarly, and which structures causes and risk implications. The framework is based on risks of scenarios associated with the initiation of runway incursions. As a basis an inventory of scenarios is provided, which can represent almost all runway incursions involving a conflict with an aircraft. A main step in the framework is the assessment of the conditional probability of a collision given a runway incursion scenario. This can be effectively achieved for large sets of scenarios by agent-based dynamic risk modelling. The results provide detailed feedback on risks of runway incursion scenarios, thus enabling effective safety management. - Highlights: • Current evaluation of runway incursions is primarily based on their outcomes. • A new framework assesses collision risk given initiation of runway incursions. • Agent-based dynamic risk modelling can evaluate the risks of many scenarios. • A developed scenario inventory can represent almost all runway incursions. • The framework provides detailed feedback to safety management.

  2. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA), and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method used to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of the container terminals are carried out through response surface methodology (RSM).
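
    The DEA step can be illustrated with a small input-oriented CCR model solved as a linear program. The four terminals and their input/output figures below are hypothetical stand-ins, not the 28 terminals analysed in the paper, and the input/output choice (berth length, cranes, TEU throughput) is only an assumption.

        # Input-oriented CCR DEA efficiency for each terminal, solved as a linear program
        # in multiplier form; the data are hypothetical (inputs: berth length, cranes;
        # output: TEU throughput in thousands).
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[1200, 10], [800, 6], [1500, 14], [600, 5]], dtype=float)   # inputs
        Y = np.array([[900],      [500],    [1400],     [250]],    dtype=float)   # outputs
        n, m = X.shape          # number of DMUs, number of inputs
        s = Y.shape[1]          # number of outputs

        for o in range(n):
            # Decision variables z = [u_1..u_s, v_1..v_m] (output and input multipliers).
            c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u.y_o -> minimize -u.y_o
            A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
            b_eq = [1.0]                                        # normalization: v.x_o = 1
            A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0 for every DMU j
            b_ub = np.zeros(n)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (s + m), method="highs")
            print(f"terminal {o + 1}: CCR efficiency = {-res.fun:.3f}")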

  3. The EMEFS model evaluation. An interim report

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. [Pacific Northwest Lab., Richland, WA (United States); Dennis, R.L. [Environmental Protection Agency, Research Triangle Park, NC (United States); Seilkop, S.K. [Analytical Sciences, Inc., Durham, NC (United States); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. [Atmospheric Environment Service, Downsview, ON (Canada); Byun, D.; McHenry, J.N. [Computer Sciences Corp., Research Triangle Park, NC (United States); Karamchandani, P.; Venkatram, A. [ENSR Consulting and Engineering, Camarillo, CA (United States); Fung, C.; Misra, P.K. [Ontario Ministry of the Environment, Toronto, ON (Canada); Hansen, D.A. [Electric Power Research Inst., Palo Alto, CA (United States); Chang, J.S. [State Univ. of New York, Albany, NY (United States). Atmospheric Sciences Research Center

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
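
    Difference statistics and correlations of the kind used in these comparisons can be computed in a few lines. The paired predicted/observed values below are invented placeholders, not EMEFS data.

        # Basic difference statistics for paired predicted vs. observed values
        # (e.g. deposition at monitoring sites); the numbers are placeholders.
        import numpy as np

        observed  = np.array([12.1, 8.4, 15.3, 6.7, 10.2, 9.8, 14.0])
        predicted = np.array([10.5, 9.1, 13.2, 7.9, 11.0, 8.2, 12.6])

        bias = np.mean(predicted - observed)                    # mean difference
        nmb  = bias / np.mean(observed)                         # normalized mean bias
        rmse = np.sqrt(np.mean((predicted - observed) ** 2))    # root-mean-square error
        corr = np.corrcoef(predicted, observed)[0, 1]           # Pearson correlation

        print(f"bias={bias:+.2f}  NMB={nmb:+.1%}  RMSE={rmse:.2f}  r={corr:.2f}")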

  4. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  5. Kirkpatrick evaluation model for in-service training on cardiopulmonary resuscitation

    Directory of Open Access Journals (Sweden)

    Safoura Dorri

    2016-01-01

    Full Text Available Background: There are several evaluation models that can be used to evaluate the effect of in-service training; one of them is the Kirkpatrick model. The aim of the present study is to assess the in-service training of cardiopulmonary resuscitation (CPR) for nurses based on Kirkpatrick's model. Materials and Methods: This study is a cross-sectional study based on Kirkpatrick's model in which the efficacy of in-service CPR training for nurses was assessed in the Shahadaye Lenjan Hospital in Isfahan province in 2014. 80 nurses and nurse's aides participated in the study after providing informed consent. The in-service training course was evaluated at the reaction, learning, behavior, and results levels of the Kirkpatrick model. Data were collected through a researcher-made questionnaire. Results: The mean age of the participants was 35 ± 8.5 years. The effectiveness score obtained at the reaction level (the first level in the Kirkpatrick model) was 4.2 ± 0.32. The effectiveness score at the second level of the model, the learning level, was 4.70 ± 0.09, which is statistically significant (P < 0.001). The effectiveness scores at the third and fourth levels were 4.1 ± 0.34 and 4.3 ± 0.12, respectively. The total effectiveness score was 4.35. Conclusions: The results of this study showed that CPR in-service training has a favorable effect on all four levels of the Kirkpatrick model for nurses and nurse's aides.

  6. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
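
    The point-to-line-segment distance underlying the proposed similarity metric can be written compactly. The sketch below is a generic geometric routine with arbitrarily chosen example points; averaging it over matched edge points gives one plausible pose score, not necessarily the exact metric of the paper.

        # Distance from an image edge point to a projected model line segment,
        # the building block of a pose-evaluation metric (example values are arbitrary).
        import math

        def point_to_segment_distance(p, a, b):
            """Euclidean distance from point p to segment a-b (2-D tuples)."""
            ax, ay = a
            bx, by = b
            px, py = p
            abx, aby = bx - ax, by - ay
            seg_len_sq = abx * abx + aby * aby
            if seg_len_sq == 0.0:                      # degenerate segment
                return math.hypot(px - ax, py - ay)
            # Project p onto the line through a-b and clamp to the segment.
            t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
            cx, cy = ax + t * abx, ay + t * aby
            return math.hypot(px - cx, py - cy)

        # A pose score could average this distance over all matched edge points.
        edge_points = [(10.0, 4.5), (12.2, 5.1), (14.8, 6.0)]
        segment = ((9.0, 4.0), (15.0, 6.5))
        score = sum(point_to_segment_distance(p, *segment) for p in edge_points) / len(edge_points)
        print(f"mean point-to-segment distance: {score:.3f} px")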

  7. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.

  8. Evaluating the Risk of Metabolic Syndrome Based on an Artificial Intelligence Model

    Directory of Open Access Journals (Sweden)

    Hui Chen

    2014-01-01

    Full Text Available Metabolic syndrome is a worldwide public health problem and a serious threat to people's health and lives. Understanding the relationship between metabolic syndrome and physical symptoms is a difficult and challenging task, and few studies have been performed in this field. It is important to classify adults who are at high risk of metabolic syndrome without having to use a biochemical index and, likewise, it is important to develop technology with a high economic rate of return that simplifies this detection. In this paper, an artificial intelligence model was developed to identify adults at risk of metabolic syndrome based on physical signs; this artificial intelligence model achieved more powerful classification capacity compared to the PCLR (principal component logistic regression) model. A case study was performed based on physical signs data, collected without using a biochemical index from the staff of Lanzhou Grid Company in Gansu province of China. The results show that the developed artificial intelligence model is an effective classification system for identifying individuals at high risk of metabolic syndrome.

  9. MATHEMATICAL AND CHEMOMETRICAL MODELS – TOOLS TO EVALUATE HEAVY METALS CONTAMINATION

    Directory of Open Access Journals (Sweden)

    Despina Maria Bordean

    2017-11-01

    Full Text Available The aim of this study is to present a combined view of bio-geochemistry, soil-plant interactions, mathematical models, and statistical analysis, based on the correlation between the levels of soil contamination and the remanence of polluting substances in soil and, respectively, in harvested fruits and vegetables. Most of the mathematical models which describe plant-soil interactions are integrated in plant growth models or climate change models. The models presented in this paper are soil-plant interaction models, pollution indices, indices for evaluating the adaptive strategies of plants, and chemometric methods, and their role is to synthesize and evaluate the information regarding heavy metal contamination.
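
    One pollution index commonly used for heavy-metal contamination, the geoaccumulation index, can be computed directly; whether or not it is among the exact indices applied in the paper, it illustrates the type of calculation involved. The concentrations and background values below are hypothetical.

        # Geoaccumulation index, Igeo = log2(Cn / (1.5 * Bn)), a common heavy-metal
        # pollution index; all concentrations below are hypothetical examples.
        import math

        background_mg_kg = {"Cd": 0.3, "Pb": 20.0, "Zn": 70.0}     # assumed geochemical background
        measured_mg_kg   = {"Cd": 0.9, "Pb": 55.0, "Zn": 160.0}    # soil sample

        for metal, c in measured_mg_kg.items():
            igeo = math.log2(c / (1.5 * background_mg_kg[metal]))
            print(f"{metal}: Igeo = {igeo:+.2f}")   # Igeo < 0: unpolluted; higher classes follow in steps of 1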

  10. Deuteron nuclear data for the design of accelerator-based neutron sources: Measurement, model analysis, evaluation, and application

    Science.gov (United States)

    Watanabe, Yukinobu; Kin, Tadahiro; Araki, Shouhei; Nakayama, Shinsuke; Iwamoto, Osamu

    2017-09-01

    A comprehensive research program on deuteron nuclear data motivated by development of accelerator-based neutron sources is being executed. It is composed of measurements of neutron and gamma-ray yields and production cross sections, modelling of deuteron-induced reactions and code development, nuclear data evaluation and benchmark test, and its application to medical radioisotopes production. The goal of this program is to develop a state-of-the-art deuteron nuclear data library up to 200 MeV which will be useful for the design of future (d,xn) neutron sources. The current status and future plan are reviewed.

  11. Evaluation of Cost Leadership Strategy in Shipping Enterprises with Simulation Model

    Science.gov (United States)

    Ferfeli, Maria V.; Vaxevanou, Anthi Z.; Damianos, Sakas P.

    2009-08-01

    The present study attempts to evaluate the cost leadership strategy that prevails in certain shipping enterprises and to create simulation models based on the strategic model STAIR. The above model is an alternative method for evaluating strategic applications. The aim is to determine whether the strategy of cost leadership creates competitive advantage [1]; this is achieved via simulation, which captures the interactions between the operations of an enterprise and strategic decision making under conditions of uncertainty, with a reduction of the risk undertaken.

  12. Complexity effects in choice experiments-based models

    NARCIS (Netherlands)

    Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.

    2012-01-01

    Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual

  13. Using measurements for evaluation of black carbon modeling

    Directory of Open Access Journals (Sweden)

    S. Gilardoni

    2011-01-01

    Full Text Available The ever increasing use of air quality and climate model assessments to underpin economic, public health, and environmental policy decisions makes effective model evaluation critical. This paper discusses the properties of black carbon and light attenuation and absorption observations that are the key to a reliable evaluation of black carbon model and compares parametric and nonparametric statistical tools for the quantification of the agreement between models and observations. Black carbon concentrations are simulated with TM5/M7 global model from July 2002 to June 2003 at four remote sites (Alert, Jungfraujoch, Mace Head, and Trinidad Head and two regional background sites (Bondville and Ispra. Equivalent black carbon (EBC concentrations are calculated using light attenuation measurements from January 2000 to December 2005. Seasonal trends in the measurements are determined by fitting sinusoidal functions and the representativeness of the period simulated by the model is verified based on the scatter of the experimental values relative to the fit curves. When the resolution of the model grid is larger than 1° × 1°, it is recommended to verify that the measurement site is representative of the grid cell. For this purpose, equivalent black carbon measurements at Alert, Bondville and Trinidad Head are compared to light absorption and elemental carbon measurements performed at different sites inside the same model grid cells. Comparison of these equivalent black carbon and elemental carbon measurements indicates that uncertainties in black carbon optical properties can compromise the comparison between model and observations. During model evaluation it is important to examine the extent to which a model is able to simulate the variability in the observations over different integration periods as this will help to identify the most appropriate timescales. The agreement between model and observation is accurately described by the overlap of

  14. Routine magnetic resonance imaging for idiopathic olfactory loss: a modeling-based economic evaluation.

    Science.gov (United States)

    Rudmik, Luke; Smith, Kristine A; Soler, Zachary M; Schlosser, Rodney J; Smith, Timothy L

    2014-10-01

    Idiopathic olfactory loss is a common clinical scenario encountered by otolaryngologists. While trying to allocate limited health care resources appropriately, the decision to obtain a magnetic resonance imaging (MRI) scan to investigate for a rare intracranial abnormality can be difficult. To evaluate the cost-effectiveness of ordering routine MRI in patients with idiopathic olfactory loss. We performed a modeling-based economic evaluation with a time horizon of less than 1 year. Patients included in the analysis had idiopathic olfactory loss defined by no preceding viral illness or head trauma and negative findings of a physical examination and nasal endoscopy. Routine MRI vs no-imaging strategies. We developed a decision tree economic model from the societal perspective. Effectiveness, probability, and cost data were obtained from the published literature. Litigation rates and costs related to a missed diagnosis were obtained from the Physicians Insurers Association of America. A univariate threshold analysis and multivariate probabilistic sensitivity analysis were performed to quantify the degree of certainty in the economic conclusion of the reference case. The comparative groups included those who underwent routine MRI of the brain with contrast alone and those who underwent no brain imaging. The primary outcome was the cost per correct diagnosis of idiopathic olfactory loss. The mean (SD) cost for the MRI strategy totaled $2400.00 ($1717.54) and was effective 100% of the time, whereas the mean (SD) cost for the no-imaging strategy totaled $86.61 ($107.40) and was effective 98% of the time. The incremental cost-effectiveness ratio for the MRI strategy compared with the no-imaging strategy was $115 669.50, which is higher than most acceptable willingness-to-pay thresholds. The threshold analysis demonstrated that when the probability of having a treatable intracranial disease process reached 7.9%, the incremental cost-effectiveness ratio for MRI vs no
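
    The incremental cost-effectiveness ratio quoted above follows directly from the reported means, as the short check below shows (using only the figures given in the abstract).

        # Check of the reported incremental cost-effectiveness ratio (ICER) using the
        # mean costs and effectiveness quoted in the abstract.
        cost_mri, eff_mri   = 2400.00, 1.00        # routine MRI strategy
        cost_none, eff_none = 86.61, 0.98          # no-imaging strategy

        icer = (cost_mri - cost_none) / (eff_mri - eff_none)
        print(f"ICER = ${icer:,.2f} per additional correct diagnosis")   # matches the reported $115,669.50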

  15. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    Science.gov (United States)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  16. An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengwang Ye

    2017-01-01

    Full Text Available Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs. In this paper, an efficient dynamic trust evaluation model (DTEM for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct trust and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated considering multitrust including communication trust, data trust, and energy trust with the punishment factor and regulating function. The indirect trust is evaluated conditionally by the trusted recommendations from a third party. Moreover, the integrated trust is measured by assigning dynamic weights for direct trust and indirect trust and combining them. Finally, we propose an update mechanism by a sliding window based on induced ordered weighted averaging operator to enhance flexibility. We can dynamically adapt the parameters and the interactive history windows number according to the actual needs of the network to realize dynamic update of direct trust value. Simulation results indicate that the proposed dynamic trust model is an efficient dynamic and attack-resistant trust evaluation model. Compared with existing approaches, the proposed dynamic trust model performs better in defending multiple malicious attacks.
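
    The integrated-trust step described above can be sketched as a weighted combination of direct and indirect trust, with direct trust built from communication, data, and energy trust. The weights, punishment factor, and sample values below are illustrative assumptions, not the DTEM parameters.

        # Sketch of combining multi-dimensional direct trust with recommended (indirect)
        # trust into an integrated trust value; weights and inputs are illustrative only.

        def direct_trust(comm, data, energy, w=(0.5, 0.3, 0.2), punishment=1.0):
            """Weighted multitrust value in [0, 1], scaled by a punishment factor."""
            return punishment * (w[0] * comm + w[1] * data + w[2] * energy)

        def integrated_trust(direct, recommendations, alpha=0.7):
            """Blend direct trust with the mean of trusted third-party recommendations."""
            indirect = sum(recommendations) / len(recommendations) if recommendations else direct
            return alpha * direct + (1.0 - alpha) * indirect

        d = direct_trust(comm=0.9, data=0.8, energy=0.7, punishment=0.95)
        t = integrated_trust(d, recommendations=[0.85, 0.8, 0.9])
        print(f"direct trust = {d:.3f}, integrated trust = {t:.3f}")
        # A node would be flagged as suspicious when its integrated trust drops below a threshold.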

  17. Presenting an evaluation model of the trauma registry software.

    Science.gov (United States)

    Asadi, Farkhondeh; Paydar, Somayeh

    2018-04-01

    Trauma accounts for about 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt a basic strategy in this context. A trauma registry has an important and basic role in decreasing the mortality and disabilities due to injuries resulting from trauma. Today, various software systems are designed for trauma registries. Evaluating this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is applied research. In this study, general and specific criteria for trauma registry software were identified by reviewing the literature, including books, articles, scientific documents, valid websites, and related software in this domain. Based on the general and specific criteria and the related software, a model for evaluating trauma registry software was proposed. Based on the proposed model, a checklist was designed and its validity and reliability were evaluated. The model was presented, using the Delphi technique, to 12 experts and specialists. To analyze the results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved by the experts and professionals, the final version of the evaluation model for trauma registry software was presented. The criteria for evaluating trauma registry software fall into two groups: 1- general criteria, 2- specific criteria. The general criteria of trauma registry software were classified into four main categories: 1- usability, 2- security, 3- maintainability, and 4- interoperability. The specific criteria were divided into four main categories: 1- data submission and entry, 2- reporting, 3- quality control, 4- decision and research support. The model presented in this research introduces important general and specific criteria for trauma registry software

  18. Comprehensive Evaluation of Car-Body Light-Weighting Scheme Based on LCC Theory

    Directory of Open Access Journals (Sweden)

    Han Qing-lan

    2016-01-01

    Full Text Available In this paper, a comprehensive evaluation model for light-weighting schemes is established based on three dimensions: the life cycle cost of the resources consumed by the designed object (LCC), the willingness to pay for the environmental effects of resource consumption (WTP), and performance (P). First, the cost of each stage is determined. Then, based on a resource classification derived from the cost elements, the required list of materials is determined, and the WTP weight coefficient is applied to monetize the life cycle environmental impact and obtain the life cycle comprehensive cost of the designed scheme (TCC). In the next step, the performance (P) index is calculated to measure the value delivered for the life cycle costs by applying the AHP and SAW methods, and TCC and P are integrated to achieve a comprehensive evaluation of the light-weighting scheme. Finally, the effectiveness of the evaluation model is verified with the example of a car engine hood.
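
    As an illustration of the final integration step, a simple additive weighting (SAW) ranking over TCC and P might look like the following sketch (the scheme values and weights are invented for the example, not taken from the paper):

        import numpy as np

        # Rows = candidate light-weighting schemes, columns = [TCC (cost), P (benefit)].
        schemes = np.array([
            [1200.0, 0.72],   # scheme A
            [1050.0, 0.65],   # scheme B
            [1320.0, 0.81],   # scheme C
        ])
        weights = np.array([0.6, 0.4])                     # e.g. AHP-derived weights (assumed)

        norm = np.empty_like(schemes)
        norm[:, 0] = schemes[:, 0].min() / schemes[:, 0]   # cost criterion: smaller is better
        norm[:, 1] = schemes[:, 1] / schemes[:, 1].max()   # benefit criterion: larger is better

        saw_scores = norm @ weights                        # simple additive weighting
        print(saw_scores, "preferred scheme index:", saw_scores.argmax())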

  19. Evaluation Models of Some Morphological Characteristics for Talent Scouting in Sport

    OpenAIRE

    Rogulj, Nenad; Papić, Vladan; Čavala, Marijana

    2009-01-01

    In this paper, for the purpose of expert system evaluation within the scientific project »Talent scouting in sport«, two methodological approaches for recognizing an athlete's morphological suitability for various sports are presented, evaluated and compared. The first approach is based on fuzzy logic and expert opinion about the compatibility of proposed hypothetical morphological models for 14 different sports which are part of the expert system. The second approach is based on determining t...

  20. Research on Performance Evaluation by IDSS Based on AHP

    Directory of Open Access Journals (Sweden)

    Tang Xuelian

    2013-03-01

    Full Text Available Talent is the primary resource. This paper has two starting points. One is how to evaluate the performance of science and technology talent flow with an IDSS (Intelligent Decision Support System). The other is how to guide science and technology innovation work according to the evaluation results. Because the performance evaluation index system has a hierarchical structure, AHP (Analytic Hierarchy Process) is applied to evaluate the performance. An evaluation model is established and illustrated with cases in this paper. It can be seen that flow performance is influenced by the growth rate of important scientific and technological achievements. Furthermore, some constructive suggestions are given based on the evaluation results.
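
    The AHP weighting step described above can be sketched as follows; the pairwise comparison values are placeholders, not the paper's actual judgments:

        import numpy as np

        # Pairwise comparison matrix for three illustrative performance indicators.
        A = np.array([
            [1.0,   3.0, 5.0],
            [1/3.0, 1.0, 2.0],
            [1/5.0, 0.5, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                   # priority weights of the indicators

        lam_max = eigvals.real[k]
        CI = (lam_max - len(A)) / (len(A) - 1)         # consistency index
        CR = CI / 0.58                                 # random index RI = 0.58 for n = 3
        print(w, "consistency ratio:", CR)             # judgments acceptable if CR < 0.1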

  1. Rough Set Theory Based Fuzzy TOPSIS on Serious Game Design Evaluation Framework

    Directory of Open Access Journals (Sweden)

    Chung-Ho Su

    2013-01-01

    Full Text Available This study presents a hybrid methodology for solving the serious game design evaluation problem, in which the evaluation criteria are based on meaningful learning, ARCS motivation, cognitive load, and flow theory (MACF), selected by rough set theory (RST) and expert judgment. The purpose of this study is to develop an evaluation model with RST-based fuzzy Delphi-AHP-TOPSIS for MACF characteristics. The fuzzy Delphi method is used to select the evaluation criteria, fuzzy AHP is used to analyze the criteria structure and determine the evaluation weights of the criteria, and fuzzy TOPSIS is applied to determine the ranking of the alternatives. A real case is also used to evaluate the MACF criteria design for four serious games, and both the practice and the evaluation of the case are explained. The results show that playfulness (C24), skills (C22), attention (C11), and personalization (C35) are the four most important criteria in the MACF selection process, and the evaluation results of the case study show that Game 1 has the best overall score (Game 1 > Game 3 > Game 2 > Game 4). Finally, the proposed framework evaluates the effectiveness and feasibility of the evaluation model and provides design criteria for relevant multimedia game design educators.
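
    The final TOPSIS ranking step can be sketched as below; the game scores, weights, and criteria here are illustrative crisp stand-ins rather than the study's fuzzy data:

        import numpy as np

        def topsis(matrix, weights, benefit):
            # Rows = alternatives, columns = criteria; benefit[j] marks larger-is-better criteria.
            norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))
            v = norm * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
            d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
            return d_neg / (d_pos + d_neg)             # closeness coefficient, higher is better

        scores = np.array([[7, 8, 6, 7],               # Game 1 (hypothetical scores)
                           [5, 6, 7, 5],               # Game 2
                           [6, 7, 6, 6],               # Game 3
                           [4, 5, 5, 4]], dtype=float) # Game 4
        cc = topsis(scores, np.array([0.3, 0.3, 0.2, 0.2]), np.array([True] * 4))
        print(cc, "ranking (best first):", cc.argsort()[::-1] + 1)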

  2. COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING

    Directory of Open Access Journals (Sweden)

    N. Mijani

    2017-09-01

    Full Text Available Landslides are one of the main geomorphic processes affecting development prospects in mountainous areas and causing disastrous accidents. A landslide is an event governed by uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river, and distance from the road network. This research aims to compare and evaluate different fuzzy-based models including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma, and Fuzzy-OR. The main contribution of this paper lies in considering the comprehensive criteria causing landslide hazard, together with their uncertainties, and comparing different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, a city in Iran which has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment based on these quantifiers demonstrated that the Fuzzy-AHP model has higher accuracy than the other two models for landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma, and Fuzzy-OR cover 13, 26, and 35 percent of the study area, respectively, at a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zonation in the city of Sari, with the Fuzzy Gamma method a close second.
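
    For reference, the Fuzzy Gamma and Fuzzy-OR overlays used in such zonations combine per-criterion membership values as in this sketch (the membership values below are invented for a single raster cell):

        import numpy as np

        def fuzzy_gamma(memberships, gamma=0.9):
            mu = np.asarray(memberships, dtype=float)
            fuzzy_sum = 1.0 - np.prod(1.0 - mu)        # fuzzy algebraic sum
            fuzzy_prod = np.prod(mu)                   # fuzzy algebraic product
            return (fuzzy_sum ** gamma) * (fuzzy_prod ** (1.0 - gamma))

        def fuzzy_or(memberships):
            return float(np.max(memberships))          # the most hazardous criterion dominates

        cell = [0.8, 0.4, 0.6, 0.7]                    # e.g. slope, land use, distance to river, precipitation
        print("gamma:", fuzzy_gamma(cell, gamma=0.9), "OR:", fuzzy_or(cell))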

  3. Aespoe HRL - Geoscientific evaluation 1997/5. Models based on site characterization 1986-1995

    International Nuclear Information System (INIS)

    Rhen, I.; Stanfors, R.; Wikberg, P.

    1997-10-01

    The pre-investigations for the Aespoe Hard Rock Laboratory were started in 1986 and involved extensive field measurements, aimed at characterizing the rock formations with regard to geology, geohydrology, hydrochemistry and rock mechanics. Predictions for the excavation phase were made prior to excavation of the laboratory, which was started in the autumn of 1990. The predictions concern five key issues: lithology and geological structures, groundwater flow, hydrochemistry, transport of solutes and mechanical stability. During 1996 the results from the pre-investigations and the excavation of the Aespoe Hard Rock Laboratory were evaluated and were compiled in geological, mechanical stability, geohydrological, groundwater chemical and transport-of-solutes models. The model concepts and the models of 1996 are presented in this report. The model developments from the pre-investigation phase up to the models made in 1996 are also presented briefly.

  4. Aespoe HRL - Geoscientific evaluation 1997/5. Models based on site characterization 1986-1995

    Energy Technology Data Exchange (ETDEWEB)

    Rhen, I. [ed.; Gustafsson, Gunnar [VBB Viak AB, Goeteborg (Sweden); Stanfors, R. [RS Consulting, Lund (Sweden); Wikberg, P. [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    1997-10-01

    The pre-investigations for the Aespoe Hard Rock Laboratory were started in 1986 and involved extensive field measurements, aimed at characterizing the rock formations with regard to geology, geohydrology, hydrochemistry and rock mechanics. Predictions for the excavation phase were made prior to excavation of the laboratory, which was started in the autumn of 1990. The predictions concern five key issues: lithology and geological structures, groundwater flow, hydrochemistry, transport of solutes and mechanical stability. During 1996 the results from the pre-investigations and the excavation of the Aespoe Hard Rock Laboratory were evaluated and were compiled in geological, mechanical stability, geohydrological, groundwater chemical and transport-of-solutes models. The model concepts and the models of 1996 are presented in this report. The model developments from the pre-investigation phase up to the models made in 1996 are also presented briefly. 317 refs, figs, tabs.

  5. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them.

  6. Evaluating Pillar Industry's Transformation Capability: A Case Study of Two Chinese Steel-Based Cities.

    Science.gov (United States)

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline, and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using text mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words were derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques were then applied to select 53 indicators, determine their weights, and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method was applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model was developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated, with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns.

  7. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  8. Milestone-specific, Observed data points for evaluating levels of performance (MODEL) assessment strategy for anesthesiology residency programs.

    Science.gov (United States)

    Nagy, Christopher J; Fitzgerald, Brian M; Kraus, Gregory P

    2014-01-01

    Anesthesiology residency programs will be expected to have Milestones-based evaluation systems in place by July 2014 as part of the Next Accreditation System. The San Antonio Uniformed Services Health Education Consortium (SAUSHEC) anesthesiology residency program developed and implemented a Milestones-based feedback and evaluation system a year ahead of schedule. It has been named the Milestone-specific, Observed Data points for Evaluating Levels of performance (MODEL) assessment strategy. The "MODEL Menu" and the "MODEL Blueprint" are tools that other anesthesiology residency programs can use in developing their own Milestones-based feedback and evaluation systems prior to ACGME-required implementation. Data from our early experience with the streamlined MODEL blueprint assessment strategy showed substantially improved faculty compliance with reporting requirements. The MODEL assessment strategy provides programs with a workable assessment method for residents, and important Milestones data points to programs for ACGME reporting.

  9. Development and evaluation of a clinical model for lung cancer patients using stereotactic body radiotherapy (SBRT) within a knowledge-based algorithm for treatment planning.

    Science.gov (United States)

    Chin Snyder, Karen; Kim, Jinkoo; Reding, Anne; Fraser, Corey; Gordon, James; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J

    2016-11-08

    The purpose of this study was to describe the development of a clinical model for lung cancer patients treated with stereotactic body radiotherapy (SBRT) within a knowledge-based algorithm for treatment planning, and to evaluate the model performance and applicability to different planning techniques, tumor locations, and beam arrangements. 105 SBRT plans for lung cancer patients previously treated at our institution were included in the development of the knowledge-based model (KBM). The KBM was trained with a combination of IMRT, VMAT, and 3D CRT techniques. Model performance was validated with 25 cases, for both IMRT and VMAT. The full KBM encompassed lesions located centrally vs. peripherally (43:62), upper vs. lower (62:43), and anterior vs. posterior (60:45). Four separate sub-KBMs were created based on tumor location. Results were compared with the full KBM to evaluate its robustness. Beam templates were used in conjunction with the optimizer to evaluate the model's ability to handle suboptimal beam placements. Dose differences to organs-at-risk (OAR) were evaluated between the plans generated by each KBM. Knowledge-based plans (KBPs) were comparable to clinical plans with respect to target conformity and OAR doses. The KBPs resulted in a lower maximum spinal cord dose by 1.0 ± 1.6 Gy compared to clinical plans, p = 0.007. Sub-KBMs split according to tumor location did not produce significantly better DVH estimates compared to the full KBM. For central lesions, compared to the full KBM, the peripheral sub-KBM resulted in lower dose to 0.035 cc and 5 cc of the esophagus, both by 0.4 ± 0.8 Gy, p = 0.025. For all lesions, compared to the full KBM, the posterior sub-KBM resulted in higher dose to 0.035 cc, 0.35 cc, and 1.2 cc of the spinal cord by 0.2 ± 0.4 Gy, p = 0.01. Plans using template beam arrangements met target and OAR criteria, with an increase noted in maximum heart dose (1.2 ± 2.2 Gy, p = 0.01) and GI (0.2 ± 0.4, p = 0.01) for the nine

  10. A structure-based approach to evaluation product adaptability in adaptable design

    International Nuclear Information System (INIS)

    Cheng, Qiang; Liu, Zhifeng; Cai, Ligang; Zhang, Guojun; Gu, Peihua

    2011-01-01

    Adaptable design, as a new design paradigm, involves creating designs and products that can be easily changed to satisfy different requirements. In this paper, two types of product adaptability, essential adaptability and behavioral adaptability, are proposed, and a model for product adaptability evaluation is developed by measuring each of them. The essential adaptability evaluation proceeds by first analyzing the independence of function requirements and function modules based on axiomatic design, and then measuring the adaptability of interfaces with three indices. The behavioral adaptability, reflected by the performance of adaptable requirements after adaptation, is measured based on the Kano model. Finally, the effectiveness of the proposed method is demonstrated with an illustrative example of a personal computer motherboard. The results show that the method can evaluate and reveal the adaptability of a product in essence, and it is of directive significance for improving design and innovative design.

  11. Pipe fracture evaluations for leak-rate detection: Probabilistic models

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1993-01-01

    This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for the stochastic performance evaluation of degraded nuclear piping systems. This was accomplished in three distinct stages. First, a statistical analysis was conducted to characterize various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as the material properties of the pipe, crack morphology variables, and the location of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate the performance of degraded piping systems. It is based on accurate deterministic models for the thermo-hydraulic and fracture mechanics analyses described in the first paper, statistical characterization of various input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of the leak-rate detection capability of the piping systems can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. Results clearly showed that the model provides satisfactory estimates of the conditional failure probability with much less computational effort than Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping in boiling water reactor and pressurized water reactor plants for leak-rate detection applications.

  12. Aircraft Cockpit Ergonomic Layout Evaluation Based on Uncertain Linguistic Multiattribute Decision Making

    Directory of Open Access Journals (Sweden)

    Junxuan Chen

    2014-03-01

    Full Text Available Because current cockpit information interaction, facilities, and other characteristics are increasingly multifarious, early layout evaluation methods based on single or partial components often make the comprehensive evaluation one-sided, leading to long development periods and low efficiency. Considering the fuzziness of ergonomic evaluation and the diversity of evaluation information attributes, we refine and build an evaluation system based on the characteristics of the current cockpit man-machine layout and introduce the different types of uncertain linguistic multiple attribute combination decision making (DTULDM) method into the cockpit layout evaluation process. We also establish an aircraft cockpit ergonomic layout evaluation model. Finally, an experiment on cockpit layout evaluation is given, and the result demonstrates that the proposed method for cockpit ergonomic layout evaluation is feasible and effective.

  13. Measurement-Based Performance Evaluation of Advanced MIMO Transceiver Designs

    Directory of Open Access Journals (Sweden)

    Schneider Christian

    2005-01-01

    Full Text Available This paper describes the methodology and the results of performance investigations on a multiple-input multiple-output (MIMO transceiver scheme for frequency-selective radio channels. The method relies on offline simulations and employs real-time MIMO channel sounder measurement data to ensure a realistic channel modeling. Thus it can be classified in between the performance evaluation using some predefined channel models and the evaluation of a prototype hardware in field experiments. New aspects for the simulation setup are discussed, which are frequently ignored when using simpler model-based evaluations. Example simulations are provided for an iterative ("turbo" MIMO equalizer concept. The dependency of the achievable bit error rate performance on the propagation characteristics and on the variation in some system design parameters is shown, whereas the antenna constellation is of particular concern for MIMO systems. Although in many of the considered constellations turbo MIMO equalization appears feasible in real field scenarios, there exist cases with poor performance as well, indicating that in practical applications link adaptation of the transmitter and receiver processing to the environment is necessary.

  14. A Nonlinear Model for Gene-Based Gene-Environment Interaction

    Directory of Open Access Journals (Sweden)

    Jian Sa

    2016-06-01

    Full Text Available A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single-SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant-based approaches, in this work we propose a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extract the sparse principal components for the SNPs in a gene; the effect of each principal component is then modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset from a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluate gene-based G×E interaction.

  15. Model-based design evaluation of a compact, high-efficiency neutron scatter camera

    Science.gov (United States)

    Weinfurther, Kyle; Mattingly, John; Brubaker, Erik; Steele, John

    2018-03-01

    This paper presents the model-based design and evaluation of an instrument that estimates incident neutron direction using the kinematics of neutron scattering by hydrogen-1 nuclei in an organic scintillator. The instrument design uses a single, nearly contiguous volume of organic scintillator that is internally subdivided only as necessary to create optically isolated pillars, i.e., long, narrow parallelepipeds of organic scintillator. Scintillation light emitted in a given pillar is confined to that pillar by a combination of total internal reflection and a specular reflector applied to the four sides of the pillar transverse to its long axis. The scintillation light is collected at each end of the pillar using a photodetector, e.g., a microchannel plate photomultiplier (MCP-PM) or a silicon photomultiplier (SiPM). In this optically segmented design, the (x, y) position of scintillation light emission (where the x and y coordinates are transverse to the long axis of the pillars) is estimated as the pillar's (x, y) position in the scintillator "block", and the z-position (the position along the pillar's long axis) is estimated from the amplitude and relative timing of the signals produced by the photodetectors at each end of the pillar. The neutron's incident direction and energy are estimated from the (x, y, z) positions of two sequential neutron-proton scattering interactions in the scintillator block using elastic scattering kinematics. For proton recoils greater than 1 MeV, we show that the (x, y, z) position of neutron-proton scattering can be estimated with < 1 cm root-mean-squared (RMS) error and the proton recoil energy can be estimated with < 50 keV RMS error by fitting the photodetectors' response time history to models of optical photon transport within the scintillator pillars. Finally, we evaluate several alternative designs of this proposed single-volume scatter camera made of pillars of plastic scintillator (SVSC-PiPS), studying the effect of

  16. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    To address the problems that, in the traditional reliability evaluation of machine center components, the component reliability model deviates and the evaluation result is low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, which shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
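
    The failure-influence assessment via PageRank on the cascading-failure digraph can be sketched as follows (the adjacency matrix is a made-up four-component example, not the machine center's actual graph):

        import numpy as np

        # A[i, j] = 1 when a failure of component i can propagate to component j.
        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [1, 0, 0, 0]], dtype=float)

        def pagerank(adj, damping=0.85, tol=1e-9):
            # Power iteration; assumes every node has at least one outgoing edge.
            n = adj.shape[0]
            M = (adj / adj.sum(axis=1, keepdims=True)).T
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1 - damping) / n + damping * (M @ r)
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        print("influence as failure source:", pagerank(A))
        print("influence as failure target:", pagerank(A.T))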

  17. Color Image Evaluation for Small Space Based on FA and GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2014-01-01

    Full Text Available Aiming at the problem that color image is difficult to quantify, this paper proposes an evaluation method of color image for small spaces based on factor analysis (FA) and gene expression programming (GEP) and constructs a correlation model between color image factors and the comprehensive color image. Basic color samples of small spaces and color images are evaluated by the semantic differential (SD) method, color image factors are selected via dimension reduction in FA, a factor score function is established, and the weight of each factor is determined by combining the entropy weight method; the comprehensive color image score is then calculated. The best-fitting function between the color image factors and the comprehensive color image is obtained by the GEP algorithm, which can predict users' color image values. A color image evaluation system for small spaces is developed based on this model. The color evaluation of a control room on an AC frequency-conversion drilling rig is taken as an example, verifying the effectiveness of the proposed method. The system can also assist designers in other color designs and provide a fast evaluation tool for testing users' color images.
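
    The entropy weight step can be sketched as below; the factor-score matrix is fabricated for illustration and does not reproduce the study's SD questionnaire data:

        import numpy as np

        def entropy_weights(scores):
            # Rows = evaluated samples, columns = color image factors.
            p = scores / scores.sum(axis=0)
            n = scores.shape[0]
            entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)
            d = 1.0 - entropy                          # degree of diversification
            return d / d.sum()

        factor_scores = np.array([[3.2, 4.1, 2.8],
                                  [3.8, 3.9, 3.1],
                                  [2.9, 4.5, 2.6],
                                  [3.5, 4.0, 3.0]])
        w = entropy_weights(factor_scores)
        print(w, factor_scores @ w)                    # factor weights and comprehensive scores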

  18. CTBT integrated verification system evaluation model supplement

    International Nuclear Information System (INIS)

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE, ARTHUR C. JR.; TROST, LAWRENCE C.

    2000-01-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  19. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.

  20. Evaluating the effect of human activity patterns on air pollution exposure using an integrated field-based and agent-based modelling framework

    Science.gov (United States)

    Schmitz, Oliver; Beelen, Rob M. J.; de Bakker, Merijn P.; Karssenberg, Derek

    2015-04-01

    Constructing spatio-temporal numerical models to support risk assessment, such as assessing the exposure of humans to air pollution, often requires the integration of field-based and agent-based modelling approaches. Continuous environmental variables such as air pollution are best represented using the field-based approach, which considers phenomena as continuous fields having attribute values at all locations. When calculating human exposure to such pollutants it is, however, preferable to consider the population as a set of individuals, each with a particular activity pattern. This makes it possible to account for the spatio-temporal variation in a pollutant along the space-time paths travelled by individuals, determined, for example, by home and work locations, the road network, and travel times. Modelling this activity pattern requires an agent-based or individual-based modelling approach. In general, field- and agent-based models are constructed with separate software tools, whereas the two approaches should interact and preferably be combined into one modelling framework, allowing efficient and effective implementation of models by domain specialists. To overcome this lack of integrated modelling frameworks, we aim at the development of concepts and software for an integrated field-based and agent-based modelling framework. Concepts merging field- and agent-based modelling were implemented by extending PCRaster (http://www.pcraster.eu), a field-based modelling library implemented in C++, with components for 1) representation of discrete, mobile agents, 2) spatial networks and algorithms, by integrating the NetworkX library (http://networkx.github.io), therefore allowing the calculation of, e.g., shortest routes or total transport costs between locations, and 3) functions for field-network interactions, allowing field-based attribute values to be assigned to networks (i.e. as edge weights), such as aggregated or averaged
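
    A minimal sketch of the field-network interaction idea, using only NetworkX (the nodes, travel times, and pollutant values are invented; this is not the PCRaster extension's API):

        import networkx as nx

        # Toy road network: edge weights are travel times between locations.
        G = nx.Graph()
        G.add_weighted_edges_from([("home", "a", 5), ("a", "work", 7),
                                   ("home", "b", 4), ("b", "work", 10)])

        # Hypothetical NO2 concentrations sampled from a field at each node.
        no2 = {"home": 18.0, "a": 35.0, "b": 22.0, "work": 41.0}

        route = nx.shortest_path(G, "home", "work", weight="weight")
        commute_exposure = sum(no2[n] for n in route) / len(route)   # crude along-route average
        print(route, commute_exposure)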

  1. [Evaluation of national prevention campaigns against AIDS: analysis model].

    Science.gov (United States)

    Hausser, D; Lehmann, P; Dubois, F; Gutzwiller, F

    1987-01-01

    The evaluation of the "Stop-Aids" campaign is based upon a model of behaviour modification (McAlister) which includes the communication theory of McGuire and the social learning theory of Bandura. Using this model, it is possible to define key variables that are used to measure the impact of the campaign. Process evaluation allows identification of multipliers that reinforce and confirm the initial message of prevention (source) thereby encouraging behaviour modifications that are likely to reduce the transmission of HIV (condom use, no sharing of injection material, monogamous relationship, etc.). Twelve studies performed by seven teams in the three linguistic areas contribute to the project. A synthesis of these results will be performed by the IUMSP.

  2. Linking agent-based models and stochastic models of financial markets.

    Science.gov (United States)

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene

    2012-05-29

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.

  3. The design and implementation of an operational model evaluation system. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K.T.

    1995-06-01

    The complete evaluation of an atmospheric transport and diffusion model typically includes a study of the model's operational performance. Such a study very often attempts to compare the model's calculations of an atmospheric pollutant's temporal and spatial distribution with field experiment measurements. However, these comparisons tend to use data from a small number of experiments and are very often limited to producing the commonly quoted statistics based on the differences between model calculations and the experimental measurements (fractional bias, fractional scatter, etc.). This paper presents initial efforts to develop a model evaluation system geared for both the objective statistical analysis and the subjective visualization of the interrelationships between a model's calculations and the appropriate field measurement data.
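
    For reference, the two statistics named above are typically computed roughly as follows (definitions vary between evaluation protocols; the paired values are illustrative):

        import numpy as np

        def fractional_bias(obs, pred):
            # FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred); 0 means no bias.
            return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

        def fractional_scatter(obs, pred):
            # One common definition, based on the standard deviations of the two samples.
            return 2.0 * (obs.std() - pred.std()) / (obs.std() + pred.std())

        observed  = np.array([1.2, 0.8, 2.5, 0.1, 3.3])   # sampler concentrations
        predicted = np.array([1.0, 1.1, 2.0, 0.3, 2.7])   # model calculations at the same points
        print(fractional_bias(observed, predicted), fractional_scatter(observed, predicted))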

  4. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998-1999 RTMOD project was a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based, user-friendly interface for data submission and an interactive program module for the display, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  5. Cleanliness Policy Implementation: Evaluating Retribution Model to Rise Public Satisfaction

    Science.gov (United States)

    Dailiati, Surya; Hernimawati; Prihati; Chintia Utami, Bunga

    2018-05-01

    This research is based on the principal issues concerning the evaluation of the cleanliness retribution policy, which has not been able to optimally increase the Local Revenue (PAD) of Pekanbaru City and has not improved the cleanliness of Pekanbaru City. This was estimated to be caused by the performance of the Garden and Sanitation Department not being in accordance with the requirements of the society of Pekanbaru City. The research method used in this study is a mixed method with a sequential exploratory strategy. The data collection methods used are observation, interviews, and documentation for the qualitative research, as well as questionnaires for the quantitative research. The collected data were analyzed with the interactive model of Miles and Huberman for the qualitative research and multiple regression analysis for the quantitative research. The research results indicated that the model of cleanliness policy implementation that can increase the PAD of Pekanbaru City and improve people's satisfaction is divided into two parts: the evaluation model and the societal satisfaction model. The evaluation model is influenced by the criteria/variables of effectiveness, efficiency, adequacy, equity, responsiveness, and appropriateness, while the societal satisfaction model is influenced by the variables of societal satisfaction, intentions, goals, plans, programs, and the appropriateness of the cleanliness retribution collection policy.

  6. IEA-Task 31 WAKEBENCH: Towards a protocol for wind farm flow model evaluation. Part 2: Wind farm wake models

    DEFF Research Database (Denmark)

    Moriarty, Patrick; Rodrigo, Javier Sanz; Gancarski, Pawel

    2014-01-01

    Researchers within the International Energy Agency (IEA) Task 31: Wakebench have created a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to wind farm wake models, including best practices for the benchmarking and data processing procedures for validation datasets from wind farm SCADA and meteorological databases. A hierarchy of test cases has been proposed...

  7. IEA-Task 31 WAKEBENCH: Towards a protocol for wind farm flow model evaluation. Part 1: Flow-over-terrain models

    DEFF Research Database (Denmark)

    Rodrigo, Javier Sanz; Gancarski, Pawel; Arroyo, Roberto Chavez

    2014-01-01

    The IEA Task 31 Wakebench is setting up a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to flow-over-terrain models, including best practices for the benchmarking and data processing procedures for the analysis and qualification of validation datasets from wind resource assessment campaigns. A hierarchy of test cases has been proposed for flow...

  8. Strategic directions for agent-based modeling: avoiding the YAAWN syndrome.

    Science.gov (United States)

    O'Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris

    In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances - in terms of model complexity, model evaluation, and model structure - can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from 'yet another model' to doing better science with models.

  9. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large CHF database was procured from various sources, which included square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models, including the CHF lookup table method, empirical correlations, and phenomenological DNB models. The parametric effects of mass velocity and the unheated wall were investigated from the experimental results and incorporated into the development of a local-parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P, evaluated from the measured-to-predicted CHF ratios for the relevant CHF database. It is important to evaluate the adequacy of the CHF prediction model for application to actual reactor core conditions. Validation of the CHF prediction model provides the degree of accuracy inferred from the comparison of solutions and data. To achieve the required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters using the validation results. If the accuracy of the model is acceptable, it is then applied to the real complex system with the inferred accuracy. In the conventional approach, the accuracy of the CHF prediction model was evaluated from the M/P statistics for the relevant CHF database, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determining the limit DNBR. When a subchannel based CHF prediction model
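
    The 95/95 idea behind the limit DNBR can be sketched as follows, assuming the M/P ratios are normally distributed (the data are synthetic, and actual licensing methodologies may use different tolerance-limit constructions):

        import numpy as np
        from scipy import stats

        def lower_tolerance_limit(x, coverage=0.95, confidence=0.95):
            # One-sided normal tolerance limit via the noncentral t distribution.
            n = len(x)
            delta = stats.norm.ppf(coverage) * np.sqrt(n)
            k = stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
            return x.mean() - k * x.std(ddof=1)

        mp = np.random.default_rng(1).normal(loc=1.0, scale=0.08, size=200)  # synthetic M/P ratios
        tl = lower_tolerance_limit(mp)
        print("95/95 lower limit of M/P:", tl, "-> limit DNBR ~", 1.0 / tl)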

  10. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  11. Study on Fuzzy Comprehensive Evaluation Model for the Safety of Mine Belt Conveyor

    Directory of Open Access Journals (Sweden)

    Gong Xiaoyan

    2017-01-01

    Full Text Available To improve the situation of frequent failures of mine belt conveyors during operation, a model was used to evaluate the safety of the mine belt conveyor. Based on collecting and analyzing a large quantity of fault information on belt conveyors in coal mines nationwide, a fault tree model of the belt conveyor was built, and a safety evaluation index system was then established by analyzing and removing some secondary indicators. Furthermore, the weights of the safety evaluation indices were determined by the analytic hierarchy process (AHP), and the single-factor fuzzy evaluation matrix was constructed by the expert grading method. Additionally, the model was applied to evaluating the safety of the belt conveyor in the Nanliang coal mine. The results show that the safety level is recognized as "general", which means that this model can be widely adopted in evaluating the safety of mine belt conveyors.
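
    The fuzzy comprehensive evaluation step reduces to a weight vector applied to a membership matrix, as in this sketch (weights, grades, and memberships are placeholders rather than the Nanliang data):

        import numpy as np

        weights = np.array([0.40, 0.35, 0.25])        # AHP-derived index weights (assumed)

        # Expert-graded membership matrix: rows = indices, columns = grades.
        R = np.array([[0.2, 0.6, 0.2],
                      [0.3, 0.5, 0.2],
                      [0.1, 0.7, 0.2]])

        B = weights @ R                               # weighted-average fuzzy operator
        grades = ["safe", "general", "dangerous"]
        print(B, "->", grades[int(np.argmax(B))])     # maximum-membership principle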

  12. Model-based evaluation of highly and low pathogenic avian influenza dynamics in wild birds.

    Directory of Open Access Journals (Sweden)

    Viviane Hénaux

    Full Text Available There is growing interest in avian influenza (AI) epidemiology to predict disease risk in wild and domestic birds, and prevent transmission to humans. However, understanding the epidemic dynamics of highly pathogenic (HPAI) viruses remains challenging because they have rarely been detected in wild birds. We used modeling to integrate available scientific information from laboratory and field studies, evaluate AI dynamics in individual hosts and waterfowl populations, and identify key areas for future research. We developed a Susceptible-Exposed-Infectious-Recovered (SEIR) model and used published laboratory challenge studies to estimate epidemiological parameters (rate of infection, latency period, recovery and mortality rates), considering the importance of age classes, and virus pathogenicity. Infectious contact leads to infection and virus shedding within 1-2 days, followed by relatively slower period for recovery or mortality. We found a shorter infectious period for HPAI than low pathogenic (LP) AI, which may explain that HPAI has been much harder to detect than LPAI during surveillance programs. Our model predicted a rapid LPAI epidemic curve, with a median duration of infection of 50-60 days and no fatalities. In contrast, HPAI dynamics had lower prevalence and higher mortality, especially in young birds. Based on field data from LPAI studies, our model suggests to increase surveillance for HPAI in post-breeding areas, because the presence of immunologically naïve young birds is predicted to cause higher HPAI prevalence and bird losses during this season. Our results indicate a better understanding of the transmission, infection, and immunity-related processes is required to refine predictions of AI risk and spread, improve surveillance for HPAI in wild birds, and develop disease control strategies to reduce potential transmission to domestic birds and/or humans.

  13. Model-based evaluation of highly and low pathogenic avian influenza dynamics in wild birds

    Science.gov (United States)

    Hénaux, Viviane; Samuel, Michael D.; Bunck, Christine M.

    2010-01-01

    There is growing interest in avian influenza (AI) epidemiology to predict disease risk in wild and domestic birds, and prevent transmission to humans. However, understanding the epidemic dynamics of highly pathogenic (HPAI) viruses remains challenging because they have rarely been detected in wild birds. We used modeling to integrate available scientific information from laboratory and field studies, evaluate AI dynamics in individual hosts and waterfowl populations, and identify key areas for future research. We developed a Susceptible-Exposed-Infectious-Recovered (SEIR) model and used published laboratory challenge studies to estimate epidemiological parameters (rate of infection, latency period, recovery and mortality rates), considering the importance of age classes, and virus pathogenicity. Infectious contact leads to infection and virus shedding within 1–2 days, followed by relatively slower period for recovery or mortality. We found a shorter infectious period for HPAI than low pathogenic (LP) AI, which may explain that HPAI has been much harder to detect than LPAI during surveillance programs. Our model predicted a rapid LPAI epidemic curve, with a median duration of infection of 50–60 days and no fatalities. In contrast, HPAI dynamics had lower prevalence and higher mortality, especially in young birds. Based on field data from LPAI studies, our model suggests to increase surveillance for HPAI in post-breeding areas, because the presence of immunologically naïve young birds is predicted to cause higher HPAI prevalence and bird losses during this season. Our results indicate a better understanding of the transmission, infection, and immunity-related processes is required to refine predictions of AI risk and spread, improve surveillance for HPAI in wild birds, and develop disease control strategies to reduce potential transmission to domestic birds and/or humans.
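
    The SEIR structure used in both records above can be reproduced in a few lines; the rates below are illustrative placeholders, not the published strain- and age-specific estimates:

        import numpy as np
        from scipy.integrate import odeint

        def seir(y, t, beta, sigma, gamma, mu):
            # mu is disease-induced mortality of infectious birds (HPAI only).
            S, E, I, R = y
            N = S + E + I + R
            return [-beta * S * I / N,
                    beta * S * I / N - sigma * E,
                    sigma * E - (gamma + mu) * I,
                    gamma * I]

        t = np.linspace(0, 120, 600)                   # days
        y0 = [999.0, 0.0, 1.0, 0.0]
        lpai = (0.5, 1.0, 1 / 7.0, 0.0)                # ~1-week shedding, no mortality (assumed)
        hpai = (0.5, 1.0, 1 / 4.0, 0.15)               # shorter infectious period, added mortality (assumed)
        for label, params in [("LPAI", lpai), ("HPAI", hpai)]:
            I = odeint(seir, y0, t, args=params)[:, 2]
            print(label, "peak prevalence:", round(I.max() / sum(y0), 3))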

  14. Comparison of results from dispersion models for regulatory purposes based on Gaussian-and Lagrangian-algorithms: an evaluating literature study

    International Nuclear Information System (INIS)

    Walter, H.

    2004-01-01

    Powerful tools to describe atmospheric transport processes for radiation protection can be provided by meteorology; these are atmospheric flow and dispersion models. Concerning dispersion models, Gaussian plume models have been used for a long time to describe atmospheric dispersion processes. The advantages of Gaussian plume models are short computation time, good validation, and broad acceptance worldwide. However, some limitations and their implications for the interpretation of model results have to be taken into account, as the mathematical derivation of an analytic solution of the equations of motion leads to severe constraints. In order to minimise these constraints, various dispersion models for scientific and regulatory purposes have been developed and applied. Among these, the Lagrangian particle models are of special interest, because these models are able to simulate atmospheric transport processes close to reality, e.g. the influence of orography, topography, wind shear and other meteorological phenomena. Within this study, the characteristics and computational results of Gaussian dispersion models as well as of Lagrangian models have been compared and evaluated on the basis of numerous papers and reports published in the literature. Special emphasis has been laid on the intention that dispersion models should comply with EU requirements (Richtlinie 96/29/Euratom, 1996) on a more realistic assessment of the radiation exposure of the population. (orig.)
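
    For reference, the Gaussian plume model mentioned above evaluates a closed-form concentration field; a minimal sketch with ground reflection and illustrative dispersion parameters follows (the stack and sigma values are invented for the example):

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            # Q: source strength (g/s), u: mean wind speed (m/s), H: effective release height (m).
            # sigma_y, sigma_z: dispersion parameters evaluated at the downwind distance of interest.
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                        np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # reflection at the ground
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level centerline concentration ~1 km downwind of a 50 m stack (illustrative sigmas).
        print(gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0))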

  15. An agent-based hydroeconomic model to evaluate water policies in Jordan

    Science.gov (United States)

    Yoon, J.; Gorelick, S.

    2014-12-01

    Modern water systems can be characterized by a complex network of institutional and private actors that represent competing sectors and interests. Identifying solutions to enhance water security in such systems calls for analysis that can adequately account for this level of complexity and interaction. Our work focuses on the development of a hierarchical, multi-agent, hydroeconomic model that attempts to realistically represent complex interactions between hydrologic and multi-faceted human systems. The model is applied to Jordan, one of the most water-poor countries in the world. In recent years, the water crisis in Jordan has escalated due to an ongoing drought and influx of refugees from regional conflicts. We adopt a modular approach in which biophysical modules simulate natural and engineering phenomena, and human modules represent behavior at multiple scales of decision making. The human modules employ agent-based modeling, in which agents act as autonomous decision makers at the transboundary, state, organizational, and user levels. A systematic nomenclature and conceptual framework is used to characterize model agents and modules. Concepts from the Unified Modeling Language (UML) are adopted to promote clear conceptualization of model classes and process sequencing, establishing a foundation for full deployment of the integrated model in a scalable object-oriented programming environment. Although the framework is applied to the Jordanian water context, it is generalizable to other regional human-natural freshwater supply systems.

  16. Human neuronal cell based assay: A new in vitro model for toxicity evaluation of ciguatoxin.

    Science.gov (United States)

    Coccini, Teresa; Caloni, Francesca; De Simone, Uliana

    2017-06-01

    Ciguatoxins (CTXs) are emerging marine neurotoxins representing the main cause of ciguatera fish poisoning, an intoxication syndrome which constitutes a health emergency and an evolving issue, constantly changing due to new vectors and derivatives of CTXs as well as their presence in new, non-endemic areas. The study applied a neuroblastoma cell model of human origin (SH-SY5Y) to evaluate species-specific mechanistic information on CTX toxicity. Metabolic functionality, cell morphology, cytosolic Ca2+ responses, and neuronal cell growth and proliferation were assessed after short- (4-24 h) and long-term exposure (10 days) to P-CTX-3C. In SH-SY5Y cells, P-CTX-3C displayed a powerful cytotoxicity requiring the presence of both Veratridine and Ouabain. SH-SY5Y cells were very sensitive to Ouabain: 10 and 0.25 nM appeared to be the optimal concentrations for short- and long-term toxicity studies, respectively, to be used in co-incubation with Veratridine (25 μM), simulating the physiological and pathological endogenous Ouabain levels in humans. The cytotoxic effect of P-CTX-3C on human neurons co-incubated with the OV (Ouabain + Veratridine) mix appeared starting from 100 pM after short-term and 25 pM after long-term exposure. Notably, P-CTX-3C alone at 25 nM induced cytotoxicity after 24 h and prolonged exposure. This human brain-derived cell line appears to be a suitable cell-based model to evaluate the cytotoxicity of CTX present in marine food contaminated at low toxic levels and to characterize the toxicological profile of other/new congeners. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A Logic Model for Evaluating the Academic Health Department.

    Science.gov (United States)

    Erwin, Paul Campbell; McNeely, Clea S; Grubaugh, Julie H; Valentine, Jennifer; Miller, Mark D; Buchanan, Martha

    2016-01-01

    Academic Health Departments (AHDs) are collaborative partnerships between academic programs and practice settings. While case studies have informed our understanding of the development and activities of AHDs, there has been no formal published evaluation of AHDs, either singularly or collectively. Developing a framework for evaluating AHDs has potential to further aid our understanding of how these relationships may matter. In this article, we present a general theory of change, in the form of a logic model, for how AHDs impact public health at the community level. We then present a specific example of how the logic model has been customized for a specific AHD. Finally, we end with potential research questions on the AHD based on these concepts. We conclude that logic models are valuable tools, which can be used to assess the value and ultimate impact of the AHD.

  18. The Development and Evaluation of Speaking Learning Model by Cooperative Approach

    Science.gov (United States)

    Darmuki, Agus; Andayani; Nurkamto, Joko; Saddhono, Kundharu

    2018-01-01

    A cooperative approach-based Speaking Learning Model (SLM) has been developed to improve speaking skill of Higher Education students. This research aimed at evaluating the effectiveness of cooperative-based SLM viewed from the development of student's speaking ability and its effectiveness on speaking activity. This mixed method study combined…

  19. A Method for Evaluation of Model-Generated Vertical Profiles of Meteorological Variables

    Science.gov (United States)

    2016-03-01

    evaluated WRF output for the boundary layer over Svalbard in the Arctic in terms of height above ground compared to tower and tethered balloon ...Valparaiso, Chile; 2011. Dutsch ML. Evaluation of the WRF model based on observations made by controlled meteorological balloons in the atmospheric

  20. A diagnostic evaluation model for complex research partnerships with community engagement: The partnership for Native American Cancer Prevention (NACP) model

    OpenAIRE

    Trotter, Robert T.; Laurila, Kelly; Alberts, David; Huenneke, Laura F.

    2014-01-01

    Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a “logic model plus” evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-univ...

  1. Evaluating model performance of an ensemble-based chemical data assimilation system during INTEX-B field mission

    Directory of Open Access Journals (Sweden)

    A. F. Arellano Jr.

    2007-11-01

    Full Text Available We present a global chemical data assimilation system using a global atmosphere model, the Community Atmosphere Model (CAM3) with simplified chemistry, and the Data Assimilation Research Testbed (DART) assimilation package. DART is a community software facility for assimilation studies using the ensemble Kalman filter approach. Here, we apply the assimilation system to constrain global tropospheric carbon monoxide (CO) by assimilating meteorological observations of temperature and horizontal wind velocity and satellite CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT) satellite instrument. We verify the system performance using independent CO observations taken on board the NSF/NCAR C-130 and NASA DC-8 aircraft during the April 2006 part of the Intercontinental Chemical Transport Experiment (INTEX-B). Our evaluations show that MOPITT data assimilation provides significant improvements in terms of capturing the observed CO variability relative to no MOPITT assimilation (i.e. the correlation improves from 0.62 to 0.71, significant at 99% confidence). The assimilation provides evidence of median CO loading of about 150 ppbv at 700 hPa over the NE Pacific during April 2006. This is marginally higher than the modeled CO with no MOPITT assimilation (~140 ppbv). Our ensemble-based estimates of model uncertainty also show model overprediction over the source region (i.e. China) and underprediction over the NE Pacific, suggesting model errors that cannot be readily explained by emissions alone. These results have important implications for improving regional chemical forecasts and for inverse modeling of CO sources and further demonstrate the utility of the assimilation system in comparing non-coincident measurements, e.g. comparing satellite retrievals of CO with in-situ aircraft measurements.
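
    The analysis step of an ensemble Kalman filter of the kind used by DART can be sketched compactly as below; the perturbed-observation form, toy dimensions and linear observation operator are assumptions for illustration, not the CAM3/MOPITT configuration.

```python
import numpy as np

def enkf_update(X, y_obs, H, R):
    """Ensemble Kalman filter analysis step (perturbed observations).

    X : (n_state, n_ens) forecast ensemble
    y_obs : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (n_ens - 1)          # Pf H^T from the ensemble
    K = Pf_Ht @ np.linalg.inv(H @ Pf_Ht + R)     # Kalman gain
    rng = np.random.default_rng(0)
    Y = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T   # perturbed observations
    return X + K @ (Y - H @ X)                   # analysis ensemble

# Toy example: 3 state variables, 1 observation, 20 members
X = np.random.default_rng(1).normal(150.0, 10.0, size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
print(enkf_update(X, np.array([160.0]), H, np.array([[4.0]])).mean(axis=1))
```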

  2. Betterment, undermining, support and distortion: A heuristic model for the analysis of pressure on evaluators.

    Science.gov (United States)

    Pleger, Lyn; Sager, Fritz

    2016-09-18

    Evaluations can only serve as a neutral evidence base for policy decision-making as long as they have not been altered along non-scientific criteria. Studies show that evaluators are repeatedly put under pressure to deliver results in line with given expectations. The study of pressure and influence to misrepresent findings is hence an important research strand for the development of evaluation praxis. A conceptual challenge in the area of evaluation ethics research is the fact that pressure can be not only negative, but also positive. We develop a heuristic model of influence on evaluations that does justice to this ambivalence of influence: the BUSD-model (betterment, undermining, support, distortion). The model is based on the distinction of two dimensions, namely 'explicitness of pressure' and 'direction of influence'. We demonstrate how the model can be applied to understand pressure and offer a practical tool to distinguish positive from negative influence in the form of three so-called differentiators (awareness, accordance, intention). The differentiators comprise a practical component by assisting evaluators who are confronted with influence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. Evaluating the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps avoid unnecessary radiation and reduce harm to the human body. To accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can automatically convert CT/segmented sectioned images into computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested on several medical image and sectioned image datasets, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose were calculated for Rad-HUMAN. Rad-HUMAN can be applied to predict and evaluate dose distributions in the Treatment Plan System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  4. Performance Evaluation and Optimal Management of Distance-Based Registration Using a Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    Jae Joon Suh

    2017-01-01

    Full Text Available We consider the distance-based registration (DBR), which is a kind of dynamic location registration scheme in a mobile communication network. In the DBR, the location of a mobile station (MS) is updated when it enters a base station that is at least a specified distance away from the base station where the location registration for the MS was last performed. In this study, we first investigate the existing performance-evaluation methods for the DBR with implicit registration (DBIR), presented to improve the performance of the DBR, and point out some problems of these evaluation methods. We propose a new performance-evaluation method for the DBIR scheme using a semi-Markov process (SMP), which can resolve the controversial issues of the existing methods. The numerical results obtained with the proposed SMP model are compared with those from previous models. It is shown that the SMP model should be considered to obtain an accurate performance evaluation of the DBIR scheme.
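
    A toy Monte Carlo sketch of the distance-based registration rule itself (not the semi-Markov analysis of the paper) can help fix ideas: a mobile station performs a 1-D random walk over cells and registers whenever it is at least a threshold distance from its last registration point. All parameters are illustrative.

```python
import random

def simulate_dbr(distance_threshold, n_moves=10_000, p_forward=0.7, seed=42):
    """Toy 1-D random-walk simulation of distance-based registration (DBR).

    The MS registers whenever it reaches a cell at least `distance_threshold`
    cells away from the cell where it last registered; returns the
    registration rate per movement.
    """
    rng = random.Random(seed)
    position, anchor, registrations = 0, 0, 0
    for _ in range(n_moves):
        position += 1 if rng.random() < p_forward else -1
        if abs(position - anchor) >= distance_threshold:
            registrations += 1
            anchor = position
    return registrations / n_moves

for d in (1, 2, 4, 8):
    print(d, simulate_dbr(d))
```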

  5. A holistic model for evaluating the impact of individual technology-enhanced learning resources.

    Science.gov (United States)

    Pickering, James D; Joynes, Viktoria C T

    2016-12-01

    The use of technology within education has now crossed the Rubicon; student expectations, the increasing availability of both hardware and software and the push to fully blended learning environments mean that educational institutions cannot afford to turn their backs on technology-enhanced learning (TEL). The ability to meaningfully evaluate the impact of TEL resources nevertheless remains problematic. This paper aims to establish a robust means of evaluating individual resources and meaningfully measure their impact upon learning within the context of the program in which they are used. Based upon the experience of developing and evaluating a range of mobile and desktop based TEL resources, this paper outlines a new four-stage evaluation process, taking into account learner satisfaction, learner gain, and the impact of a resource on both the individual and the institution in which it has been adapted. A new multi-level model of TEL resource evaluation is proposed, which includes a preliminary evaluation of need, learner satisfaction and gain, learner impact and institutional impact. Each of these levels are discussed in detail, and in relation to existing TEL evaluation frameworks. This paper details a holistic, meaningful evaluation model for individual TEL resources within the specific context in which they are used. It is proposed that this model is adopted to ensure that TEL resources are evaluated in a more meaningful and robust manner than is currently undertaken.

  6. COMPUTER EVALUATION OF SKILLS FORMATION QUALITY IN THE IMPLEMENTATION OF COMPETENCE-BASED APPROACH TO LEARNING

    Directory of Open Access Journals (Sweden)

    Vitalia A. Zhuravleva

    2014-01-01

    Full Text Available The article deals with the problem of effectively organizing skills formation as an important part of the competence-based approach in education, implemented via the new generation of educational standards. Solving this problem involves using computer tools to assess the quality of skills and abilities formation based on the proposed model of the problem. This paper proposes an approach to creating a model for assessing the level of skills formation in knowledge management systems based on mathematical modeling methods. Attention is paid to the evaluation strategy and assessment technology, which is based on the rules of fuzzy mathematics. An algorithmic implementation of the proposed model for evaluating the quality of skills development is shown as well.

  7. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite

  8. Modeling IoT-Based Solutions Using Human-Centric Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Álvaro Monares

    2014-08-01

    Full Text Available The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes.

  9. Modeling IoT-based solutions using human-centric wireless sensor networks.

    Science.gov (United States)

    Monares, Álvaro; Ochoa, Sergio F; Santos, Rodrigo; Orozco, Javier; Meseguer, Roc

    2014-08-25

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes.

  10. Evaluation of an energy-based fatigue approach considering mean stress effects

    Energy Technology Data Exchange (ETDEWEB)

    Kabir, S. M. Humayun [Chittagong University of Engineering and Technology, Chittagong (Bangladesh); Yeo, Tae In [University of Ulsan, Ulsan (Korea, Republic of)

    2014-04-15

    In this paper, an attempt is made to extend the total strain energy approach for predicting fatigue life subjected to mean stress under a uniaxial state. The effects of mean stress on the fatigue failure of a ferritic stainless steel and a high-pressure tube steel are studied under strain-controlled low-cycle fatigue conditions. Based on the fatigue results from different strain ratios, a modified total strain energy density approach is proposed to account for the mean stress effects. The proposed damage parameter provides a convenient means of evaluating fatigue life with mean stress effects, considering the fact that the definitions used for measuring strain energies are the same as in fully-reversed cycling (R = -1). A good agreement is observed between experimental life and life predicted using the proposed approach. Two other mean stress models (the Smith-Watson-Topper model and the Morrow model) are also used to evaluate the low-cycle fatigue data. Based on a simple statistical estimator, the proposed approach is compared with these models and is found to be realistic.
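
    For reference, the two comparison models named above are commonly written as the Smith-Watson-Topper parameter and the Morrow mean-stress correction; a minimal sketch with illustrative material constants (not the steels tested in the paper) is given below.

```python
def swt_parameter(sigma_max, strain_amplitude):
    """Smith-Watson-Topper damage parameter: P_SWT = sigma_max * eps_a."""
    return sigma_max * strain_amplitude

def morrow_stress_amplitude(sigma_f_prime, sigma_mean, n_reversals, b):
    """Morrow mean-stress-corrected elastic stress amplitude:
    sigma_a = (sigma_f' - sigma_m) * (2N)^b."""
    return (sigma_f_prime - sigma_mean) * (2 * n_reversals) ** b

# Illustrative numbers only (MPa and cycles), not data from the paper
print(swt_parameter(sigma_max=450.0, strain_amplitude=0.004))
print(morrow_stress_amplitude(sigma_f_prime=900.0, sigma_mean=100.0,
                              n_reversals=1e5, b=-0.09))
```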

  11. Evaluation of an energy-based fatigue approach considering mean stress effects

    International Nuclear Information System (INIS)

    Kabir, S. M. Humayun; Yeo, Tae In

    2014-01-01

    In this paper, an attempt is made to extend the total strain energy approach for predicting fatigue life subjected to mean stress under a uniaxial state. The effects of mean stress on the fatigue failure of a ferritic stainless steel and a high-pressure tube steel are studied under strain-controlled low-cycle fatigue conditions. Based on the fatigue results from different strain ratios, a modified total strain energy density approach is proposed to account for the mean stress effects. The proposed damage parameter provides a convenient means of evaluating fatigue life with mean stress effects, considering the fact that the definitions used for measuring strain energies are the same as in fully-reversed cycling (R = -1). A good agreement is observed between experimental life and life predicted using the proposed approach. Two other mean stress models (the Smith-Watson-Topper model and the Morrow model) are also used to evaluate the low-cycle fatigue data. Based on a simple statistical estimator, the proposed approach is compared with these models and is found to be realistic.

  12. Modeling and knowledge acquisition processes using case-based inference

    Directory of Open Access Journals (Sweden)

    Ameneh Khadivar

    2017-03-01

    Full Text Available The acquisition and presentation of organizational process knowledge has been considered by many KM researchers. In this research, a model for process knowledge acquisition and presentation is presented using the Case-Based Reasoning approach. The validity of the presented model was evaluated by conducting an expert panel. Software was then developed based on the presented model and implemented at Eghtesad Novin Bank of Iran. In this company, following the stages of the presented model, the knowledge-intensive processes were first identified, and the process knowledge was then stored in a knowledge base in the problem/solution/consequent format. Knowledge retrieval was based on nearest-neighbor similarity. To validate the implemented system, its results were compared with the decisions made by the process experts.
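
    A minimal sketch of case retrieval in the problem/solution/consequent format using nearest-neighbour similarity is shown below; the case features and values are invented for illustration and do not come from the banking implementation.

```python
import math

# Case base in the problem/solution/consequent format (hypothetical cases)
case_base = [
    {"problem": {"delay_days": 3, "amount": 10.0}, "solution": "escalate",
     "consequent": "resolved"},
    {"problem": {"delay_days": 0, "amount": 2.0}, "solution": "standard",
     "consequent": "resolved"},
]

def similarity(p, q):
    """Inverse-distance similarity over the shared numeric features."""
    d = math.sqrt(sum((p[k] - q[k]) ** 2 for k in p))
    return 1.0 / (1.0 + d)

def retrieve(query, cases):
    """Return the nearest-neighbour case for a new problem description."""
    return max(cases, key=lambda c: similarity(query, c["problem"]))

print(retrieve({"delay_days": 2, "amount": 9.0}, case_base)["solution"])
```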

  13. [Model-based biofuels system analysis: a review].

    Science.gov (United States)

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.

  14. An Effective Model for Efficiency Evaluation of Strategic Project Based on

    Directory of Open Access Journals (Sweden)

    Virgil ROTARU

    2015-10-01

    Full Text Available When analyzing a strategic development project implemented by universities, both the system created, as a temporary organization with the participating organizations as distinct actors, and the universities themselves have to be seen as learning organizations. As a consequence, the evaluation of such projects must be more complex, since there are intangible results and outcomes to be considered, and an appropriate method of analysis is a must. Such a method, initially introduced by Kaplan and Norton (1992) as the balanced scorecard (BSC), was later developed further in response to frequently raised criticisms. More than 40% of the world's 500 biggest companies use the BSC to evaluate their strategic performance, and almost all analyses of strategic development projects include financial and non-financial perspectives such as those of the BSC. Although many improvements have already been made, very few steps have been taken toward connecting the BSC with social network analysis. This study reflects our current research, which aims to design a more effective system for strategic analysis that integrates social network analysis and a system-dynamics BSC into a more comprehensive method for evaluating the efficiency of strategic projects.

  15. Evaluation of Models of the Reading Process.

    Science.gov (United States)

    Balajthy, Ernest

    A variety of reading process models have been proposed and evaluated in reading research. Traditional approaches to model evaluation specify the workings of a system in a simplified fashion to enable organized, systematic study of the system's components. Following are several statistical methods of model evaluation: (1) empirical research on…

  16. Hybrid Model for e-Learning Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Suzana M. Savic

    2012-02-01

    Full Text Available E-learning is becoming increasingly important for the competitive advantage of economic organizations and higher education institutions. Therefore, it is becoming a significant aspect of quality which has to be integrated into the management system of every organization or institution. The paper examines e-learning quality characteristics, standards, criteria and indicators and presents a multi-criteria hybrid model for e-learning quality evaluation based on the method of Analytic Hierarchy Process, trend analysis, and data comparison.
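
    The AHP component of such a hybrid model can be illustrated by deriving criterion weights from a pairwise comparison matrix via its principal eigenvector, together with a consistency check; the criteria and judgments below are hypothetical.

```python
import numpy as np

# Pairwise comparison of three illustrative e-learning quality criteria
# (content, usability, support); the judgments are hypothetical.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # priority vector

lambda_max = eigvals.real[k]
ci = (lambda_max - len(A)) / (len(A) - 1)      # consistency index
cr = ci / 0.58                                 # random index for n = 3
print(weights, round(cr, 3))
```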

  17. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially the algorithm on the basis of Kalman filter to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by EKF-Ah method under three operating conditions are quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using Kalman filter.
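
    The reported regression/correlation analysis between model error and SOC estimation error can be sketched as below on synthetic data; the error series, normalization ranges and the coupling between the two errors are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

def nrmse(err, ref_range):
    """Normalized root-mean-square error of an error series."""
    return np.sqrt(np.mean(np.square(err))) / ref_range

rng = np.random.default_rng(0)
model_nrmse, soc_nrmse = [], []
for scale in (0.005, 0.01, 0.02, 0.04):        # synthetic "model accuracies"
    v_err = rng.normal(0.0, scale, 1000)       # terminal-voltage error (V)
    soc_err = 0.8 * v_err + rng.normal(0.0, 0.002, 1000)  # assumed coupling
    model_nrmse.append(nrmse(v_err, 4.2 - 3.0))            # voltage window
    soc_nrmse.append(nrmse(soc_err, 1.0))                  # SOC range 0..1

slope, intercept = np.polyfit(model_nrmse, soc_nrmse, 1)   # linear regression
r = np.corrcoef(model_nrmse, soc_nrmse)[0, 1]              # correlation
print(slope, intercept, r)
```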

  18. Study on comprehensive evaluation model for nuclear power plant control room layout

    International Nuclear Information System (INIS)

    Zhu Yiming; Liu Yuan; Fan Huixian

    2010-01-01

    A comprehensive evaluation model for layout of the main control room of nuclear power plants was proposed. Firstly the design scope and principle for the layout of the main control room were defined based on the standards, and then the index system for the comprehensive evaluation was established. Finally, comprehensive evaluation was carried out for the layout design by applying the fuzzy comprehensive evaluation method in the index system. (authors)
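
    A minimal sketch of the fuzzy comprehensive evaluation step, composing an index-weight vector with a membership matrix, is given below; the indices, weights and membership values are invented and are not the paper's index system.

```python
import numpy as np

# Rows: illustrative layout indices (sight lines, reachability, workload);
# columns: membership in the grades (excellent, good, fair, poor).
R = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.3, 0.4, 0.2, 0.1],
              [0.2, 0.5, 0.2, 0.1]])
w = np.array([0.5, 0.3, 0.2])          # index weights (hypothetical)

B = w @ R                              # weighted-average composition operator
B /= B.sum()                           # normalize the grade membership vector
grades = ["excellent", "good", "fair", "poor"]
print(dict(zip(grades, np.round(B, 3))), "->", grades[int(np.argmax(B))])
```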

  19. Building a community-based culture of evaluation.

    Science.gov (United States)

    Janzen, Rich; Ochocka, Joanna; Turner, Leanne; Cook, Tabitha; Franklin, Michelle; Deichert, Debbie

    2017-12-01

    In this article we argue for a community-based approach as a means of promoting a culture of evaluation. We do this by linking two bodies of knowledge - the 70-year theoretical tradition of community-based research and the trans-discipline of program evaluation - that are seldom intersected within the evaluation capacity building literature. We use the three hallmarks of a community-based research approach (community-determined; equitable participation; action and change) as a conceptual lens to reflect on a case example of an evaluation capacity building program led by the Ontario Brain Institute. This program involved two community-based groups (Epilepsy Southwestern Ontario and the South West Alzheimer Society Alliance) who were supported by evaluators from the Centre for Community Based Research to conduct their own internal evaluation. The article provides an overview of a community-based research approach and its link to evaluation. It then describes the featured evaluation capacity building initiative, including reflections by the participating organizations themselves. We end by discussing lessons learned and their implications for future evaluation capacity building. Our main argument is that organizations that strive towards a community-based approach to evaluation are well placed to build and sustain a culture of evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. [The Brazilian National Health Surveillance Agency performance evaluation at the management contract model].

    Science.gov (United States)

    Moreira, Elka Maltez de Miranda; Costa, Ediná Alves

    2010-11-01

    The Brazilian National Health Surveillance Agency (Anvisa) is supervised by the Ministry of Health by means of a management contract, a performance evaluation tool. This case study was aimed at describing and analyzing Anvisa's performance evaluation model based on the agency's institutional purpose, according to the following analytical categories: the management contract formalization, evaluation tools, evaluators and institutional performance. Semi-structured interviews and document analysis revealed that Anvisa signed only one management contract with the Ministry of Health, in 1999, updated by four additive terms. The Collegiate Board of Directors and the Advisory Center for Strategic Management play the role of Anvisa's internal evaluators, and an Assessing Committee, comprising the Ministry of Health, constitutes its external evaluator. Three phases were identified in the evaluation model: the structuring of the new management model (1999-2000), legitimation regarding the productive segment (2001-2004) and widespread legitimation (2005). The best performance was achieved in 2000 (86.05%) and the worst in 2004 (40.00%). The evaluation model was shown to have contributed little to the agency's institutional purpose or to measuring the effectiveness of the implemented actions.

  1. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    Science.gov (United States)

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China’s steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities’ abilities to carry out industrial transformation are evaluated, with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266

  2. Evaluation of some infiltration models and hydraulic parameters

    International Nuclear Information System (INIS)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-01-01

    The evaluation of infiltration characteristics and of some parameters of infiltration models, such as sorptivity and final steady infiltration rate in soils, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate the final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to the observed infiltration data. Some parameters of the models and the coefficient of determination (goodness of fit) were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using the root mean squared error (RMSE), Horton's model gave the best prediction of the final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated by the selected models; the estimated final infiltration rate was not equal to the laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
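
    As an illustration of the model-fitting and RMSE comparison described, the sketch below fits the Horton equation to synthetic double-ring-style data with SciPy; the observations and initial guesses are invented, not the Taleghan measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def horton(t, fc, f0, k):
    """Horton infiltration rate: f(t) = fc + (f0 - fc) * exp(-k t)."""
    return fc + (f0 - fc) * np.exp(-k * t)

# Synthetic double-ring-style observations (t in min, f in mm/h)
t = np.array([1, 2, 5, 10, 20, 30, 60, 90, 120], dtype=float)
f_obs = np.array([62, 50, 35, 26, 19, 16, 13, 12.5, 12.2])

popt, _ = curve_fit(horton, t, f_obs, p0=(10.0, 60.0, 0.1))
rmse = np.sqrt(np.mean((horton(t, *popt) - f_obs) ** 2))
print("fc, f0, k =", np.round(popt, 3), " RMSE =", round(rmse, 3))
```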

  3. Mobility Models for Systems Evaluation

    Science.gov (United States)

    Musolesi, Mirco; Mascolo, Cecilia

    Mobility models are used to simulate and evaluate the performance of mobile wireless systems and the algorithms and protocols that underpin them. The definition of realistic mobility models is one of the most critical and, at the same time, difficult aspects of the simulation of applications and systems designed for mobile environments. There are essentially two possible types of mobility patterns that can be used to evaluate mobile network protocols and algorithms by means of simulations: traces and synthetic models [130]. Traces are obtained by means of measurements of deployed systems and usually consist of logs of connectivity or location information, whereas synthetic models are mathematical models, such as sets of equations, which try to capture the movement of the devices.
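
    A compact generator for the random waypoint model, one of the classic synthetic mobility models referred to above, is sketched below; the area, speed range and trip count are illustrative parameters only.

```python
import random

def random_waypoint(n_trips, area=(1000.0, 1000.0), speed=(1.0, 5.0), seed=7):
    """Yield (x, y) positions of a node following the random waypoint model."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    for _ in range(n_trips):
        tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])  # waypoint
        v = rng.uniform(*speed)                                    # trip speed
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        steps = max(int(dist / v), 1)
        for i in range(1, steps + 1):            # move toward the waypoint
            yield (x + (tx - x) * i / steps, y + (ty - y) * i / steps)
        x, y = tx, ty

trace = list(random_waypoint(3))
print(len(trace), trace[-1])
```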

  4. Seismic performance evaluation of an MR elastomer-based smart base isolation system using real-time hybrid simulation

    International Nuclear Information System (INIS)

    Eem, S H; Jung, H J; Koo, J H

    2013-01-01

    Recently, magneto-rheological (MR) elastomer-based base isolation systems have been actively studied as alternative smart base isolation systems because MR elastomers are capable of adjusting their modulus or stiffness depending on the magnitude of the applied magnetic field. By taking advantage of the MR elastomers’ stiffness-tuning ability, MR elastomer-based smart base isolation systems strive to alleviate limitations of existing smart base isolation systems as well as passive-type base isolators. Until now, research on MR elastomer-based base isolation systems primarily focused on characterization, design, and numerical evaluations of MR elastomer-based isolators, as well as experimental tests with simple structure models. However, their applicability to large civil structures has not been properly studied yet because it is quite challenging to numerically emulate the complex behavior of MR elastomer-based isolators and to conduct experiments with large-size structures. To address these difficulties, this study employs the real-time hybrid simulation technique, which combines physical testing and computational modeling. The primary goal of the current hybrid simulation study is to evaluate seismic performances of an MR elastomer-based smart base isolation system, particularly its adaptability to distinctly different seismic excitations. In the hybrid simulation, a single-story building structure (non-physical, computational model) is coupled with a physical testing setup for a smart base isolation system with associated components (such as laminated MR elastomers and electromagnets) installed on a shaking table. A series of hybrid simulations is carried out under two seismic excitations having different dominant frequencies. The results show that the proposed smart base isolation system outperforms the passive base isolation system in reducing the responses of the structure for the excitations considered in this study. (paper)

  5. Li-NMC Batteries Model Evaluation with Experimental Data for Electric Vehicle Application

    Directory of Open Access Journals (Sweden)

    Aleksandra Baczyńska

    2018-02-01

    Full Text Available The aim of the paper is to present a battery equivalent circuit for electric vehicle applications. The model described below is dedicated to lithium-ion batteries. The purpose of this paper is to introduce an efficient and transparent method to develop a battery equivalent circuit model. Battery modeling requires, depending on the chosen method, either significant calculations or a highly developed mathematical model for optimization. The model is evaluated against real measurement data to demonstrate the performance of the method. Battery measurements based on charge/discharge tests at a fixed C-rate are presented to show the relation of the output voltage profiles to the battery state of charge. The pulse discharge test is presented to obtain the electric parameters of the battery equivalent circuit model, using a Thévenin circuit. Based on the Reverse Trike Ecologic Electric Vehicle (VEECO RT) characteristics used as a case study in this work, new values for vehicle autonomy and battery pack volume based on lithium nickel manganese cobalt oxide cells are evaluated.
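
    The pulse-discharge behaviour of a first-order Thévenin equivalent circuit, the kind of response used to identify the circuit parameters, can be simulated in a few lines; the OCV, R0, R1 and C1 values below are illustrative, not those of the VEECO RT pack.

```python
import numpy as np

def thevenin_voltage(i_load, dt, ocv, r0, r1, c1):
    """Terminal voltage of a 1st-order Thevenin model for a current profile.

    v_t = OCV - R0*i - v1,  with  dv1/dt = i/C1 - v1/(R1*C1)
    (discharge current positive), integrated by explicit Euler steps.
    """
    v1, out = 0.0, []
    for i in i_load:
        v1 += dt * (i / c1 - v1 / (r1 * c1))
        out.append(ocv - r0 * i - v1)
    return np.array(out)

# 10 s rest, 30 s discharge pulse, 60 s rest (illustrative cell parameters)
dt = 1.0
current = np.concatenate([np.zeros(10), np.full(30, 2.5), np.zeros(60)])
v = thevenin_voltage(current, dt, ocv=3.7, r0=0.02, r1=0.015, c1=2000.0)
print(v[9], v[10], v[39], v[-1])   # before, at, end of, and after the pulse
```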

  6. Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages

    Directory of Open Access Journals (Sweden)

    Kevin Holder

    2017-10-01

    Full Text Available For several decades, a wide-spread consensus concerning the enormous importance of an in-depth clarification of the specifications of a product has been observed. A weak clarification of specifications is repeatedly listed as a main cause for the failure of product development projects. Requirements, which can be defined as the purpose, goals, constraints, and criteria associated with a product development project, play a central role in the clarification of specifications. The collection of activities which ensure that requirements are identified, documented, maintained, communicated, and traced throughout the life cycle of a system, product, or service can be referred to as “requirements engineering”. These activities can be supported by a collection and combination of strategies, methods, and tools which are appropriate for the clarification of specifications. Numerous publications describe the strategy and the components of requirements management. Furthermore, recent research investigates its industrial application. Simultaneously, promising developments of graph-based design languages for a holistic digital representation of the product life cycle are presented. Current developments realize graph-based languages by the diagrams of the Unified Modelling Language (UML), and allow the automatic generation and evaluation of multiple product variants. The research presented in this paper seeks to present a method in order to combine the advantages of a conscious requirements management process and graph-based design languages. Consequently, the main objective of this paper is the investigation of a model-based integration of requirements in a product development process by means of graph-based design languages. The research method is based on an in-depth analysis of an exemplary industrial product development, a gear system for so-called “Electrical Multiple Units” (EMU). Important requirements were abstracted from a gear system

  7. On the upscaling of process-based models in deltaic applications

    Science.gov (United States)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  8. Evaluation of bond strength of silorane and methacrylate based restorative systems to dentin using different cavity models

    Directory of Open Access Journals (Sweden)

    Stephano Zerlottini Isaac

    2013-09-01

    Full Text Available OBJECTIVE: The aim of this in vitro study was to evaluate the microtensile bond strength (µTBS) to dentin of two different restorative systems: silorane-based (P90) and methacrylate-based (P60), using two cavity models. MATERIAL AND METHODS: Occlusal enamel of 40 human third molars was removed to expose flat dentin surface. Class I cavities with 4 mm mesial-distal width, 3 mm buccal-lingual width and 3 mm depth (C-factor=4.5) were prepared in 20 teeth, which were divided into two groups (n=10) restored with P60 and P90, bulk-filled after dentin treatment according to manufacturer's instructions. Flat buccal dentin surfaces were prepared in the 20 remaining teeth (C-factor=0.2) and restored with resin blocks measuring 4x3x3 mm using the two restorative systems (n=10). The teeth were sectioned into samples with area between 0.85 and 1.25 mm2 that were submitted to µTBS testing, using a universal testing machine (EMIC) at a speed of 0.5 mm/min. Fractured specimens were analyzed under stereomicroscope and categorized according to fracture pattern. Data were analyzed using ANOVA and Tukey-Kramer tests. RESULTS: For flat surfaces, P60 obtained higher bond strength values compared with P90. However, for Class I cavities, P60 showed significant reduction in bond strength (p0.05, or between Class I Cavity and Flat Surface group, considering P90 restorative system (p>0.05). Regarding fracture pattern, there was no statistical difference among groups (p=0.0713) and 56.3% of the fractures were adhesive. CONCLUSION: It was concluded that methacrylate-based composite µTBS was influenced by cavity models, and the use of silorane-based composite led to similar bond strength values compared to the methacrylate-based composite in cavities with high C-factor.

  9. Using a data base management system for modelling SSME test history data

    Science.gov (United States)

    Abernethy, K.

    1985-01-01

    The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS which included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-particular evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.
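
    In the spirit of the query requirements discussed, a relational layout of test-firing history and one representative query might look as follows; the schema, column names and rows are invented, and Python's sqlite3 stands in here for the DBMSs actually evaluated in the study.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test_firing (test_id TEXT PRIMARY KEY, engine_id TEXT,
                          test_date TEXT, duration_s REAL);
CREATE TABLE anomaly (test_id TEXT, component TEXT, severity INTEGER);
""")
con.executemany("INSERT INTO test_firing VALUES (?,?,?,?)",
                [("A1", "E2001", "1982-03-04", 520.0),
                 ("A2", "E2001", "1982-04-11", 300.0)])
con.executemany("INSERT INTO anomaly VALUES (?,?,?)",
                [("A1", "turbopump", 3), ("A2", "nozzle", 1)])

# Representative query: long-duration firings joined with their anomalies
for row in con.execute("""
    SELECT t.test_id, t.duration_s, a.component, a.severity
    FROM test_firing t JOIN anomaly a ON a.test_id = t.test_id
    WHERE t.duration_s > 400 ORDER BY a.severity DESC"""):
    print(row)
```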

  10. ORCHIDEE-CROP (v0), a new process based Agro-Land Surface Model: model description and evaluation over Europe

    Science.gov (United States)

    Wu, X.; Vuichard, N.; Ciais, P.; Viovy, N.; de Noblet-Ducoudré, N.; Wang, X.; Magliulo, V.; Wattenbach, M.; Vitale, L.; Di Tommasi, P.; Moors, E. J.; Jans, W.; Elbers, J.; Ceschia, E.; Tallec, T.; Bernhofer, C.; Grünwald, T.; Moureaux, C.; Manise, T.; Ligne, A.; Cellier, P.; Loubet, B.; Larmanou, E.; Ripoche, D.

    2015-06-01

    The responses of crop functioning to changing climate and atmospheric CO2 concentration ([CO2]) could have large effects on food production, and impact carbon, water and energy fluxes, causing feedbacks to climate. To simulate the responses of temperate crops to changing climate and [CO2], accounting for the specific phenology of crops mediated by management practice, we present here the development of a process-oriented terrestrial biogeochemical model named ORCHIDEE-CROP (v0), which integrates a generic crop phenology and harvest module and a very simple parameterization of nitrogen fertilization, into the land surface model (LSM) ORCHIDEEv196, in order to simulate biophysical and biochemical interactions in croplands, as well as plant productivity and harvested yield. The model is applicable for a range of temperate crops, but it is tested here for maize and winter wheat, with the phenological parameterizations of two European varieties originating from the STICS agronomical model. We evaluate the ORCHIDEE-CROP (v0) model against eddy covariance and biometric measurements at 7 winter wheat and maize sites in Europe. The specific ecosystem variables used in the evaluation are CO2 fluxes (NEE), latent heat and sensible heat fluxes. Additional measurements of leaf area index (LAI), aboveground biomass and yield are used as well. Evaluation results reveal that ORCHIDEE-CROP (v0) reproduces the observed timing of crop development stages and the amplitude of the corresponding LAI changes, in contrast to ORCHIDEEv196, in which by default crops have the same phenology as grass. A near-halving of the root mean square error of LAI from 2.38 ± 0.77 to 1.08 ± 0.34 m2 m-2 is obtained between ORCHIDEEv196 and ORCHIDEE-CROP (v0) across the 7 study sites. Improved crop phenology and carbon allocation lead to a generally good match between modelled and observed aboveground biomass (with a normalized root mean squared error (NRMSE) of 11.0-54.2 %), crop yield, as well as of the daily

  11. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  12. Growth Models and Teacher Evaluation: What Teachers Need to Know and Do

    Science.gov (United States)

    Katz, Daniel S.

    2016-01-01

    Including growth models based on student test scores in teacher evaluations effectively holds teachers individually accountable for students improving their test scores. While an attractive policy for state administrators and advocates of education reform, value-added measures have been fraught with problems, and their use in teacher evaluation is…

  13. Three new models for evaluation of standard involute spur gear mesh stiffness

    Science.gov (United States)

    Liang, Xihui; Zhang, Hongsheng; Zuo, Ming J.; Qin, Yong

    2018-02-01

    Time-varying mesh stiffness is one of the main internal excitation sources of gear dynamics. Accurate evaluation of gear mesh stiffness is crucial for gear dynamic analysis. This study is devoted to developing new models for spur gear mesh stiffness evaluation. Three models are proposed. The proposed model 1 can give very accurate mesh stiffness results, but the gear bore surface must be assumed to be rigid. Informed by the proposed model 1, our research finds that the angular deflection pattern of the gear bore surface of a pair of meshing gears under a constant torque basically follows a cosine curve. Based on this finding, two other models are proposed. The proposed model 2 evaluates gear mesh stiffness by using angular deflections at different circumferential angles of an end surface circle of the gear bore. The proposed model 3 requires using only the angular deflection at an arbitrary circumferential angle of an end surface circle of the gear bore, but this model can only be used for a gear with the same tooth profile among all teeth. The proposed models are accurate in gear mesh stiffness evaluation and easy to use. Finite element analysis is used to validate the accuracy of the proposed models.

  14. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate an unlimited number of sections and to analyze the data derived from the model. The linearity of the model fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested and conformed to a normal distribution. The output of the model and that of the image analysis system showed statistical correlation and consistency. The algorithm we describe can be used to evaluate the stereological parameters of the structure of tissue slices.
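
    The core stereological relationship the model probes, how random plane sections of particles relate to their true size, can be sketched with a small Monte Carlo experiment; the sphere radius, cube size and plane position are arbitrary choices, and the final line prints the classical mean-profile-diameter result for spheres as a check.

```python
import numpy as np

rng = np.random.default_rng(3)

def section_profiles(n_spheres=10_000, radius=1.0, cube=20.0, plane_z=10.0):
    """Diameters of circular profiles where the plane z=plane_z cuts spheres."""
    centers_z = rng.uniform(0.0, cube, n_spheres)   # only z matters here
    d = np.abs(centers_z - plane_z)                 # centre-to-plane distance
    hit = d < radius                                # spheres actually cut
    return 2.0 * np.sqrt(radius**2 - d[hit]**2)     # profile diameters

profiles = section_profiles()
print(len(profiles), profiles.mean())
# Classical result: mean profile diameter of a sphere is (pi/4) * true diameter
print(np.pi / 4 * 2.0)
```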

  15. MO-F-CAMPUS-T-04: Development and Evaluation of a Knowledge-Based Model for Treatment Planning of Lung Cancer Patients Using Stereotactic Body Radiotherapy (SBRT)

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, K; Kim, J; Reding, A; Fraser, C; Lu, S; Gordon, J; Ajlouni, M; Movsas, B; Chetty, I [Henry Ford Health System, Detroit, MI (United States)

    2015-06-15

    Purpose: To describe the development of a knowledge-based treatment planning model for lung cancer patients treated with SBRT, and to evaluate the model performance and applicability to different planning techniques and tumor locations. Methods: 105 lung SBRT plans previously treated at our institution were included in the development of the model using Varian’s RapidPlan DVH estimation algorithm. The model was trained with a combination of IMRT, VMAT, and 3D–CRT techniques. Tumor locations encompassed lesions located centrally vs peripherally (43:62), upper vs lower (62:43), and anterior vs posterior lobes (60:45). The model performance was validated with 25 cases independent of the training set, for both IMRT and VMAT. Model generated plans were created with only one optimization and no planner intervention. The original, general model was also divided into four separate models according to tumor location. The model was also applied using different beam templates to further improve workflow. Dose differences to targets and organs-at-risk were evaluated. Results: IMRT and VMAT RapidPlan generated plans were comparable to clinical plans with respect to target coverage and several OARs. Spinal cord dose was lowered in the model-based plans by 1Gy compared to the clinical plans, p=0.008. Splitting the model according to tumor location resulted in insignificant differences in DVH estimation. The peripheral model decreased esophagus dose to the central lesions by 0.5Gy compared to the original model, p=0.025, and the posterior model increased dose to the spinal cord by 1Gy compared to the anterior model, p=0.001. All template beam plans met OAR criteria, with 1Gy increases noted in maximum heart dose for the 9-field plans, p=0.04. Conclusion: A RapidPlan knowledge-based model for lung SBRT produces comparable results to clinical plans, with increased consistency and greater efficiency. The model encompasses both IMRT and VMAT techniques, differing tumor locations

  16. An evaluation framework for participatory modelling

    Science.gov (United States)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  17. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    Science.gov (United States)

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis.

  18. Combining Internal and External Evaluation: A Case for School-Based Evaluation.

    Science.gov (United States)

    Nevo, David

    1994-01-01

    School-based evaluation, the focus of this article, is conceived of as neither a synonym for internal evaluation nor an antonym for external evaluation, but as a combination that is examined through a review of recent research. A school-based evaluation in Israel illustrates combining these approaches in a complementary way. (SLD)

  19. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available Topic models are unsupervised techniques that extract likely topics from text corpora, by creating probabilistic word-topic and topic-document associations. Evaluation of topic models is a challenge because (a) topic models are often employed...

  20. CTBT integrated verification system evaluation model supplement

    Energy Technology Data Exchange (ETDEWEB)

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
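
    The abstract gives no internal equations for IVSEM; as a hedged illustration of the kind of integration it describes, the following Python sketch fuses per-technology detection probabilities into an integrated probability of detection under an assumed independence of the technologies. The function name and the numeric probabilities are hypothetical, not IVSEM outputs.

      # Hedged sketch: integrated probability of detection from per-technology
      # detection probabilities, assuming the technologies detect independently.
      # This is only an illustration, not the IVSEM algorithm.
      def integrated_detection_probability(p_detect_by_technology):
          """Probability that at least one technology detects the event."""
          p_all_miss = 1.0
          for p in p_detect_by_technology.values():
              p_all_miss *= (1.0 - p)          # every technology misses
          return 1.0 - p_all_miss

      # Hypothetical per-technology values for a single monitoring scenario.
      p_tech = {"seismic": 0.85, "infrasound": 0.40, "radionuclide": 0.30, "hydroacoustic": 0.10}
      print(integrated_detection_probability(p_tech))   # about 0.943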

  1. A Three-Dimensional Radiation Transfer Model to Evaluate Performance of Compound Parabolic Concentrator-Based Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Jingjing Tang

    2018-04-01

    Full Text Available In the past, two-dimensional radiation transfer models (2-D models) were widely used to investigate the optical performance of linear compound parabolic concentrators (CPCs), in which the radiation transfer on the cross-section of CPC troughs is considered. However, the photovoltaic efficiency of solar cells depends on the real incidence angle instead of the projection incidence angle, thus 2-D models can’t reasonably evaluate the photovoltaic performance of CPC-based photovoltaic systems (CPVs). In this work, the three-dimensional radiation transfer (3-D) model within CPC-θa/θe, the CPC with a maximum exit angle θe for radiation within its acceptance angle (θa), is investigated by means of vector algebra, solar geometry and the imaging principle of the plane mirror, and the effects of the geometry of CPV-θa/θe on its annual electricity generation are studied. Analysis shows that, as compared to similar photovoltaic (PV) panels, the use of CPCs increases the incidence angle of solar rays on the solar cells and thus lowers the photovoltaic conversion efficiency of the cells. Calculations show that 2-D models can reasonably predict the optical performance of CPVs, but such models always overestimate the photovoltaic performance of CPVs, and even can’t predict the variation trend of annual power output of CPV-θa/θe with θe. Results show that, for a full CPV-θa/θe with a given θa, the annual power output increases with θe first and then comes to a halt as θe > 83°, whereas for a truncated CPV-θa/θe with a given geometric concentration (Ct), the annual power output decreases with θe.

  2. Intuitionistic fuzzy (IF) evaluations of multidimensional model

    International Nuclear Information System (INIS)

    Valova, I.

    2012-01-01

    There are different logical methods for data structuring, but none is perfect. The multidimensional model (MD) of data presents data either in the form of a cube (also referred to as an info-cube or hypercube) or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), by use of F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data being subject to analysis in a specific multidimensional model is located in a Cartesian space restricted by the D-structures. In fact, the data is either dispersed or 'concentrated', therefore the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict, and the data is concentrated per time period, location of the performed business event, etc. To process such dispersed or concentrated data, various technical strategies are needed, and the basic methods for presenting such data should be selected. The approaches to data processing and the respective calculations are connected with different options for data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for alternative presentation and processing of the data subject to analysis in any OLAP application. The use of IFE in the evaluation of multidimensional models results in the following advantages: analysts have more complete information for processing and analysis of the respective data; managers benefit because the final decisions are more effective; and the design of more functional multidimensional schemes is enabled. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data. (authors)

  3. Surrogate-Based Optimization of Biogeochemical Transport Models

    Science.gov (United States)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model replacing the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
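
    The central idea of the abstract, replacing the expensive high-fidelity model with a cheaper surrogate inside the optimization loop, can be sketched in a hedged way as below; the model functions, the observation value and the optimizer choice are hypothetical placeholders, and this is not the Aggressive Space Mapping algorithm itself.

      # Hedged sketch of surrogate-based calibration: a cheap "coarse" model
      # stands in for an expensive "fine" model when minimizing the misfit to an
      # observation. Both model functions are placeholders, not the NPZD model.
      import numpy as np
      from scipy.optimize import minimize

      def fine_model(params):      # expensive high-fidelity simulation (placeholder)
          return np.sin(params[0]) + 0.1 * params[1] ** 2

      def coarse_model(params):    # cheap surrogate, e.g. a coarser space/time grid
          return np.sin(params[0]) + 0.1 * params[1] ** 2 + 0.05   # small model error

      observation = 0.45           # hypothetical observed quantity

      def surrogate_misfit(params):
          return (coarse_model(params) - observation) ** 2

      # Optimize the cheap surrogate, then check the expensive model at the optimum.
      result = minimize(surrogate_misfit, x0=np.array([0.2, 0.5]), method="Nelder-Mead")
      print("surrogate optimum:", result.x, "fine-model value there:", fine_model(result.x))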

  4. IN VITRO MODELS TO EVALUATE DRUG-INDUCED HYPERSENSITIVITY: POTENTIAL TEST BASED ON ACTIVATION OF DENDRITIC CELLS

    Directory of Open Access Journals (Sweden)

    Valentina Galbiati

    2016-07-01

    Full Text Available Hypersensitivity drug reactions (HDRs) are adverse effects of pharmaceuticals that clinically resemble allergy. HDRs account for approximately 1/6 of drug-induced adverse effects, and include immune-mediated ('allergic') and non-immune-mediated ('pseudo-allergic') reactions. In recent years, severe and unpredicted adverse drug events have clearly indicated that the immune system can be a critical target of drugs. Enhanced prediction in preclinical safety evaluation is, therefore, crucial. Nowadays, there are no validated in vitro or in vivo methods to screen the sensitizing potential of drugs in the pre-clinical phase. The problem of non-predictability of immunologically-based hypersensitivity reactions is related to the lack of appropriate experimental models rather than to a lack of understanding of the adverse phenomenon. We recently established experimental conditions and markers to correctly identify drugs associated with in vivo hypersensitivity reactions using THP-1 cells and IL-8 production, CD86 and CD54 expression. The proposed in vitro method benefits from a rationalistic approach with the idea that allergenic drugs share with chemical allergens common mechanisms of cell activation. This assay can be easily incorporated into drug development for hazard identification of drugs which may have the potential to cause in vivo hypersensitivity reactions. The purpose of this review is to assess the state of the art of in vitro models for assessing the allergenic potential of drugs based on the activation of dendritic cells.

  5. Dynamic modeling and performance evaluation of axial flux PMSG based wind turbine system with MPPT control

    Directory of Open Access Journals (Sweden)

    Vahid Behjat

    2014-12-01

    Full Text Available This research work develops a dynamic model of a gearless small-scale wind power generation system based on a direct-driven, single-sided, outer-rotor AFPMSG with a coreless armature winding. Dynamic modeling of the AFPMSG-based wind turbine requires machine parameters. To this end, a 3D FEM model of the generator is developed and, from magnetostatic and transient analysis of the FEM model, machine parameters are calculated and utilized in dynamic modeling of the system. A maximum power point tracking (MPPT)-based FOC control approach is used to obtain maximum power from the variable wind speed. The simulation results show the proper performance of the developed dynamic model of the AFPMSG, the control approach and the power generation system.
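
    The abstract does not spell out the MPPT algorithm; as a hedged illustration, a simple perturb-and-observe step (a common MPPT strategy, not necessarily the FOC-based scheme used in the paper) can be sketched as follows, with a hypothetical turbine power curve standing in for measurements.

      # Hedged sketch of a perturb-and-observe MPPT step for a variable-speed
      # generator; power_at() is a hypothetical power curve standing in for the
      # measured electrical power at a given rotor-speed reference.
      def mppt_step(speed_ref, prev_speed_ref, power, prev_power, delta=0.1):
          """Return the next speed reference based on the last observed power change."""
          step = speed_ref - prev_speed_ref
          if power < prev_power:
              step = -step                      # power dropped: reverse direction
          if step == 0:
              step = delta                      # initial perturbation
          return speed_ref + step

      def power_at(speed):                      # hypothetical power curve, peak at 8 rad/s
          return -(speed - 8.0) ** 2 + 64.0

      speed, prev_speed = 5.0, 4.9
      for _ in range(50):
          speed, prev_speed = mppt_step(speed, prev_speed, power_at(speed), power_at(prev_speed)), speed
      print(round(speed, 2))                    # oscillates near the optimum speed of 8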

  6. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    Science.gov (United States)

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become
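
    As a hedged illustration of the two diagnostics discussed (the c-statistic and a decile-based calibration check), the following sketch computes both from predicted risks and binary outcomes; the data are synthetic and the decile convention is an assumption, not the exact procedure of the study.

      import numpy as np

      def c_statistic(risk, outcome):
          """Rank-based AUC: probability that a random event has a higher
          predicted risk than a random non-event (ties count 0.5)."""
          events, nonevents = risk[outcome == 1], risk[outcome == 0]
          wins = (events[:, None] > nonevents[None, :]).sum()
          ties = (events[:, None] == nonevents[None, :]).sum()
          return (wins + 0.5 * ties) / (len(events) * len(nonevents))

      def calibration_by_decile(risk, outcome):
          """(decile, predicted events, observed events) over risk deciles."""
          edges = np.percentile(risk, np.arange(10, 100, 10))
          decile = np.digitize(risk, edges)
          return [(d + 1, float(risk[decile == d].sum()), int(outcome[decile == d].sum()))
                  for d in range(10)]

      rng = np.random.default_rng(0)            # synthetic example data
      risk = rng.uniform(0.01, 0.30, size=5000)
      outcome = (rng.uniform(size=5000) < risk).astype(int)
      print(round(c_statistic(risk, outcome), 3))
      print(calibration_by_decile(risk, outcome)[:2])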

  7. A Bayesian approach to the evaluation of risk-based microbiological criteria for Campylobacter in broiler meat

    DEFF Research Database (Denmark)

    Ranta, Jukka; Lindqvist, Roland; Hansson, Ingrid

    2015-01-01

    Shifting from traditional hazard-based food safety management toward risk-based management requires statistical methods for evaluating intermediate targets in food production, such as microbiological criteria (MC), in terms of their effects on human risk of illness. A fully risk-based evaluation...... of MC involves several uncertainties that are related to both the underlying Quantitative Microbiological Risk Assessment (QMRA) model and the production-specific sample data on the prevalence and concentrations of microbes in production batches. We used Bayesian modeling for statistical inference...

  8. On Spoken English Phoneme Evaluation Method Based on Sphinx-4 Computer System

    Directory of Open Access Journals (Sweden)

    Li Qin

    2017-12-01

    Full Text Available In oral English learning, HDPs (phonemes that are hard to distinguish) are areas where Chinese students frequently make mistakes in pronunciation. This paper studies a speech phoneme evaluation method for HDPs, hoping to improve the ability of individualized evaluation on HDPs and help provide a personalized learning platform for English learners. First of all, this paper briefly introduces relevant phonetic recognition technologies and pronunciation evaluation algorithms and also describes the phonetic retrieving, phonetic decoding and phonetic knowledge base in the Sphinx-4 computer system, which constitute the technological foundation for phoneme evaluation. Then it proposes an HDP evaluation model, which integrates the reliability of the speech processing system and the individualization of spoken English learners into the evaluation system. After collecting HDPs of spoken English learners and sorting them into different sets, it uses the evaluation system to recognize these HDP sets and at last analyzes the experimental results of HDP evaluation, which proves the effectiveness of the HDP evaluation model.

  9. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  10. ORCHIDEE-CROP (v0), a new process-based agro-land surface model: model description and evaluation over Europe

    Science.gov (United States)

    Wu, X.; Vuichard, N.; Ciais, P.; Viovy, N.; de Noblet-Ducoudré, N.; Wang, X.; Magliulo, V.; Wattenbach, M.; Vitale, L.; Di Tommasi, P.; Moors, E. J.; Jans, W.; Elbers, J.; Ceschia, E.; Tallec, T.; Bernhofer, C.; Grünwald, T.; Moureaux, C.; Manise, T.; Ligne, A.; Cellier, P.; Loubet, B.; Larmanou, E.; Ripoche, D.

    2016-03-01

    The response of crops to changing climate and atmospheric CO2 concentration ([CO2]) could have large effects on food production, and impact carbon, water, and energy fluxes, causing feedbacks to the climate. To simulate the response of temperate crops to changing climate and [CO2], which accounts for the specific phenology of crops mediated by management practice, we describe here the development of a process-oriented terrestrial biogeochemical model named ORCHIDEE-CROP (v0), which integrates a generic crop phenology and harvest module, and a very simple parameterization of nitrogen fertilization, into the land surface model (LSM) ORCHIDEEv196, in order to simulate biophysical and biochemical interactions in croplands, as well as plant productivity and harvested yield. The model is applicable for a range of temperate crops, but is tested here using maize and winter wheat, with the phenological parameterizations of two European varieties originating from the STICS agronomical model. We evaluate the ORCHIDEE-CROP (v0) model against eddy covariance and biometric measurements at seven winter wheat and maize sites in Europe. The specific ecosystem variables used in the evaluation are CO2 fluxes (net ecosystem exchange, NEE), latent heat, and sensible heat fluxes. Additional measurements of leaf area index (LAI) and aboveground biomass and yield are used as well. Evaluation results revealed that ORCHIDEE-CROP (v0) reproduced the observed timing of crop development stages and the amplitude of the LAI changes. This is in contrast to ORCHIDEEv196 where, by default, crops have the same phenology as grass. A halving of the root mean square error for LAI from 2.38 ± 0.77 to 1.08 ± 0.34 m2 m-2 was obtained when ORCHIDEEv196 and ORCHIDEE-CROP (v0) were compared across the seven study sites. Improved crop phenology and carbon allocation led to a good match between modeled and observed aboveground biomass (with a normalized root mean squared error (NRMSE) of 11.0-54.2 %), crop
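
    For the skill scores quoted above (RMSE and a normalized RMSE), a minimal hedged sketch is given below; the normalization by the observed mean and the LAI values are assumptions for illustration, since the abstract does not state the exact convention.

      import numpy as np

      def rmse(model, obs):
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          return float(np.sqrt(np.mean((model - obs) ** 2)))

      def nrmse_percent(model, obs):
          # Normalization by the observed mean is an assumed convention.
          return 100.0 * rmse(model, obs) / float(np.mean(obs))

      obs_lai   = [0.5, 1.2, 3.4, 4.8, 3.9, 1.1]   # hypothetical observed LAI (m2 m-2)
      model_lai = [0.4, 1.5, 3.0, 4.2, 4.3, 0.9]   # hypothetical simulated LAI
      print(round(rmse(model_lai, obs_lai), 3), round(nrmse_percent(model_lai, obs_lai), 1))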

  11. PARTICIPATION BASED MODEL OF SHIP CREW MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Toni Bielić

    2014-10-01

    Full Text Available This paper analyses the participation-based model on board the ship as a possibly optimal leadership model in the shipping industry, with an accent on the decision-making process. In the paper, the authors have tried to define the master's behaviour model and management style, identifying drawbacks and disadvantages of a vertical, pyramidal organization with the master on top. The paper describes the efficiency of decision making within a team organization and the optimization of a ship's organisation by introducing teamwork on board the ship. Three examples of ship accidents are studied and evaluated through the “Leader-participation” model. The model of participation-based management as a model of teamwork has been applied in studying the cause-and-effect of accidents, with a critical review of communication and managing human resources on a ship. The results have shown that the cause of all three accidents is the autocratic behaviour of the leaders and a lack of communication within teams.

  12. The Design of Model-Based Training Programs

    Science.gov (United States)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

    This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), the rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilot entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, result in error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic skills for the FMS on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  13. An improved Rosetta pedotransfer function and evaluation in earth system models

    Science.gov (United States)

    Zhang, Y.; Schaap, M. G.

    2017-12-01

    Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an alternative for predicting those parameters. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, allowing the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG) and saturated hydraulic conductivity (Ks), as well as their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of uni-variate and bi-variate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, there are diverse PTFs used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980] based PTFs are more widely used in hydrology and soil sciences. We use an independent global-scale soil database to evaluate the performance of diverse PTFs used in different disciplines of earth system modeling. PTFs are evaluated based on different soil and environmental characteristics, such as soil textural data, soil organic carbon, and soil pH, as well as precipitation and soil temperature. This analysis provides more quantitative estimation error information for PTF predictions in different disciplines of earth system modeling.
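
    For context, the van Genuchten (1980) retention curve whose parameters Rosetta estimates can be written down directly; the sketch below uses the common m = 1 - 1/n restriction and hypothetical loam-like parameter values, not Rosetta output.

      import numpy as np

      def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
          """Volumetric water content at matric head h (van Genuchten, 1980),
          using the common restriction m = 1 - 1/n."""
          m = 1.0 - 1.0 / n
          se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
          return theta_r + (theta_s - theta_r) * se

      # Hypothetical loam-like parameters: theta_r, theta_s [cm3/cm3], alpha [1/cm], n [-]
      print(round(van_genuchten_theta(h=330.0, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56), 3))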

  14. Using modeling to develop and evaluate a corrective action system

    International Nuclear Information System (INIS)

    Rodgers, L.

    1995-01-01

    At a former trucking facility in EPA Region 4, a corrective action system was installed to remediate groundwater and soil contaminated with gasoline and fuel oil products released from several underground storage tanks (USTs). Groundwater modeling was used to develop the corrective action plan and later used with soil vapor modeling to evaluate the system's effectiveness. Groundwater modeling was used to determine the effects of a groundwater recovery system on the water table at the site. Information gathered during the assessment phase was used to develop a three-dimensional depiction of the subsurface at the site. Different groundwater recovery schemes were then modeled to determine the most effective method for recovering contaminated groundwater. Based on the modeling and calculations, a corrective action system combining soil vapor extraction (SVE) and groundwater recovery was designed. The system included seven recovery wells, to extract both soil vapor and groundwater, and a groundwater treatment system. Operation and maintenance of the system included monthly system sampling and inspections and quarterly groundwater sampling. After one year of operation, the effectiveness of the system was evaluated. A subsurface soil gas model was used to evaluate the effects of the SVE system on the site contamination as well as its effects on the water table and groundwater recovery operations. Groundwater modeling was used in evaluating the effectiveness of the groundwater recovery system. Plume migration and capture were modeled to ensure that the groundwater recovery system at the site was effectively capturing the contaminant plume. The two models were then combined to determine the effects of the two systems, acting together, on the remediation process

  15. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes. Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. [Figure caption: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model]

  16. Evaluation of NWP-based Satellite Precipitation Error Correction with Near-Real-Time Model Products and Flood-inducing Storms

    Science.gov (United States)

    Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.

    2017-12-01

    Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: i) National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and ii) Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research of NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas in the Continental United States (CONUS) such as the Olympic Peninsula, California coastal mountain ranges, Rocky Mountains and South Appalachians. The research will focus on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation will be based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.

  17. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

    Full Text Available Background: Weakness of curriculum development in nursing education results in lack of professional skills in graduates. This study was done on master's students in nursing to evaluate challenges of neonatal intensive care nursing curriculum based on context, input, process, and product (CIPP evaluation model. Materials and Methods: This study was conducted with qualitative approach, which was completed according to the CIPP evaluation model. The study was conducted from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, the graduates, faculty members, neonatologists, nurses working in neonatal intensive care unit (NICU, and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories: “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of NICU master's curriculum. The input was formed by five categories, including “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process, including “more emphasize on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product, including “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates and nursing staff expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in NICU master's curriculum by considering the nursing experts' comments and evaluating the consequences of such program by them.

  18. A temperature dependent simple spice based modeling platform for power IGBT modules

    NARCIS (Netherlands)

    Sfakianakis, G.; Nawaz, M.; Chimento, F.

    2014-01-01

    This paper deals with the development of a PSpice based temperature dependent modelling platform for the evaluation of silicon based IGBT power modules. The developed device modelling platform is intended to be used for the design and assessment of converter valves/cells for potential high power

  19. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    Science.gov (United States)

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL, the reverse diode saturation current, Io, and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions.
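
    A hedged sketch of the ideal single-diode relation implied by the three named parameters (IL, Io, n) is given below; the cell count, temperature and numeric parameter values are hypothetical illustrations, not the paper's fitted model.

      import numpy as np

      K_BOLTZMANN = 1.380649e-23   # J/K
      Q_ELECTRON  = 1.602177e-19   # C

      def pv_current(v, i_l, i_o, n, n_cells=36, temp_k=298.15):
          """Ideal single-diode module current I = IL - Io*(exp(V/(n*Ns*Vt)) - 1)."""
          v_t = K_BOLTZMANN * temp_k / Q_ELECTRON        # thermal voltage per cell (~25.7 mV)
          return i_l - i_o * (np.exp(v / (n * n_cells * v_t)) - 1.0)

      volts = np.linspace(0.0, 26.0, 6)                   # hypothetical module voltages
      amps = pv_current(volts, i_l=5.0, i_o=1e-9, n=1.3)  # hypothetical IL, Io, n values
      print(list(zip(volts.round(1), amps.round(3))))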

  20. Evaluating the effect of corridors and landscape heterogeneity on dispersal probability: a comparison of three spatially explicit modelling approaches

    DEFF Research Database (Denmark)

    Jepsen, J. U.; Baveco, J. M.; Topping, C. J.

    2004-01-01

    preferences of the modeller, rather than by a critical evaluation of model performance. We present a comparison of three common spatial simulation approaches (patch-based incidence-function model (IFM), individual-based movement model (IBMM), individual-based population model including detailed behaviour...

  1. Evaluation of mechanistic DNB models using HCLWR CHF data

    International Nuclear Information System (INIS)

    Iwamura, Takamichi; Watanabe, Hironori; Okubo, Tsutomu; Araya, Fumimasa; Murao, Yoshio.

    1992-03-01

    The onset of departure from nucleate boiling (DNB) in light water reactors (LWRs) has generally been predicted with empirical correlations. Since these correlations have less physical basis and contain adjustable empirical constants determined by best fitting of test data, the applicable geometries and flow conditions are limited to the original experimental ranges. In order to obtain a more universal prediction method, several mechanistic DNB models based on physical approaches have been proposed in recent years. However, the predictive capabilities of mechanistic DNB models have not been verified successfully, especially for advanced LWR design purposes. In this report, typical mechanistic DNB models are reviewed and compared with critical heat flux (CHF) data for the high conversion light water reactor (HCLWR). The experiments were performed using a triangular 7-rod array with a non-uniform axial heat flux distribution. Test pressure was 16 MPa, mass velocities ranged from 800 to 3100 kg/(s·m2) and exit qualities from -0.07 to 0.19. The evaluated models are: 1) Weisman-Pei, 2) Chang-Lee, 3) Lee-Mudawwar, 4) Lin-Lee-Pei, and 5) Katto. The first two models are based on a near-wall bubble crowding model and the other three on a sublayer dryout model. The comparison with experimental data indicated that the Weisman-Pei model agreed relatively well with the CHF data. Effects of empirical constants in each model on CHF calculation were clarified by sensitivity studies. It was also found that the magnitudes of physical quantities obtained in the course of calculation were significantly different for each model. Therefore, microscopic observation of the onset of DNB on the heated surface is essential to clarify the DNB mechanism and establish a general mechanistic DNB model based on physical phenomena. (author)

  2. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
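
    Two of the techniques named above, a multiplicative chain model with independent lognormal inputs and Latin hypercube sampling, can be illustrated with a hedged sketch; the distribution parameters are hypothetical and the stratification scheme is one common LHS construction, not necessarily the one used in the report.

      import numpy as np
      from scipy.stats import norm

      def latin_hypercube(n_samples, n_dims, rng):
          """Uniform(0,1) LHS design: one sample per stratum in every dimension."""
          u = (rng.uniform(size=(n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_dims):
              u[:, j] = u[rng.permutation(n_samples), j]   # decouple strata across dimensions
          return u

      rng = np.random.default_rng(42)
      u = latin_hypercube(1000, 3, rng)

      # Multiplicative chain with independent lognormal inputs (hypothetical parameters).
      mu = np.array([0.0, -1.0, 0.5])        # log-means
      sigma = np.array([0.3, 0.5, 0.2])      # log-standard deviations
      x = np.exp(mu + sigma * norm.ppf(u))   # transform uniforms to lognormal samples
      output = x.prod(axis=1)                # chain output = X1 * X2 * X3
      print(round(output.mean(), 3), round(float(np.percentile(output, 95)), 3))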

  3. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  4. On global and regional spectral evaluation of global geopotential models

    International Nuclear Information System (INIS)

    Ustun, A; Abbak, R A

    2010-01-01

    Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error recorded in the spherical harmonic coefficients and associated standard deviations. Results put forward in this way explain the overall contribution of gravity data of different kinds that represent various sections of the gravity spectrum. This method is more informative than accuracy assessment methods, which use external data such as GPS-levelling. Comparative spectral evaluation for more than one model can be performed in both a global and a local sense using many spectral tools. The number of GGMs has grown with the increasing amount of data collected by the dedicated satellite gravity missions, CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in the gravity field recovery. In this paper, some of the satellite-only and combined models are examined at different scales, globally and regionally, in order to observe the advances in the modelling of GGMs and their strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of the model coefficients is a part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against the models that comprise the conventional satellite tracking data. The disagreement between models is large in local/regional areas if the data sets are different, as seen from the example of the Turkish territory

  5. Neurally based measurement and evaluation of environmental noise

    CERN Document Server

    Soeta, Yoshiharu

    2015-01-01

    This book deals with methods of measurement and evaluation of environmental noise based on an auditory neural and brain-oriented model. The model consists of the autocorrelation function (ACF) and the interaural cross-correlation function (IACF) mechanisms for signals arriving at the two ear entrances. Even when the sound pressure level of a noise is only about 35 dBA, people may feel annoyed due to the aspects of sound quality. These aspects can be formulated by the factors extracted from the ACF and IACF. Several examples of measuring environmental noise—from outdoor noise such as that of aircraft, traffic, and trains, and indoor noise such as caused by floor impact, toilets, and air-conditioning—are demonstrated. According to the noise measurement and evaluation, applications for sound design are discussed. This book provides an excellent resource for students, researchers, and practitioners in a wide range of fields, such as the automotive, railway, and electronics industries, and soundscape, architec...

  6. Identity of organizational conflict framework: Evaluating model factors based on demographic characteristics in Iran

    Directory of Open Access Journals (Sweden)

    Kaveh Hasani

    2014-10-01

    Full Text Available Purpose: The purpose of this study was to identify an organizational conflict framework by evaluating model factors based on demographic characteristics in Iran. Design/methodology/approach: The research method is descriptive-applied. The statistical population includes all of the employees of Iran's Azad Universities, 600 individuals at the time of the study, and the statistical sample included 234 individuals who were selected using the Morgan table. In this study, descriptive and inferential statistics were used, and reliability was confirmed through Cronbach's alpha (0.87). Then, to detect the dimensions of the causes of organizational conflict, factor analysis in line with the principal components was used. Through exploratory analysis, ten principal factors were identified. Thereafter, confirmatory factor analysis reconfirmed these factors. Findings and originality/value: The results of the study showed that there is no significant difference between the causes of organizational conflict based on gender. Also, there are significant differences among the causes of organizational conflict based on the variables of age, education and work experience. Research limitations/implications: We adopt a cross-sectional research design and as a result inferences regarding causality cannot be drawn. Future studies following a longitudinal design could provide a more dynamic perspective and contribute further to this stream of research. Originality/value: Many studies on conflict management styles, the effects of organizational conflict, etc. have been conducted by different researchers, but few studies have addressed the sources and causes of organizational conflict, and this is one of the reasons it is important for researchers to address this issue.

  7. Development of 4D mathematical observer models for the task-based evaluation of gated myocardial perfusion SPECT

    Science.gov (United States)

    Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.

    2015-04-01

    This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the
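
    A hedged sketch of the channelized Hotelling observer machinery described above (a template from a pooled channel covariance, test statistics, and a nonparametric AUC) is given below; the channel outputs are synthetic Gaussian placeholders rather than the paper's 2D spatial or 4D space-time channels.

      import numpy as np

      def cho_test_statistics(feat_present, feat_absent):
          """Train a Hotelling template on channelized features (one column per
          channel) and return the test statistic for every sample of both classes."""
          mean_diff = feat_present.mean(axis=0) - feat_absent.mean(axis=0)
          pooled_cov = 0.5 * (np.cov(feat_present, rowvar=False) + np.cov(feat_absent, rowvar=False))
          template = np.linalg.solve(pooled_cov, mean_diff)       # w = S^-1 (m1 - m0)
          return feat_present @ template, feat_absent @ template

      def auc_from_scores(scores_present, scores_absent):
          """Nonparametric area under the ROC curve from observer test statistics."""
          wins = (scores_present[:, None] > scores_absent[None, :]).mean()
          ties = (scores_present[:, None] == scores_absent[None, :]).mean()
          return wins + 0.5 * ties

      rng = np.random.default_rng(1)                               # synthetic channel outputs
      feat_absent  = rng.normal(0.0, 1.0, size=(512, 12))          # 512 realizations, 12 channels
      feat_present = rng.normal(0.3, 1.0, size=(512, 12))          # small defect-related shift
      s1, s0 = cho_test_statistics(feat_present, feat_absent)
      print(round(auc_from_scores(s1, s0), 3))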

  8. Development of 4D mathematical observer models for the task-based evaluation of gated myocardial perfusion SPECT

    International Nuclear Information System (INIS)

    Lee, Taek-Soo; Frey, Eric C; Tsui, Benjamin M W

    2015-01-01

    This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99m Tc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the

  9. Evaluate Yourself. Evaluation: Research-Based Decision Making Series, Number 9304.

    Science.gov (United States)

    Fetterman, David M.

    This document considers both self-examination and external evaluation of gifted and talented education programs. Principles of the self-examination process are offered, noting similarities to external evaluation models. Principles of self-evaluation efforts include the importance of maintaining a nonjudgmental orientation, soliciting views from…

  10. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

    Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) don’t fit very well with non-profit activities, policies and programmes, because these are much more complex than the environment from which quality models derive (for example, an assembly line). The quality of an academic study programme is very complex and understood differently by various stakeholders, so we present dimensional evaluation in the article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by evaluation criteria. They are both joined into the evaluation model, revealing which evaluation criteria can and should be evaluated by which stakeholder. The main research questions are posed and the research method for each dimension is listed.

  11. A model based method for evaluation of crop operation scenarios in greenhouses

    NARCIS (Netherlands)

    Ooster, van 't A.

    2015-01-01

    Abstract

    This research initiated a model-based method to analyse labour in crop production systems and to quantify effects of system changes in order to contribute to effective greenhouse crop cultivation systems with efficient use of human labour and technology. This

  12. SDRAM-based packet buffer model for high speed switches

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    based on the specifications of a real-life DDR3-SDRAM chip. Based on this model the performance of different schemes for optimizing the performance of such a packet buffer can be evaluated. The purpose of this study is to find efficient schemes for memory mapping of the packet queues and I/O traffic...

  13. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    Full Text Available As Hadoop has gained popularity in the big data era, it is widely used in various fields. Our self-designed and self-developed large-scale network traffic analysis cluster works well based on Hadoop, with off-line applications running on it to analyze massive network traffic data. For the purpose of scientifically and reasonably evaluating the performance of the analysis cluster, we propose a performance evaluation system. Firstly, we set the execution times of three benchmark applications as the benchmark of performance, and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling analysis approach, which is composed of principal component analysis and multiple linear regression. After training the models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster by the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the evaluation results of performance are accurate and reliable.
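
    The modelling pipeline described above (principal component analysis of the resource metrics followed by multiple linear regression onto execution time) can be sketched in a hedged way as follows; the metric matrix, component count and coefficients are synthetic placeholders, not the cluster's data.

      import numpy as np

      rng = np.random.default_rng(7)
      metrics = rng.normal(size=(200, 40))                 # 200 runs x 40 resource metrics (synthetic)
      exec_time = 300 + metrics[:, :3] @ np.array([20.0, -8.0, 5.0]) + rng.normal(0, 5, 200)

      # PCA: centre the metrics and keep the leading principal components.
      centred = metrics - metrics.mean(axis=0)
      _, s, vt = np.linalg.svd(centred, full_matrices=False)
      n_components = 5
      scores = centred @ vt[:n_components].T               # projected resource data

      # Multiple linear regression of execution time on the PCA scores.
      design = np.column_stack([np.ones(len(scores)), scores])
      coef, *_ = np.linalg.lstsq(design, exec_time, rcond=None)
      predicted = design @ coef
      print("RMSE of predicted execution times:", round(np.sqrt(np.mean((predicted - exec_time) ** 2)), 2))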

  14. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  15. An Agent-Based Model of Farmer Decision Making in Jordan

    Science.gov (United States)

    Selby, Philip; Medellin-Azuara, Josue; Harou, Julien; Klassert, Christian; Yoon, Jim

    2016-04-01

    We describe an agent-based hydro-economic model of groundwater-irrigated agriculture in the Jordan Highlands. The model employs a Multi-Agent-Simulation (MAS) framework and is designed to evaluate direct and indirect outcomes of climate change scenarios and policy interventions on farmer decision making, including annual land use, groundwater use for irrigation, and water sales to a water tanker market. Land use and water use decisions are simulated for groups of farms grouped by location and their behavioural and economic similarities. Decreasing groundwater levels, and the associated increase in pumping costs, are important drivers for change within Jordan's agricultural sector. We describe how this is considered through the coupling of agricultural and groundwater models. The agricultural production model employs Positive Mathematical Programming (PMP), a method for calibrating agricultural production functions to observed planted areas. PMP has successfully been used with disaggregate models for policy analysis. We adapt the PMP approach to allow explicit evaluation of the impact of pumping costs, groundwater purchase fees and a water tanker market. The work demonstrates the applicability of agent-based agricultural decision making assessment in the Jordan Highlands and its integration with agricultural model calibration methods. The proposed approach is designed and implemented with software such that it could be used to evaluate a variety of physical and human influences on decision making in agricultural water management.

  16. Evaluation of base, optimum and ceiling temperatures for kochia (Kochia scoparia L. Schard) with application of the Five-Parameter Beta Model

    Directory of Open Access Journals (Sweden)

    S. Sabouri Rad

    2016-05-01

    Full Text Available Kochia (Kochia scoparia L. Schard) is an annual, halophytic and drought-resistant plant that can be irrigated with saline water and is a valuable forage source in drought-affected and saline ecosystems. In order to evaluate the germination characteristics of kochia, an experiment was conducted at the Physiology Laboratory of Ferdowsi University of Mashhad, Iran, during 2009. The experiment was conducted in a completely randomized design with four replications. Germination was evaluated at 5, 10, 15, 20, 25, 30, 35 and 40°C in a dark germinator at 50-60% relative humidity. The results showed that the highest germination percentage was obtained at 20-30°C and the lowest at 40°C. The longest and shortest times to 20 and 50 percent germination were recorded at 5-10°C and 20-30°C, respectively. The longest and shortest times to 80 percent germination were recorded at 15 and 30°C, respectively. Based on the five-parameter beta model, the base, optimum and ceiling temperatures for kochia were estimated at 3.4, 25 and 43.3°C, respectively. However, the seed of this plant is able to germinate over a wide temperature range.
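
    The abstract does not reproduce the beta equation itself; as a hedged illustration, one commonly used beta-type temperature response with the reported cardinal temperatures (base 3.4°C, optimum 25°C, ceiling 43.3°C) is sketched below. The functional form and the rate scaling are assumptions, not necessarily the exact five-parameter model fitted in the study.

      import numpy as np

      def beta_temperature_response(t, t_base=3.4, t_opt=25.0, t_ceil=43.3, r_max=1.0):
          """Relative germination rate: 0 at the base and ceiling temperatures, r_max at the optimum.
          This is an assumed beta-type form, not the study's fitted model."""
          t = np.clip(np.asarray(t, dtype=float), t_base, t_ceil)   # rate treated as 0 outside the range
          a, b = t_opt - t_base, t_ceil - t_opt
          return r_max * ((t_ceil - t) / b) * ((t - t_base) / a) ** (a / b)

      print(beta_temperature_response([5, 15, 25, 35, 40]).round(3))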

  17. Site descriptive modelling - strategy for integrated evaluation

    International Nuclear Information System (INIS)

    Andersson, Johan

    2003-02-01

    The current document establishes the strategy to be used for achieving sufficient integration between disciplines in producing Site Descriptive Models during the Site Investigation stage. The Site Descriptive Model should be a multidisciplinary interpretation of geology, rock mechanics, thermal properties, hydrogeology, hydrogeochemistry, transport properties and ecosystems, using site investigation data from deep boreholes and from the surface as input. The modelling comprises the following iterative steps: evaluation of primary data, descriptive and quantitative modelling (in 3D), and overall confidence evaluation. Data are first evaluated within each discipline and then the evaluations are checked between the disciplines. Three-dimensional modelling (i.e. estimating the distribution of parameter values in space and its uncertainty) is made in a sequence, where the geometrical framework is taken from the geological model and in turn used by the rock mechanics, thermal and hydrogeological modelling, etc. The three-dimensional description should present the parameters with their spatial variability over a relevant and specified scale, with the uncertainty included in this description. Different alternative descriptions may be required. After the individual discipline modelling and uncertainty assessment, a phase of overall confidence evaluation follows. Relevant parts of the different modelling teams assess the suggested uncertainties and evaluate the feedback. These discussions should assess overall confidence by checking that all relevant data are used, checking that information in past model versions is considered, checking that the different kinds of uncertainty are addressed, checking whether suggested alternatives make sense and whether there is potential for additional alternatives, and by discussing, if appropriate, how additional measurements (i.e. more data) would affect confidence. The findings as well as the modelling results are to be documented in a Site Description

  18. Assessing the sustainability of wheat-based cropping systems using APSIM: Model parameterisation and evaluation

    NARCIS (Netherlands)

    Moeller, C.; Pala, M.; Manschadi, A.M.; Meinke, H.B.; Sauerborn, J.

    2007-01-01

    Assessing the sustainability of crop and soil management practices in wheat-based rotations requires a well-tested model with the demonstrated ability to sensibly predict crop productivity and changes in the soil resource. The Agricultural Production Systems Simulator (APSIM) suite of models was

  19. CTBT Integrated Verification System Evaluation Model

    Energy Technology Data Exchange (ETDEWEB)

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
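    As a hedged illustration of how per-technology results might be integrated (the abstract does not give IVSEM's actual algorithm), a sketch that combines detection probabilities under an independence assumption:

```python
# Minimal sketch; assumes independent sensor technologies, which is not necessarily IVSEM's method.
def integrated_detection_probability(p_by_technology):
    """Probability that at least one technology detects the event."""
    p_miss = 1.0
    for p in p_by_technology.values():
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Hypothetical per-technology detection probabilities for one scenario
p = {"seismic": 0.80, "infrasound": 0.35, "radionuclide": 0.50, "hydroacoustic": 0.10}
print(round(integrated_detection_probability(p), 3))   # ~0.942 for these made-up values
```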

  20. Evaluation of onset of nucleate boiling models

    Energy Technology Data Exchange (ETDEWEB)

    Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net

    2009-07-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube-side and shell-side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially high under vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleate boiling for engineering design purposes. (author)
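    For orientation, a sketch of one classical ONB relation of the Davis-Anderson type; this is not drawn from the HTRI database or the models evaluated above, and the water properties used are approximate:

```python
from math import sqrt

def onb_superheat(q_flux, sigma, T_sat, rho_v, h_fg, k_l):
    """Davis-Anderson-type estimate of the wall superheat (K) needed for ONB
    at a given heat flux q_flux (W/m^2); all properties in SI units."""
    return sqrt(8.0 * sigma * T_sat * q_flux / (rho_v * h_fg * k_l))

# Saturated water at atmospheric pressure (approximate property values)
dT = onb_superheat(q_flux=1.0e5, sigma=0.059, T_sat=373.15,
                   rho_v=0.60, h_fg=2.257e6, k_l=0.68)
print(f"ONB wall superheat ~ {dT:.1f} K")   # a few kelvin for these conditions
```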

  1. Evaluation of onset of nucleate boiling models

    International Nuclear Information System (INIS)

    Huang, LiDong

    2009-01-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube-side and shell-side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially high under vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleate boiling for engineering design purposes. (author)

  2. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
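    The assessment hinges on the correlation between predicted and observed correctness measures; a minimal sketch with hypothetical per-model scores:

```python
import numpy as np

# Hypothetical predicted quality scores vs. observed (GDT_TS-like) scores for five server models
predicted = np.array([0.82, 0.55, 0.71, 0.40, 0.63])
observed  = np.array([0.78, 0.50, 0.69, 0.35, 0.70])

r = np.corrcoef(predicted, observed)[0, 1]   # Pearson correlation as a headline measure
print(f"Pearson r = {r:.2f}")
```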

  3. Land Surface Verification Toolkit (LVT) - A Generalized Framework for Land Surface Model Evaluation

    Science.gov (United States)

    Kumar, Sujay V.; Peters-Lidard, Christa D.; Santanello, Joseph; Harrison, Ken; Liu, Yuqiong; Shaw, Michael

    2011-01-01

    Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it also supports hydrological data products from other, non-LIS environments. In addition, the analysis of diagnostics from various computational subsystems of LIS, including data assimilation, optimization and uncertainty estimation, is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model-data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.
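    A minimal sketch of the traditional accuracy-based measures mentioned above (bias, RMSE, correlation) for a model-versus-observation series; the soil-moisture numbers are hypothetical and LVT's full metric suite is far larger:

```python
import numpy as np

def accuracy_metrics(model, obs):
    """Traditional accuracy measures for one variable at one location."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = float(np.mean(model - obs))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    corr = float(np.corrcoef(model, obs)[0, 1])
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Hypothetical daily soil-moisture series (m3/m3): model output vs. in-situ observation
obs   = [0.21, 0.24, 0.28, 0.26, 0.22, 0.20, 0.19]
model = [0.23, 0.25, 0.27, 0.28, 0.24, 0.21, 0.22]
print(accuracy_metrics(model, obs))
```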

  4. Land surface Verification Toolkit (LVT) - a generalized framework for land surface model evaluation

    Science.gov (United States)

    Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J.; Harrison, K.; Liu, Y.; Shaw, M.

    2012-06-01

    Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it supports hydrological data products from non-LIS environments as well. In addition, the analysis of diagnostics from various computational subsystems of LIS, including data assimilation, optimization and uncertainty estimation, is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model-data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.

  5. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    … Their developments, however, are largely due to experiment-based trial-and-error approaches, and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply … a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved … for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based …

  6. Training effectiveness evaluation model

    International Nuclear Information System (INIS)

    Penrose, J.B.

    1993-01-01

    NAESCO's Training Effectiveness Evaluation Model (TEEM) integrates existing evaluation procedures with new procedures. The new procedures are designed to measure training impact on organizational productivity. TEEM seeks to enhance organizational productivity through proactive training focused on operational results. These results can be identified and measured by establishing and tracking performance indicators. Relating training to organizational productivity is not easy. TEEM is a team process. It offers strategies for assessing the organizational costs and benefits of training more effectively. TEEM is one organization's attempt to refine, manage and extend its training evaluation program

  7. LPJmL4 - a dynamic global vegetation model with managed land - Part 2: Model evaluation

    Science.gov (United States)

    Schaphoff, Sibyll; Forkel, Matthias; Müller, Christoph; Knauer, Jürgen; von Bloh, Werner; Gerten, Dieter; Jägermeyr, Jonas; Lucht, Wolfgang; Rammig, Anja; Thonicke, Kirsten; Waha, Katharina

    2018-04-01

    The dynamic global vegetation model LPJmL4 is a process-based model that simulates climate and land use change impacts on the terrestrial biosphere, agricultural production, and the water and carbon cycle. Different versions of the model have been developed and applied to evaluate the role of natural and managed ecosystems in the Earth system and the potential impacts of global environmental change. A comprehensive model description of the new model version, LPJmL4, is provided in a companion paper (Schaphoff et al., 2018c). Here, we provide a full picture of the model performance, going beyond standard benchmark procedures, and give hints on the strengths and shortcomings of the model to identify the need for further model improvement. Specifically, we evaluate LPJmL4 against various datasets from in situ measurement sites, satellite observations, and agricultural yield statistics. We apply a range of metrics to evaluate the quality of the model to simulate stocks and flows of carbon and water in natural and managed ecosystems at different temporal and spatial scales. We show that an advanced phenology scheme improves the simulation of seasonal fluctuations in the atmospheric CO2 concentration, while the permafrost scheme improves estimates of carbon stocks. The full LPJmL4 code including the new developments will be supplied open source through https://gitlab.pik-potsdam.de/lpjml/LPJmL. We hope that this will lead to new model developments and applications that improve the model performance and possibly build up a new understanding of the terrestrial biosphere.

  8. Evaluation of 2 process-based models to estimate soil N₂O emissions in eastern Canada

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W.N.; Grant, B.B.; Desjardins, R.L. [Agriculture and Agri-Food Canada, Ottawa, ON (Canada). Eastern Cereal and Oilseed Research Centre; Rochette, P. [Agriculture and Agri-Food Canada, Sainte-Foy, PQ (Canada); Drury, C.F. [Agriculture and Agri-Food Canada, Harrow, ON (Canada); Li, C. [New Hampshire Univ., Durham, NH (United States). Inst. for the Study of Earth, Oceans, and Space

    2008-04-15

    This study assessed the ability of 2 process-based nitrogen (N) models to accurately estimate nitrous oxide (N₂O) emissions and auxiliary soil and hydraulic data from 2 field sites in eastern Canada. The DAYCENT model was used to simulate fluxes of carbon (C) and N between soil, vegetation, and the atmosphere on a daily basis. The model contained a submodel that considered the scheduling of management events; a parameter for considering drainage related to soil texture; a submodel that considered the effect of solar radiation on plant growth; a simulation module of seed germination as a function of soil temperature, growth and harvest; and a submodel of water table depths. The DeNitrification DeComposition (DNDC) model consisted of 4 submodels: (1) soil and climate; (2) crop vegetation; (3) decomposition; and (4) a denitrification model that operated on an hourly time step and was activated when soil moisture increased or when soil and oxygen availability decreased. Results of the comparative evaluation showed that the DNDC model accurately predicted total N₂O emissions from both test sites. However, the timing of emission peaks was inaccurate, and emission predictions for individual treatments were also incorrect. The DAYCENT model underpredicted emissions from most treatment regimes due to its prediction of lower mineralization rates. Simplistic soil water routines and a 1-D approach were used to overcome data limitations in both models, and results of the study suggested that these mechanisms were not able to characterize soil hydraulics in some soils. It was concluded that the mechanisms used to characterize the distribution and mineralization of N must be revised in both models after the hydrology routines are optimized. 20 refs., 5 tabs., 3 figs.

  9. Evaluation of Savannah River Plant emergency response models using standard and nonstandard meteorological data

    International Nuclear Information System (INIS)

    Hoel, D.D.

    1984-01-01

    Two computer codes have been developed for operational use in performing real-time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. The accuracy of ground-level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model is being evaluated with data from a series of short-range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations computed using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data. 5 references, 4 figures, 2 tables
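    For context, a textbook Gaussian plume ground-level concentration for an elevated continuous release, the kind of relation underlying a puff-plume code; the power-law dispersion coefficients below are crude placeholders, not the SRP codes' actual parameterization:

```python
import numpy as np

def ground_level_concentration(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration (g/m^3) from a continuous elevated point source.
    Q: emission rate (g/s), u: wind speed (m/s), x/y: downwind/crosswind distance (m),
    H: effective release height (m). sigma_y, sigma_z are crude power-law placeholders
    standing in for a stability-class parameterization."""
    sigma_y = a * x ** 0.9
    sigma_z = b * x ** 0.85
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

# Hypothetical SF6 tracer release: 1 g/s from a 60 m stack in a 3 m/s wind
for x in (500.0, 1000.0, 3000.0):
    print(f"{x:6.0f} m  {ground_level_concentration(1.0, 3.0, x, 0.0, 60.0):.2e} g/m^3")
```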

  10. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    Science.gov (United States)

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three parameters: the photocurrent, IL, the reverse diode saturation current, Io, and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, more accurately than the other models. The results of this study can be considered valuable in terms of the installation of a grid-connected PV system in fluctuating climatic conditions. PMID:27035575
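    A minimal sketch of the ideal single-diode relation built from the three parameters named above (IL, Io, n); the parameter values are hypothetical, and the authors' full model may include additional elements such as series resistance:

```python
import numpy as np

def module_current(V, IL, Io, n, T=298.15, Ns=36):
    """Ideal single-diode I-V relation for a module of Ns series cells:
    I = IL - Io*(exp(V/(Ns*n*Vt)) - 1), with thermal voltage Vt = kT/q."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q
    return IL - Io * np.expm1(V / (Ns * n * Vt))

# Hypothetical parameter values for a small residential module
IL, Io, n = 5.0, 1e-9, 1.3
k, q = 1.380649e-23, 1.602176634e-19
Voc = 36 * n * (k * 298.15 / q) * np.log(IL / Io + 1.0)   # open-circuit voltage (~27 V here)
V = np.linspace(0.0, Voc, 6)
print(np.round(module_current(V, IL, Io, n), 3))           # current stays near IL, then collapses near Voc
```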

  11. Evaluation of Medical Education virtual Program: P3 model.

    Science.gov (United States)

    Rezaee, Rita; Shokrpour, Nasrin; Boroumand, Maryam

    2016-10-01

    In e-learning, people get involved in a process, create the content (product) and make it available for virtual learners. The present study was carried out in order to evaluate the first virtual master's program in medical education at Shiraz University of Medical Sciences according to the P3 Model. This is an evaluation research study with a post-test single-group design, used to determine how effective this program was. All 60 students who had participated in this virtual program for more than one year and 21 experts, including teachers and directors, took part in this evaluation project. Based on the P3 e-learning model, an evaluation tool with a 5-point Likert rating scale was designed and applied to collect the descriptive data. Students reported storyboard and course design as the most desirable element of the learning environment (2.30±0.76), but they identified technical support as the least desirable part (1.17±1.23). The presence of such a framework, and its use within appropriate evaluation tools for e-learning in the universities and higher education institutes that offer e-learning curricula in the country, may contribute to the efficient implementation of present and future e-learning curricula and help guarantee that they are implemented in an appropriate way.

  12. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
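    As a generic illustration of the belief-state update mentioned above (not the executive's actual algorithm), a sketch for a hidden-state Markov model: predict with the transition model, then reweight by the observation likelihood:

```python
import numpy as np

def belief_update(belief, transition, obs_likelihood):
    """One step of hidden-state belief update:
    b'(s') is proportional to P(o | s') * sum_s P(s' | s, a) * b(s)."""
    predicted = transition.T @ belief          # prior after the commanded transition
    posterior = obs_likelihood * predicted     # weight by how well each state explains the observation
    return posterior / posterior.sum()

# Hypothetical 3-state component model: nominal, degraded, failed
b0 = np.array([0.98, 0.01, 0.01])
T  = np.array([[0.95, 0.04, 0.01],             # rows: from-state, columns: to-state
               [0.00, 0.90, 0.10],
               [0.00, 0.00, 1.00]])
obs = np.array([0.20, 0.70, 0.90])             # likelihood of the observed symptom in each state
print(np.round(belief_update(b0, T, obs), 3))
```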

  13. Evaluating airline energy efficiency: An integrated approach with Network Epsilon-based Measure and Network Slacks-based Measure

    International Nuclear Information System (INIS)

    Xu, Xin; Cui, Qiang

    2017-01-01

    This paper focuses on evaluating airline energy efficiency, which is first divided into four stages: Operations Stage, Fleet Maintenance Stage, Services Stage and Sales Stage. The new four-stage network structure of airline energy efficiency is a modification of existing models. A new approach, integrating the Network Epsilon-based Measure and the Network Slacks-based Measure, is applied to assess the overall energy efficiency and divisional efficiency of 19 international airlines from 2008 to 2014. The influencing factors of airline energy efficiency are analyzed through regression analysis. The results indicate the following: 1. The integrated model can identify the benchmarking airlines in the overall system and in the individual stages. 2. Most airlines' energy efficiencies remain steady during the period, apart from some sharp fluctuations; the efficiency decreases are mainly concentrated in 2008–2011, reflecting the financial crisis in the USA. 3. The average age of the fleet is positively correlated with the overall energy efficiency, and each divisional efficiency has different significant influencing factors. - Highlights: • An integrated approach with Network Epsilon-based Measure and Network Slacks-based Measure is developed. • 19 airlines' energy efficiencies are evaluated. • Garuda Indonesia has the highest overall energy efficiency.
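    For illustration, a plain input-oriented CCR DEA score computed by linear programming; this is a simpler formulation than the paper's network epsilon-based/slacks-based measures, and the airline input-output data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Decision variables: [theta, lambda_1..n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1); c[0] = 1.0                      # minimize theta
    A_in = np.hstack([-X[[o]].T, X.T])                   # sum_j lambda_j x_ij - theta*x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])          # -sum_j lambda_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: inputs = [fuel, labour], output = [revenue tonne-km]
X = np.array([[100.0, 50.0], [120.0, 40.0], [90.0, 70.0], [150.0, 60.0]])
Y = np.array([[300.0], [280.0], [260.0], [330.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```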

  14. Hepatotoxicity evaluation of traditional Chinese medicines using a computational molecular model.

    Science.gov (United States)

    Zhao, Pan; Liu, Bin; Wang, Chunya

    2017-11-01

    Liver injury caused by traditional Chinese medicines (TCMs) is reported from many countries around the world, and TCM hepatotoxicity has attracted worldwide concern. This study aims to develop a more applicable and optimal tool to evaluate TCM hepatotoxicity. A quantitative structure-activity relationship (QSAR) analysis was performed based on published data and the U.S. Food and Drug Administration's Liver Toxicity Knowledge Base (LTKB). Eleven herbal ingredients with proven liver toxicity in the literature were added to the dataset in addition to chemicals from the LTKB. The final QSAR model yielded a sensitivity of 83.8%, a specificity of 70.1%, and an accuracy of 80.2%. Among the 20 externally tested ingredients from TCMs, all 14 hepatotoxic ingredients were accurately identified by the QSAR model derived from the dataset containing natural hepatotoxins. Adding natural hepatotoxins to the dataset makes the QSAR model more applicable for TCM hepatotoxicity assessment and points in a promising direction for methodology development in TCM safety evaluation. The generated QSAR model has practical value for prioritizing the hepatotoxicity risk of TCM compounds. Furthermore, an open-access international specialized database on TCM hepatotoxicity should be established promptly.
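    A short sketch of how the reported metrics are computed from a binary confusion matrix; the counts are hypothetical:

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), accuracy = correct/total."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a hepatotoxicity classifier
sens, spec, acc = classification_metrics(tp=80, fn=20, tn=70, fp=30)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```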

  15. The Sustainable Island Development Evaluation Model and Its Application Based on the Nonstructural Decision Fuzzy Set

    Directory of Open Access Journals (Sweden)

    Quanming Wang

    2013-01-01

    Full Text Available Due to the complexity and diversity of the issue of sustainable island development, no widely accepted and applicable evaluation system model for the issue currently exists. In this paper, we discuss and establish a sustainable development indicator system and model approach from the perspectives of island resources, the island environment, island development status, island social development, and island intelligence development. Drawing on sustainable development theory and the indicator system methods developed for land regions, and combining them with the character of sustainable island development, we analyze and evaluate the extent and orientation of sustainable island development and identify the key and limiting factors of sustainable island development capability. This research adopts the entropy method and the nonstructural decision fuzzy set theory model to determine the weights of the evaluation indicators. Changhai County was selected as the subject of the research, which consisted of a quantitative study of its sustainable development status from 2001 to 2008 to identify the key factors influencing its sustainability, existing problems, and limiting factors, and to provide basic technical support for ocean development planning and economic development planning.
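    A compact sketch of the entropy weighting step referred to above (the nonstructural decision fuzzy set step is not reproduced); the indicator matrix is hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators with more dispersion across samples get larger weights.
    X: (n_samples, n_indicators), all values positive and 'larger is better'."""
    P = X / X.sum(axis=0)                          # share of each sample in each indicator
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy per indicator, in [0, 1]
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

# Hypothetical indicator matrix: 4 years x 3 indicators (resources, environment, economy)
X = np.array([[0.62, 0.40, 0.55],
              [0.64, 0.42, 0.60],
              [0.66, 0.60, 0.66],
              [0.70, 0.75, 0.72]])
print(np.round(entropy_weights(X), 3))
```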

  16. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that, without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach of source-term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  17. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    International Nuclear Information System (INIS)

    Y. Chen

    2001-01-01

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that, without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach of source-term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  18. Emissions and Fuel Consumption Modeling for Evaluating Environmental Effectiveness of ITS Strategies

    Directory of Open Access Journals (Sweden)

    Yuan-yuan Song

    2013-01-01

    Full Text Available Road transportation is a major fuel consumer and greenhouse gas emitter. Recently, intelligent transportation system (ITS) technologies, which can improve traffic flow and safety, have been developed to reduce fuel consumption and vehicle emissions. Emission and fuel consumption estimation models play a key role in the evaluation of ITS technologies. Based on an analysis of the influence of driving parameters on vehicle emissions, this paper establishes a set of mesoscopic vehicle emission and fuel consumption models using real-world vehicle operation and emission data. The results demonstrate that these models are appropriate for evaluating the environmental effectiveness of ITS strategies with sufficient estimation accuracy.
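    As a hedged illustration of a driving-parameter-based fuel-rate estimate (the paper's calibrated mesoscopic models are not reproduced; the functional form and coefficients below are assumptions), a simple power-demand sketch:

```python
import numpy as np

def fuel_rate(v, a, mass=1500.0, idle=0.3, eff=0.09):
    """Instantaneous fuel rate (g/s) from tractive power demand.
    v: speed (m/s), a: acceleration (m/s^2). Rolling + aerodynamic + inertial power,
    converted to fuel with a crude grams-per-kJ factor; idle floor when coasting."""
    Cr, rho, Cd, A, g = 0.012, 1.2, 0.32, 2.2, 9.81
    P = (mass * a + mass * g * Cr + 0.5 * rho * Cd * A * v**2) * v / 1000.0   # tractive power, kW
    return idle + eff * np.maximum(P, 0.0)

# Hypothetical second-by-second operating points: cruising vs. hard acceleration
print(round(float(fuel_rate(15.0, 0.0)), 2), "g/s at steady 54 km/h")
print(round(float(fuel_rate(15.0, 1.5)), 2), "g/s while accelerating")
```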

  19. Clinical Utility and Safety of a Model-Based Patient-Tailored Dose of Vancomycin in Neonates.

    Science.gov (United States)

    Leroux, Stéphanie; Jacqz-Aigrain, Evelyne; Biran, Valérie; Lopez, Emmanuel; Madeleneau, Doriane; Wallon, Camille; Zana-Taïeb, Elodie; Virlouvet, Anne-Laure; Rioualen, Stéphane; Zhao, Wei

    2016-04-01

    Pharmacokinetic modeling has often been applied to evaluate vancomycin pharmacokinetics in neonates. However, clinical application of the model-based personalized vancomycin therapy is still limited. The objective of the present study was to evaluate the clinical utility and safety of a model-based patient-tailored dose of vancomycin in neonates. A model-based vancomycin dosing calculator, developed from a population pharmacokinetic study, has been integrated into the routine clinical care in 3 neonatal intensive care units (Robert Debré, Cochin Port Royal, and Clocheville hospitals) between 2012 and 2014. The target attainment rate, defined as the percentage of patients with a first therapeutic drug monitoring serum vancomycin concentration achieving the target window of 15 to 25 mg/liter, was selected as an endpoint for evaluating the clinical utility. The safety evaluation was focused on nephrotoxicity. The clinical application of the model-based patient-tailored dose of vancomycin has been demonstrated in 190 neonates. The mean (standard deviation) gestational and postnatal ages of the study population were 31.1 (4.9) weeks and 16.7 (21.7) days, respectively. The target attainment rate increased from 41% to 72% without any case of vancomycin-related nephrotoxicity. This proof-of-concept study provides evidence for integrating model-based antimicrobial therapy in neonatal routine care. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
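    The endpoint is straightforward to compute; a sketch with hypothetical first-TDM concentrations:

```python
def target_attainment_rate(first_tdm_concentrations, low=15.0, high=25.0):
    """Percentage of patients whose first TDM vancomycin concentration falls within [low, high] mg/L."""
    hits = sum(low <= c <= high for c in first_tdm_concentrations)
    return 100.0 * hits / len(first_tdm_concentrations)

# Hypothetical first-TDM concentrations (mg/L) for ten neonates
concentrations = [12.4, 18.9, 22.1, 26.3, 16.0, 19.7, 14.8, 24.9, 21.3, 17.5]
print(f"{target_attainment_rate(concentrations):.0f}% within 15-25 mg/L")
```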

  20. Model-based evaluation of the use of polycyclic aromatic hydrocarbons molecular diagnostic ratios as a source identification tool

    International Nuclear Information System (INIS)

    Katsoyiannis, Athanasios; Breivik, Knut

    2014-01-01

    Polycyclic Aromatic Hydrocarbon (PAH) molecular diagnostic ratios (MDRs) are unitless concentration ratios of PAH pairs with the same molecular weight (MW); MDRs have long been used as a tool for PAH source identification. In the present paper, the efficiency of the MDR methodology is evaluated through the use of a multimedia fate model, the calculation of characteristic travel distances (CTDs) and the estimation of air concentrations for individual PAHs as a function of distance from an initial point source. The results show that PAHs with the same MW are sometimes characterized by substantially different CTDs, and therefore their air concentrations, and hence MDRs, are predicted to change as the distance from the original source increases. Among the assessed PAH pairs, the largest CTD difference is seen for Fluoranthene (107 km) vs. Pyrene (26 km). This study provides a strong indication that MDRs are of limited use as a source identification tool. -- Highlights: • Model-based evaluation of the efficiency of PAH molecular diagnostic ratios. • Individual PAHs are characterized by different characteristic travel distances. • MDRs are shown to be a limited tool for source identification. • Use of MDRs for other environmental media is likely unfeasible. -- PAH molecular diagnostic ratios, which change greatly as a function of distance from the emitting source, are improper for source identification purposes
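    A sketch reproducing the core argument with first-order decay and the CTDs quoted above (107 km for fluoranthene, 26 km for pyrene); the source ratio is a hypothetical value:

```python
import numpy as np

def mdr_with_distance(x_km, ratio_at_source=1.0, ctd_flt=107.0, ctd_pyr=26.0):
    """Fluoranthene/(Fluoranthene+Pyrene) diagnostic ratio vs. distance, assuming
    first-order loss so that C(x) = C0 * exp(-x / CTD) for each compound."""
    flt = ratio_at_source * np.exp(-x_km / ctd_flt)
    pyr = 1.0 * np.exp(-x_km / ctd_pyr)
    return flt / (flt + pyr)

for x in (0, 10, 50, 100):
    print(x, "km ->", round(float(mdr_with_distance(x)), 2))
# The ratio drifts upward with distance, so a value measured far from the source
# no longer reflects the emission signature.
```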