WorldWideScience

Sample records for model requires evaluation

  1. Evaluation of Foreign Exchange Risk Capital Requirement Models

    Directory of Open Access Journals (Sweden)

    Ricardo S. Maia Clemente

    2005-12-01

    This paper examines capital requirements for financial institutions to cover market risk stemming from exposure to foreign currencies. The models examined belong to two groups according to the approach involved: standardized and internal models. In the first group, we study the Basel model and the model adopted by the Brazilian legislation. In the second group, we consider models based on the concept of value at risk (VaR). We analyze the single- and double-window historical models, the exponential smoothing model (EWMA) and a hybrid approach that combines features of both models. The results suggest that the Basel model is inadequate for the Brazilian market, exhibiting a large number of exceptions. The model of the Brazilian legislation has no exceptions, though it generates higher capital requirements than the internal VaR-based models. In general, VaR-based models perform better and result in less capital allocation than the standardized approach applied in Brazil.
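
    The abstract names the ingredients of the internal models (a rolling historical window, EWMA volatility, exception counting) without giving formulas. The sketch below is only a generic illustration of those ingredients, not the paper's calibration; the 99% level, the 250-day window and lambda = 0.94 are assumed placeholder values.

```python
# Minimal illustration of the internal-model ingredients named above (not the
# paper's calibration): one-day historical VaR, EWMA (RiskMetrics-style) VaR and
# an exception count. The 99% level, 250-day window and lambda = 0.94 are
# assumed placeholder values.
import numpy as np

def historical_var(returns, window=250, alpha=0.99):
    """VaR as the empirical (1 - alpha) quantile of the last `window` returns."""
    return -np.quantile(returns[-window:], 1.0 - alpha)

def ewma_var(returns, lam=0.94, z=2.326):
    """VaR from an exponentially weighted volatility estimate; z ~ one-sided 99% normal quantile."""
    var_t = np.var(returns[:30])                # seed the recursion
    for r in returns[30:]:
        var_t = lam * var_t + (1.0 - lam) * r ** 2
    return z * np.sqrt(var_t)

def count_exceptions(returns, window=250, alpha=0.99):
    """Days on which the realised loss exceeded the previous day's historical VaR."""
    return sum(
        -returns[t] > historical_var(returns[:t], window, alpha)
        for t in range(window, len(returns))
    )

rng = np.random.default_rng(0)
fx_returns = rng.normal(0.0, 0.01, 1000)        # synthetic daily FX returns
print(historical_var(fx_returns), ewma_var(fx_returns), count_exceptions(fx_returns))
```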

  2. Evaluation of olive flowering at low latitude sites in Argentina using a chilling requirement model

    Energy Technology Data Exchange (ETDEWEB)

    Aybar, V.E.; Melo-Abreu, J.P. de; Searles, P.S.; Matias, A.G.; Del Rio, C.; Caballero, C. M.; Rousseaux, M.C.

    2015-07-01

    Olive production has expanded significantly from the Mediterranean Basin into the New World over the last two decades. In some cases, cultivars of European origin have been introduced at a large commercial scale with little previous evaluation of potential productivity. The objective of this study was to evaluate whether a temperature-driven simulation model developed in the Mediterranean Basin to predict normal flowering occurrence and flowering date using cultivar-specific thermal requirements was suitable for the low latitude areas of Northwest Argentina. The model was validated at eight sites over several years and a wide elevation range (350–1200 m above mean sea level) for three cultivars (‘Arbequina’, ‘Frantoio’, ‘Leccino’) with potentially different chilling requirements. In ‘Arbequina’, normal flowering was observed at almost all sites and in all years, while normal flowering events in ‘Frantoio’ and ‘Leccino’ were uncommon. The model successfully predicted if flowering would be normal in 92% and 83% of the cases in ‘Arbequina’ and ‘Frantoio’, respectively, but was somewhat less successful in ‘Leccino’ (61%). When flowering occurred, the predicted flowering date was within ± 7 days of the observed date in 71% of the cases. Overall, the model results indicate that cultivar-specific simulation models may be used as an approximate tool to predict whether individual cultivars will be successful in new growing areas. In Northwest Argentina, the model could be used to identify cultivars to replace ‘Frantoio’ and ‘Leccino’ and to simulate global warming scenarios. (Author)
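
    The record describes a cultivar-specific chilling/forcing model but does not reproduce its equations, so the following is only a hedged toy sketch of how such a two-phase phenology model is commonly structured; the thresholds and the chilling and forcing requirements are invented placeholders, not the calibrated cultivar parameters used in the study.

```python
# Toy two-phase chilling/forcing phenology sketch, NOT the calibrated model from
# the study: chill units accumulate on days below `chill_threshold`; once
# `chill_req` is met, degree-days above `t_base` accumulate until `forcing_req`,
# which marks predicted flowering. All parameter values are invented placeholders.
def predict_flowering(daily_mean_temps, chill_threshold=7.2, chill_req=60.0,
                      t_base=10.0, forcing_req=400.0):
    chill, forcing, chilled = 0.0, 0.0, False
    for day, t in enumerate(daily_mean_temps, start=1):
        if not chilled:
            if t < chill_threshold:
                chill += 1.0                 # one chill unit per qualifying day
            chilled = chill >= chill_req
        else:
            forcing += max(0.0, t - t_base)  # growing degree-days
            if forcing >= forcing_req:
                return day                   # predicted flowering day of the season
    return None                              # requirement never met: no normal flowering

# A cool winter followed by a warm spring satisfies both requirements...
print(predict_flowering([5.0] * 70 + [20.0] * 100))    # -> 110
# ...whereas a mild winter never meets the chilling requirement (as reported for
# 'Frantoio' and 'Leccino' at warm sites).
print(predict_flowering([12.0] * 170))                  # -> None
```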

  3. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Science.gov (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  4. How Many Model Evaluations Are Required To Predict The AEP Of A Wind Power Plant?

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Natarajan, Anand;

    2015-01-01

    Wind farm flow models have advanced considerably with the use of large eddy simulations (LES) and Reynolds-averaged Navier-Stokes (RANS) computations. The main limitation of these techniques is their high computational time requirements, which makes their use for wind farm annual energy production...

  5. A study on the minimum number of loci required for genetic evaluation using a finite locus model

    Directory of Open Access Journals (Sweden)

    Fernando Rohan L

    2004-07-01

    For a finite locus model, Markov chain Monte Carlo (MCMC) methods can be used to estimate the conditional mean of genotypic values given phenotypes, which is also known as the best predictor (BP). When computationally feasible, this type of genetic prediction provides an elegant solution to the problem of genetic evaluation under non-additive inheritance, especially for crossbred data. Successful application of MCMC methods for genetic evaluation using finite locus models depends, among other factors, on the number of loci assumed in the model. The effect of the assumed number of loci on evaluations obtained by BP was investigated using data simulated with about 100 loci. For several small pedigrees, genetic evaluations obtained by best linear prediction (BLP) were compared to genetic evaluations obtained by BP. For BLP evaluation, used here as the standard of comparison, only the first and second moments of the joint distribution of the genotypic and phenotypic values must be known. These moments were calculated from the gene frequencies and genotypic effects used in the simulation model. BP evaluation requires the complete distribution to be known. For each model used for BP evaluation, the gene frequencies and genotypic effects, which completely specify the required distribution, were derived such that the genotypic mean, the additive variance, and the dominance variance were the same as in the simulation model. For lowly heritable traits, evaluations obtained by BP under models with up to three loci closely matched the evaluations obtained by BLP for both purebred and crossbred data. For highly heritable traits, models with up to six loci were needed to match the evaluations obtained by BLP.
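
    As a hedged illustration of what the best predictor computes (not the pedigree or MCMC machinery of the study), the toy below evaluates E[g | y] for one animal, a single locus and a single record by Bayes' rule under Hardy-Weinberg genotype frequencies and a normal residual; the gene frequency, genotypic effects and residual variance are invented numbers.

```python
# Toy illustration of the "best predictor" E[g | y] for one animal, one locus and
# one phenotypic record (not the pedigree/MCMC machinery of the paper). The gene
# frequency, genotypic effects and residual variance are invented numbers.
import math

p = 0.3                                                   # frequency of allele A
freqs = {"AA": p * p, "Aa": 2 * p * (1 - p), "aa": (1 - p) ** 2}   # Hardy-Weinberg priors
effects = {"AA": 2.0, "Aa": 1.2, "aa": 0.0}               # genotypic values (with dominance)
sigma_e = 1.5                                             # residual standard deviation

def best_predictor(y):
    """Posterior mean of the genotypic value given a single phenotype y."""
    post = {g: freqs[g] * math.exp(-0.5 * ((y - effects[g]) / sigma_e) ** 2)
            for g in freqs}
    total = sum(post.values())
    return sum(effects[g] * post[g] / total for g in freqs)

print(best_predictor(1.8))   # the record is shrunk toward the population mean
```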

  6. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes by simulation and, in particular, automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques in order to further increase the efficiency and flexibility of development, beginning early in the requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  7. Model to Evaluate the Aerodynamic Energy Requirements of Active Materials in Morphing Wings

    OpenAIRE

    Pettit, Gregory William

    2001-01-01

    A computational model is presented which predicts the force, stroke, and energy needed to overcome aerodynamic loads encountered by morphing wings during aircraft maneuvers. This low-cost model generates wing section shapes needed to follow a desired flight path, computes the resulting aerodynamic forces using a unique combination of conformal mapping and the vortex panel method, computes the longitudinal motion of the simulated aircraft, and closes the loop with a zero-error control law. T...

  8. Improving the Enterprise Requirements and Acquisition Model’s Developmental Test and Evaluation Process Fidelity

    Science.gov (United States)

    2014-03-27

    Excerpts reference prior ERAM research by Leach and Searle (2010), Montgomery (2011), and Baldus and others (2013), and summarize those efforts in Table 1 (Overview of ERAM versions), which records ERAM 1.1 (ExtendSim; updates by the Aerospace Design Team that served as the new baseline model) and ERAM 1.2 (ExtendSim; implemented...). The excerpts also discuss modeling the complex relationships within DAMS to assist in supporting acquisition reform, and a section titled "DT&E Silver Bullet" describing the most substantial improvement from a...

  9. Performance Evaluation of Portfolios with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Hui Ding

    2014-01-01

    In financial markets, short sellers will be required to post margin to cover possible losses in case the prices of the risky assets go up. Only a few studies focus on the optimization and performance evaluation of portfolios in the presence of margin requirements. In this paper, we investigate the theoretical foundation of the DEA (data envelopment analysis) approach to evaluating the performance of portfolios with margin requirements from a different perspective. Under the mean-variance framework, we construct the optimization model and the portfolio possibility set considering margin requirements. The convexity of the portfolio possibility set is proved, and the concept of efficiency from classical economics is extended to the portfolio case. DEA models are then developed to evaluate the performance of portfolios with margin requirements. Through the simulations carried out in the end, we show that, with adequate portfolios, DEA can be used as an effective tool for computing the efficiencies of portfolios with margin requirements for performance evaluation purposes. This study can be viewed as a justification of DEA for the performance evaluation of portfolios with margin requirements.
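
    The paper's DEA formulation under the mean-variance framework with margin requirements is not reproduced in the abstract, so the snippet below is only a generic input-oriented CCR DEA sketch in which each portfolio is a decision-making unit with assumed inputs (variance and posted margin) and one output (expected return); the data and the treatment of margin as an input are illustrative assumptions.

```python
# Generic input-oriented CCR DEA sketch, not the paper's exact mean-variance
# formulation: each portfolio is a DMU with inputs (variance, posted margin) and
# one output (expected return). All data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[0.04, 0.10],     # portfolio 0: variance, margin
                   [0.02, 0.15],     # portfolio 1
                   [0.03, 0.05]])    # portfolio 2
outputs = np.array([[0.08],          # expected returns
                    [0.06],
                    [0.07]])

def ccr_efficiency(k):
    """min theta s.t. sum_j lam_j*x_j <= theta*x_k,  sum_j lam_j*y_j >= y_k,  lam >= 0."""
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.r_[1.0, np.zeros(n)]                        # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-inputs[k].reshape(m, 1), inputs.T]   # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -outputs.T]        # -sum_j lam_j y_rj <= -y_rk
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -outputs[k]],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

for k in range(len(inputs)):
    print(f"portfolio {k}: efficiency = {ccr_efficiency(k):.3f}")
```

    Portfolios scoring 1 lie on the efficient frontier of this possibility set; dominated portfolios score below 1.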

  10. Value-Focused Thinking Model to Evaluate SHM System Alternatives From Military end User Requirements Point of View

    OpenAIRE

    Klimaszewski Sławomir

    2016-01-01

    The article describes a Value-Focused Thinking (VFT) model developed in order to evaluate various alternatives for the implementation of a Structural Health Monitoring (SHM) system on a military aircraft. Four SHM system alternatives are considered, based on: visual inspection (the current approach), piezoelectric (PZT) sensors, Fiber Bragg Grating (FBG) sensors and Comparative Vacuum Monitoring (CVM) sensors. A numerical example is shown to illustrate the model capability. Sensitivity analyses are perfor...

  11. Flight program language requirements. Volume 2: Requirements and evaluations

    Science.gov (United States)

    1972-01-01

    The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.

  12. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research on viewpoint-oriented requirements engineering and intelligent agents, we present the concept of the viewpoint agent and its abstract model, based on a meta-language, for multiviews requirements engineering. It provides a basis for consistency checking and integration of different viewpoint requirements; at the same time, this checking and integration work can be realized automatically by virtue of the intelligent agents' autonomy, proactiveness and social ability. Finally, we introduce a practical application of the model through a case study of a data flow diagram.

  13. Performance Evaluation of Portfolios with Margin Requirements

    OpenAIRE

    Hui Ding; Zhongbao Zhou; Helu Xiao; Chaoqun Ma; Wenbin Liu

    2014-01-01

    In financial markets, short sellers will be required to post margin to cover possible losses in case the prices of the risky assets go up. Only a few studies focus on the optimization and performance evaluation of portfolios in the presence of margin requirements. In this paper, we investigate the theoretical foundation of DEA (data envelopment analysis) approach to evaluate the performance of portfolios with margin requirements from a different perspective. Under the mean-variance framework,...

  14. Modeling Requirements for Cohort and Register IT.

    Science.gov (United States)

    Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred

    2016-01-01

    The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks like cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The objective was to make transparent the complex relationships between the requirements, which are described in use cases from a given text catalog; by analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There are two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Methods of enterprise architecture planning (EAP) are then used to model the extracted information. To work on objective a), questionnaires were developed utilizing the model. They were used for semi-structured interviews, whose results were evaluated via qualitative content analysis, after which the current state was modeled. Objective b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As the result of objective a), current state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg reference model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg reference model. Modeling the current state and the generation of reports from the model, which could be used as

  15. Modelling the quantitative evaluation of soil nutrient supply, nutrient use efficiency, and fertilizer requirements of wheat in India

    NARCIS (Netherlands)

    Pathak, H.; Aggarwal, P.K.; Roetter, R.P.; Kalra, N.; Bandyopadhaya, S.K.; Prasad, S.; Keulen, van H.

    2003-01-01

    Wheat yields in many parts of India are stagnant. The main reasons for this are conventional blanket fertilizer recommendations, low fertilizer use efficiency, and imbalanced use of fertilizers. Estimation of fertilizer requirements based on quantitative approaches can assist in improving wheat

  16. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

    Nowadays, Quality function deployment (QFD) is one of the total quality management tools, in which customers' views and requirements are captured and, using various techniques, the production requirements and operations are improved. After identification and analysis of the competitors, the QFD department takes customers' feedback into account to meet the customers' demands for the products in comparison with the competitors. In this study, a comprehensive model for assessing the importance of the customer requirements in the products or services of an organization is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision of the evaluations expressed. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results of these experiments show that the proposed method performs better than the other methods.
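
    The abstract does not spell out how the linguistic ratings are aggregated; one common operationalization (assumed here, not necessarily the paper's scheme) maps linguistic terms to triangular fuzzy numbers, averages them across respondents and defuzzifies by centroid.

```python
# One common way to operationalize linguistic importance ratings (an assumption,
# not necessarily the paper's scheme): map terms to triangular fuzzy numbers,
# average across respondents, and defuzzify by centroid.
TFN = {                       # triangular fuzzy numbers (l, m, u) on a 0-10 scale
    "very low":  (0.0, 0.0, 2.5),
    "low":       (0.0, 2.5, 5.0),
    "medium":    (2.5, 5.0, 7.5),
    "high":      (5.0, 7.5, 10.0),
    "very high": (7.5, 10.0, 10.0),
}

def fuzzy_importance(ratings):
    """Component-wise average of the respondents' fuzzy ratings, defuzzified by centroid."""
    l, m, u = (sum(TFN[r][i] for r in ratings) / len(ratings) for i in range(3))
    return (l + m + u) / 3.0          # centroid of a triangular fuzzy number

# Importance of one customer requirement as rated by three respondents:
print(fuzzy_importance(["high", "very high", "medium"]))
```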

  17. Visual soil evaluation - future research requirements

    Science.gov (United States)

    Emmet-Booth, Jeremy; Forristal, Dermot; Fenton, Owen; Ball, Bruce; Holden, Nick

    2017-04-01

    A review of Visual Soil Evaluation (VSE) techniques (Emmet-Booth et al., 2016) highlighted their established utility for soil quality assessment, though some limitations were identified: (1) The examination of aggregate size, visible intra-porosity and shape forms a key assessment criterion in almost all methods, thus limiting evaluation to structural form. The addition of criteria that examine structure holistically may be desirable. For example, structural stability can be indicated using dispersion tests or by examining soil surface crusting, while the assessment of soil colour may indirectly indicate soil organic matter content, a contributor to stability. Organic matter assessment may also indicate structural resilience, along with rooting, earthworm numbers or shrinkage cracking. (2) Soil texture may influence results or impede method deployment. Modification of procedures to account for extreme texture variation is desirable. For example, evidence of compaction in sandy or single-grain soils differs greatly from that in clayey soils. Some procedures incorporate separate classification systems or adjust deployment based on texture. (3) Research into the impacts of soil moisture content on VSE evaluation criteria is required. Criteria such as rupture resistance and shape may be affected by moisture content. It is generally recommended that methods are deployed on moist soils, and quantification of the influence of moisture variation on results is necessary. (4) Robust sampling strategies for method deployment are required. Dealing with spatial variation differs between methods, but where methods can be deployed over large areas, clear instruction on sampling is required. Additionally, as emphasis has so far been placed on the agricultural production function of soil, the ability of VSE to explore structural quality in terms of carbon storage, water purification and biodiversity support also requires research. References: Emmet-Booth, J.P., Forristal. P.D., Fenton, O., Ball, B

  18. The IIR evaluation model

    DEFF Research Database (Denmark)

    Borlund, Pia

    2003-01-01

    An alternative approach to the evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation of IIR systems as realistically as possible with reference to actual information searching and retrieval processes, though still in a relatively controlled evaluation environment; and 2) to calculate the IIR system performance taking into account the non-binary nature of the assigned relevance assessments. The IIR evaluation model is presented as an alternative to the system-driven Cranfield model (Cleverdon, Mills & Keen, 1966; Cleverdon & Keen, 1966), which is still the dominant approach to the evaluation of IR and IIR systems. Key elements of the IIR evaluation model are the use of realistic...

  19. Radiation Belt and Plasma Model Requirements

    Science.gov (United States)

    Barth, Janet L.

    2005-01-01

    Contents include the following: Radiation belt and plasma model environment. Environment hazards for systems and humans. Need for new models. How models are used. Model requirements. How can space weather community help?

  20. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as soon as possible in order to see in time and deal with the unwanted program effects, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  1. A Framework for Modelling Software Requirements

    Directory of Open Access Journals (Sweden)

    Dhirendra Pandey

    2011-05-01

    Requirements engineering plays an important role in producing quality software products. In recent years, several requirements framework approaches have been designed to provide an end-to-end solution for the system development life cycle. Textual requirements specifications are difficult to learn, design, understand, review, and maintain, whereas pictorial modelling is widely recognized as an effective requirements analysis tool. In this paper, we present a requirements modelling framework together with an analysis of modern requirements modelling techniques. We also discuss various domains of requirements engineering with the help of modelling elements such as a semantic map of business concepts, lifecycles of business objects, business processes, business rules, a system context diagram, use cases and their scenarios, constraints, and user interface prototypes. The proposed framework is illustrated with a case study of an inventory management system.

  2. An Extended Analysis of Requirements Traceability Model

    Institute of Scientific and Technical Information of China (English)

    Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu

    2004-01-01

    A new extended meta model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study that comes from a key project of 863 Program is given.

  3. Long-term dynamics simulation: Modeling requirements

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S.; Kar, P.K.; Rogers, G.J.; Morison, G.K. (Ontario Hydro, Toronto, ON (Canada))

    1989-12-01

    This report details the required performance and modelling capabilities of a computer program intended for the study of the long-term dynamics of power systems. Following a general introduction which outlines the need for long-term dynamic studies, the modelling requirements for the conduct of such studies are discussed in detail. Particular emphasis is placed on models for system elements not normally modelled in power system stability programs, which will have a significant impact in the long-term time frame of minutes to hours following the initiating disturbance. The report concludes with a discussion of the special computational and programming requirements for a long-term stability program. 43 refs., 36 figs.

  4. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling, and if we one day have a certified tools list, any tool that does not meet essential criteria would be excluded. Recommended requirements are those more advanced requirements that may be met by tools offering a superior product or only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement to enable the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. CMAQ Model Evaluation Framework

    Science.gov (United States)

    CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.

  6. VPPA weld model evaluation

    Science.gov (United States)

    McCutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-07-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  7. A Requirements Analysis Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi-wei; Nelson K.H.Tang

    2004-01-01

    The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risk in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports the conduct of requirements analysis for ERP projects.

  8. 38 CFR 21.8030 - Requirement for evaluation of child.

    Science.gov (United States)

    2010-07-01

    Excerpt from 38 CFR 21.8030, Requirement for evaluation of child (Pensions, Bonuses, and Veterans' Relief; Department of Veterans Affairs; Certain Children of Vietnam Veterans - Spina Bifida and Covered Birth Defects; Evaluation): (a) Children to be evaluated. The VR&E Division will evaluate each...

  9. DECISION MAKING MODELING OF CONCRETE REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Suhartono Irawan

    2001-01-01

    This paper presents the results of an experimental comparison between predicted and actual concrete strength. The scope of the evaluation is the optimisation of the cement content for different concrete grades, achieved by bringing the target mean value of the test cubes closer to the required characteristic strength value through a reduction in the standard deviation. Keywords: concrete mix design, acceptance control, optimisation, cement content.
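
    The relation behind that statement is the standard mix-design link between characteristic and target mean strength; the formula and worked numbers below illustrate the general rule (5% defective level) and are not figures taken from the paper.

```latex
% Target mean strength in standard concrete mix design (illustrative, not from the paper):
\[
  f_m = f_{ck} + k\,\sigma, \qquad k \approx 1.64 \ \text{for a 5\% defective rate.}
\]
% Example: for f_ck = 30 MPa, reducing the standard deviation from 5 MPa to 4 MPa
% lowers the required target mean, which is what allows a lower cement content:
\[
  30 + 1.64 \times 5 = 38.2~\text{MPa}
  \quad\rightarrow\quad
  30 + 1.64 \times 4 \approx 36.6~\text{MPa}.
\]
```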

  10. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult....... This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  11. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e. g. geothermal drilling) a quantitative estimation of possible geometric variations in depth unit is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e. g. fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic also because of different data rights, data policies and modelling software between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  12. Designer's requirements for evaluation of sustainability

    DEFF Research Database (Denmark)

    Bey, Niki; Lenau, Torben Anker

    1998-01-01

    Today, the sustainability of products is often evaluated on the basis of assessments of their environmental performance. Established means for this purpose are formal Life Cycle Assessment (LCA) methods. Designers have an essential influence on product design and are therefore one target group for life... LCAs involve calculations with a relatively high accuracy. Most LCA methods do therefore not qualify as a hands-on tool for utilisation by typical designers. In this context, the authors raise the question of whether a largely simplified LCA method which is exclusively based on energy considerations can...

  13. Integrated modelling requires mass collaboration (Invited)

    Science.gov (United States)

    Moore, R. V.

    2009-12-01

    The need for sustainable solutions to the world’s problems is self evident; the challenge is to anticipate where, in the environment, economy or society, the proposed solution will have negative consequences. If we failed to realise that the switch to biofuels would have the seemingly obvious result of reduced food production, how much harder will it be to predict the likely impact of policies whose impacts may be more subtle? It has been clear for a long time that models and data will be important tools for assessing the impact of events and the measures for their mitigation. They are an effective way of encapsulating knowledge of a process and using it for prediction. However, most models represent a single or small group of processes. The sustainability challenges that face us now require not just the prediction of a single process but the prediction of how many interacting processes will respond in given circumstances. These processes will not be confined to a single discipline but will often straddle many. For example, the question, “What will be the impact on river water quality of the medical plans for managing a ‘flu pandemic and could they cause a further health hazard?” spans medical planning, the absorption of drugs by the body, the spread of disease, the hydraulic and chemical processes in sewers and sewage treatment works and river water quality. This question nicely reflects the present state of the art. We have models of the processes and standards, such as the Open Modelling Interface (the OpenMI), allow them to be linked together and to datasets. We can therefore answer the question but with the important proviso that we thought to ask it. The next and greater challenge is to deal with the open question, “What are the implications of the medical plans for managing a ‘flu pandemic?”. This implies a system that can make connections that may well not have occurred to us and then evaluate their probable impact. The final touch will be to

  14. Econometric Model Evaluation: Implications for Program Evaluation.

    Science.gov (United States)

    Ridge, Richard S.; And Others

    1990-01-01

    The problem associated with evaluating an econometric model using values outside those used in the model estimation is illustrated in the evaluations of a residential load management program during each of two successive years. Analysis reveals that attention must be paid to this problem. (Author/TJH)

  15. Requirements engineering for cross-sectional information chain models.

    Science.gov (United States)

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed.

  16. User Requirements and Domain Model Engineering

    NARCIS (Netherlands)

    Specht, Marcus; Glahn, Christian

    2006-01-01

    Specht, M., & Glahn, C. (2006). User requirements and domain model engineering. Presentation at International Workshop in Learning Networks for Lifelong Competence Development. March, 30-31, 2006. Sofia, Bulgaria: TENCompetence Conference. Retrieved June 30th, 2006, from http://dspace.learningnetwor

  18. 40 CFR 501.16 - Requirements for compliance evaluation programs.

    Science.gov (United States)

    2010-07-01

    Excerpt from 40 CFR 501.16, Requirements for compliance evaluation programs (Protection of Environment; Environmental Protection Agency): State sludge management programs shall have...

  19. 40 CFR 233.40 - Requirements for compliance evaluation programs.

    Science.gov (United States)

    2010-07-01

    Excerpt from 40 CFR 233.40, Requirements for compliance evaluation programs (Protection of Environment; 404 State Program Regulations; Compliance Evaluation and Enforcement): (a) In order to abate violations of the permit program, the State shall...

  20. Round-Robin Peer Evaluations Require Candid Assessments.

    Science.gov (United States)

    Hays, Robert G.

    1990-01-01

    Presents a round-robin technique (in which each faculty member is evaluated by all the others in a single session) developed by a small, highly specialized faculty group to meet a new peer evaluation requirement. Notes that this technique does not require classroom observation, but still provides comprehensive and useful critiques. (SR)

  1. A Requirement-Driven Software Trustworthiness Evaluation and Evolution Model

    Institute of Scientific and Technical Information of China (English)

    丁帅; 鲁付俊; 杨善林; 夏承遗

    2011-01-01

    Software trustworthiness evaluation models are built upon the accurate elicitation of trustworthy requirements and the reasonable establishment of an indicator system for the specific application domain. For software with a large architecture and complex non-functional requirements, trustworthy requirements change as the software's operational state transitions. Because this dynamic evolution of trustworthy requirements affects the stability of the trustworthiness evaluation indicator system, the problem of trustworthiness evaluation and evolution is attracting wide attention in the trustworthy software field. In this paper, a novel requirement-driven software trustworthiness evaluation and evolution model is designed. First, several key technologies adopted in the process of software trustworthiness evaluation are analyzed and summarized, such as requirements analysis and indicator extraction, and trustworthy evidence acquisition and conversion, and the problem of adaptively solving trustworthiness evaluation under requirements evolution is discussed. Second, an incidence matrix is used to analyze the correlations between trustworthy attributes and to reveal how their relative weights vary. On this basis, an adaptive reconstruction device, which can analyze and self-reconfigure the software trustworthiness evaluation indicator system, is designed based on the incidence matrix. Finally, a complete framework for the trustworthiness evaluation and evolution model is proposed. Experimental results show the rationality and validity of the model.

  2. Modeling requirements for in situ vitrification

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  3. Fuzzy Model for Trust Evaluation

    Institute of Scientific and Technical Information of China (English)

    Zhang Shibin; He Dake

    2006-01-01

    Based on fuzzy set theory, a fuzzy trust model is established that uses membership functions to describe the fuzziness of trust. The trust vectors of subjective trust are obtained from a mathematical model of fuzzy synthetic evaluation. Considering the complicated and changeable relationships between various subjects, a multi-level mathematical model of fuzzy synthetic evaluation is introduced. An example of a two-level fuzzy synthetic evaluation model confirms the feasibility of the multi-level fuzzy synthetic evaluation model. The proposed fuzzy model for trust evaluation may provide a promising method for research on trust models in open networks.
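
    As a minimal sketch of the single-level step described above (with invented weights, grades and membership matrix rather than the paper's data), the fuzzy evaluation vector is obtained by composing a weight vector with a membership matrix and is then defuzzified; a two-level model applies the same step per factor group and then once more over the group results.

```python
# Single-level fuzzy synthetic evaluation sketch (illustrative numbers, not the
# paper's model): B = W o R with the weighted-average operator, then a crisp
# trust score from grade values. A two-level model applies this step per factor
# group first and then once more over the group results.
import numpy as np

W = np.array([0.5, 0.3, 0.2])            # weights of three trust factors
R = np.array([[0.6, 0.3, 0.1],           # factor 1 memberships in {high, medium, low}
              [0.4, 0.4, 0.2],           # factor 2
              [0.2, 0.5, 0.3]])          # factor 3
grades = np.array([1.0, 0.6, 0.2])       # crisp value attached to each grade

B = W @ R                                 # fuzzy evaluation vector
trust = float(B @ grades / B.sum())       # defuzzified trust value
print(B, trust)
```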

  4. 40 CFR 145.12 - Requirements for compliance evaluation programs.

    Science.gov (United States)

    2010-07-01

    Excerpt from 40 CFR 145.12, Requirements for compliance evaluation programs (Protection of Environment; Environmental Protection Agency): (a) State programs shall have procedures for receipt, evaluation...

  5. 40 CFR 123.26 - Requirements for compliance evaluation programs.

    Science.gov (United States)

    2010-07-01

    Excerpt from 40 CFR 123.26, Requirements for compliance evaluation programs (Protection of Environment; Environmental Protection Agency): (a) State programs shall have procedures for receipt, evaluation, retention and... an enforcement proceeding or in court. (e) State NPDES compliance evaluation programs shall have...

  6. Evaluation Theory, Models, and Applications

    Science.gov (United States)

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  7. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  8. 13 CFR 303.3 - Application requirements and evaluation criteria.

    Science.gov (United States)

    2010-01-01

    Excerpt from 13 CFR 303.3, Application requirements and evaluation criteria (Department of Commerce; Planning Investments and Comprehensive Economic Development Strategies): (a) For Planning Investment awards, EDA uses the general application evaluation criteria set forth in § 301.8 of this chapter. In addition, applications for Planning...

  9. Analysis and evaluation of collaborative modeling processes

    NARCIS (Netherlands)

    Ssebuggwawo, D.

    2012-01-01

    Analysis and evaluation of collaborative modeling processes is confronted with many challenges. On the one hand, many systems design and re-engineering projects require collaborative modeling approaches that can enhance their productivity. But, such collaborative efforts, which often consist of the

  10. A nutrition mathematical model to account for dietary supply and requirements of energy and nutrients for domesticated small ruminants: the development and evaluation of the Small Ruminant Nutrition System

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2008-07-01

    A mechanistic model that predicts nutrient requirements and biological values of feeds for sheep (Cornell Net Carbohydrate and Protein System; CNCPS-S) was expanded to include goats, and the name was changed to the Small Ruminant Nutrition System (SRNS). The SRNS uses animal and environmental factors to predict metabolizable energy (ME) and protein, and Ca and P requirements. Requirements for goats in the SRNS are predicted based on the equations developed for CNCPS-S, modified to account for specific requirements of goats, including maintenance, lactation, and pregnancy requirements, and body reserves. Feed biological values are predicted based on carbohydrate and protein fractions and their ruminal fermentation rates, forage, concentrate and liquid passage rates, and microbial growth. The evaluation of the SRNS for sheep using published papers (19 treatment means) indicated no mean bias (MB; 1.1 g/100 g) and a low root mean square prediction error (RMSPE; 3.6 g/100 g) when predicting dietary organic matter digestibility for diets not deficient in ruminal nitrogen. The SRNS accurately predicted gains and losses of shrunk body weight (SBW) of adult sheep (15 treatment means; MB = 5.8 g/d and RMSPE = 30 g/d) when diets were not deficient in ruminal nitrogen. The SRNS for sheep had MB varying from -34 to 1 g/d and RMSE varying from 37 to 56 g/d when predicting average daily gain (ADG) of growing lambs (42 treatment means). The evaluation of the SRNS for goats based on literature data showed accurate predictions for ADG of kids (31 treatment means; RMSEP = 32.5 g/d; r2 = 0.85; concordance correlation coefficient, CCC, = 0.91), daily ME intake (21 treatment means; RMSEP = 0.24 Mcal/d; r2 = 0.99; CCC = 0.99), and energy balance (21 treatment means; RMSEP = 0.20 Mcal/d; r2 = 0.87; CCC = 0.90) of goats. In conclusion, the SRNS for sheep can accurately predict dietary organic matter digestibility, ADG of growing lambs and changes in SBW of mature sheep. The SRNS
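
    The statistics quoted in this record (MB, RMSPE/RMSEP, r2 and the concordance correlation coefficient) are standard model-evaluation measures; the helper below computes them from paired observed and predicted values using their usual definitions (Lin's CCC), on synthetic data rather than the study's treatment means.

```python
# Standard model-evaluation statistics cited in the abstract (mean bias, RMSEP,
# r^2 and Lin's concordance correlation coefficient), computed on synthetic data.
import numpy as np

def evaluation_stats(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mb = np.mean(pred - obs)                                  # mean bias
    rmsep = np.sqrt(np.mean((pred - obs) ** 2))               # root MSE of prediction
    r = np.corrcoef(obs, pred)[0, 1]
    ccc = (2 * np.cov(obs, pred, bias=True)[0, 1]             # Lin's CCC
           / (obs.var() + pred.var() + (obs.mean() - pred.mean()) ** 2))
    return {"MB": mb, "RMSEP": rmsep, "r2": r ** 2, "CCC": ccc}

obs = [210, 180, 250, 300, 275]     # e.g. observed ADG, g/d (synthetic values)
pred = [220, 170, 240, 310, 260]    # corresponding model predictions
print(evaluation_stats(obs, pred))
```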

  11. Evaluation Model of System Survivability

    Institute of Scientific and Technical Information of China (English)

    LIU Yuling; PAN Shiying; TIAN Junfeng

    2006-01-01

    This paper puts forward a survivability evaluation model, SQEM (Survivability Quantitative Evaluation Model), based on a study of the main existing methods. It defines the measurement factors and analyses survivability mathematically, and it introduces state-change probabilities and the idea of setting the weights of the survivability factors dynamically into the evaluation process of SQEM, which improves the accuracy of the evaluation. An example is presented to illustrate the way SQEM works, demonstrating the validity and feasibility of the method.

  12. EVALUATION OF REQUIREMENTS FOR THE DWPF HIGHER CAPACITY CANISTER

    Energy Technology Data Exchange (ETDEWEB)

    Miller, D.; Estochen, E.; Jordan, J.; Kesterson, M.; Mckeel, C.

    2014-08-05

    The Defense Waste Processing Facility (DWPF) is considering the option to increase canister glass capacity by reducing the wall thickness of the current production canister. This design has been designated the DWPF Higher Capacity Canister (HCC). A significant decrease in the number of canisters processed during the life of the facility would be achieved if the HCC were implemented, leading to an overall reduction in life cycle costs. Prior to implementation of the change, Savannah River National Laboratory (SRNL) was requested to conduct an evaluation of the potential impacts. The specific areas of interest included loading and deformation of the canister during the filling process. Additionally, the effect of the reduced wall thickness on corrosion and material compatibility needed to be addressed. Finally, the integrity of the canister during decontamination and other handling steps needed to be determined. The initial request regarding canister fabrication was later addressed in an alternate study. A preliminary review of canister requirements and previous testing was conducted prior to determining the testing approach. Thermal and stress models were developed to predict the forces on the canister during the pouring and cooling process. The thermal model shows the HCC increasing and decreasing in temperature at a slightly faster rate than the original. The HCC is shown to have a 3°F ΔT between the internal and outer surfaces versus a 5°F ΔT for the original design. The stress model indicates strain values ranging from 1.9% to 2.9% for the standard canister and 2.5% to 3.1% for the HCC. These values are dependent on the glass level relative to the thickness transition between the top head and the canister wall. This information, along with field readings, was used to set up environmental test conditions for corrosion studies. Small 304-L canisters were filled with glass and subjected to accelerated environmental testing for 3 months. No evidence of

  13. Tools for Model Evaluation

    DEFF Research Database (Denmark)

    Olesen, H. R.

    1998-01-01

    Proceedings of the Twenty-Second NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held June 6-10, 1997, in Clermont-Ferrand, France.

  14. An Evaluation of Requirement Prioritization Techniques with ANP

    Directory of Open Access Journals (Sweden)

    Javed ali Khan

    2016-07-01

    This article presents an evaluation of seven software requirements prioritization methods (ANP, binary search tree, AHP, hierarchy AHP, spanning tree matrix, priority group and bubble sort). Based on a case study of a local project (automation of the Mobilink franchise system), the experiment was conducted by students in the Requirement Engineering course in the Department of Software Engineering at the University of Science and Technology Bannu, Khyber Pakhtunkhawa, Pakistan. The parameters/measures on which the requirements prioritization techniques are evaluated are consistency indication, scale of measurement, interdependence, required number of decisions, total time consumption, time consumption per decision, ease of use, reliability of results and fault tolerance. The results of the experiment show that ANP is the most successful prioritization methodology among those available.
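
    As a hedged sketch of the eigenvector-based prioritization and the consistency measure that such studies evaluate (illustrative comparison values, not the students' data), priorities can be derived from a pairwise comparison matrix and checked with Saaty's consistency ratio.

```python
# AHP prioritization sketch (illustrative, not the study's data): priorities from
# the principal eigenvector of a pairwise comparison matrix, plus Saaty's
# consistency index and ratio. RI values are Saaty's published random indices.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # pairwise comparisons of three requirements
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index (Saaty)
cr = ci / ri                         # consistency ratio; < 0.10 is conventionally acceptable

print(np.round(w, 3), round(cr, 3))
```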

  15. Model Waveform Accuracy Requirements for the $\chi^2$ Discriminator

    CERN Document Server

    Lindblom, Lee

    2016-01-01

    This paper derives accuracy standards for model gravitational waveforms required to ensure proper use of the $\chi^2$ discriminator test in gravitational wave (GW) data analysis. These standards are different from previously established requirements for detection and waveform parameter measurement based on signal-to-noise optimization. We present convenient formulae both for evaluating and interpreting the contribution of model errors to measured $\chi^2$ values. Motivated by these formulae, we also present an enhanced, complexified variant of the standard $\chi^2$ statistic used in GW searches. While our results are not directly relevant to current searches (which use the $\chi^2$ test only to veto signal candidates with extremely high $\chi^2$ values), they could be useful in future GW searches and as figures of merit for model gravitational waveforms.
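
    For reference, the time-frequency form of the statistic referred to above is quoted here from the general gravitational-wave search literature (Allen's $\chi^2$), not from this paper: the template is split into p frequency bands of equal expected signal-to-noise ratio, z_l is the matched-filter SNR contribution of band l, and z is the full SNR.

```latex
% Time-frequency chi^2 discriminator as commonly defined in the GW-search
% literature (Allen 2005); quoted for reference, not taken from this paper.
\[
  \chi^2 \;=\; p \sum_{l=1}^{p} \left| z_l - \frac{z}{p} \right|^{2},
  \qquad z = \sum_{l=1}^{p} z_l ,
\]
% which, for Gaussian noise and a signal matching the template, follows a
% chi-squared distribution with 2p - 2 degrees of freedom.
```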

  16. Requirements for an evaluation infrastructure for reliable pervasive healthcare research

    DEFF Research Database (Denmark)

    Wagner, Stefan Rahr; Toftegaard, Thomas Skjødeberg; Bertelsen, Olav W.

    2012-01-01

    The need for a non-intrusive evaluation infrastructure platform to support research on reliable pervasive healthcare in the unsupervised setting is analyzed, and challenges and possibilities are identified. A list of requirements is presented, and a solution is suggested that would allow researcher...

  17. HIGH RESOLUTION RESISTIVITY LEAK DETECTION DATA PROCESSING & EVALUATION METHODS & REQUIREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    SCHOFIELD JS

    2007-10-04

    This document has two purposes: (1) describe how data generated by High Resolution Resistivity (HRR) leak detection (LD) systems deployed during single-shell tank (SST) waste retrieval operations are processed and evaluated; and (2) provide the basic review requirements for HRR data when HRR is deployed as a leak detection method during SST waste retrievals.

  18. 34 CFR 427.30 - What are the evaluation requirements?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 (2010-07-01): What are the evaluation requirements? Section 427.30, Regulations of the Offices of the Department of Education (Continued), Office of Vocational and Adult Education, Department of Education, Bilingual Vocational Training Program. What...

  19. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano; Lucassen, Garm; Brinkkemper, Sjaak

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to

  2. Determination of Space Station on-orbit nondestructive evaluation requirements

    Science.gov (United States)

    Salkowski, Charles

    1995-07-01

    NASA has recently initiated a reassessment of requirements for the performance of in-space nondestructive evaluation (NDE) of the International Space Station Alpha (ISSA) while on-orbit. Given the on-orbit operating environment, there is a powerful motivation for avoiding inspection requirements. For example, the ISSA maintenance philosophy includes the use of orbital replacement units (ORUs); hardware that is designed to fail without impact on mission assurance or safety. Identification of on-orbit inspection requirements involves review of a complex set of disciplines and considerations such as fracture control, contamination, safety, mission assurance, electrical power, and cost. This paper presents background discussion concerning on-orbit NDE and a technical approach for separating baseline requirements from opportunities.

  3. A nutrition mathematical model to account for dietary supply and requirements of energy and nutrients for domesticated small ruminants: the development and evaluation of the Small Ruminant Nutrition System

    OpenAIRE

    Luis Orlindo Tedeschi; Antonello Cannas; Danny Gene Fox

    2008-01-01

    A mechanistic model that predicts nutrient requirements and biological values of feeds for sheep (Cornell Net Carbohydrate and Protein System; CNCPS-S) was expanded to include goats and the name was changed to the Small Ruminant Nutrition System (SRNS). The SRNS uses animal and environmental factors to predict metabolizable energy (ME) and protein, and Ca and P requirements. Requirements for goats in the SRNS are predicted based on the equations developed for CNCPS-S, modified to account for ...

  4. From requirements to Java in a snap model-driven requirements engineering in practice

    CERN Document Server

    Smialek, Michal

    2015-01-01

    This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and co

  5. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements, and to deal with the combination of such information from multiple stakeholders.

  7. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, Wilco; Jonkers, Henk; Sinderen, van Marten

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterpris

  8. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for

  9. Evaluation of animal models of neurobehavioral disorders

    Directory of Open Access Journals (Sweden)

    Nordquist Rebecca E

    2009-02-01

    Full Text Available Abstract Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considerations as to whether action is indicated to reduce the discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to

  10. INO340 telescope control system: middleware requirements, design, and evaluation

    Science.gov (United States)

    Shalchian, Hengameh; Ravanmehr, Reza

    2016-07-01

    The INO340 Control System (INOCS) is being designed in terms of a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS causes the communication paradigm between its different components to be time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results, performed to evaluate our middleware in order to ensure that it meets our requirements.

  11. Requirements for the evaluation of computational speech segregation systems.

    Science.gov (United States)

    May, Tobias; Dau, Torsten

    2014-12-01

    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic conditions not considered during training. This study demonstrates that the spectro-temporal noise variations that occur during training and testing determine the achievable segregation performance. In particular, such variations strongly affect the identification of acoustical features in the system associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems.

  12. Requirements for the evaluation of computational speech segregation systems

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten

    2014-01-01

    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic conditions not considered during training. This study demonstrates that the spectro-temporal noise variations that occur during training and testing determine the achievable segregation performance. In particular, such variations strongly affect the identification of acoustical features in the system associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems.

  13. A MODEL FOR ALIGNING SOFTWARE PROJECTS REQUIREMENTS WITH PROJECT TEAM MEMBERS REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Robert Hans

    2013-02-01

    Full Text Available The fast-paced, dynamic environment within which information and communication technology (ICT) projects are run, as well as ICT professionals' constantly changing requirements, present a challenge for project managers in terms of aligning projects' requirements with project team members' requirements. This research paper posits that if projects' requirements are properly aligned with team members' requirements, then this will result in a balanced decision approach. Moreover, such an alignment will result in the realization of employees' needs as well as meeting the project's needs. This paper presents a Project's requirements and project Team members' requirements (PrTr) alignment model and argues that a balanced decision which meets both software project's requirements and team members' requirements can be achieved through the application of the PrTr alignment model.

  14. Supporting requirements model evolution throughout the system life-cycle

    OpenAIRE

    Ernst, Neil; Mylopoulos, John; Yu, Yijun; Ngyuen, Tien T.

    2008-01-01

    Requirements models are essential not just during system implementation, but also to manage system changes post-implementation. Such models should be supported by a requirements model management framework that allows users to create, manage and evolve models of domains, requirements, code and other design-time artifacts along with traceability links between their elements. We propose a comprehensive framework which delineates the operations and elements necessary, and then describe a tool imp...

  15. Complex Evaluation Model of Corporate Energy Management

    OpenAIRE

    Ágnes Kádár Horváth

    2014-01-01

    With the ever increasing energy problems at the doorstep alongside with political, economic, social and environmental challenges, conscious energy management has become of increasing importance in corporate resource management. Rising energy costs, stricter environmental and climate regulations as well as considerable changes in the energy market require companies to rationalise their energy consumption and cut energy costs. This study presents a complex evaluation model of corporate energy m...

  16. Requirements model for an e-Health awareness portal

    Science.gov (United States)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.

    2016-08-01

    Requirements engineering is at the heart and foundation of the software engineering process. Poor quality requirements inevitably lead to poor quality software solutions. Also, poor requirement modeling is tantamount to designing a poor quality product. So, quality-assured requirements development goes hand in hand with usable products in giving the software product the quality it demands. In the light of the foregoing, the requirements for an e-Ebola Awareness Portal were modeled with careful attention to these software engineering concerns. The requirements for the e-Health Awareness Portal are modeled as a contribution to the fight against Ebola and help in the fulfillment of the United Nations' Millennium Development Goal No. 6. In this study, requirements were modeled using the UML 2.0 modeling technique.

  17. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; Sinderen, van Marten

    2011-01-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling te

  18. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  19. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

    Tracing between informal requirements and formal models is challenging. A method for such tracing should permit to deal efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we ... a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally and support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements...

  20. Evaluation of the general university requirements: what did students say?

    Science.gov (United States)

    Shek, Daniel T L; Yu, Lu; Chai, Wen Yu

    2017-02-01

    The General University Requirements (GUR) is the core general education component of the new 4-year undergraduate curriculum at The Hong Kong Polytechnic University (PolyU) commencing from the 2012/2013 academic year. The major objective of the GUR is to widen students' horizons and promote their holistic development in their undergraduate years. To evaluate the perceived effectiveness of the GUR in its second year implementation, 18 focus group interviews (n=74 students) were conducted in the 2013/2014 academic year. Findings showed that subjects under the GUR framework were overall welcomed by students for the well-designed subject contents, dedicated teaching staff, and collaborative and experiential learning methods. Students perceived that the GUR was beneficial to their development in effective communication, critical thinking, problem solving, lifelong learning, and ethical leadership. Some challenges encountered by students were noted to further revamp the GUR curriculum in the future.

  1. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of

  2. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
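
    Two of the techniques named above, multiplicative chain models with lognormally distributed inputs and Latin hypercube sampling for sensitivity analysis, combine naturally. The sketch below is an illustration under assumed geometric means and geometric standard deviations, not a reproduction of the report's method.

      import numpy as np
      from scipy.stats import lognorm, spearmanr, qmc

      n = 1000
      gm = np.array([2.0, 0.5, 10.0])     # hypothetical geometric means of three parameters
      gsd = np.array([1.5, 2.0, 1.3])     # hypothetical geometric standard deviations

      sampler = qmc.LatinHypercube(d=3, seed=0)
      u = sampler.random(n)                                 # stratified uniform design
      x = lognorm.ppf(u, s=np.log(gsd), scale=gm)           # map to lognormal inputs

      y = np.prod(x, axis=1)                                # multiplicative chain output

      # Rank-correlation screen: which input dominates the prediction?
      for j in range(3):
          rho, _ = spearmanr(x[:, j], y)
          print(f"parameter {j + 1}: Spearman rho = {rho:.2f}")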

  3. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    Energy Technology Data Exchange (ETDEWEB)

    D. E. Shropshire; W. H. West

    2005-11-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies.

  4. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow...

  5. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  6. [Evaluation of preoperative anxiety in patients requiring glaucoma filtration surgery].

    Science.gov (United States)

    Lemaitre, S; Blumen-Ohana, E; Akesbi, J; Laplace, O; Nordmann, J-P

    2014-01-01

    Preoperative anxiety is often expressed by patients requiring filtration surgery for their glaucoma. So far, there has been no scale for screening this group of patients for preoperative anxiety. The Amsterdam Preoperative Anxiety and Information Scale (APAIS) is a self-evaluation questionnaire which has been used in specialties other than ophthalmology and which makes it possible to identify the adult patients with a high level of preoperative anxiety over an upcoming surgical procedure. The purpose of this study is to estimate the preoperative anxiety in glaucoma patients requiring filtration surgery. We performed a prospective study of 36 adult patients with chronic glaucoma not responding to medical treatment and who were about to undergo filtration surgery (trabeculectomy or deep sclerectomy). The APAIS questionnaire was given to the patients after discussing the indication for surgery. A global anxiety score (ranging from 4 to 20) above 10 defined patients with a high level of preoperative anxiety. We attempted to identify among these patients the factors related to filtration surgery which caused them anxiety (lack of control of intraocular pressure, risk of blindness, presence of the filtering bleb). In our sample of patients, we found that glaucoma was a source of anxiety. That was also true for the surgical procedure, though most patients believe that once the decision had been made, their psychological status was not modified by the upcoming procedure. The patient-clinician relationship is important in any chronic disease, all the more so in glaucoma, since this disease remains asymptomatic for a long time. When filtration surgery is necessary, the patients are going to express less preoperative anxiety if they trust their physician and if individualized information has been given to them. The French version of the APAIS is a quick scale, easily completed, that can be recommended for evaluating anxiety and patients' need for information prior to filtering

  7. Metrics and Evaluation Models for Accessible Television

    DEFF Research Database (Denmark)

    Li, Dongxiao; Looms, Peter Olaf

    2014-01-01

    The adoption of the UN Convention on the Rights of Persons with Disabilities (UN CRPD) in 2006 has provided a global framework for work on accessibility, including information and communication technologies and audiovisual content. One of the challenges facing the application of the UN CRPD is the ... number of platforms on which audiovisual content needs to be distributed, requiring very clear multiplatform architectures to facilitate interworking and assure interoperability. As a consequence, the regular evaluations of progress being made by signatories to the UN CRPD protocol are difficult... ... and evaluation models for access service provision, the paper identifies options that could facilitate the evaluation of UN CRPD outcomes and suggests priorities for future research in this area.

  8. Process Model for Defining Space Sensing and Situational Awareness Requirements

    Science.gov (United States)

    2006-04-01

    A process model for defining systems for space sensing and space situational awareness is presented. The paper concentrates on eight steps for determining the requirements, including: decision maker needs, system requirements, exploitation methods and vulnerabilities, critical capabilities, and identification of attack scenarios. Utilization of the USAF anti-tamper (AT) implementation process as a process model departure point for the space sensing and situational awareness (SSSA) ... is presented. The AT implementation process model, as an...

  9. [Evaluation of muscle relaxant requirement for hospital anesthesia].

    Science.gov (United States)

    Shchegolev, A V; Levshankov, A I; Bogomolov, B N; Pereloma, V I; Dumnov, A G

    2013-03-01

    The rationale for cost-effectiveness of modern muscle relaxant (MR) administration in general anesthesia was evaluated. New MRs are more expensive than the traditionally used pipecuronium and succinylcholine. However, the old MRs often require block reversal with anticholinesterase medicines at the end of surgery, longer artificial lung ventilation, and observation of patients during recovery in the intensive care unit. It was found that the district military hospital performed an annual average of about 900 general anesthesias assisted with artificial ventilation and muscle relaxation. About 2% of all anesthesias accrue to short-term anesthesia, 27% to medium-term and 71% to long-term. 81% of the medium-term anesthesias accrue to small hospitals. According to cost/effectiveness, the optimal muscle relaxant administration scheme for short-term (up to 30 min) anesthesia was mivacurium, for operations of medium duration (30-120 min) rocuronium, and for long-term (120 min) pipecuronium. An electronic form of the annual report, which makes it possible to quickly obtain the data needed to calculate the annual muscle relaxant demand and costs both in a hospital and in the armed forces as a whole, was developed.

  10. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    ...-specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform such a model automatically into programs that essentially will replace customisation and localisation by configuration, by changing parameters in the model. In particular, we: (1) identify a number of requirements for such modeling, including requirements for the underlying logic; (2) model salient parts...

  11. Digital Avionics Information System (DAIS): Training Requirements Analysis Model (TRAMOD).

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    The training requirements analysis model (TRAMOD) described in this report represents an important portion of the larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. TRAMOD is the second of three models that comprise an LCC impact modeling system for use in the early stages of system development. As…

  12. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach.

  13. Evaluation Model for Sentient Cities

    Directory of Open Access Journals (Sweden)

    Mª Florencia Fergnani Brion

    2016-11-01

    Full Text Available In this article we present research on Sentient Cities and an assessment model for analysing whether a city is, or could potentially be, considered one. The model can be used to evaluate the current situation of a city before introducing urban policies based on citizen participation in hybrid environments (physical and digital). To that effect, we have developed evaluation grids with the main elements that form a Sentient City and their measurement values. The Sentient City is a variation of the Smart City, also based on technology progress and innovation, but where the citizens are the principal agent. In this model, governments aim to have a participatory and sustainable system for achieving the Knowledge Society and Collective Intelligence development, as well as the city's efficiency. Also, they increase communication channels between the Administration and citizens. In this new context, citizens are empowered because they have the opportunity to create a Local Identity and transform their surroundings through open and horizontal initiatives.

  14. Evaluation of onset of nucleate boiling models

    Energy Technology Data Exchange (ETDEWEB)

    Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net

    2009-07-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially higher in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleation boiling for engineering design purposes. (author)

  15. Flight simulator requirements for airline transport pilot training - An evaluation of motion system design alternatives

    Science.gov (United States)

    Lee, A. T.; Bussolari, S. R.

    1986-01-01

    The effect of motion platform systems on pilot behavior is considered with emphasis placed on civil aviation applications. A dynamic model for human spatial orientation based on the physiological structure and function of the human vestibular system is presented. Motion platform alternatives were evaluated on the basis of the following motion platform conditions: motion with six degrees-of-freedom required for Phase II simulators and two limited motion conditions. Consideration was given to engine flameout, airwork, and approach and landing scenarios.

  16. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-02-27

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  17. Closed loop models for analyzing engineering requirements for simulators

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Kleinman, D.

    1980-01-01

    A closed-loop analytic model incorporating a model of the human pilot (namely, the optimal control model) was developed to allow certain simulation design tradeoffs to be evaluated quantitatively. This model was applied to a realistic flight control problem. The resulting model is used to analyze both overall simulation effects and the effects of individual elements. The results show that, as compared to an ideal continuous simulation, the discrete simulation can result in significant performance and/or workload penalties.

  18. Evaluating models of vowel perception

    Science.gov (United States)

    Molis, Michelle R.

    2005-08-01

    There is a long-standing debate concerning the efficacy of formant-based versus whole spectrum models of vowel perception. Categorization data for a set of synthetic steady-state vowels were used to evaluate both types of models. The models tested included various combinations of formant frequencies and amplitudes, principal components derived from excitation patterns, and perceptually scaled LPC cepstral coefficients. The stimuli were 54 five-formant synthesized vowels that had a common F1 frequency and varied orthogonally in F2 and F3 frequency. Twelve speakers of American English categorized the stimuli as the vowels /ɪ/, /ʊ/, or /ɝ/. Results indicate that formant frequencies provided the best account of the data only if nonlinear terms, in the form of squares and cross products of the formant values, were also included in the analysis. The excitation pattern principal components also produced reasonably accurate fits to the data. Although a wish to use the lowest-dimensional representation would dictate that formant frequencies are the most appropriate vowel description, the relative success of richer, more flexible, and more neurophysiologically plausible whole spectrum representations suggests that they may be preferred for understanding human vowel perception.
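
    The key analysis described above, predicting listeners' category choices from formant frequencies plus their squares and cross products, can be illustrated with an ordinary multinomial logistic regression. The data below are synthetic stand-ins (hypothetical F2/F3 ranges in kHz and fabricated responses), not the study's listener data.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 54 * 12                                  # stimuli x listeners, mirroring the design
      f2 = rng.uniform(0.9, 2.2, n)                # F2 in kHz (assumed range)
      f3 = rng.uniform(2.2, 3.2, n)                # F3 in kHz (assumed range)
      labels = (f2 > 1.5).astype(int) + (f3 > 2.8).astype(int)   # fake 3-way categories

      X = np.column_stack([f2, f3])
      X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
      model = LogisticRegression(max_iter=2000).fit(X_quad, labels)
      print(model.score(X_quad, labels))           # in-sample fit of the nonlinear formant model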

  19. Inferring Requirement Goals from Model Implementing in UML

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    UML is used widely in many software development processes. However, it does not make requirement goals explicit. Here is a method intended to establish the semantic relationship between requirements goals and UML models. Before the method is introduced, some relevant concepts are described.

  20. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists and because complete system knowledge from input voltage to output sound pressure level is required. There are, however, many advantages that could be harvested from such knowledge, like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  1. The Status of Program Evaluation Expectations in State School Counselor Certification Requirements.

    Science.gov (United States)

    Trevisan, Michael S.

    2000-01-01

    Studied state school counselor certification requirements with respect to program evaluation expectations. Responses of state certification offices show 19 states and the District of Columbia require some form of program evaluation knowledge and skills, but only Colorado and Washington require knowledge of the program evaluation standards…

  2. 45 CFR 2519.800 - What are the evaluation requirements for Higher Education programs?

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 (2010-10-01): What are the evaluation requirements for Higher Education programs? § 2519.800, Corporation for National and Community Service, Higher Education Innovative Programs for Community Service, Evaluation Requirements...

  3. Towards a Formalized Ontology-Based Requirements Model

    Institute of Scientific and Technical Information of China (English)

    JIANG Dan-dong; ZHANG Shen-sheng; WANG Ying-lin

    2005-01-01

    The goal of this paper is to take a further step towards an ontological approach for representing requirements information. The motivation for ontologies is discussed, and the definitions of ontology and requirements ontology are given. The paper then presents a collection of informal terms covering four subject areas, and discusses the formalization process of the ontology. The underlying meta-ontology is determined, and the formalized requirements ontology is analyzed. This formal ontology is built to serve as a basis for the requirements model. Finally, the implementation of the software system is given.

  4. Risk Quantification and Evaluation Modelling

    Directory of Open Access Journals (Sweden)

    Manmohan Singh

    2014-07-01

    Full Text Available In this paper the authors discuss risk quantification methods and the evaluation of risks and decision parameters used for deciding on the ranking of critical items, for prioritization of condition monitoring based risk and reliability centered maintenance (CBRRCM). As time passes, any equipment or product degrades into lower effectiveness and the rate of failure or malfunctioning increases, thereby lowering the reliability. Thus, with the passage of time or a number of active tests or periods of work, the reliability of the product or the system may fall to a low value known as a threshold value, below which the reliability should not be allowed to dip. Hence, it is necessary to fix the normal basis for determining the appropriate points in the product life cycle where predictive preventive maintenance may be applied, so that the reliability (the probability of successful functioning) can be enhanced, preferably to its original value, by reducing the failure rate and increasing the mean time between failures. This is very important for defence applications, where reliability is of prime importance. An attempt is made to develop a mathematical model for risk assessment and for ranking the critical items. Based on the likeliness coefficient β1 and risk coefficient β2, ranking of the sub-systems can be modelled and used for CBRRCM. Defence Science Journal, Vol. 64, No. 4, July 2014, pp. 378-384, DOI:http://dx.doi.org/10.14429/dsj.64.6366
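
    As a toy illustration of the ranking step, the snippet below combines a likeliness coefficient (beta1) and a risk coefficient (beta2) into a single score and orders sub-systems by it. The product rule and all values are assumptions for illustration, not the paper's exact model.

      # Hypothetical (beta1, beta2) pairs for a few sub-systems.
      subsystems = {
          "hydraulic pump": (0.7, 0.90),
          "gearbox":        (0.4, 0.80),
          "cooling fan":    (0.6, 0.30),
          "control unit":   (0.2, 0.95),
      }

      scores = {name: b1 * b2 for name, (b1, b2) in subsystems.items()}
      for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), 1):
          print(f"{rank}. {name}: risk score = {score:.2f}")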

  5. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  6. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium, balance between vegetation and climate, and non-equilibrium, water added through irrigation. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use, when soil salinity is not important and 66% in saline lands.
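
    The July figures quoted above can be checked with simple arithmetic; the snippet below reproduces the 46% per-occurrence ratio and converts both regimes to an average daily application rate (an added comparison, not one reported in the abstract).

      spray_mm, spray_hours = 1.3, 24.6    # spray irrigation: mm per occurrence, hours between occurrences
      drip_mm, drip_hours = 0.6, 45.6      # drip irrigation

      print(drip_mm / spray_mm)            # ~0.46: drip applies 46% of spray per occurrence
      print(spray_mm / spray_hours * 24)   # average mm per day, spray
      print(drip_mm / drip_hours * 24)     # average mm per day, drip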

  7. Innovative Product Design Based on Customer Requirement Weight Calculation Model

    Institute of Scientific and Technical Information of China (English)

    Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang

    2010-01-01

    In the processes of product innovation and design, it is important for designers to find and capture the customer's focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distances weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the instance of titanium sponge production, the customer requirement weight calculation model is validated. By the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.

  8. Fuzzy Based Evaluation of Software Quality Using Quality Models and Goal Models

    Directory of Open Access Journals (Sweden)

    Arfan Mansoor

    2015-09-01

    Full Text Available Software quality requirements are an essential part of the success of software development. Defined and guaranteed quality in software development requires identifying, refining, and predicting quality properties by appropriate means. Goal models of goal oriented requirements engineering (GORE) and quality models are useful for modelling of functional goals as well as of quality goals. Once the goal models are obtained representing the functional requirements and integrated quality goals, there is a need to evaluate each functional requirement arising from functional goals and each quality requirement arising from quality goals. The process consists of two main parts. In the first part, the goal models are used to evaluate functional goals. The leaf level goals are used to establish the evaluation criteria. Stakeholders are also involved to contribute their opinions about the importance of each goal (functional and/or quality goal). Stakeholder opinions are then converted into quantifiable numbers using triangular fuzzy numbers (TFNs). After applying the defuzzification process on the TFNs, the scores (weights) are obtained for each goal. In the second part, specific quality goals are identified and refined/tailored based on existing quality models, and their evaluation is performed similarly using TFNs and by applying the defuzzification process. The two-step process helps to evaluate each goal based on stakeholder opinions and to evaluate the impact of quality requirements. It also helps to evaluate the relationships among functional goals and quality goals. The process is described and applied to the 'cyclecomputer' case study.
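
    The TFN step outlined above can be sketched in a few lines: map linguistic stakeholder opinions to triangular fuzzy numbers, aggregate them, and defuzzify to a crisp weight. The linguistic scale and the centroid defuzzification below are assumptions chosen for illustration, not necessarily those used in the paper.

      SCALE = {                           # (low, mid, high) triangular fuzzy numbers
          "low":    (0.00, 0.00, 0.25),
          "medium": (0.25, 0.50, 0.75),
          "high":   (0.50, 0.75, 1.00),
          "vhigh":  (0.75, 1.00, 1.00),
      }

      def aggregate(opinions):
          # Component-wise average of the stakeholders' TFNs.
          tfns = [SCALE[o] for o in opinions]
          return tuple(sum(t[i] for t in tfns) / len(tfns) for i in range(3))

      def defuzzify(tfn):
          # Centroid of a triangular fuzzy number.
          return sum(tfn) / 3.0

      goal_opinions = {                   # hypothetical stakeholder votes per goal
          "usability":   ["high", "vhigh", "medium"],
          "performance": ["medium", "medium", "high"],
      }
      for goal, opinions in goal_opinions.items():
          print(goal, round(defuzzify(aggregate(opinions)), 3))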

  9. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors' aim of their research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio... The paper presents a holistic approach to eliciting, analyzing, and modelling socially-oriented requirements by combining a particular form of ethnographic technique, cultural probes, with Agent Oriented Software Engineering notations to model these requirements. This paper focuses on examining the value of maintaining...

  10. Requirements of Integrated Design Teams While Evaluating Advanced Energy Retrofit Design Options in Immersive Virtual Environments

    Directory of Open Access Journals (Sweden)

    Xue Yang

    2015-12-01

    Full Text Available One of the significant ways to save energy use in buildings is to implement advanced energy retrofits in existing buildings. Improving the energy performance of buildings through advanced energy retrofitting requires a clear understanding of the cost and energy implications of design alternatives from various engineering disciplines when different retrofit options are considered. The communication of retrofit design alternatives and their energy implications is essential in the decision-making process, as it affects the final retrofit selections and hence the energy efficiency of the retrofitted buildings. The objective of the research presented here was to identify a generic list of information requirements that need to be shared and collectively analyzed by integrated design teams during advanced energy retrofit design review meetings held in immersive settings. While identifying such requirements, the authors used an immersive environment based iterative requirements elicitation approach. The technology was used as a means to better identify the information requirements of integrated design teams to be analyzed as a group. This paper provides findings on the information requirements of integrated design teams when evaluating retrofit options in immersive virtual environments. The information requirements were identified through interactions with sixteen experts in the design and energy modeling domains, and validated with another group of participants consisting of six design experts who were experienced in integrated design processes. Industry practitioners can use the findings in deciding what information to share with integrated design team members during design review meetings that utilize immersive virtual environments.

  11. 14 CFR 325.13 - Environmental evaluations and energy information not required.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 (2010-01-01): Environmental evaluations and energy information not required. Notwithstanding any provision of part 312 or part... environmental evaluation or energy information with the application...

  12. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models enabling their use for business process simulation.

  13. Requirements for Evaluation on Drug-medical Device

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The administration of combination products in the U.S. Food and Drug Administration (FDA) is analyzed and summarized in this paper. Furthermore, the technical evaluation of drug-medical device combination products in the State Food and Drug Administration of China (SFDA) is also illustrated. Meanwhile, this paper discusses how to promote the development of drug-medical device combination products in terms of administration and technical evaluation.

  14. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  15. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics.

    Science.gov (United States)

    Nguyen, Tht; Mouksassi, M-S; Holford, N; Al-Huniti, N; Freedman, I; Hooker, A C; John, J; Karlsson, M O; Mould, D R; Pérez Ruixo, J J; Plan, E L; Savic, R; van Hasselt, Jgc; Weber, B; Zhou, C; Comets, E; Mentré, F

    2017-02-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used.
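
    As a generic illustration of the kind of numerical metrics such evaluations report alongside the graphics, the snippet below computes bias, RMSE and relative bias between observations and model predictions; the tutorial's own metric definitions may differ, so treat this purely as a sketch with invented values.

      import numpy as np

      observed = np.array([12.1, 8.4, 15.0, 9.8, 11.2])     # hypothetical observations
      predicted = np.array([11.5, 9.0, 14.2, 10.5, 11.0])   # hypothetical model predictions

      errors = predicted - observed
      bias = errors.mean()                                  # mean prediction error
      rmse = np.sqrt((errors ** 2).mean())                  # root mean squared error
      rel_bias = (errors / observed).mean() * 100           # mean relative error, percent

      print(f"bias = {bias:.2f}, RMSE = {rmse:.2f}, relative bias = {rel_bias:.1f}%")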

  16. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    Science.gov (United States)

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052

  17. Documentation in Evaluation Research: Managerial and Scientific Requirements.

    Science.gov (United States)

    Pollard, William E.; And Others

    1985-01-01

    The role of documentation in the planning and control functions of project management is reviewed. The importance of documentation in the assessment of research quality with respect to objectivity, validity, and replicability is discussed. An outline of documentation required in different phases of research projects is provided. (Author/DWH)

  18. Meta-Model and UML Profile for Requirements Management of Software and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2011-01-01

    Full Text Available Software and embedded system companies today encounter problems related to requirements management tool integration, incorrect tool usage, and lack of traceability. This is due to the use of tools with no clear meta-model and semantics to communicate requirements between different stakeholders. This paper presents a comprehensive meta-model for requirements management. The focus is on the software and embedded system domains. The goal is to define generic requirements management domain concepts and abstract interfaces between requirements management and system development. This leads to a portable requirements management meta-model which can be adapted to various system modeling languages. The created meta-model is prototyped by translating it into a UML profile. The profile is imported into a UML tool which is used for rapid evaluation of meta-model concepts in practice. The developed profile is associated with a proof-of-concept report generator tool that automatically produces up-to-date documentation from the models in the form of web pages. The profile is adopted to create an example model of an embedded system requirement specification which is built with the profile.

  19. Training evaluation models: Theory and applications

    OpenAIRE

    Carbone, V.; MORVILLO, A

    2002-01-01

    This chapter has the following aims: 1. Compare the various conceptual models for evaluation, identifying their strengths and weaknesses; 2. Define an evaluation model consistent with the aims and constraints of the fit project; 3. Describe, in critical fashion, operative tools for evaluating training which are reliable, flexible and analytical.

  20. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper)

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    Full Text Available It is well known that undiscovered errors in a requirements specification are extremely expensive to fix when they are discovered in the software maintenance phase. Errors in the requirements phase can be reduced through validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, execution and reasoning over a logic-based requirements specification can be very slow. An effective way to improve performance is to execute and reason over the specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A data dependency analysis technique is first applied to the logic-based specification to find all the mode combinations that exist within a specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. The new execution model can find a failure in the deepest node of the search tree at an early stage of the evaluation, and can therefore reduce the total number of nodes searched in the tree, the total number of processes that need to be generated, and the total number of communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  1. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing their non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  2. SMC Standard: Evaluation and Test Requirements for Liquid Rocket Engines

    Science.gov (United States)

    2017-07-26

    Company, 2002. 23. NASA SP-8123, Liquid Rocket Lines, Bellows, Flexible Hoses, and Filters, National Aeronautics and Space Administration, April 1977...boundaries established by design requirements. For example, the test screens for malfunctions, failure to execute, sequence of action, interruption...Test: A static load or pressure test performed as an acceptance workmanship screen to prove the structural integrity of a unit or assembly. Gives

  3. Recommendations concerning energy information model documentation, public access, and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Wood, D.O.; Mason, M.J.

    1979-10-01

    A review is presented of the Energy Information Administration (EIA) response to Congressional and management concerns, relating specifically to energy information system documentation, public access to EIA systems, and scientific/peer evaluation. The relevant organizational and policy responses of EIA are discussed. An analysis of the model development process and approaches to, and organization of, model evaluation is presented. Included is a survey of model evaluation studies. A more detailed analysis of the origins of the legislated documentation and public access requirements is presented in Appendix A, and the results of an informal survey of other agency approaches to public access and evaluation is presented in Appendix B. Appendix C provides a survey of non-EIA activities relating to model documentation and evaluation. Twelve recommendations to improve EIA's procedures for energy information system documentation, evaluation activities, and public access are determined. These are discussed in detail. (MCW)

  4. Formal Requirements Modeling for Reactive Systems with Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon

    This dissertation presents the contributions of seven publications all concerned with the application of Coloured Petri Nets (CPN) to requirements modeling for reactive systems. The publications are introduced along with relevant background material and related work, and their contributions...... interface composed of recognizable artifacts and activities. The presentation of the three publications related to Use Cases is followed by the presentation of a publication formalizing some of the guidelines applied for structuring the CPN requirements models, namely the guidelines that make it possible...... activity. The traces are automatically recorded during execution of the model. The second publication presents a formally specified framework for automating a large part of the tasks related to integrating Problem Frames with CPN. The framework is specified in VDM++, and allows the modeler to automatically...

  5. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  6. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    Science.gov (United States)

    2006-11-01

    material name (for example, an HY80 steel) plus additional material requirements (heat treatment, etc.). Creation of a more detailed description of the data... Typical Stress-Strain Curve for Steel (adapted from Ref 59)... structures are steel, aluminum and composites. The structural components that make up a global FEA model drive the fidelity of the model. For example

  7. Finds in Testing Experiments for Model Evaluation

    Institute of Scientific and Technical Information of China (English)

    WU Ji; JIA Xiaoxia; LIU Chang; YANG Haiyan; LIU Chao

    2005-01-01

    To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.

  8. Requirements for next generation global flood inundation models

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Smith, A.; Sampson, C. C.

    2016-12-01

    In this paper we review the current status of global hydrodynamic models for flood inundation prediction and highlight recent successes and current limitations. Building on this analysis we then go on to consider what is required to develop the next generation of such schemes and show that to achieve this a number of fundamental science problems will need to be overcome. New data sets and new types of analysis will be required, and we show that these will only partially be met by currently planned satellite missions and data collection initiatives. A particular example is the quality of available global Digital Elevation data. The current best data set for flood modelling, SRTM, is only available at a relatively modest 30m resolution, contains pixel-to-pixel noise of 6m and is corrupted by surface artefacts. Creative processing techniques have sought to address these issues with some success, but fundamentally the quality of the available global terrain data limits flood modelling and needs to be overcome. Similar arguments can be made for many other elements of global hydrodynamic models including their bathymetry data, boundary conditions, flood defence information and model validation data. We therefore systematically review each component of global flood models and document whether planned new technology will solve current limitations and, if not, what exactly will be required to do so.

  9. Differential program evaluation model in child protection.

    Science.gov (United States)

    Lalayants, Marina

    2012-01-01

    Increasingly, attention has been focused on the degree to which social programs have effectively and efficiently delivered services. Using the differential program evaluation model by Tripodi, Fellin, and Epstein (1978) and by Bielawski and Epstein (1984), this paper described the application of this model to evaluating a multidisciplinary clinical consultation practice in child protection. This paper discussed the uses of the model by demonstrating them through the four stages of program initiation, contact, implementation, and stabilization. This organizational case study made a contribution to the model by introducing essential and interrelated elements of a "practical evaluation" methodology in evaluating social programs, such as a participatory evaluation approach; learning, empowerment and sustainability; and a flexible individualized approach to evaluation. The study results demonstrated that by applying the program development model, child-protective administrators and practitioners were able to evaluate the existing practices and recognize areas for program improvement.

  10. 34 CFR 403.191 - What are the requirements for program evaluation?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What are the requirements for program evaluation? 403... program evaluation? (a)(1) Beginning in the 1992-1993 school year, each recipient of financial assistance... program listed in § 403.130 to support the cost of conducting the evaluation required under paragraphs...

  11. 49 CFR 40.285 - When is a SAP evaluation required?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false When is a SAP evaluation required? 40.285 Section... § 40.285 When is a SAP evaluation required? (a) As an employee, when you have violated DOT drug and... unless you complete the SAP evaluation, referral, and education/treatment process set forth in this...

  12. Laboratory evaluation of dynamic traffic assignment systems: Requirements, framework, and system design

    Energy Technology Data Exchange (ETDEWEB)

    Miaou, S.-P.; Pillai, R.S.; Summers, M.S.; Rathi, A.K. [Oak Ridge National Lab., TN (United States); Lieu, H.C. [Federal Highway Administration, McLean, VA (United States). Intelligent Systems Div.

    1997-01-01

    The success of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) depends on the availability and dissemination of timely and accurate estimates of current and emerging traffic network conditions. Real-time Dynamic Traffic Assignment (DTA) systems are being developed to provide the required timely information. The DTA systems will provide faithful and coherent real-time, pre-trip, and en-route guidance/information which includes routing, mode, and departure time suggestions for use by travelers, ATIS, and ATMS. To ensure the credibility and deployment potential of such DTA systems, an evaluation system supporting all phases of DTA system development has been designed and presented in this paper. This evaluation system is called the DTA System Laboratory (DSL). A major component of the DSL is a ground-truth simulator, the DTA Evaluation System (DES). The DES is envisioned to be a virtual representation of a transportation system in which ATMS and ATIS technologies are deployed. It simulates the driving and decision-making behavior of travelers in response to ATIS and ATMS guidance, information, and control. This paper presents the major evaluation requirements for DTA systems, a modular modeling framework for the DES, and a distributed DES design. The modeling framework for the DES is modular, meets the requirements, can be assembled using both legacy and independently developed modules, and can be implemented as either a single process or a distributed system. The distributed design is extendible, provides for the optimization of distributed performance, and uses object-oriented design within each distributed component. A status report on the development of the DES and other research applications is also provided.

  14. Evaluation of wastewater treatment requirements for thermochemical biomass liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, D C [Pacific Northwest Lab., Richland, WA (United States)

    1992-04-01

    Biomass can provide a substantial energy source. Liquids are preferred for use as transportation fuels because of their high energy density and handling ease and safety. Liquid fuel production from biomass can be accomplished by any of several different processes including hydrolysis and fermentation of the carbohydrates to alcohol fuels, thermal gasification and synthesis of alcohol or hydrocarbon fuels, direct extraction of biologically produced hydrocarbons such as seed oils or algae lipids, or direct thermochemical conversion of the biomass to liquids and catalytic upgrading to hydrocarbon fuels. This report discusses direct thermochemical conversion to achieve biomass liquefaction and the requirements for wastewater treatment inherent in such processing. 21 refs.

  15. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  17. Mesoscale to microscale wind farm flow modeling and evaluation

    DEFF Research Database (Denmark)

    Sanz Rodrigo, Javier; Chávez Arroyo, Roberto Aurelio; Moriarty, Patrick

    2017-01-01

    of meteorological and wind engineering flow models and the definition of a formal model evaluation methodology, is a strong area of research for the next generation of wind conditions assessment and wind farm and wind turbine design tools. Some fundamental challenges are identified in order to guide future research...... design tools and meteorological models. The challenge is how to build the bridge between atmospheric and wind engineering model communities and how to establish a comprehensive evaluation process that identifies relevant physical phenomena for wind energy applications with modeling and experimental...... requirements. A framework for model verification, validation, and uncertainty quantification is established to guide this process by a systematic evaluation of the modeling system at increasing levels of complexity. In terms of atmospheric physics, 'building the bridge' means developing models for the so...

  18. Models of protein and amino acid requirements for cattle

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2015-03-01

    Full Text Available Protein supply and requirements by ruminants have been studied for more than a century. These studies led to the accumulation of lots of scientific information about digestion and metabolism of protein by ruminants as well as the characterization of the dietary protein in order to maximize animal performance. During the 1980s and 1990s, when computers became more accessible and powerful, scientists began to conceptualize and develop mathematical nutrition models, and to program them into computers to assist with ration balancing and formulation for domesticated ruminants, specifically dairy and beef cattle. The most commonly known nutrition models developed during this period were the National Research Council (NRC) in the United States, Agricultural Research Council (ARC) in the United Kingdom, Institut National de la Recherche Agronomique (INRA) in France, and the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Others were derivative works from these models with different degrees of modifications in the supply or requirement calculations, and the modeling nature (e.g., static or dynamic, mechanistic, or deterministic). Circa the 1990s, most models adopted the metabolizable protein (MP) system over the crude protein (CP) and digestible CP systems to estimate the supply of MP and the factorial system to calculate the MP required by the animal. The MP system included two portions of protein (i.e., the rumen-undegraded dietary CP, RUP, and the contributions of microbial CP, MCP) as the main sources of MP for the animal. Some models would explicitly account for the impact of dry matter intake (DMI) on the MP required for maintenance (MPm; e.g., the Cornell Net Carbohydrate and Protein System, CNCPS, and the Dutch system, DVE/OEB), while others would simply account for scurf, urinary, metabolic fecal, and endogenous contributions independently of DMI. All models included milk yield and its components in estimating MP required for lactation

  19. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation

    Directory of Open Access Journals (Sweden)

    Jisun Lee

    2015-07-01

    Full Text Available In this study, simulation tests for gravity gradient referenced navigation (GGRN are conducted to verify the effects of various factors such as database (DB and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN. In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometer are available.

  20. A Required Course in the Development, Implementation, and Evaluation of Clinical Pharmacy Services

    Science.gov (United States)

    Kamal, Khalid M.; Berdine, Hildegarde J.

    2008-01-01

    Objective To develop, implement, and assess a required pharmacy practice course to prepare pharmacy students to develop, implement, and evaluate clinical pharmacy services using a business plan model. Design Course content centered around the process of business planning and pharmacoeconomic evaluations. Selected business planning topics included literature evaluation, mission statement development, market evaluation, policy and procedure development, and marketing strategy. Selected pharmacoeconomic topics included cost-minimization analysis, cost-benefit analysis, cost-effectiveness analysis, cost-utility analysis, and health-related quality of life (HRQoL). Assessment methods included objective examinations, student participation, performance on a group project, and peer evaluation. Assessment One hundred fifty-three students were enrolled in the course. The mean scores on the objective examinations (100 points per examination) ranged from 82 to 85 points, with 25%-35% of students in the class scoring over 90, and 40%-50% of students scoring from 80 to 89. The mean scores on the group project (200 points) and classroom participation (50 points) were 183.5 and 46.1, respectively. The mean score on the peer evaluation was 30.8, with scores ranging from 27.5 to 31.7. Conclusion The course provided pharmacy students with the framework necessary to develop and implement evidence-based disease management programs and to assure efficient, cost-effective utilization of pertinent resources in the provision of patient care. PMID:19214263

  1. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...... on their insight on design risks and wide spread application. A pilot case study has been performed with a washing machine in using these methods to assess design risks, following a reverse engineering approach. The study has shown the methods can be initiated at early design stages, but cannot be concluded...

  2. Evaluating uncertainty in simulation models

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M.D.; Beckman, R.J.; Morrison, J.D.; Upton, S.C.

    1998-12-01

    The authors discussed some directions for research and development of methods for assessing simulation variability, input uncertainty, and structural model uncertainty. Variance-based measures of importance for input and simulation variables arise naturally when using the quadratic loss function of the difference between the full model prediction y and the restricted prediction ỹ. They concluded that generic methods for assessing structural model uncertainty do not now exist. However, methods to analyze structural uncertainty for particular classes of models, like discrete event simulation models, may be attainable.
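
    The variance-based importance measures mentioned here can be illustrated with a small Monte Carlo sketch: the first-order importance of an input is the variance of the conditional expectation of the output given that input, divided by the total output variance. The toy simulator, binning scheme, and sample size below are arbitrary placeholders, not the authors' method.

```python
# Monte Carlo sketch of a variance-based (first-order) importance measure:
# importance_i = Var[ E(y | x_i) ] / Var(y), estimated by binning on x_i.
# The test function and sample size are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(-1, 1, size=(n, 3))
y = 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * rng.normal(size=n)  # toy simulator

def first_order_importance(xi, y, bins=50):
    """Variance of the binned conditional mean of y given xi, over total Var(y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"input {i}: first-order importance ~ {first_order_importance(x[:, i], y):.3f}")
```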

  3. Modeling requirements for in situ vitrification. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  4. Required experimental accuracy to select between supersymmetrical models

    Indian Academy of Sciences (India)

    David Grellscheid

    2004-03-01

    We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).

  5. Systems Evaluation Methods, Models, and Applications

    CERN Document Server

    Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing

    2011-01-01

    A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method. Unique

  6. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    Science.gov (United States)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensible pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.
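
    As a rough ideal-gas illustration of why such bounds arise (this is not the authors' model), the sketch below evaluates the pressurant mass needed to fill the expelled liquid volume at two limiting gas temperatures: the warm supply temperature (zero energy exchange, least mass) and the cold liquid temperature (thermal equilibrium, most mass). All property values are placeholders.

```python
# Ideal-gas illustration of bounding pressurant mass for liquid expulsion.
# Not the authors' model: it simply evaluates m = p*V*M/(R*T) at a warm
# (no-energy-exchange) and a cold (thermal-equilibrium) gas temperature.
R = 8.314            # J/(mol*K)
M_HE = 4.0026e-3     # kg/mol, helium pressurant (assumed)
p_tank = 300e3       # Pa, tank pressure (placeholder)
V_expelled = 5.0     # m^3 of liquid displaced (placeholder)
T_supply = 300.0     # K, pressurant supply temperature (placeholder)
T_liquid = 20.0      # K, liquid hydrogen temperature (placeholder)

def pressurant_mass(temperature):
    """Mass of ideal gas filling V_expelled at p_tank and the given temperature."""
    return p_tank * V_expelled * M_HE / (R * temperature)

m_lower = pressurant_mass(T_supply)   # gas stays warm: least mass required
m_upper = pressurant_mass(T_liquid)   # gas chilled to liquid temperature: most mass
print(f"lower bound ~ {m_lower:.2f} kg, upper bound ~ {m_upper:.2f} kg, "
      f"collapse factor ~ {m_upper / m_lower:.1f}")
```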

  7. A commuting generation model requiring only aggregated data

    CERN Document Server

    Lenormand, Maxime; Gargiulo, Floriana

    2011-01-01

    We recently proposed, in (Gargiulo et al., 2011), an innovative stochastic model with only one parameter to calibrate. It reproduces the complete network by an iterative process stochastically choosing, for each commuter living in a municipality of a region, a workplace in the region. The choice is made considering the job offer in each municipality of the region and the distance to all possible destinations. The model is quite effective if the region is sufficiently autonomous in terms of job offers. However, calibrating the model or verifying this autonomy requires data or expertise which are not necessarily available. Moreover, the region may not be autonomous. In the present work, we overcome these limitations by extending the geographical base of the commuters' job search to the outside of the region and changing the form of the deterrence function. We also found a law to calibrate the improved model which does not require data.
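
    A minimal sketch of this class of commuting-generation model is given below; it is not the authors' exact algorithm. Each commuter draws a destination with probability proportional to each municipality's remaining job offer times a distance-deterrence term; the exponential deterrence form, the parameter value, and all input numbers are illustrative assumptions.

```python
# Sketch of a stochastic commuting-generation step: each commuter chooses a
# destination with probability proportional to (remaining jobs) * deterrence(distance).
# The exponential deterrence form and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_mun = 5
jobs = np.array([500, 300, 200, 100, 50], dtype=float)   # job offer per municipality
dist = rng.uniform(1, 40, size=(n_mun, n_mun))           # km, placeholder distances
np.fill_diagonal(dist, 2.0)
commuters_per_mun = [200, 150, 100, 80, 60]
beta = 0.1                                               # deterrence parameter

flows = np.zeros((n_mun, n_mun), dtype=int)
for origin, n_commuters in enumerate(commuters_per_mun):
    for _ in range(n_commuters):
        weights = jobs * np.exp(-beta * dist[origin])
        p = weights / weights.sum()
        dest = rng.choice(n_mun, p=p)
        flows[origin, dest] += 1
        jobs[dest] = max(jobs[dest] - 1, 0.0)            # one job is now filled
print(flows)
```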

  8. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

    Full Text Available In this paper we introduced a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects the level to which the quality characteristics fulfil the customers' requirements. In order to prove the viability of the SQ model, a software tool was developed and applied to the evaluation of a health care services provider.
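
    The global index idea can be illustrated with a small weighted-matrix sketch in the QFD/SERVQUAL spirit; it is not the authors' exact formula, and all weights, relationship strengths, and scores are placeholders.

```python
# Sketch of a QFD/SERVQUAL-style global quality index: importance-weighted
# requirement dimensions propagated through a relationship matrix and normalized.
# Weights, relationship strengths, and measured scores are placeholder values.
import numpy as np

req_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # five requirement dimensions
# relationship matrix: rows = requirements, cols = three quality characteristics
rel = np.array([[9, 3, 1],
                [3, 9, 1],
                [1, 3, 9],
                [3, 3, 3],
                [9, 1, 3]], dtype=float)
char_scores = np.array([0.8, 0.6, 0.7])                  # measured performance in [0, 1]

char_importance = req_weights @ rel                      # importance of each characteristic
global_index = (char_importance * char_scores).sum() / char_importance.sum()
print(f"global quality index ~ {global_index:.2f}")      # 1.0 = requirements fully met
```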

  9. An evaluation framework for participatory modelling

    Science.gov (United States)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  10. IDENTIFYING OPERATIONAL REQUIREMENTS TO SELECT SUITABLE DECISION MODELS FOR A PUBLIC SECTOR EPROCUREMENT DECISION SUPPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Mohamed Adil

    2014-10-01

    Full Text Available Public sector procurement should be a transparent and fair process. Strict legal requirements are enforced on public sector procurement to make it a standardised process. To make fair decisions on selecting suppliers, a practical method which adheres to legal requirements is important. The research on which this paper is based aimed to identify a suitable Multi-Criteria Decision Analysis (MCDA) method for the specific legal and functional needs of the Maldivian Public Sector. To identify such operational requirements, a set of focus group interviews was conducted in the Maldives with public officials responsible for procurement decision making. Based on the operational requirements identified through the focus groups, a criteria-based evaluation of published MCDA methods is performed to identify the suitable methods for e-procurement decision making. This paper describes the identification of the operational requirements and the results of the evaluation to select suitable decision models for the Maldivian context.

  11. Modeling and verifying SoS performance requirements of C4ISR systems

    Institute of Scientific and Technical Information of China (English)

    Yudong Qi; Zhixue Wang; Qingchao Dong; Hongyue He

    2015-01-01

    System-of-systems (SoS) engineering involves a complex process of refining high-level SoS requirements into more detailed systems requirements and assessing the extent to which the performances of to-be systems may possibly satisfy SoS capability objectives. The key issue is how to model such requirements to automate the process of analysis and assessment. This paper suggests a meta-model that defines both functional and non-functional features of SoS requirements for command and control, communication, computer, intelligence, surveillance, and reconnaissance (C4ISR) systems. A domain-specific modeling language is defined by extending the unified modeling language (UML) constructs of class and association with fuzzy theory in order to model the fuzzy concepts of performance requirements. An efficiency evaluation function is introduced, based on Bézier curves, to predict the effectiveness of systems. An algorithm is presented to transform domain models in fuzzy UML into a requirements ontology in description logic (DL) so that requirements verification can be automated with a popular DL reasoner such as Pellet.
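
    The paper describes its efficiency evaluation function only as being based on Bézier curves; as an illustration of that underlying machinery, the sketch below evaluates a cubic Bézier curve with de Casteljau's algorithm and reads an effectiveness value off it. The control points and the performance-to-effectiveness interpretation are placeholders, not taken from the paper.

```python
# Sketch of evaluating a cubic Bézier curve (de Casteljau's algorithm), the kind
# of curve the paper bases its efficiency evaluation function on.
# Control points and their interpretation are placeholders, not from the paper.
def de_casteljau(points, t):
    """Evaluate a Bézier curve defined by `points` at parameter t in [0, 1]."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# x = normalized performance level, y = assessed effectiveness
control = [(0.0, 0.0), (0.3, 0.1), (0.6, 0.9), (1.0, 1.0)]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = de_casteljau(control, t)
    print(f"performance ~ {x:.2f} -> effectiveness ~ {y:.2f}")
```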

  12. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that described procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  13. Using multifractals to evaluate oceanographic model skill

    Science.gov (United States)

    Skákala, Jozef; Cazenave, Pierre W.; Smyth, Timothy J.; Torres, Ricardo

    2016-08-01

    We are in an era of unprecedented data volumes generated from observations and model simulations. This is particularly true from satellite Earth Observations (EO) and global scale oceanographic models. This presents us with an opportunity to evaluate large-scale oceanographic model outputs using EO data. Previous work on model skill evaluation has led to a plethora of metrics. The paper defines two new model skill evaluation metrics. The metrics are based on the theory of universal multifractals and their purpose is to measure the structural similarity between the model predictions and the EO data. The two metrics have the following advantages over the standard techniques: (a) they are scale-free and (b) they carry important part of information about how model represents different oceanographic drivers. Those two metrics are then used in the paper to evaluate the performance of the FVCOM model in the shelf seas around the south-west coast of the UK.
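
    The paper's metrics are defined within universal multifractal theory; the sketch below shows a simpler scaling diagnostic in the same spirit, comparing structure-function scaling exponents of a model field and an observed field. It is not the authors' metric, and the synthetic 1-D fields are placeholders.

```python
# Sketch of a scale-by-scale comparison between two 1-D fields: estimate
# structure-function scaling exponents zeta(q) for each and compare them.
# This is only in the spirit of multifractal skill metrics, not the paper's
# exact definitions; the synthetic fields are placeholders.
import numpy as np

def scaling_exponents(field, qs=(1, 2, 3), lags=(1, 2, 4, 8, 16, 32)):
    """Fit log S_q(lag) = zeta(q)*log(lag) + c, with S_q = mean |increment|^q."""
    zetas = []
    for q in qs:
        logs = [np.log(np.mean(np.abs(field[lag:] - field[:-lag]) ** q))
                for lag in lags]
        zetas.append(np.polyfit(np.log(lags), logs, 1)[0])
    return np.array(zetas)

rng = np.random.default_rng(3)
obs = np.cumsum(rng.normal(size=4096))          # stand-in for an observed transect
model = np.cumsum(rng.normal(size=4096)) * 1.1  # stand-in for a model transect

z_obs, z_mod = scaling_exponents(obs), scaling_exponents(model)
print("zeta(q) obs  :", np.round(z_obs, 2))
print("zeta(q) model:", np.round(z_mod, 2))
print("structural mismatch ~", np.round(np.abs(z_obs - z_mod).mean(), 3))
```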

  14. Mathematical Modeling of Programmatic Requirements for Yaws Eradication

    Science.gov (United States)

    Mitjà, Oriol; Fitzpatrick, Christopher; Asiedu, Kingsley; Solomon, Anthony W.; Mabey, David C.W.; Funk, Sebastian

    2017-01-01

    Yaws is targeted for eradication by 2020. The mainstay of the eradication strategy is mass treatment followed by case finding. Modeling has been used to inform programmatic requirements for other neglected tropical diseases and could provide insights into yaws eradication. We developed a model of yaws transmission varying the coverage and number of rounds of treatment. The estimated number of cases arising from an index case (basic reproduction number [R0]) ranged from 1.08 to 3.32. To have 80% probability of achieving eradication, 8 rounds of treatment with 80% coverage were required at low estimates of R0 (1.45). This requirement increased to 95% at high estimates of R0 (2.47). Extending the treatment interval to 12 months increased requirements at all estimates of R0. At high estimates of R0 with 12 monthly rounds of treatment, no combination of variables achieved eradication. Models should be used to guide the scale-up of yaws eradication. PMID:27983500
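
    The reported coverage and round requirements come from a transmission model; a toy stochastic version of that logic is sketched below. It is an SIS-type community in which each mass-treatment round cures a fraction equal to the coverage, and eradication probability is estimated over replicate runs. It is not the authors' model, and every parameter value is a placeholder.

```python
# Toy stochastic SIS-type simulation of mass-treatment rounds (not the authors'
# model): estimate the probability that infection is eliminated for a given
# coverage and number of rounds. All parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(7)

def eradication_probability(r0, coverage, rounds, n=5000, i0=50,
                            recovery=0.1, reps=200, steps_per_round=26):
    beta = r0 * recovery                    # per-step transmission rate implied by R0
    successes = 0
    for _ in range(reps):
        infected = i0
        for _ in range(rounds):
            for _ in range(steps_per_round):            # weekly steps between rounds
                new_inf = rng.binomial(n - infected,
                                       1 - np.exp(-beta * infected / n))
                recov = rng.binomial(infected, recovery)
                infected = min(n, max(0, infected + new_inf - recov))
            infected = rng.binomial(infected, 1 - coverage)   # mass-treatment round
            if infected == 0:
                successes += 1
                break
    return successes / reps

print(eradication_probability(r0=1.45, coverage=0.8, rounds=8))
```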

  15. Evaluation of the magnetic field requirements for nanomagnetic gene transfection

    Directory of Open Access Journals (Sweden)

    A. Fouriki

    2010-07-01

    Full Text Available The objective of this work was to examine the effects of magnet distance (and by proxy, field strength) on nanomagnetic transfection efficiency. Methods: Non-viral magnetic nanoparticle-based transfection was evaluated using both static and oscillating magnet arrays. Results: Fluorescence intensity (firefly luciferase) of transfected H292 cells showed no increase using a 96-well NdFeB magnet array when the magnets were 5 mm from the cell culture plate or nearer. At 6 mm and higher, fluorescence intensity decreased systematically. Conclusion: In all cases, fluorescence intensity was higher when using an oscillating array compared to a static array. For distances closer than 5 mm, the oscillating system also outperformed Lipofectamine 2000™.

  16. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    of their research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio......This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors’ aim...... of abstraction, ambiguous and open for conversations through the modelling process add richness to goal models, and communicate quality attributes of the interaction being modelled to the design phase, where this ambiguity is regarded as a resource for design....

  17. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  18. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    Science.gov (United States)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven, production control or manufacturing resource planning management technology and software package. The analysis was conducted by comparing SRB production control software requirements and conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  19. Evaluation of solar sludge drying alternatives by costs and area requirements.

    Science.gov (United States)

    Kurt, Mayıs; Aksoy, Ayşegül; Sanin, F Dilek

    2015-10-01

    Thermal drying is a common method to reach above 90% dry solids content (DS) in sludge. However, thermal drying requires a high amount of energy and can be expensive. A greenhouse solar dryer (GSD) can be a cost-effective substitute if the drying performance, which is typically 70% DS, can be increased by additional heat. In this study, the feasibility of a GSD supported with solar panels is evaluated as an alternative to thermal dryers for reaching 90% DS. Evaluations are based on capital and O&M costs as well as area requirements for 37 wastewater treatment plants (WWTPs) with various sludge production rates. Costs for the supported GSD system are compared to those of conventional and co-generation thermal dryers. To calculate the optimal costs associated with the drying system, an optimization model was developed in which area limitation was a constraint. Results showed that total cost was at a minimum when the DS in the GSD (DS(m,i)) was equal to the maximum attainable value (70% DS). On average, 58% of the total cost and 38% of total required area were associated with the GSD. Variations in costs for the 37 WWTPs were due to differences in initial DS (DS(i,i)) and sludge production rates, indicating the importance of dewatering to lower drying costs. For large plants, a GSD supported with solar panels provided savings in total costs, especially in the long term, when compared to conventional and co-generation thermal dryers.
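
    The kind of screening optimization described can be sketched as a small grid search: choose the dry-solids level reached in the GSD to minimize total cost subject to an area limit. The mass balance is standard, but all unit costs, specific areas, and the area limit below are placeholder assumptions, not values from the study.

```python
# Sketch of the screening optimization described: choose the dry-solids content
# reached in the greenhouse solar dryer (DS_m) to minimize total cost, subject to
# an available-area constraint. Unit costs, specific areas, and the area limit
# are placeholder assumptions.
import numpy as np

DS_IN, DS_TARGET = 0.20, 0.90       # initial and final dry-solids fractions
SOLIDS = 2000.0                     # t dry solids per year (placeholder)
GSD_AREA_PER_T = 2.0                # m2 per t water evaporated in the GSD (placeholder)
GSD_COST_PER_M2 = 60.0              # cost units per m2 of GSD (placeholder)
PANEL_COST_PER_T = 250.0            # cost units per t water removed with panel heat (placeholder)
AREA_LIMIT = 12000.0                # m2 available (placeholder)

best = None
for ds_m in np.arange(0.30, 0.701, 0.01):
    water_gsd = SOLIDS * (1.0 / DS_IN - 1.0 / ds_m)        # t water removed in the GSD
    water_panel = SOLIDS * (1.0 / ds_m - 1.0 / DS_TARGET)  # t water removed with panel heat
    area = water_gsd * GSD_AREA_PER_T
    if area > AREA_LIMIT:
        continue                                           # infeasible: exceeds area limit
    cost = area * GSD_COST_PER_M2 + water_panel * PANEL_COST_PER_T
    if best is None or cost < best[0]:
        best = (cost, ds_m, area)

print(f"min cost ~ {best[0]:,.0f} at DS_m ~ {best[1]:.2f} using {best[2]:,.0f} m2")
```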

  20. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    were promised and had at least one course failure. ... Training times: Student execution depends on TTT. TTT includes under-instruction (UI) time and ... (Steven W. Belcher with David L. Reese and Kletus S. Lawler, CNA, March 2016).

  1. Using a DSGE Model to Assess the Macroeconomic Effects of Reserve Requirements in Brazil

    OpenAIRE

    Waldyr Dutra Areosa; Christiano Arrigoni Coelho

    2013-01-01

    The goal of this paper is to present how a Dynamic Stochastic General Equilibrium (DSGE) model can be used by policy makers in the qualitative and quantitative evaluation of the macroeconomic impacts of two monetary policy instruments: (i) the short-term interest rate and (ii) the reserve requirements ratio. In our model, this last instrument affects the leverage of banks that have to deal with agency problems in order to raise funds from depositors. We estimated a modified version of Gertler and Karadi (2011)...

  3. 42 CFR 456.242 - UR plan requirements for medical care evaluation studies.

    Science.gov (United States)

    2010-10-01

    ...: Mental Hospitals Ur Plan: Medical Care Evaluation Studies § 456.242 UR plan requirements for medical care... medical care evaluation studies under paragraph (b)(1) of this section. (b) The UR plan must provide that... 42 Public Health 4 2010-10-01 2010-10-01 false UR plan requirements for medical care...

  4. 42 CFR 456.142 - UR plan requirements for medical care evaluation studies.

    Science.gov (United States)

    2010-10-01

    ...: Hospitals Ur Plan: Medical Care Evaluation Studies § 456.142 UR plan requirements for medical care... medical care evaluation studies under paragraph (b)(1) of this section. (b) The UR plan must provide that... 42 Public Health 4 2010-10-01 2010-10-01 false UR plan requirements for medical care...

  5. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available on unlabelled data, so that a ground truth does not exist and (b) "soft" (probabilistic) document clusters are created by state-of-the-art topic models, which complicates comparisons even when ground truth labels are available. Perplexity has often been used...
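
    Since the abstract refers to perplexity, a minimal sketch of how held-out perplexity is computed from per-token log-likelihoods is given below; the document token counts and log-likelihoods are placeholders standing in for what a trained topic model would produce.

```python
# Minimal sketch of held-out perplexity: exp(-(total log-likelihood)/(total tokens)).
# The per-document token counts and log-likelihoods are placeholder values,
# standing in for what a trained topic model would assign to held-out text.
import math

held_out = [
    # (number of tokens in document, total log-likelihood assigned to it)
    (120, -770.0),
    (95,  -610.5),
    (240, -1590.2),
]

total_tokens = sum(n for n, _ in held_out)
total_loglik = sum(ll for _, ll in held_out)
perplexity = math.exp(-total_loglik / total_tokens)
print(f"held-out perplexity ~ {perplexity:.1f}")   # lower is better
```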

  6. The case for applying an early-lifecycle technology evaluation methodology to comparative evaluation of requirements engineering research

    Science.gov (United States)

    Feather, Martin S.

    2003-01-01

    The premise of this paper is that there is a useful analogy between evaluation of proposed problem solutions and evaluation of requirements engineering research itself. Both of these application areas face the challenges of evaluation early in the lifecycle, of the need to consider a wide variety of factors, and of the need to combine inputs from multiple stakeholders in making these evaluations and subsequent decisions.

  8. TMDL MODEL EVALUATION AND RESEARCH NEEDS

    Science.gov (United States)

    This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...

  9. Evaluating the SharedCanvas Manuscript Data Model in CATCHPlus

    CERN Document Server

    Sanderson, Robert; Albritton, Benjamin; Van de Sompel, Herbert

    2011-01-01

    In this paper, we present the SharedCanvas model for describing the layout of culturally important, hand-written objects such as medieval manuscripts, which is intended to be used as a common input format to presentation interfaces. The model is evaluated using two collections from CATCHPlus not consulted during the design phase, each with their own complex requirements, in order to determine if further development is required or if the model is ready for general usage. The model is applied to the new collections, revealing several new areas of concern for user interface production and discovery of the constituent resources. However, the fundamental information modelling aspects of SharedCanvas and the underlying Open Annotation Collaboration ontology are demonstrated to be sufficient to cover the challenging new requirements. The distributed, Linked Open Data approach is validated as an important methodology to seamlessly allow simultaneous interaction with multiple repositories, and at the same time to faci...

  10. Decision tree approach to evaluating inactive uranium processing sites for liner requirements

    Energy Technology Data Exchange (ETDEWEB)

    Relyea, J.F.

    1983-03-01

    Recently, concern has been expressed about potential toxic effects of both radon emission and release of toxic elements in leachate from inactive uranium mill tailings piles. Remedial action may be required to meet disposal standards set by the states and the US Environmental Protection Agency (EPA). In some cases, a possible disposal option is the exhumation and reburial (either on site or at a new location) of tailings and reliance on engineered barriers to satisfy the objectives established for remedial actions. Liners under disposal pits are the major engineered barrier for preventing contaminant release to ground and surface water. The purpose of this report is to provide a logical sequence of action, in the form of a decision tree, which could be followed to show whether a selected tailings disposal design meets the objectives for subsurface contaminant release without a liner. This information can be used to determine the need and type of liner for sites exhibiting a potential groundwater problem. The decision tree is based on the capability of hydrologic and mass transport models to predict the movement of water and contaminants with time. The types of modeling capabilities and data needed for those models are described, and the steps required to predict water and contaminant movement are discussed. A demonstration of the decision tree procedure is given to aid the reader in evaluating the need for the adequacy of a liner.

  11. The Pantex Process model: Formulations of the evaluation planning module

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Dean A.; Lawton, Craig R.; List, George Fisher; Turnquist, Mark Alan

    1999-12-01

    This paper describes formulations of the Evaluation Planning Module that have been developed since its inception. This module is one of the core algorithms in the Pantex Process Model, a computerized model to support production planning in a complex manufacturing system at the Pantex Plant, a US Department of Energy facility. Pantex is responsible for three major DOE programs -- nuclear weapons disposal, stockpile evaluation, and stockpile maintenance -- using shared facilities, technicians, and equipment. The model reflects the interactions of scheduling constraints, material flow constraints, and the availability of required technicians and facilities.

  12. Evaluation of model fit in nonlinear multilevel structural equation modeling

    Directory of Open Access Journals (Sweden)

    Karin Schermelleh-Engel

    2014-03-01

    Full Text Available Evaluating model fit in nonlinear multilevel structural equation models (MSEM presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are nonnormally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of nonnormality, they were not yet investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, an interaction effect at the within-group level, an interaction effect at the between-group level, and a model with no interaction effects at both levels. For these models the number of groups, predictor correlation, and model misspecification was varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.

  13. Evaluation of model fit in nonlinear multilevel structural equation modeling.

    Science.gov (United States)

    Schermelleh-Engel, Karin; Kerwer, Martin; Klein, Andreas G

    2014-01-01

    Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, an interaction effect at the within-group level, an interaction effect at the between-group level, and a model with no interaction effects at both levels. For these models the number of groups, predictor correlation, and model misspecification was varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.
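
    As a generic illustration of the fit-comparison machinery underlying such analyses (not the robust statistic studied in the record), the sketch below computes an ordinary likelihood ratio test between two nested models from their maximized log-likelihoods; the log-likelihood values and the difference in degrees of freedom are hypothetical.

        # Minimal likelihood ratio test between two nested models (illustrative values).
        from scipy.stats import chi2

        loglik_restricted = -1523.4   # e.g. model without the interaction effect
        loglik_full = -1518.1         # e.g. model with the interaction effect
        df_diff = 2                   # difference in number of free parameters

        lr_stat = 2.0 * (loglik_full - loglik_restricted)
        p_value = chi2.sf(lr_stat, df_diff)

        print(f"LR statistic = {lr_stat:.2f}, df = {df_diff}, p = {p_value:.4f}")
        # A small p-value indicates the restricted model fits significantly worse.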

  14. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  15. Modeling and evaluation of information systems using coloured petri network

    Directory of Open Access Journals (Sweden)

    Ehsan Zamirpour

    2014-07-01

    Full Text Available Nowadays, with the growth of organizations and their affiliates, the importance of information systems has increased. Information systems in an organization must support both functional and non-functional requirements. Several software methodologies exist to support the functional requirements, but support for the second set of requirements has received little attention. The Software Performance Engineering (SPE) community tries to address this issue by presenting software methodologies that support both types of requirements. In this paper, we present a formal model for the evaluation of system performance based on a pragmatic model. Because it supports concurrency concepts, a Petri net is given higher priority than a queuing system. For mapping UML diagrams to a colored Petri net, we use an intermediate graph. The preliminary results indicate that the proposed model may save a significant amount of computation.

  16. SMV model-based safety analysis of software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Seong, Poong Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)], E-mail: phseong@kaist.ac.kr

    2009-02-15

    Fault tree analysis (FTA) is one of the most frequently applied safety analysis techniques when developing safety-critical industrial systems such as software-based emergency shutdown systems of nuclear power plants and has been used for safety analysis of software requirements in the nuclear industry. However, the conventional method for safety analysis of software requirements has several problems in terms of correctness and efficiency; the fault tree generated from natural language specifications may contain flaws or errors while the manual work of safety verification is very labor-intensive and time-consuming. In this paper, we propose a new approach to resolve problems of the conventional method; we generate a fault tree from a symbolic model verifier (SMV) model, not from natural language specifications, and verify safety properties automatically, not manually, by a model checker SMV. To demonstrate the feasibility of this approach, we applied it to shutdown system 2 (SDS2) of Wolsong nuclear power plant (NPP). In spite of subtle ambiguities present in the approach, the results of this case study demonstrate its overall feasibility and effectiveness.
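
    The record relies on fault tree analysis and SMV model checking; neither tool chain is reproduced here. Purely as a generic sketch of fault tree quantification, the code below computes the top-event probability of a small hypothetical AND/OR tree from assumed, independent basic-event probabilities (the event names and numbers are illustrative, not taken from SDS2).

        # Generic fault tree quantification sketch (hypothetical tree and probabilities).

        def or_gate(probs):
            """Probability that at least one of the independent input events occurs."""
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        def and_gate(probs):
            """Probability that all independent input events occur."""
            p = 1.0
            for q in probs:
                p *= q
            return p

        # Hypothetical basic-event probabilities
        p_sensor_fail = 1e-3
        p_logic_fail = 5e-4
        p_actuator_a = 2e-3
        p_actuator_b = 2e-3

        # Top event: the detection chain fails OR both redundant actuators fail
        p_detection = or_gate([p_sensor_fail, p_logic_fail])
        p_actuation = and_gate([p_actuator_a, p_actuator_b])
        p_top = or_gate([p_detection, p_actuation])

        print(f"Top-event probability: {p_top:.2e}")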

  17. Wastewater treatment models in teaching and training: the mismatch between education and requirements for jobs.

    Science.gov (United States)

    Hug, Thomas; Benedetti, Lorenzo; Hall, Eric R; Johnson, Bruce R; Morgenroth, Eberhard; Nopens, Ingmar; Rieger, Leiv; Shaw, Andrew; Vanrolleghem, Peter A

    2009-01-01

    As mathematical modeling of wastewater treatment plants has become more common in research and consultancy, a mismatch between education and requirements for model-related jobs has developed. There seems to be a shortage of skilled people, both in terms of quantity and in quality. In order to address this problem, this paper provides a framework to outline different types of model-related jobs, assess the required skills for these jobs and characterize different types of education that modelers obtain "in school" as well as "on the job". It is important to consider that education of modelers does not mainly happen in university courses and that the variety of model related jobs goes far beyond use for process design by consulting companies. To resolve the mismatch, the current connection between requirements for different jobs and the various types of education has to be assessed for different geographical regions and professional environments. This allows the evaluation and improvement of important educational paths, considering quality assurance and future developments. Moreover, conclusions from a workshop involving practitioners and academics from North America and Europe are presented. The participants stressed the importance of non-technical skills and recommended strengthening the role of realistic modeling experience in university training. However, this paper suggests that all providers of modeling education and support, not only universities, but also software suppliers, professional associations and companies performing modeling tasks are called to assess and strengthen their role in training and support of professional modelers.

  18. 42 CFR 418.205 - Special requirements for hospice pre-election evaluation and counseling services.

    Science.gov (United States)

    2010-10-01

    ... evaluation and counseling services. 418.205 Section 418.205 Public Health CENTERS FOR MEDICARE & MEDICAID... Services § 418.205 Special requirements for hospice pre-election evaluation and counseling services. (a... evaluation and counseling services as specified in § 418.304(d) may be made to a hospice on behalf of...

  19. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    Energy Technology Data Exchange (ETDEWEB)

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-11-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including costs estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a “living document” that will be modified over the course of the execution of this work.

  20. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to develop high quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from a UML requirements analysis model so that we can confirm the validity of input/output data for each page and page transition on the system by directly operating the prototype. We propose a mapping rule in which design information independent of each web application framework implementation is defined based on the requirements analysis model, so as to improve the traceability to the final product from the valid requirements analysis model. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  1. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  2. Life sciences research in space: The requirement for animal models

    Science.gov (United States)

    Fuller, C. A.; Philips, R. W.; Ballard, R. W.

    1987-01-01

    Use of animals in NASA space programs is reviewed. Animals are needed because life science experimentation frequently requires long-term controlled exposure to environments, statistical validation, invasive instrumentation or biological tissue sampling, tissue destruction, exposure to dangerous or unknown agents, or sacrifice of the subject. The availability and use of human subjects inflight is complicated by the multiple needs and demands upon crew time. Because only living organisms can sense, integrate and respond to the environment around them, the sole use of tissue culture and computer models is insufficient for understanding the influence of the space environment on intact organisms. Equipment for spaceborne experiments with animals is described.

  3. Specification of advanced safety modeling requirements (Rev. 0).

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H.; Tautges, T. J.

    2008-06-30

    The U.S. Department of Energy's Global Nuclear Energy Partnership has led to renewed interest in liquid-metal-cooled fast reactors for the purpose of closing the nuclear fuel cycle and making more efficient use of future repository capacity. However, the U.S. has not designed or constructed a fast reactor in nearly 30 years. Accurate, high-fidelity, whole-plant dynamics safety simulations will play a crucial role by providing confidence that component and system designs will satisfy established design limits and safety margins under a wide variety of operational, design basis, and beyond design basis transient conditions. Current modeling capabilities for fast reactor safety analyses have resulted from several hundred person-years of code development effort supported by experimental validation. The broad spectrum of mechanistic and phenomenological models that have been developed represents an enormous amount of institutional knowledge that needs to be maintained. Complicating this, the existing code architectures for safety modeling evolved from programming practices of the 1970s. This has led to monolithic applications with interdependent data models which require significant knowledge of the complexities of the entire code in order for each component to be maintained. In order to develop an advanced fast reactor safety modeling capability, the limitations of the existing code architecture must be overcome while preserving the capabilities that already exist. To accomplish this, a set of advanced safety modeling requirements is defined, based on modern programming practices, that focuses on modular development within a flexible coupling framework. An approach for integrating the existing capabilities of the SAS4A/SASSYS-1 fast reactor safety analysis code into the SHARP framework is provided in order to preserve existing capabilities while providing a smooth transition to advanced modeling capabilities. In doing this, the advanced fast reactor safety models

  4. [Evaluation model for municipal health planning management].

    Science.gov (United States)

    Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino

    2011-11-01

    This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.

  5. Modeling of Testability Requirement Based on Generalized Stochastic Petri Nets

    Institute of Scientific and Technical Information of China (English)

    SU Yong-ding; QIU Jing; LIU Guan-jun; QIAN Yan-ling

    2009-01-01

    Testability design is an effective way to realize fault detection and isolation. An important step is to determine testability figures of merit (TFOMs). Firstly, some factors influencing TFOMs are analyzed, such as the processes of system operation, maintenance and support, and fault detection and isolation. Secondly, a testability requirement analysis model is built based on a generalized stochastic Petri net (GSPN). Then, the system's reachable states are analyzed based on the model, a Markov chain isomorphic with the Petri net is constructed, a state transition matrix is created, and the system's steady-state probability is obtained. The relationship between steady-state availability and testability parameters can then be revealed and reasoned about. Finally, an example shows that the proposed method can determine TFOMs, such as the fault detection rate and fault isolation rate, effectively and reasonably.
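
    As a sketch of the final computational step described above (steady-state probabilities of the Markov chain that is isomorphic to the GSPN), the code below solves pi Q = 0 with the normalization sum(pi) = 1 for a small continuous-time chain; the three states and their transition rates are hypothetical, not taken from the paper.

        import numpy as np

        # Hypothetical generator (transition rate) matrix Q for a 3-state chain,
        # e.g. operating / under test / under repair. Rows sum to zero.
        Q = np.array([
            [-0.02,  0.015, 0.005],
            [ 0.50, -0.60,  0.10],
            [ 0.20,  0.00, -0.20],
        ])

        # Solve pi Q = 0 subject to sum(pi) = 1.
        n = Q.shape[0]
        A = np.vstack([Q.T, np.ones(n)])   # append the normalization constraint
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("Steady-state probabilities:", np.round(pi, 4))
        # In this toy example pi[0] plays the role of the steady-state availability.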

  6. [Utility-based model for determining functional requirement target values]

    Directory of Open Access Journals (Sweden)

    Cucuk Nur Rosyidi

    2012-01-01

    Full Text Available In a product design and development process, a designer faces the problem of deciding functional requirement (FR) target values. That decision is made under risk since it is taken in the early design phase using incomplete information. A utility function can be used to reflect the decision maker's attitude towards risk in making such a decision. In this research, we develop a utility-based model to determine FR target values using a quadratic utility function and information from Quality Function Deployment (QFD). A pencil design is used as a numerical example, with a quadratic utility function for each FR. The model can be applied to balance customer and designer interests in determining FR target values.

  7. Quality Evaluation Model for Map Labeling

    Institute of Scientific and Technical Information of China (English)

    FAN Hong; ZHANG Zuxun; DU Daosheng

    2005-01-01

    This paper discusses and sums up the basic criteria for guaranteeing labeling quality and abstracts four basic factors: conflict between labels, overlap of a label with map features, position priority, and the association of a label with its feature. By establishing a scoring system, a formalized four-factor quality evaluation model is constructed. Finally, this paper introduces the experimental results of applying the quality evaluation model to the automatic map labeling system MapLabel.

  8. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
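
    Purely as an illustration of the kind of metric the record surveys (this is not the paper's code), the sketch below computes two metrics commonly reported for student models, RMSE and log loss, from hypothetical predicted probabilities of correct answers.

        import numpy as np

        # Hypothetical data: observed correctness (0/1) and model-predicted
        # probabilities of a correct answer for ten student responses.
        observed = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
        predicted = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.55, 0.95, 0.1])

        rmse = np.sqrt(np.mean((observed - predicted) ** 2))

        eps = 1e-12  # guard against log(0)
        log_loss = -np.mean(observed * np.log(predicted + eps)
                            + (1 - observed) * np.log(1 - predicted + eps))

        print(f"RMSE = {rmse:.3f}, log loss = {log_loss:.3f}")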

  9. Developing a Kano-Based Evaluation Model for Innovation Design

    Directory of Open Access Journals (Sweden)

    Chang-Tzuoh Wu

    2015-01-01

    Full Text Available This research focuses on developing a psychology-based evaluation procedure for innovative design. The divergent thinking part of the main innovative design procedure discussed in this paper is the extensive QFD developed by the author (Wu). The major function of QFD is to identify the customers' requirements and their priorities. The proposed extensive QFD then helps transform the high-priority requirements into appropriate technical characteristics. According to the priorities of the product characteristics, the most important engineering parameters are identified as the key performances to redesign. For identifying the requirements and achieving an attractive design, we introduce the Kano model to construct the evaluation model. The proposed Kano-based evaluation procedure is mainly used in two stages of the innovative design. First, the evaluation process is used in the QFD stage to help identify attractive customer requirements; second, it is used in the extension stage to help assess concepts. The flowchart of the proposed innovative design procedure with psychology-based evaluation has also been developed. A case study, exercise equipment design, is adopted to explain and verify the feasibility of the proposed approach.

  10. SAPHIRE models and software for ASP evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  11. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed upon by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives developing tools for clinical information modeling identified, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.

  12. An emergency dispatch model considering the urgency of the requirement for reliefs in different disaster areas

    Directory of Open Access Journals (Sweden)

    Liu Sheng

    2015-11-01

    Full Text Available Abstract: Purpose: Frequent sudden-onset disasters, which threaten human survival and the development of society, force the public to pay increasing attention to emergency management. A challenging task in the process of emergency management is the emergency dispatch of reliefs. An emergency dispatch model considering the urgency of the requirement for reliefs in different disaster areas is proposed in this paper to dispatch reliefs reasonably and reduce the effect of sudden-onset disasters. Design/methodology/approach: Firstly, a quantitative assessment of the urgency of the requirement for reliefs in different disaster areas is carried out using an evaluation method based on Fuzzy Comprehensive Evaluation and improved Evidence Reasoning, which is proposed in this paper. Then, based on the quantitative results, an emergency dispatch model is proposed that aims to minimize the response time, the distribution cost and the unsatisfied rate of the requirement for reliefs, reflecting the requests of disaster areas under emergency: the urgency of the requirement, the economy of distribution and the equity of allocation. Finally, the Genetic Algorithm is improved based on adaptive crossover and mutation probability functions to solve the emergency dispatch model. Findings and Originality/value: A case in which the Y hydraulic power enterprise carries out emergency dispatch of reliefs under continuous sudden-onset heavy rain is given to illustrate the applicability of the emergency dispatch model proposed in this paper. The results show that the emergency dispatch model meets the distribution priority requirement of the disaster area with the higher urgency, so that reliefs are supplied in a more timely manner. Research limitations/implications: The emergency dispatch model faced with large-scale sudden-onset disasters is complex. The quantity of reliefs that a disaster area requires and the running time of vehicles are viewed as available information, and the problem
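
    The record does not give the adaptive probability functions themselves. As a sketch of one widely used scheme of this kind (an adaptation in the style of Srinivas and Patnaik, shown here only for illustration and not as the authors' formulation), crossover and mutation probabilities can be kept high for below-average individuals and reduced as fitness approaches the population maximum:

        # Illustrative adaptive crossover/mutation probabilities for a genetic algorithm.
        # f is the fitness of the better parent (crossover) or of the individual (mutation);
        # f_avg and f_max are the population mean and maximum fitness (maximization assumed).

        def adaptive_pc(f, f_avg, f_max, pc_max=0.9, pc_min=0.5):
            """Crossover probability: full rate below average, reduced near f_max."""
            if f_max == f_avg or f < f_avg:
                return pc_max
            return pc_max - (pc_max - pc_min) * (f - f_avg) / (f_max - f_avg)

        def adaptive_pm(f, f_avg, f_max, pm_max=0.10, pm_min=0.01):
            """Mutation probability: analogous adaptation for mutation."""
            if f_max == f_avg or f < f_avg:
                return pm_max
            return pm_max - (pm_max - pm_min) * (f - f_avg) / (f_max - f_avg)

        # Example: population with mean fitness 0.6 and best fitness 1.0
        print(adaptive_pc(0.9, 0.6, 1.0))   # above-average parent: reduced crossover rate
        print(adaptive_pm(0.55, 0.6, 1.0))  # below-average individual: full mutation rate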

  13. A Generic Evaluation Model for Semantic Web Services

    Science.gov (United States)

    Shafiq, Omair

    Semantic Web Services research has gained momentum over the last few years and by now several realizations exist. They are being used in a number of industrial use-cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Also, some efforts are being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers to orient their systems and solutions towards using Semantic Web Services. In this chapter, we have presented the requirements for the generic evaluation model for Semantic Web Services and further discussed the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.

  14. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during

  15. A GENERALIZATION OF TRADITIONAL KANO MODEL FOR CUSTOMER REQUIREMENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renáta Turisová

    2015-07-01

    Full Text Available Purpose: The theory of attractiveness determines the relationship between the technically achieved and the customer-perceived quality of product attributes. The most frequently used approach in the theory of attractiveness is the implementation of Kano's model. Many generalizations of that model exist which take into consideration various aspects and approaches focused on understanding customer preferences and identifying their priorities for a product on sale. The aim of this article is to outline another possible generalization of Kano's model. Methodology/Approach: The traditional Kano model captures the nonlinear relationship between the achieved quality attributes and customer requirements. The individual quality attributes are divided into three main categories: must-be, one-dimensional and attractive quality, and into two side categories: indifferent and reverse quality. A well-selling product has to contain the must-be attributes. It should contain as many one-dimensional attributes as possible. If there are also supplementary attractive attributes, the attractiveness of the entire product, from the viewpoint of the customer, rises sharply and nonlinearly, which has a direct positive impact on the decision of a potential customer when purchasing the product. In this article, we show that the inclusion of individual quality attributes of a product in the mentioned categories depends, among other things, on the life-cycle costs of the product, or respectively on the price of the product on the market. Findings: In practice, we often encounter the inclusion of products into different price categories: lower, middle and upper class. For a certain type of product the category is either directly declared by the producer (especially in the automotive industry), or is determined by the customer by means of an assessment of available market prices. To each of those groups of products different customer expectations can be assigned
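
    The classification step of a traditional Kano study can be illustrated with the standard Kano evaluation table, which maps a respondent's answers to the functional question ("How do you feel if the product has this attribute?") and the dysfunctional question ("...if it does not?") onto a category. The lookup below is a common textbook version, given only as background; it is not the generalization proposed in the article.

        # Simplified Kano evaluation table.
        # Answers: L=like, M=must-be, N=neutral, W=can live with, D=dislike
        # Categories: A=attractive, O=one-dimensional, M=must-be,
        #             I=indifferent, R=reverse, Q=questionable
        KANO_TABLE = {
            ("L", "L"): "Q", ("L", "M"): "A", ("L", "N"): "A", ("L", "W"): "A", ("L", "D"): "O",
            ("M", "L"): "R", ("M", "M"): "I", ("M", "N"): "I", ("M", "W"): "I", ("M", "D"): "M",
            ("N", "L"): "R", ("N", "M"): "I", ("N", "N"): "I", ("N", "W"): "I", ("N", "D"): "M",
            ("W", "L"): "R", ("W", "M"): "I", ("W", "N"): "I", ("W", "W"): "I", ("W", "D"): "M",
            ("D", "L"): "R", ("D", "M"): "R", ("D", "N"): "R", ("D", "W"): "R", ("D", "D"): "Q",
        }

        def classify(functional_answer, dysfunctional_answer):
            return KANO_TABLE[(functional_answer, dysfunctional_answer)]

        # Example: the customer likes having the attribute and dislikes its absence
        print(classify("L", "D"))   # -> "O" (one-dimensional)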

  16. Coastline modelling for nourishment strategy evaluation

    NARCIS (Netherlands)

    Huisman, B.J.A.; Wang, Z.B.; De Ronde, J.G.; Stronkhorst, J.; Sprengers, C.J.

    2013-01-01

    Coastal zone managers in the Netherlands require new dedicated tools for the assessment of the long-term impacts of coastal maintenance policies. The policies need to be evaluated on the impacts on multiple coastal functions in order to be able to optimize the performance of such strategies. This pa

  17. A method to evaluate response models

    NARCIS (Netherlands)

    Bruijnes, Merijn; Wapperom, Sjoerd; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Bickmore, Timothy; Marcella, Stacy; Sidner, Candace

    We are working towards computational models of mind of virtual characters that act as suspects in interview (interrogation) training of police officers. We implemented a model that calculates the responses of the virtual suspect based on theory and observation. We evaluated it by means of our test,

  18. Museum: Multidimensional web page segment evaluation model

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The evaluation of a web page with respect to a query is a vital task in the web information retrieval domain. This paper proposes the evaluation of a web page as a bottom-up process from the segment level to the page level. A model for evaluating the relevancy is proposed incorporating six different dimensions. An algorithm for evaluating the segments of a web page, using the above-mentioned six dimensions, is proposed. The benefits of fine-graining the evaluation process to the segment level instead of the page level are explored. The proposed model can be incorporated for various tasks like web page personalization, result re-ranking, mobile device page rendering etc.

  19. Global daily reference evapotranspiration modeling and evaluation

    Science.gov (United States)

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world
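
    A comparison of the kind reported above, correlating a station-based daily series with a co-located grid-cell series at daily and multi-day aggregations, can be sketched as follows; the two series here are synthetic stand-ins, not CIMIS or GDAS data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic daily reference ET series for one year (mm/day): a seasonal cycle
        # standing in for a station series, plus a noisier "grid cell" counterpart.
        days = 365
        station_et = 4.0 + 2.0 * np.sin(2 * np.pi * np.arange(days) / 365) \
                     + rng.normal(0, 0.4, days)
        grid_et = station_et + rng.normal(0, 0.5, days)

        def correlation(a, b):
            return np.corrcoef(a, b)[0, 1]

        r_daily = correlation(station_et, grid_et)

        # Aggregate to consecutive 10-day sums before correlating.
        n_blocks = days // 10
        r_10day = correlation(
            station_et[:n_blocks * 10].reshape(n_blocks, 10).sum(axis=1),
            grid_et[:n_blocks * 10].reshape(n_blocks, 10).sum(axis=1),
        )

        print(f"daily r = {r_daily:.3f}, 10-day r = {r_10day:.3f}")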

  20. A Novel Model for Security Evaluation for Compliance

    DEFF Research Database (Denmark)

    Hald, Sara Ligaard; Pedersen, Jens Myrup; Prasad, Neeli R.

    2011-01-01

    With the increasing focus on security in information systems, it is becoming necessary to be able to describe and compare security attributes for different technologies. Existing approaches are well-described and comprehensive, but expensive and resource demanding to apply. The Security Evaluation for Compliance (SEC) model offers a lightweight alternative for use by decision makers to get a quick overview of the security attributes of different technologies for easy comparison and requirement compliance evaluation. The scientific contribution is this new approach to security modelling as well...

  1. A Novel Model for Security Evaluation for Compliance

    DEFF Research Database (Denmark)

    Hald, Sara Ligaard; Pedersen, Jens Myrup; Prasad, Neeli R.

    2011-01-01

    With the increasing focus on security in information systems, it is becoming necessary to be able to describe and compare security attributes for different technologies. Existing approaches are well-described and comprehensive, but expensive and resource demanding to apply. The Security Evaluation for Compliance (SEC) model offers a lightweight alternative for use by decision makers to get a quick overview of the security attributes of different technologies for easy comparison and requirement compliance evaluation. The scientific contribution is this new approach to security modelling as well...

  2. Modeling for Green Supply Chain Evaluation

    Directory of Open Access Journals (Sweden)

    Elham Falatoonitoosi

    2013-01-01

    Full Text Available Green supply chain management (GSCM) has become a practical approach to improving environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria to improve the green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely, green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, the decision-making trial and evaluation laboratory technique is applied to test the developed model. The result of the case study shows that only the “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group and the other criteria are in the effect group.

  3. Evaluation of the BPMN According to the Requirements of the Enterprise Architecture Methodology

    OpenAIRE

    Václav Řepa

    2012-01-01

    This article evaluates some characteristics of the Business Process Modelling Notation from the perspective of the business system modelling methodology. Firstly, the enterprise architecture context of business process management as well as the importance of standards is discussed. Then the Business System Modelling Methodology is introduced, with special attention paid to the Business Process Meta-model as a basis for the evaluation of the BPMN features. Particular basic concepts from the...

  4. TA-55 Final Safety Analysis Report Comparison Document and DOE Safety Evaluation Report Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Alan Bond

    2001-04-01

    This document provides an overview of changes to the currently approved TA-55 Final Safety Analysis Report (FSAR) that are included in the upgraded FSAR. The DOE Safety Evaluation Report (SER) requirements that are incorporated into the upgraded FSAR are briefly discussed to provide the starting point in the FSAR with respect to the SER requirements.

  5. Obs4MIPS: Satellite Observations for Model Evaluation

    Science.gov (United States)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2015-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review recent additions to the obs4MIPs collection, and provide updated download statistics. We will also provide an update on changes to submission and documentation guidelines, the work of the WCRP Data Advisory Council (WDAC) Observations for Model Evaluation Task Team, and engagement with the CMIP6 MIP experiments.

  6. A Simulation Model for Evaluating Distributed Systems Dependability

    CERN Document Server

    Dobre, Ciprian; Cristea, Valentin

    2012-01-01

    In this paper we present a new simulation model designed to evaluate the dependability in distributed systems. This model extends the MONARC simulation model with new capabilities for capturing reliability, safety, availability, security, and maintainability requirements. The model has been implemented as an extension of the multithreaded, process oriented simulator MONARC, which allows the realistic simulation of a wide-range of distributed system technologies, with respect to their specific components and characteristics. The extended simulation model includes the necessary components to inject various failure events, and provides the mechanisms to evaluate different strategies for replication, redundancy procedures, and security enforcement mechanisms, as well. The results obtained in the simulation experiments presented in this paper prove that the use of discrete-event simulators, such as MONARC, in the design and development of distributed systems is appealing due to their efficiency and scalability.

  7. Evaluation of a lake whitefish bioenergetics model

    Science.gov (United States)

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.

  8. Saphire models and software for ASP evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)]

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  9. LSST camera heat requirements using CFD and thermal seeing modeling

    Science.gov (United States)

    Sebag, Jacques; Vogiatzis, Konstantinos

    2010-07-01

    The LSST camera is located above the LSST primary/tertiary mirror and in front of the secondary mirror in the shadow of its central obscuration. Due to this position within the optical path, heat released from the camera has a potential impact on the seeing degradation that is larger than traditionally estimated for Cassegrain or Nasmyth telescope configurations. This paper presents the results of thermal seeing modeling combined with Computational Fluid Dynamics (CFD) analyses to define the thermal requirements on the LSST camera. Camera power output fluxes are applied to the CFD model as boundary conditions to calculate the steady-state temperature distribution on the camera and the air inside the enclosure. Using a previously presented post-processing analysis to calculate the optical seeing based on the mechanical turbulence and temperature variations along the optical path, the optical performance resulting from the seeing is determined. The CFD simulations are repeated for different wind speeds and orientations to identify the worst case scenario and generate an estimate of seeing contribution as a function of camera-air temperature difference. Finally, after comparing with the corresponding error budget term, a maximum allowable temperature for the camera is selected.

  10. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    Science.gov (United States)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcomings of using AD or TRIZ alone, and to solve the problems that currently exist in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  11. Understanding the requirements imposed by programming model middleware on a common communication subsystem.

    Energy Technology Data Exchange (ETDEWEB)

    Buntinas, D.; Gropp, W.

    2005-12-13

    In high-performance parallel computing, most programming-model middleware libraries and runtime systems use a communication subsystem to abstract the lower-level network layer. The functionality required of a communication subsystem depends largely on the programming model implemented by the middleware. In order to maximize performance, middleware libraries and runtime systems typically implement their own communication subsystems that are specially tuned for the middleware, rather than use an existing communication subsystem. This situation leads to duplicated effort and prevents different middleware libraries from being used by the same application in hybrid programming models. In this paper we describe features required by various middleware libraries as well as some desirable features that would make it easier to port a middleware library to the communication subsystem and allow the middleware to make use of high-performance features provided by some networking layers. We show that none of the communication subsystems that we evaluate support all of the features.

  12. Net energy evaluation of feeds and determination of net energy requirements for pigs

    Directory of Open Access Journals (Sweden)

    Jean Noblet

    2007-07-01

    between feeds and results of least-cost formulation are then dependent on the energy system with overestimation of protein rich feeds and underestimation of starch and/or fat rich feeds in the DE or ME systems. The NE system provides an energy value which is the closest estimate of the "true" energy value of a feed; it predicts more accurately the performance of the pigs and allows implementing new feeding approaches such as the use of low protein and/or high fat diets. Energy requirements expressed as DE or ME can be transformed to NE requirements if we assume that the average efficiencies of DE or ME for NE are 71 and 74%, respectively. More sophisticated methods including modeling techniques can also be used for evaluating energy requirements.
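
    Using the average efficiencies quoted above, converting a requirement expressed in DE or ME to an NE basis is a one-line calculation; the daily requirements below are hypothetical figures used only for illustration.

        # Convert a pig's energy requirement from DE or ME to NE using the
        # average efficiencies quoted in the record (71% and 74%).
        K_DE_TO_NE = 0.71
        K_ME_TO_NE = 0.74

        de_requirement_mj_per_day = 34.0   # hypothetical daily DE requirement (MJ/day)
        me_requirement_mj_per_day = 32.5   # hypothetical daily ME requirement (MJ/day)

        ne_from_de = de_requirement_mj_per_day * K_DE_TO_NE
        ne_from_me = me_requirement_mj_per_day * K_ME_TO_NE

        print(f"NE from DE: {ne_from_de:.1f} MJ/day")
        print(f"NE from ME: {ne_from_me:.1f} MJ/day")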

  13. Evaluation of trends in wheat yield models

    Science.gov (United States)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
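
    The ridge step mentioned above can be illustrated with a minimal ordinary ridge regression in closed form; the paper's latent root and generalized ridge procedures are more elaborate, and the nearly collinear predictors below are synthetic, so this is only a sketch of how a bias parameter stabilizes coefficient estimates.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic, nearly collinear "meteorological" predictors and a yield response.
        n = 45
        x1 = rng.normal(size=n)
        x2 = x1 + rng.normal(scale=0.05, size=n)        # almost collinear with x1
        X = np.column_stack([np.ones(n), x1, x2])
        y = 25.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(scale=1.0, size=n)

        def ridge(X, y, lam):
            """Closed-form ridge estimate: (X'X + lam*I)^-1 X'y."""
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

        print("OLS coefficients:   ", np.round(ridge(X, y, 0.0), 2))
        print("Ridge (lam = 1.0):  ", np.round(ridge(X, y, 1.0), 2))
        # The ridge penalty shrinks the unstable, collinear coefficients.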

  14. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model that the LOMCE puts forward has its roots in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation placed at the center of the teaching-learning processes. Its planning is poor, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  15. Requirements-Driven Deployment: Customizing the Requirements Model for the Host Environment

    NARCIS (Netherlands)

    Ali, Raian; Dalpiaz, Fabiano; Giorgini, Paolo

    2014-01-01

    Deployment is a main development phase which configures a software to be ready for use in a certain environment. The ultimate goal of deployment is to enable users to achieve their requirements while using the deployed software. However, requirements are not uniform and differ between deployment env

  16. A model-based evaluation system of enterprise

    Institute of Scientific and Technical Information of China (English)

    Yan Junwei; Ye Yang; Wang Jian

    2005-01-01

    This paper analyses the architecture of enterprise modeling, proposes indicator selection principles and indicator decomposition methods, examines approaches to the evaluation of enterprise modeling, and designs an evaluation model based on AHP. Then a model-based enterprise evaluation system is presented to effectively evaluate the business model in the framework of enterprise modeling.
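
    For the AHP part mentioned above, the core computation is deriving a priority vector from a pairwise comparison matrix, usually via its principal eigenvector, together with a consistency check. The sketch below uses a hypothetical 3x3 comparison matrix, not one from the paper.

        import numpy as np

        # Hypothetical pairwise comparison matrix for three evaluation indicators.
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0],
        ])

        # Priority vector = normalized principal eigenvector.
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w = w / w.sum()

        # Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)
        cr = ci / 0.58

        print("priorities:", np.round(w, 3), " CR =", round(cr, 3))
        # A CR below roughly 0.1 is usually taken as acceptably consistent.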

  17. Evaluation of climate extremes in the CMIP5 model simulations

    Science.gov (United States)

    Sillmann, J.; Kharin, S.; Zhang, X.; Zwiers, F. W.

    2011-12-01

    Climate extremes manifest an important aspect of natural climate variability and anthropogenic climate change. To minimize human and financial losses caused by extreme events it is important to have reliable projections of their occurrence and intensity. State-of-the-art global climate models represented by the CMIP5 model ensemble are widely used as tools to simulate the present and project the future climate. Thus, it is crucial to get an understanding of how well climate extremes are simulated by these models in the present climate to be able to appraise their usefulness for future projections. We calculated a global set of well-defined indices for climate extremes based on daily temperature and precipitation data with the available CMIP5 models and use the indices to present a first-order evaluation of the model performance in comparison with re-analysis and a gridded observational dataset. We also focus our analysis on regional aspects of the model performance. Some of the indices are based on threshold exceedance, i.e. percentage of days below the 10th or above the 90th percentile of the maximum and minimum temperature. These indices require special attention for model evaluation as by definition the threshold exceedance is approximately 10% for individual models, re-analysis, and observations. We introduce a novel method to evaluate the model performance particular for these indices.
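
    A threshold-exceedance index of the kind described, for example the percentage of days with daily maximum temperature above a base-period 90th percentile, can be sketched as below; the temperature series is synthetic, and the single whole-period percentile ignores the calendar-day and bootstrap refinements used for the real indices.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic daily maximum temperature series (degrees C) for 30 "years".
        years, days_per_year = 30, 365
        t = np.arange(years * days_per_year)
        tmax = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 3, t.size)

        # Base-period 90th percentile (here simply the whole series).
        p90 = np.percentile(tmax, 90)

        # TX90p-style index: percentage of days above the 90th percentile, per year.
        tx90p = (tmax.reshape(years, days_per_year) > p90).mean(axis=1) * 100

        print(f"90th percentile threshold: {p90:.1f} C")
        print("First five yearly exceedance percentages:", np.round(tx90p[:5], 1))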

  18. Modeling Energy and Development : An Evaluation of Models and Concepts

    NARCIS (Netherlands)

    Ruijven, Bas van; Urban, Frauke; Benders, René M.J.; Moll, Henri C.; Sluijs, Jeroen P. van der; Vries, Bert de; Vuuren, Detlef P. van

    2008-01-01

    Most global energy models are developed by institutes from developed countries, focusing primarily on issues that are important in industrialized countries. Evaluation of the results for Asia of the IPCC/SRES models shows that broad concepts of energy and development, the energy ladder and the envir

  19. A 2nd generation static model of greenhouse energy requirements (horticern) : a comparison with dynamic models

    CERN Document Server

    Jolliet, O; Munday, G L

    1989-01-01

    Optimisation of a greenhouse and its components requires a suitable model permitting precise determination of its energy requirements. Existing static models are simple but lack precision; dynamic models, though more precise, are unsuitable for use over long periods and difficult to handle in practice. A theoretical study and measurements from the CERN trial greenhouse have allowed the development of a new static model named "HORTICERN", which is precise and easy to use for predicting energy consumption and takes into account the effects of solar energy, wind and radiative loss to the sky. This paper compares the HORTICERN model with the dynamic models of Bot, Takakura, Van Bavel and Gembloux, and demonstrates that its precision is comparable; differences are on average less than 5%, and it is independent of the type of greenhouse (e.g. single or double glazing, Hortiplus, etc.) and climate. The HORTICERN method has been developed for PC use and is proving to be a powerful tool for greenhouse optimisation by research work...
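
    The HORTICERN formulation itself is not reproduced in the record. Purely as a generic illustration of a static energy-requirement estimate, the sketch below balances conduction and infiltration losses against useful solar gains over a period; every coefficient and weather value is hypothetical.

        # Generic static greenhouse heating-requirement sketch (not the HORTICERN model).
        # All coefficients and weather values are hypothetical.

        U = 6.0            # overall heat loss coefficient, W/m2.K (cover plus infiltration)
        area_cover = 1200  # cover (glazing) area, m2
        area_ground = 800  # ground area, m2
        tau_eta = 0.5      # fraction of incident solar radiation usefully captured

        t_inside = 18.0    # inside setpoint temperature, deg C
        t_outside = 5.0    # mean outside temperature over the period, deg C
        solar = 80.0       # mean incident solar radiation on the ground area, W/m2
        hours = 24 * 30    # one month

        loss_w = U * area_cover * (t_inside - t_outside)
        gain_w = tau_eta * solar * area_ground

        heating_w = max(loss_w - gain_w, 0.0)
        energy_kwh = heating_w * hours / 1000.0

        print(f"Mean heating power: {heating_w / 1000:.1f} kW")
        print(f"Heating energy over the period: {energy_kwh:.0f} kWh")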

  20. A model to evaluate quality and effectiveness of disease management.

    Science.gov (United States)

    Lemmens, K M M; Nieboer, A P; van Schayck, C P; Asin, J D; Huijsman, R

    2008-12-01

    Disease management has emerged as a new strategy to enhance quality of care for patients suffering from chronic conditions, and to control healthcare costs. So far, however, the effects of this strategy remain unclear. Although current models define the concept of disease management, they do not provide a systematic development or an explanatory theory of how disease management affects the outcomes of care. The objective of this paper is to present a framework for valid evaluation of disease-management initiatives. The evaluation model is built on two pillars of disease management: patient-related and professional-directed interventions. The effectiveness of these interventions is thought to be affected by the organisational design of the healthcare system. Disease management requires a multifaceted approach; hence disease-management programme evaluations should focus on the effects of multiple interventions, namely patient-related, professional-directed and organisational interventions. The framework has been built upon the conceptualisation of these disease-management interventions. Analysis of the underlying mechanisms of these interventions revealed that learning and behavioural theories support the core assumptions of disease management. The evaluation model can be used to identify the components of disease-management programmes and the mechanisms behind them, making valid comparison feasible. In addition, this model links the programme interventions to indicators that can be used to evaluate the disease-management programme. Consistent use of this framework will enable comparisons among disease-management programmes and outcomes in evaluation research.

  1. Virtual reality technology as a tool for human factors requirements evaluation in design of the nuclear reactors control desks

    Energy Technology Data Exchange (ETDEWEB)

    Grecco, Claudio H.S.; Santos, Isaac J.A.L.; Mol, Antonio C.A.; Carvalho, Paulo V.R.; Silva, Antonio C.F.; Ferreira, Francisco J.O.; Dutra, Marco A.M. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]. E-mail: grecco@ien.gov.br; luquetti@ien.gov.br; mol@ien.gov.br; paulov@ien.gov.br; tonico@ien.gov.br; fferreira@ien.gov.br; dutra@ien.gov.br

    2007-07-01

    Virtual Reality (VR) is an advanced computer interface technology that allows the user to interact with or explore a three-dimensional environment through the computer, as if part of the virtual world. This technology presents great applicability in the most diverse areas of human knowledge. This paper presents a study on the use of VR as a tool for human factors requirements evaluation in the design of nuclear reactor control desks. Moreover, this paper presents a case study: a virtual model of the control desk, developed using virtual reality technology to be used in the human factors requirements evaluation. This case study was developed in the Virtual Reality Laboratory at IEN, and comprises the stereo visualization of the Argonauta research nuclear reactor control desk for a static ergonomic evaluation using check-lists, in accordance with the standards and international human factors guides for the nuclear domain (IEC 1771, NUREG-0700). (author)

  2. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    Science.gov (United States)

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach to a breast cancer diagnosis process adapted from an HHS committee report and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student’s t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.

  3. Evaluation of a Mysis bioenergetics model

    Science.gov (United States)

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
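
    The decomposition of mean square error mentioned above can be reproduced with a standard identity that splits MSE into a mean-bias term, an amplitude term and a random (correlation-related) term; the particular split shown here is one common convention and is only assumed to resemble the authors' partition.

        import numpy as np

        def mse_decomposition(observed, predicted):
            """Fractions of MSE due to bias, amplitude mismatch and random error.

            Uses the identity MSE = (mp - mo)^2 + (sp - so)^2 + 2*sp*so*(1 - r),
            with m the means, s the standard deviations and r the correlation.
            """
            obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
            bias = (pred.mean() - obs.mean()) ** 2               # systematic offset
            amplitude = (pred.std() - obs.std()) ** 2            # mismatch in variability
            r = np.corrcoef(obs, pred)[0, 1]
            random_err = 2 * pred.std() * obs.std() * (1 - r)    # unexplained scatter
            mse = bias + amplitude + random_err
            return {"bias": bias / mse, "amplitude": amplitude / mse, "random": random_err / mse}

        observed_growth = np.array([1.2, 1.5, 1.9, 2.4, 2.8])
        predicted_growth = np.array([1.1, 1.6, 2.0, 2.3, 3.0])
        print(mse_decomposition(observed_growth, predicted_growth))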

  4. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “systems of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  5. Evaluation of the BPMN According to the Requirements of the Enterprise Architecture Methodology

    Directory of Open Access Journals (Sweden)

    Václav Řepa

    2012-04-01

    Full Text Available This article evaluates some characteristics of the Business Process Modelling Notation from the perspective of the business system modelling methodology. Firstly, the enterprise architecture context of business process management, as well as the importance of standards, is discussed. Then the Business System Modelling Methodology is introduced, with special attention paid to the Business Process Meta-model as a basis for the evaluation of the BPMN features. Particular basic concepts from the Business Process Meta-model are mapped to the usable constructs of the BPMN, and related issues are analysed. Finally, the basic conclusions are drawn and the general context is discussed.

  6. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
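
    A Markov-model treatment of usability typically encodes user navigation as a transition matrix over interface states; the sketch below, with entirely made-up states and probabilities, shows one standard analysis, the expected number of visits to each state before the user leaves the system.

        import numpy as np

        # Hypothetical navigation states of a remote learning system; "exit" is absorbing.
        states = ["home", "course", "forum", "exit"]
        P = np.array([
            [0.10, 0.60, 0.20, 0.10],   # from "home"
            [0.15, 0.40, 0.30, 0.15],   # from "course"
            [0.20, 0.30, 0.30, 0.20],   # from "forum"
            [0.00, 0.00, 0.00, 1.00],   # from "exit"
        ])

        Q = P[:3, :3]                        # transitions among the transient states
        N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix of the absorbing chain
        print(dict(zip(states[:3], N[0])))   # expected visits per state when starting at "home"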

  7. Evaluating spatial patterns in hydrological modelling

    DEFF Research Database (Denmark)

    Koch, Julian

    This PhD thesis aims at expanding the standard toolbox of spatial model evaluation with innovative metrics that adequately compare spatial patterns. Driven by the rise of more complex model structures and the increase of suitable remote sensing... The inspiration for the applied metrics is found in related fields of environmental science, such as meteorology, geostatistics or geography. In total, seven metrics are evaluated with respect to their capability to quantitatively compare spatial patterns. The human visual perception is often considered superior to computer-based measures, because it integrates various dimensions of spatial information in a holistic assessment; opposed to this, statistical measures typically only address a limited amount of spatial information. A web-based survey and a citizen science project are employed to quantify the collective perceptive skills of humans, aiming at benchmarking spatial metrics with respect to their capability to mimic human evaluations.

  8. Evaluation of help model replacement codes

    Energy Technology Data Exchange (ETDEWEB)

    Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2009-07-01

    This work evaluates the computer codes proposed for predicting percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, resulting in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.

  9. A research and evaluation capacity building model in Western Australia.

    Science.gov (United States)

    Lobo, Roanna; Crawford, Gemma; Hallett, Jonathan; Laing, Sue; Mak, Donna B; Jancey, Jonine; Rowell, Sally; McCausland, Kahlia; Bastian, Lisa; Sorenson, Anne; Tilley, P J Matt; Yam, Simon; Comfort, Jude; Brennan, Sean; Doherty, Maryanne

    2016-12-27

    Evaluation of public health programs, services and policies is increasingly required to demonstrate effectiveness. Funding constraints necessitate that existing programs, services and policies be evaluated and their findings disseminated. Evidence-informed practice and policy is also desirable to maximise investments in public health. Partnerships between public health researchers, service providers and policymakers can help address evaluation knowledge and skills gaps. The Western Australian Sexual Health and Blood-borne Virus Applied Research and Evaluation Network (SiREN) aims to build research and evaluation capacity in the sexual health and blood-borne virus sector in Western Australia (WA). Partners' perspectives of the SiREN model after 2 years were explored. Qualitative written responses from service providers, policymakers and researchers about the SiREN model were analysed thematically. Service providers reported that participation in SiREN prompted them to consider evaluation earlier in the planning process and increased their appreciation of the value of evaluation. Policymakers noted benefits of the model in generating local evidence and highlighting local issues of importance for consideration at a national level. Researchers identified challenges communicating the services available through SiREN and the time investment needed to develop effective collaborative partnerships. Stronger engagement between public health researchers, service providers and policymakers through collaborative partnerships has the potential to improve evidence generation and evidence translation. These outcomes require long-term funding and commitment from all partners to develop and maintain partnerships. Ongoing monitoring and evaluation can ensure the partnership remains responsive to the needs of key stakeholders. The findings are applicable to many sectors.

  10. Evaluation of Database Modeling Methods for Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Thanasis Hadzilacos

    1998-11-01

    Full Text Available We present a systematic evaluation of different modeling techniques for the design of Geographic Information Systems as we experienced them through theoretical research and real-world applications. A set of exemplary problems for spatial systems on which the suitability of models can be tested is discussed. We analyse the use of a specific database design methodology, including the phases of conceptual, logical and physical modeling. By employing, at each phase, representative models of classical and object-oriented approaches, we assess their efficiency in spatial data handling. At the conceptual phase, we show how the Entity-Relationship, EFO and OMT models deal with the geographic needs; at the logical phase we argue why the relational model is good as a basis to accommodate these requirements, but not good enough as a stand-alone solution.

  11. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now, a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which help to specify a workflow methodology. By way of example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  12. Evaluating the TD model of classical conditioning.

    Science.gov (United States)

    Ludvig, Elliot A; Sutton, Richard S; Kehoe, E James

    2012-09-01

    The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.
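
    The moment-to-moment learning rule at the heart of the TD model can be written in a few lines; the complete-serial-compound stimulus representation, the trial structure and the learning parameters below are illustrative choices, not the exact settings evaluated in the paper.

        import numpy as np

        def td_conditioning(n_trials=200, n_steps=20, cs_on=5, us_at=15, alpha=0.1, gamma=0.95):
            """TD(0) learning of a real-time US prediction with a complete serial compound.

            Every time step from CS onset onwards has its own feature, so the weight
            vector holds a prediction for each moment within the trial.
            """
            w = np.zeros(n_steps)
            for _ in range(n_trials):
                for t in range(n_steps - 1):
                    x_t = np.zeros(n_steps)
                    x_next = np.zeros(n_steps)
                    if t >= cs_on:
                        x_t[t] = 1.0
                    if t + 1 >= cs_on:
                        x_next[t + 1] = 1.0
                    us = 1.0 if t + 1 == us_at else 0.0            # US delivered at a fixed time
                    delta = us + gamma * (w @ x_next) - (w @ x_t)  # TD prediction error
                    w += alpha * delta * x_t                       # update only the active feature
            return w

        print(np.round(td_conditioning(), 2))  # the prediction ramps up towards the time of the US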

  13. Evaluating computational models of cholesterol metabolism.

    Science.gov (United States)

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and we addressed whether their assumptions and predictions are in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models. This was done primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested have made assumptions in line with current knowledge of cholesterol metabolism. Three out of the ten remaining models made correct predictions, i.e. predicting a decrease in plasma total and LDL cholesterol or increased uptake of LDL upon statin treatment. In conclusion, few models on cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles and welcome the increased understanding of cholesterol metabolism these are likely to bring.
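
    The functional test described, simulating statin treatment and checking whether plasma LDL falls, can be pictured with a deliberately minimal one-pool sketch; the equation, the parameter values and the assumption that statins act purely by lowering synthesis are illustrative and are not taken from any of the eleven models reviewed.

        def simulate_ldl(synthesis=1.0, uptake=0.05, days=200, dt=0.1):
            """Toy one-compartment model: d(LDL)/dt = synthesis - uptake * LDL."""
            ldl = 10.0                                   # arbitrary starting plasma LDL level
            for _ in range(int(days / dt)):
                ldl += dt * (synthesis - uptake * ldl)   # forward Euler step
            return ldl                                   # approaches synthesis / uptake

        baseline = simulate_ldl()
        on_statin = simulate_ldl(synthesis=0.6)          # statin assumed to cut synthesis by 40%
        print(f"baseline LDL ~ {baseline:.1f}, on statin ~ {on_statin:.1f}")  # the statin run should end lower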

  14. Dependencies among Architectural Views Got from Software Requirements Based on a Formal Model

    Directory of Open Access Journals (Sweden)

    Osis Janis

    2014-12-01

    Full Text Available A system architect has software requirements and some unspecified knowledge about a problem domain (e.g., an enterprise) as source information for the assessment and evaluation of possible solutions and for getting to the target point, a preliminary software design. The deciding factor is the architect’s experience and expertise in the problem domain (“AS-IS”). The proposed approach is intended to assist a system architect in making an appropriate decision on the solution (“TO-BE”). It is based on a formal mathematical model, the Topological Functioning Model (TFM). Compliant TFMs can be transformed into software architectural views. The paper demonstrates and discusses tracing dependency links from the requirements to and between the architectural views.

  15. Expansion of the Kano model to identify relevant customer segments and functional requirements

    DEFF Research Database (Denmark)

    Atlason, Reynir Smari; Stefansson, Arnaldur Smari; Wietz, Miriam

    2017-01-01

    The Kano model of customer satisfaction has been widely used to analyse perceived needs of customers. The model provides product developers valuable information about if, and then how much, a given functional requirement (FR) will impact customer satisfaction if implemented within a product, system or a service. A current limitation of the Kano model is that it does not allow developers to visualise which combined sets of FRs would provide the highest satisfaction between different customer segments. In this paper, a stepwise method to address this particular shortcoming is presented. First... ...are identified. At last, the functions of the chosen segments with the smallest interval define the FRs appealing to the biggest target group. The proposed extension to the model should assist product developers within various fields to more effectively evaluate which FRs should be implemented when considering...

  16. Multi-level Expression Design Language: Requirement level (MEDL-R) system evaluation

    Science.gov (United States)

    1980-01-01

    An evaluation of the Multi-Level Expression Design Language Requirements Level (MEDL-R) system was conducted to determine whether it would be of use in the Goddard Space Flight Center Code 580 software development environment. The evaluation is based upon a study of the MEDL-R concept of requirement languages, the functions performed by MEDL-R, and the MEDL-R language syntax. Recommendations are made for changes to MEDL-R that would make it useful in the Code 580 environment.

  17. Two models for evaluating landslide hazards

    Science.gov (United States)

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
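
    The empirical likelihood ratio procedure can be sketched as follows; the synthetic data, the binning and the multiplication of per-variable ratios under conditional independence are illustrative assumptions rather than a reproduction of the Atchison County analysis.

        import numpy as np

        def likelihood_ratio(values, labels, bins):
            """Empirical likelihood ratio per bin: P(value | landslide) / P(value | stable)."""
            slide, _ = np.histogram(values[labels == 1], bins=bins, density=True)
            stable, _ = np.histogram(values[labels == 0], bins=bins, density=True)
            return np.where(stable > 0, slide / np.maximum(stable, 1e-12), np.nan)

        rng = np.random.default_rng(1)
        labels = rng.integers(0, 2, size=2000)        # 1 = landslide cell, 0 = stable cell
        slope = rng.normal(18 + 8 * labels, 6)        # steeper slopes at landslide cells
        aspect = rng.uniform(0, 360, size=2000)       # uninformative second predictor

        slope_bins, aspect_bins = np.linspace(0, 60, 13), np.linspace(0, 360, 13)
        lr_slope = likelihood_ratio(slope, labels, slope_bins)
        lr_aspect = likelihood_ratio(aspect, labels, aspect_bins)

        # Under conditional independence, the relative hazard of a cell is the product of the
        # likelihood ratios of its predictor values (here a 35-degree slope facing 120 degrees).
        i = int(np.digitize(35.0, slope_bins)) - 1
        j = int(np.digitize(120.0, aspect_bins)) - 1
        print(lr_slope[i] * lr_aspect[j])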

  18. A Methodology to Evaluate Object-oriented Software Systems Using Change Requirement Traceability Based on Impact Analysis

    Directory of Open Access Journals (Sweden)

    Sunil T. D

    2014-06-01

    Full Text Available It is well known that software maintenance plays a major role in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change requirement traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues are related to change impact algorithms and inheritance of functionality.

  19. A case study in modeling a nuclear formation evaluation sub

    Energy Technology Data Exchange (ETDEWEB)

    Locke, S. (Teleco Oilfield Services, Inc., Middletown, CT (United States))

    1992-08-01

    This paper reports that constructing an accurate Monte Carlo representation of a complex nuclear formation evaluation tool requires careful matching of computed values to the results of laboratory experiments. The availability of only finite computing resources forces restraint in the step by step addition of complexity to the model. Once confidence in the model has been established, it can be used to predict tool response to many situations not reproducible in a laboratory. High temperatures, high pressures, and pore fluids composed of liquid-gas mixtures are typical of conditions encountered in real oil wells that are difficult to reproduce in a laboratory. Tool design and the effects of design modifications can also be evaluated efficiently using the model.

  20. Sound insulation of dwellings - Legal requirements in Europe and subjective evaluation of acoustical comfort

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Rindel, Jens Holger

    2003-01-01

    Acoustical comfort is a concept that can be characterised by absence of unwanted sound and by opportunities for acoustic activities without annoying other people. In order to achieve acoustical comfort in dwellings, certain requirements have to be fulfilled concerning the airborne sound insulation... ...against noise from neighbours, the relationship is not so well understood. A comparison of sound insulation requirements in different countries shows that the sound insulation requirements differ considerably in terms of the concepts used, the frequency range considered and the level of requirement... The paper presents (a) a comparison of sound insulation requirements in several European countries and (b) a review of investigations related to the subjective and/or objective evaluation. Based on the analysis of several investigations in the field and by laboratory simulations, it is suggested how to estimate the degree of satisfaction with a specific...

  1. Human Modeling Evaluations in Microgravity Workstation and Restraint Development

    Science.gov (United States)

    Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)

    1999-01-01

    The International Space Station (ISS) will provide long-term missions that will enable the astronauts to live, work and conduct research in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness", which creates a challenge for establishing workstation microgravity design requirements. The crewmembers will work at various workstations such as the Human Research Facility (HRF), Microgravity Sciences Glovebox (MSG) and Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements are an integral part of the design and development effort. In order to achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division has been tasked to conduct integrated evaluations of workstations and associated crew restraints. Thus, a two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling has been used in conjunction with ground reviews and microgravity evaluations of the mock-ups in order to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method to determine ergonomic design characteristics for workstations and restraints. The real-time evaluations provided a hands-on implementation in a microgravity environment. On the other hand, only a limited number of participants could be tested. The human modeling evaluations provided a more detailed analysis of the setup. The issues identified...

  2. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    Full Text Available This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume spreading algorithm is established to simulate the interior water filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with the complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
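
    The spilling step builds on the classic D8 flow-direction rule, in which each cell drains towards its steepest-descent neighbour; the small synthetic elevation grid below and the plain (unimproved) D8 selection are illustrative assumptions.

        import numpy as np

        def d8_direction(dem, row, col):
            """Return the (drow, dcol) offset of the steepest-descent neighbour, or None for a sink."""
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
            best, best_slope = None, 0.0
            for dr, dc in offsets:
                r, c = row + dr, col + dc
                if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
                    distance = np.hypot(dr, dc)                     # 1 for edge, sqrt(2) for diagonal
                    slope = (dem[row, col] - dem[r, c]) / distance  # positive means downhill
                    if slope > best_slope:
                        best, best_slope = (dr, dc), slope
            return best

        dem = np.array([[9.0, 8.0, 7.0],
                        [8.0, 6.0, 5.0],
                        [7.0, 5.0, 3.0]])
        print(d8_direction(dem, 1, 1))   # the centre cell spills diagonally towards the lowest corner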

  3. CTBT integrated verification system evaluation model supplement

    Energy Technology Data Exchange (ETDEWEB)

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  5. Evaluating spatial patterns in hydrological modelling

    DEFF Research Database (Denmark)

    Koch, Julian

    is not fully exploited by current modelling frameworks due to the lack of suitable spatial performance metrics. Furthermore, the traditional model evaluation using discharge is found unsuitable to lay confidence on the predicted catchment-inherent spatial variability of hydrological processes in a fully... ...the contiguous United States (10^6 km2). To this end, the thesis at hand applies a set of spatial performance metrics on various hydrological variables, namely land-surface-temperature (LST), evapotranspiration (ET) and soil moisture. The inspiration for the applied metrics is found in related fields...

  6. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) to be used as MO. We will compare CNN MO to alternative MO currently being proposed and used such as the relevance vector machine based MO and channelized Hotelling observer (CHO). As the success of the CNN, and other deep learning approaches, is rooted in large data sets availability, which is rarely the case in medical imaging systems task-performance evaluation, we will evaluate CNN performance on both large and small training data sets.

  7. Structural equation modeling: building and evaluating causal models: Chapter 8

    Science.gov (United States)

    Grace, James B.; Scheiner, Samuel M.; Schoolmaster, Donald R.

    2015-01-01

    Scientists frequently wish to study hypotheses about causal relationships, rather than just statistical associations. This chapter addresses the question of how scientists might approach this ambitious task. Here we describe structural equation modeling (SEM), a general modeling framework for the study of causal hypotheses. Our goals are to (a) concisely describe the methodology, (b) illustrate its utility for investigating ecological systems, and (c) provide guidance for its application. Throughout our presentation, we rely on a study of the effects of human activities on wetland ecosystems to make our description of methodology more tangible. We begin by presenting the fundamental principles of SEM, including both its distinguishing characteristics and the requirements for modeling hypotheses about causal networks. We then illustrate SEM procedures and offer guidelines for conducting SEM analyses. Our focus in this presentation is on basic modeling objectives and core techniques. Pointers to additional modeling options are also given.

  8. Phoenix Metropolitan Model Deployment Initiative Evaluation Report

    OpenAIRE

    Zimmerman, C; Marks, J.; Jenq, J.; Cluett, Chris; DeBlasio, Allan; Lappin, Jane; Rakha, Hesham A.; Wunderlich, K

    2000-01-01

    This report presents the evaluation results of the Phoenix, Arizona Metropolitan Model Deployment Initiative (MMDI). The MMDI was a three-year program of the Intelligent Transportation Systems (ITS) Joint Program Office of the U.S. Department of Transportation. It focused on aggressive deployment of ITS at four sites across the United States, including the metropolitan areas of San Antonio, Seattle, NY/NJ/Connecticut as well as Phoenix. The focus of the deployments was on integration of exist...

  9. Evaluation of Rainfall-Runoff Models for Mediterranean Subcatchments

    Science.gov (United States)

    Cilek, A.; Berberoglu, S.; Donmez, C.

    2016-06-01

    The development and the application of rainfall-runoff models have been a corner-stone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be greatly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics, are required to run the model. Initial ground cover was estimated from the Landsat ETM data provided by ESA. This hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agricultural land and forests. The average annual rainfall is 636.4 mm. This study is of significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including advantages and limitations of modelling approaches in the Mediterranean environment.

  10. EVALUATION OF RAINFALL-RUNOFF MODELS FOR MEDITERRANEAN SUBCATCHMENTS

    Directory of Open Access Journals (Sweden)

    A. Cilek

    2016-06-01

    Full Text Available The development and the application of rainfall-runoff models have been a corner-stone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be greatly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics, are required to run the model. Initial ground cover was estimated from the Landsat ETM data provided by ESA. This hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agricultural land and forests. The average annual rainfall is 636.4 mm. This study is of significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including advantages and limitations of modelling approaches in the Mediterranean environment.

  11. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations, i.e., immediate, unintentional assessments of the wrongness of actions or persons, play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. EVALUATION OF RISKS AND WASTE CHARACTERIZATION REQUIREMENTS FOR THE TRANSURANIC WASTE EMPLACED IN WIPP DURING 1999

    Energy Technology Data Exchange (ETDEWEB)

    Channell, J.K.; Walker, B.A.

    2000-05-01

    Specifically this report: 1. Compares requirements of the WAP that are pertinent from a technical viewpoint with the WIPP pre-Permit waste characterization program, 2. Presents the results of a risk analysis of the currently emplaced wastes. Expected and bounding risks from routine operations and possible accidents are evaluated; and 3. Provides conclusions and recommendations.

  13. General University Requirements at Hong Kong Polytechnic University: Evaluation Findings Based on Student Focus Groups

    Science.gov (United States)

    Shek, Daniel Tan Lei; Yu, Lu; Wu, Florence Ka Yu; Chai, Wen Yu

    2015-01-01

    Under the new four-year undergraduate programme, a general education framework titled "General University Requirements" (GUR) has been developed and implemented since 2012/2013 at Hong Kong Polytechnic University (PolyU). To evaluate the implementation and effectiveness of the GUR in its first year, focus group interviews with students…

  14. An Elephant in the Room: Bias in Evaluating a Required Quantitative Methods Course

    Science.gov (United States)

    Fletcher, Joseph F.; Painter-Main, Michael A.

    2014-01-01

    Undergraduate Political Science programs often require students to take a quantitative research methods course. Such courses are typically among the most poorly rated. This can be due, in part, to the way in which courses are evaluated. Students are generally asked to provide an overall rating, which, in turn, is widely used by students, faculty,…

  16. Acceptance criteria for urban dispersion model evaluation

    Science.gov (United States)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of the acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude ... 0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be ... The evaluations consider the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance criteria are satisfied at over half of the field experiments.
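
    The performance measures named above have standard definitions in this literature; the sketch below computes the fractional bias, FAC2 and a normalized absolute difference for paired observed and predicted concentrations, with the exact NAD form and the toy numbers being assumptions on our part.

        import numpy as np

        def dispersion_metrics(obs, pred):
            """Common performance measures for paired concentration data."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())  # fractional bias
            fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))           # fraction within a factor of 2
            nad = np.mean(np.abs(obs - pred)) / (obs.mean() + pred.mean())      # normalized absolute difference (assumed form)
            return {"FB": fb, "FAC2": fac2, "NAD": nad}

        observed = np.array([1.0, 2.5, 4.0, 8.0, 0.6])
        predicted = np.array([1.4, 2.0, 3.0, 9.5, 0.2])
        print(dispersion_metrics(observed, predicted))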

  17. Transport properties site descriptive model. Guidelines for evaluation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berglund, Sten [WSP Environmental, Stockholm (Sweden); Selroos, Jan-Olof [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive

  18. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  19. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    Science.gov (United States)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
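
    One way to picture "extending the regular correction model with an edge-specific component" is an ordinary least-squares fit that adds a basis function active only beyond a given wafer radius; the linear-plus-edge basis, the 120 mm activation radius and the synthetic overlay data below are illustrative assumptions, not the production model described in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic per-mark overlay error on a 150 mm radius wafer: a linear fingerprint plus an
        # extra contribution that only grows outside 120 mm, plus measurement noise.
        x = rng.uniform(-150, 150, 400)
        y = rng.uniform(-150, 150, 400)
        r = np.hypot(x, y)
        x, y, r = x[r <= 150], y[r <= 150], r[r <= 150]
        overlay = 0.002 * x - 0.001 * y + 0.04 * np.maximum(r - 120, 0) + rng.normal(0, 0.5, x.size)

        basis_regular = np.column_stack([np.ones_like(x), x, y])               # regular correction model
        basis_edge = np.column_stack([basis_regular, np.maximum(r - 120, 0)])  # extended with an edge term

        for name, basis in [("regular", basis_regular), ("with edge term", basis_edge)]:
            coeffs, *_ = np.linalg.lstsq(basis, overlay, rcond=None)
            residual = overlay - basis @ coeffs
            print(f"{name}: residual 3-sigma = {3 * residual.std():.2f}")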

  20. A Model for Telestroke Network Evaluation

    DEFF Research Database (Denmark)

    Storm, Anna; Günzel, Franziska; Theiss, Stephan

    2011-01-01

    Different telestroke network concepts have been implemented worldwide to enable fast and efficient treatment of stroke patients in underserved rural areas. Networks could demonstrate the improvement in clinical outcome, but have so far excluded a cost-effectiveness analysis. With health economic...... was developed from the third-party payer perspective. In principle, it enables telestroke networks to conduct cost-effectiveness studies, because the majority of the required data can be extracted from health insurance companies’ databases and the telestroke network itself. The model presents a basis...

  1. Bioenergy crop models: Descriptions, data requirements and future challenges

    Energy Technology Data Exchange (ETDEWEB)

    Nair, S. Surendran [University of Tennessee, Knoxville (UTK); Kang, Shujiang [ORNL; Zhang, Xuesong [Pacific Northwest National Laboratory (PNNL); Miguez, Fernando [Iowa State University; Izaurralde, Dr. R. Cesar [Pacific Northwest National Laboratory (PNNL); Post, Wilfred M [ORNL; Dietze, Michael [University of Illinois, Urbana-Champaign; Lynd, L. [Dartmouth College; Wullschleger, Stan D [ORNL

    2012-01-01

    Field studies that address the production of lignocellulosic biomass as a source of renewable energy provide critical data for the development of bioenergy crop models. A literature survey revealed that 14 models have been used for simulating bioenergy crops including herbaceous and woody bioenergy crops, and for crassulacean acid metabolism (CAM) crops. These models simulate field-scale production of biomass for switchgrass (ALMANAC, EPIC, and Agro-BGC), miscanthus (MISCANFOR, MISCANMOD, and WIMOVAC), sugarcane (APSIM, AUSCANE, and CANEGRO), and poplar and willow (SECRETS and 3PG). Two models are adaptations of dynamic global vegetation models and simulate biomass yields of miscanthus and sugarcane at regional scales (Agro-IBIS and LPJmL). Although it lacks the complexity of other bioenergy crop models, the environmental productivity index (EPI) is the only model used to estimate biomass production of CAM (Agave and Opuntia) plants. Except for the EPI model, all models include representations of leaf area dynamics, phenology, radiation interception and utilization, biomass production, and partitioning of biomass to roots and shoots. A few models simulate soil water, nutrient, and carbon cycle dynamics, making them especially useful for assessing the environmental consequences (e.g., erosion and nutrient losses) associated with the large-scale deployment of bioenergy crops. The rapid increase in use of models for energy crop simulation is encouraging; however, detailed information on the influence of climate, soils, and crop management practices on biomass production is scarce. Thus considerable work remains regarding the parameterization and validation of process-based models for bioenergy crops; generation and distribution of high-quality field data for model development and validation; and implementation of an integrated framework for efficient, high-resolution simulations of biomass production for use in planning sustainable bioenergy systems.

  2. Evaluating face trustworthiness: a model based approach.

    Science.gov (United States)

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
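
    The contrast between a negative linear and a quadratic response profile can be made concrete with a small fitting sketch; the synthetic "response" values below merely stand in for the fMRI signal and are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        trust = np.linspace(-3, 3, 60)                      # face trustworthiness scores

        # Hypothetical region responses: one falls linearly with trustworthiness,
        # the other is strongest at both extremes (U-shaped).
        right_amygdala = -0.8 * trust + rng.normal(0, 0.5, trust.size)
        left_amygdala = 0.5 * trust ** 2 + rng.normal(0, 0.5, trust.size)

        for name, signal in [("right amygdala", right_amygdala), ("left amygdala", left_amygdala)]:
            lin = np.polyfit(trust, signal, 1)
            quad = np.polyfit(trust, signal, 2)
            sse_lin = np.sum((signal - np.polyval(lin, trust)) ** 2)
            sse_quad = np.sum((signal - np.polyval(quad, trust)) ** 2)
            print(f"{name}: slope {lin[0]:+.2f}, SSE linear {sse_lin:.1f} vs quadratic {sse_quad:.1f}")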

  3. Achieving a System Operational Availability Requirement (ASOAR) Model

    Science.gov (United States)

    1992-07-01

    ASOAR requires only system and end item level input data, not Line Replaceable Unit (LRU) input data. ASOAR usage provides concepts for major logistics...the Corp/Theater ADP Service Center II (CTASC II) to a system operational availability goal. The CTASC II system configuration had many redundant types

  4. Data Assimilation and Model Evaluation Experiment Datasets.

    Science.gov (United States)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested these data was incorporated into their refinement. Suggested DAMEE data usages include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.

  5. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
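
    The assessment described above rests on comparing predicted correctness measures with observed ones. As a minimal, hypothetical sketch (not CASP's actual evaluation code), the per-model comparison can be reduced to a correlation between predicted and observed quality scores:

```python
# Hypothetical sketch: correlating predicted model-quality scores with
# observed quality scores for a set of server models. Data are invented.
import numpy as np

def pearson_correlation(predicted, observed):
    """Pearson correlation between predicted and observed quality scores."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.corrcoef(predicted, observed)[0, 1]

predicted = [0.72, 0.55, 0.90, 0.40, 0.63]   # quality estimates for five models
observed  = [0.68, 0.60, 0.85, 0.35, 0.70]   # observed correctness of the same models
print(f"Pearson r = {pearson_correlation(predicted, observed):.3f}")
```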

  6. European Cohesion Policy: A Proposed Evaluation Model

    Directory of Open Access Journals (Sweden)

    Alina Bouroşu (Costăchescu)

    2012-06-01

    Full Text Available The current approach of European Cohesion Policy (ECP) is intended to be a bridge between different fields of study, emphasizing the intersection between "the public policy cycle, theories of new institutionalism and the new public management". ECP can be viewed as a focal point between putting into practice the principles of the new governance theory, theories of economic convergence and divergence and the governance of common goods. After a short introduction defining the concepts used, the author discusses the image of ECP created by applying three different theories, focusing on the structural funds implementation system (SFIS), and directs the discussion to the evaluation part of this policy by proposing a model of performance evaluation of the system, in order to outline key principles for creating effective management mechanisms of ECP.

  7. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an on-going development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.
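
    The "common performance measures" mentioned above are not named in the abstract; as an illustrative sketch (not the ARAC system's actual code), two measures widely used for dispersion-model evaluation, fractional bias and the factor-of-two fraction, can be computed directly from paired predictions and tracer measurements:

```python
# Illustrative performance measures for dispersion-model evaluation.
# The concentration values are hypothetical.
import numpy as np

def fractional_bias(observed, predicted):
    """Fractional bias: 0 is unbiased, positive means underprediction."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 2.0 * (observed.mean() - predicted.mean()) / (observed.mean() + predicted.mean())

def fac2(observed, predicted):
    """Fraction of predictions within a factor of two of the observations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = [1.2, 0.8, 3.5, 0.1, 2.0]     # measured tracer concentrations (arbitrary units)
pred = [1.0, 1.1, 2.9, 0.3, 1.6]    # modeled concentrations at the same samplers
print(f"FB = {fractional_bias(obs, pred):.2f}, FAC2 = {fac2(obs, pred):.2f}")
```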

  8. Advances in Application of Models in Soil Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    SI Zhi-guo; WANG Ji-jie; YU Yuan-chun; LIANG Guan-feng; CHEN Chang-ren; SHU Hong-lan

    2012-01-01

    Soil quality is a comprehensive reflection of soil properties. Since the soil quality concept was put forward in the 1970s, the quality of different types of soils in different regions has been evaluated through a variety of evaluation methods, but universal soil quality evaluation models and methods are still lacking. In this paper, the applications and prospects of the grey relevancy comprehensive evaluation model, attribute hierarchical model, fuzzy comprehensive evaluation model, matter-element model, RAGA-based PPC/PPE model and GIS model in soil quality evaluation are reviewed.

  9. Price/Earnings Ratio Model through Dividend Yield and Required Yield Above Expected Inflation

    Directory of Open Access Journals (Sweden)

    Emil Mihalina

    2010-07-01

    Full Text Available Price/earnings ratio is the most popular and most widespread evaluation model used to assess relative capital asset value on financial markets. In functional terms, company earnings in the very long term can be described with high significance. Empirically, it is visible from long-term statistics that the demanded (required) yield on capital markets has certain regularity. Thus, investors first require a yield above the stable inflation rate and then a dividend yield and a capital increase caused by the growth of earnings that influence the price, with the assumption that the P/E ratio is stable. By combining the Gordon model for current dividend value, the model of market capitalization of earnings (price/earnings ratio) and bearing in mind the influence of the general price levels on company earnings, it is possible to adjust the price/earnings ratio by deriving a function of the required yield on capital markets measured by a market index through dividend yield and inflation rate above the stable inflation rate increased by profit growth. The S&P 500 index, for example, has in the last 100 years grown by exactly the inflation rate above the stable inflation rate increased by profit growth. The comparison of two series of price/earnings ratios, a modelled one and an average 7-year ratio, shows a notable correlation in the movement of the two series of variables, with a three-year deviation. Therefore, it could be hypothesized that three years of the expected inflation level, dividend yield and profit growth rate of the market index are discounted in the current market prices. The conclusion is that, at the present time, the relationship between the adjusted average price/earnings ratio and its effect on the market index on one hand and the modelled price/earnings ratio on the other can clearly show the expected dynamics and course in the following period.
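
    A minimal sketch of the Gordon-growth logic the abstract builds on: the modelled P/E is the payout ratio divided by the spread between the required yield and expected earnings growth. The decomposition of the required yield below (stable inflation plus a premium plus growth) and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Gordon model: P = D1 / (r - g)  =>  P/E = payout ratio / (r - g)
def modelled_pe(payout_ratio, required_yield, growth):
    return payout_ratio / (required_yield - growth)

stable_inflation = 0.02   # assumed long-run stable inflation
premium = 0.03            # assumed required yield above stable inflation
growth = 0.04             # assumed long-run profit growth
required_yield = stable_inflation + premium + growth

print(modelled_pe(payout_ratio=0.5, required_yield=required_yield, growth=growth))
# 0.5 / (0.09 - 0.04) = 10.0
```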

  10. A model predicting fluindione dose requirement in elderly inpatients including genotypes, body weight, and amiodarone.

    Science.gov (United States)

    Moreau, Caroline; Pautas, Eric; Duverlie, Charlotte; Berndt, Celia; Andro, Marion; Mahé, Isabelle; Emmerich, Joseph; Lacut, Karine; Le Gal, Grégoire; Peyron, Isabelle; Gouin-Thibault, Isabelle; Golmard, Jean-Louis; Loriot, Marie-Anne; Siguret, Virginie

    2014-04-01

    Indandione VKAs have been widely used for decades, especially in Eastern Europe and France. Contrary to coumarin VKAs, the relative contribution of individual factors to the indandione-VKA response is poorly known. In the present multicentre study, we sought to develop and validate a model including genetic and non-genetic factors to predict the daily fluindione dose requirement in elderly patients, in whom VKA dosing is challenging. We prospectively recorded clinical and therapeutic data in 230 Caucasian inpatients with a mean age of 85 ± 6 years, who had reached international normalized ratio stabilisation (range 2.0-3.0) on fluindione. In the derivation cohort (n=156), we analysed 13 polymorphisms in seven genes potentially involved in the pharmacological effect or vitamin-K cycle (VKORC1, CYP4F2, EPHX1) and fluindione metabolism/transport (CYP2C9, CYP2C19, CYP3A5, ABCB1). We built a regression model incorporating non-genetic and genetic data and evaluated the model performances in a separate cohort (n=74). Body weight, amiodarone intake, and VKORC1, CYP4F2 and ABCB1 genotypes were retained in the final model, accounting for 31.5% of dose variability. No influence of CYP2C9 was observed. Our final model showed good performances: in 83.3% of the validation cohort patients, the dose was accurately predicted within 5 mg, i.e. the usual step used for adjusting fluindione dosage. In conclusion, in addition to body weight and amiodarone intake, pharmacogenetic factors (VKORC1, CYP4F2, ABCB1) related to the pharmacodynamic effect and transport of fluindione significantly influenced the dose requirement in elderly patients, while CYP2C9 did not. Studies are required to know whether fluindione could be an alternative VKA in carriers of polymorphic CYP2C9 alleles, hypersensitive to coumarins.
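
    As a hypothetical sketch of the kind of dose-prediction regression described (not the study's actual model or coefficients), daily dose can be regressed on body weight, amiodarone intake and genotype indicators, and the "within 5 mg" criterion checked on the fitted values:

```python
# Illustrative ordinary-least-squares dose model on simulated covariates.
import numpy as np

rng = np.random.default_rng(0)
n = 156
weight = rng.normal(65, 12, n)              # kg
amiodarone = rng.integers(0, 2, n)          # 0/1 co-medication
vkorc1 = rng.integers(0, 3, n)              # number of variant alleles
cyp4f2 = rng.integers(0, 3, n)
abcb1 = rng.integers(0, 3, n)
dose = (20 + 0.1 * weight - 4 * amiodarone - 3 * vkorc1
        + 1.5 * cyp4f2 + 1.0 * abcb1 + rng.normal(0, 3, n))   # simulated mg/day

X = np.column_stack([np.ones(n), weight, amiodarone, vkorc1, cyp4f2, abcb1])
beta, *_ = np.linalg.lstsq(X, dose, rcond=None)
predicted = X @ beta

# Share of patients whose predicted dose falls within 5 mg of the observed dose
within_5mg = np.mean(np.abs(predicted - dose) <= 5)
print(beta.round(2), f"within 5 mg: {within_5mg:.0%}")
```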

  11. General description of KAERI LBLOCA realistic evaluation model (REM) for ECCS evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Yong; Lee, Young Jin; Chung, Bub Dong; Lee, Won Jae [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    A realistic evaluation model (REM) for LBLOCA licensing calculation is developed for application to pressurized light water reactors. The developmental aim of the KAERI-REM is to provide a systematic methodology that is simple in structure, easy to use, and built upon sound logical reasoning, for improving the code capability to realistically describe the LBLOCA phenomena and for evaluating the associated uncertainties. The method strives to be faithful to the intention of being best-estimate, that is, the method aims to evaluate the best-estimate values and the associated uncertainties while complying with the requirements in the ECCS regulations. As a demonstration, KAERI-REM was applied to quantify the safety margin for LBLOCA for Kori 3 and 4, and the results are appended to this report. (Author) 11 refs., 2 figs., 6 tabs.

  12. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused towards carrying out a pragmatic qualitative analysis of various models and approaches of requirements negotiations (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally, the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches of negotiations, a unified approach for requirements negotiations and stakeholder collaborations is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI, opportunity analysis, etc.
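
    As a minimal sketch of the first-tier weighted scoring step described above, each candidate negotiation model is scored on weighted criteria and the highest total is carried forward to the SWOT stage. The criteria, weights and scores below are hypothetical, not the paper's data.

```python
# Illustrative weighted scoring model for shortlisting a negotiation approach.
criteria_weights = {"stakeholder_coverage": 0.4, "ease_of_use": 0.3, "tool_support": 0.3}

candidate_scores = {
    "WinWin":     {"stakeholder_coverage": 8, "ease_of_use": 6, "tool_support": 7},
    "EasyWinWin": {"stakeholder_coverage": 9, "ease_of_use": 8, "tool_support": 6},
    "MPARN":      {"stakeholder_coverage": 7, "ease_of_use": 5, "tool_support": 8},
}

def weighted_total(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

totals = {name: weighted_total(s, criteria_weights) for name, s in candidate_scores.items()}
best = max(totals, key=totals.get)
print(totals, "->", best)
```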

  13. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of those two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
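
    The two criteria can be computed directly from risk scores and observed outcomes. The sketch below follows the definitions given in the abstract; the data and variable names are invented for illustration.

```python
# PCF(q): proportion of cases among the fraction q of people at highest risk.
# PNF(p): smallest population fraction (from highest risk) covering a fraction p of cases.
import numpy as np

def pcf(risk, case, q):
    risk, case = np.asarray(risk), np.asarray(case)
    order = np.argsort(-risk)                   # highest risk first
    n_top = int(np.ceil(q * len(risk)))
    return case[order][:n_top].sum() / case.sum()

def pnf(risk, case, p):
    risk, case = np.asarray(risk), np.asarray(case)
    order = np.argsort(-risk)
    cum_cases = np.cumsum(case[order]) / case.sum()
    n_needed = np.searchsorted(cum_cases, p) + 1
    return n_needed / len(risk)

rng = np.random.default_rng(1)
risk = rng.uniform(0, 1, 1000)                  # predicted risks
case = rng.binomial(1, risk * 0.2)              # higher risk -> more likely to be a case
print(pcf(risk, case, q=0.2), pnf(risk, case, p=0.8))
```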

  14. CTBT Integrated Verification System Evaluation Model

    Energy Technology Data Exchange (ETDEWEB)

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
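
    A much-simplified sketch of the integration idea: under the assumption that the technologies detect independently (which IVSEM itself may not assume), per-technology detection probabilities combine into a system-level probability of detection. The numbers below are hypothetical.

```python
# Combine per-technology detection probabilities into a system-level value.
def system_detection_probability(p_detect_by_technology):
    """P(at least one technology detects), assuming independent technologies."""
    p_miss = 1.0
    for p in p_detect_by_technology:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

p_tech = {"seismic": 0.85, "infrasound": 0.40, "radionuclide": 0.30, "hydroacoustic": 0.10}
print(f"Integrated P(detect) = {system_detection_probability(p_tech.values()):.3f}")
```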

  15. A Framework for Evaluating Computer Architectures to Support Systems with Security Requirements, with Applications.

    Science.gov (United States)

    1987-11-05

    develops a set of criteria for evaluating computer architectures that are to support systems with security requirements. Central to these criteria is the... Appendix B: DEC VAX-11/780 overview. The VAX-11/780 is a 32-bit computer with a virtual memory space of up to 4 Gbytes [B1].

  16. Designing and evaluating representations to model pedagogy

    Directory of Open Access Journals (Sweden)

    Elizabeth Masterman

    2013-08-01

    Full Text Available This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit that attend to: (1) the underlying ontology of the domain, (2) the purpose of the task that the representation is intended to facilitate, (3) how best to support the cognitive processes of the users of the representations, (4) users’ differing needs and preferences, and (5) the tool and environment in which the representations are constructed and manipulated. Through showing how epistemic efficacy can be applied to the design and evaluation of representations, the article presents the Learning Designer, a constructionist microworld in which teachers can both assemble their learning designs and model their pedagogy in terms of students’ potential learning experience. Although the activity of modelling may add to the cognitive task of design, the article suggests that the insights thereby gained can additionally help a lecturer who wishes to reuse a particular learning design to make informed decisions about its value to their practice.

  17. Requirements and Problems in Parallel Model Development at DWD

    Directory of Open Access Journals (Sweden)

    Ulrich Schättler

    2000-01-01

    Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.

  18. Comprehensive Evaluation Cloud Model for Ship Navigation Adaptability

    OpenAIRE

    Man Zhu; Y.Q. Wen; Zhou, C. H.; C.S. Xiao

    2014-01-01

    In this paper, using cloud model and Delphi, we build a comprehensive evaluation cloud model to solve the problems of qualitative description and quantitative transformation in ship navigation adaptability comprehensive evaluation. In the model, the normal cloud generator is used to find optimal cloud models of reviews and evaluation factors. The weight of each evaluation factor is determined by cloud model and Delphi. The floating cloud algorithm is applied to aggregate the bottom level’s ev...

  19. Evaluation Methodologies for Information Management Systems; Building Digital Tobacco Industry Document Libraries at the University of California, San Francisco Library/Center for Knowledge Management; Experiments with the IFLA Functional Requirements for Bibliographic Records (FRBR); Coming to Term: Designing the Texas Email Repository Model.

    Science.gov (United States)

    Morse, Emile L.; Schmidt, Heidi; Butter, Karen; Rider, Cynthia; Hickey, Thomas B.; O'Neill, Edward T.; Toves, Jenny; Green, Marlan; Soy, Sue; Gunn, Stan; Galloway, Patricia

    2002-01-01

    Includes four articles that discuss evaluation methods for information management systems under the Defense Advanced Research Projects Agency; building digital libraries at the University of California San Francisco's Tobacco Control Archives; IFLA's Functional Requirements for Bibliographic Records; and designing the Texas email repository model…

  20. Modelling and Simulation for Requirements Engineering and Options Analysis

    Science.gov (United States)

    2010-05-01

    Defence, 2010. © Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2010. DRDC Toronto CR 2010...externalize their mental model of the assumed solution for critique and correction by others, and whether or not this would assist in ensuring that

  1. Predicting Flu Season Requirements: An Undergraduate Modeling Project

    Science.gov (United States)

    Kramlich, Gary R., II; Braunstein Fierson, Janet L.; Wright, J. Adam

    2010-01-01

    This project was designed to be used in a freshman calculus class whose students had already been introduced to logistic functions and basic data modeling techniques. It need not be limited to such an audience, however; it has also been implemented in a topics in mathematics class for college upperclassmen. Originally intended to be presented in…
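
    As a minimal sketch of the kind of logistic-function fit such a project might ask students to perform (the cumulative case counts and parameter values below are invented), a logistic curve can be fitted to weekly data and then projected forward to estimate seasonal requirements:

```python
# Fit a logistic curve to hypothetical cumulative flu-case data and project ahead.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

weeks = np.arange(1, 13)
cases = np.array([12, 25, 52, 110, 230, 420, 700, 950, 1120, 1210, 1260, 1280])

params, _ = curve_fit(logistic, weeks, cases, p0=[1300, 0.8, 6])
K, r, t0 = params
print(f"K={K:.0f}, r={r:.2f}, t0={t0:.1f}")
print(f"Week 14 projection: {logistic(14, K, r, t0):.0f}")   # e.g., doses needed
```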

  2. Thermal Modeling and Feedback Requirements for LIFE Neutronic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, J E

    2009-07-15

    An initial study is performed to determine how temperature considerations affect LIFE neutronic simulations. Among other figures of merit, the isotopic mass accumulation, thermal power, tritium breeding, and criticality are analyzed. Possible fidelities of thermal modeling and degrees of coupling are explored. Lessons learned from switching and modifying nuclear datasets are communicated.

  3. [Analysis of the medical and psychosocial evaluations required by the new public curator law].

    Science.gov (United States)

    Nélisse, C; Uribé, I

    1992-01-01

    The new Public Curator Act systematically requires medical and psychosocial evaluations. In confronting the letter and the spirit of the law to its regulations and operating procedures (and inversely), this article outlines the various tasks that will fall under the responsibility of health and social service professionals. Following a brief presentation of the law, the authors describe how the role of these professionals is limited to evaluations for the purpose of conducting expert appraisements, a matter which raises a great deal of concern. In addition, the two key notions of "inaptitude" and "need" are discussed in their legal sense, along with their consequences from the medical evaluation and psychosocial standpoints. The latter in particular will be analyzed regarding implementation difficulties. The authors conclude with a general commentary that is likely to give meaning to that simple and sometimes routine gesture consisting of "completing a form".

  4. The Information Service Evaluation (ISE Model

    Directory of Open Access Journals (Sweden)

    Laura Schumann

    2014-06-01

    Full Text Available Information services are an inherent part of our everyday life. Especially since ubiquitous cities are being developed all over the world, their number is increasing even faster. They aim at facilitating the production of information and the access to the needed information and are supposed to make life easier. Until today many different evaluation models (among others, TAM, TAM 2, TAM 3, UTAUT and MATH) have been developed to measure the quality and acceptance of these services. Still, they only consider subareas of the whole concept that represents an information service. As a holistic and comprehensive approach, the ISE Model studies five dimensions that influence adoption, use, impact and diffusion of the information service: information service quality, information user, information acceptance, information environment and time. All these aspects have a great impact on the final grading and on the success (or failure) of the service. Our model combines approaches which study subjective impressions of users (e.g., the perceived service quality) and user-independent, more objective approaches (e.g., the degree of gamification of a system). Furthermore, we adopt results of network economics, especially the "Success breeds success" principle.

  5. Mathematical Formulation Requirements and Specifications for the Process Models

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-11-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments

  6. "Open Access" Requires Clarification: Medical Journal Publication Models Evolve.

    Science.gov (United States)

    Lubowitz, James H; Brand, Jefferson C; Rossi, Michael J; Provencher, Matthew T

    2017-03-01

    While Arthroscopy journal is a traditional subscription model journal, our companion journal Arthroscopy Techniques is "open access." We used to believe open access simply meant online and free of charge. However, while open-access journals are free to readers, in 2017 authors must make a greater sacrifice in the form of an article-processing charge (APC). Again, while this does not apply to Arthroscopy, the APC will apply to Arthroscopy Techniques.

  7. Bioprocesses: Modelling needs for process evaluation and sustainability assessment

    DEFF Research Database (Denmark)

    Jiménez-Gonzaléz, Concepcion; Woodley, John

    2010-01-01

    The next generation of process engineers will face a new set of challenges, with the need to devise new bioprocesses, with high selectivity for pharmaceutical manufacture, and for lower value chemicals manufacture based on renewable feedstocks. In this paper the current and predicted future roles of process system engineering and life cycle inventory and assessment in the design, development and improvement of sustainable bioprocesses are explored. The existing process systems engineering software tools will prove essential to assist this work. However, the existing tools will also require further development such that they can also be used to evaluate processes against sustainability metrics, as well as economics as an integral part of assessments. Finally, property models will also be required based on compounds not currently present in existing databases. It is clear that many new opportunities...

  8. Moisture evaluation by dynamic thermography data modeling

    Science.gov (United States)

    Bison, Paolo G.; Grinzato, Ermanno G.; Marinetti, Sergio

    1994-03-01

    This paper discusses the design of a nondestructive method for in situ detection of moistened areas in buildings and the evaluation of the water content in porous materials by thermographic analysis. The use of a heat transfer model to interpret the data improves the measurement accuracy by taking into account the actual boundary conditions. The relative increase in computation time is balanced by the additional advantage of optimizing the testing procedure of different objects by simulating the heat transfer. Experimental results on bricks used in building restoration activities are discussed. The water content measured in different hygrometric conditions is compared with known values. A correction on the absorptivity coefficient dependent on water content is introduced.

  9. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    Enterprise resource planning (ERP) systems are ubiquitous in commercial enterprises of all sizes and invariably need to account for the notion of value-added tax (VAT). The legal and technical difficulties in handling VAT are exacerbated by spanning a broad and chaotic spectrum of intricate country-specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform...

  10. ZATPAC: a model consortium evaluates teen programs.

    Science.gov (United States)

    Owen, Kathryn; Murphy, Dana; Parsons, Chris

    2009-09-01

    How do we advance the environmental literacy of young people, support the next generation of environmental stewards and increase the diversity of the leadership of zoos and aquariums? We believe it is through ongoing evaluation of zoo and aquarium teen programming and have founded a consortium to pursue those goals. The Zoo and Aquarium Teen Program Assessment Consortium (ZATPAC) is an initiative by six of the nation's leading zoos and aquariums to strengthen institutional evaluation capacity, model a collaborative approach toward assessing the impact of youth programs, and bring additional rigor to evaluation efforts within the field of informal science education. Since its beginning in 2004, ZATPAC has researched, developed, pilot-tested and implemented a pre-post program survey instrument designed to assess teens' knowledge of environmental issues, skills and abilities to take conservation actions, self-efficacy in environmental actions, and engagement in environmentally responsible behaviors. Findings from this survey indicate that teens who join zoo/aquarium programs are already actively engaged in many conservation behaviors. After participating in the programs, teens showed a statistically significant increase in their reported knowledge of conservation and environmental issues and their abilities to research, explain, and find resources to take action on conservation issues of personal concern. Teens also showed statistically significant increases pre-program to post-program for various conservation behaviors, including "I talk with my family and/or friends about things they can do to help the animals or the environment," "I save water...," "I save energy...," "When I am shopping I look for recycled products," and "I help with projects that restore wildlife habitat."

  11. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple regression models of cross-section type which estimate the parameters associated to each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing errors reduction. Results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.

  12. RTMOD: Real-Time MODel evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)

    2000-01-01

    The 1998-1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project that involved about 50 models run in several institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real-time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  13. Exploring a capability-demand interaction model for inclusive design evaluation

    OpenAIRE

    Persad, Umesh

    2012-01-01

    Designers are required to evaluate their designs against the needs and capabilities of their target user groups in order to achieve successful, inclusive products. This dissertation presents exploratory research into the specific problem of supporting analytical design evaluation for Inclusive Design. The analytical evaluation process involves evaluating products with user data rather than testing with actual users. The work focuses on the exploration of a capability-demand model of product i...

  14. Integrating fire with hydrological projections: model evaluation to identify uncertainties and tradeoffs in model complexity

    Science.gov (United States)

    Kennedy, M.; McKenzie, D.

    2013-12-01

    It is imperative for resource managers to understand how a changing climate might modify future watershed and hydrological processes, and such an understanding is incomplete if disturbances such as fire are not integrated with hydrological projections. Can a robust fire spread model be developed that approximates patterns of fire spread in response to varying topography, wind patterns, and fuel loads and moistures, without requiring intensive calibration to each new study area or time frame? We assessed the performance of a stochastic model of fire spread (WMFire), integrated with the Regional Hydro-Ecological Simulation System (RHESSys), for projecting the effects of climatic change on mountain watersheds. We first use Monte Carlo inference to determine that the fire spread model is able to replicate the spatial pattern of fire spread for a contemporary wildfire in Washington State (the Tripod fire), measured by the lacunarity and fractal dimension of the fire. We then integrate a version of WMFire able to replicate the contemporary wildfire with RHESSys and simulate a New Mexico watershed over the calibration period of RHESSys (1941-1997). In comparing the fire spread model to a single contemporary wildfire we found issues in parameter identifiability for several of the nine parameters, due to model input uncertainty and insensitivity of the mathematical function to certain ranges of the parameter values. Model input uncertainty is caused by the inherent difficulty in reconstructing fuel loads and fuel moistures for a fire event after the fire has occurred, as well as by issues in translating variables relevant to hydrological processes produced by the hydrological model to those known to affect fire spread and fire severity. The first stage in the model evaluation aided the improvement of the model in both of these regards. In transporting the model to a new landscape in order to evaluate fire regimes in addition to patterns of fire spread, we find reasonable
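
    One of the spatial-pattern metrics mentioned, the fractal dimension, is commonly estimated by box counting on a binary map of burned cells. The sketch below is a generic illustration on an invented grid; WMFire's actual metric calculations may differ.

```python
# Box-counting estimate of the fractal dimension of a burned-area pattern.
import numpy as np

def box_counting_dimension(grid, box_sizes=(1, 2, 4, 8, 16)):
    grid = np.asarray(grid, dtype=bool)
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, grid.shape[0], s):
            for j in range(0, grid.shape[1], s):
                if grid[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Slope of log(count) vs. log(1/box size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(7)
burned = rng.random((64, 64)) < 0.3          # toy binary map (True = burned)
print(f"Estimated fractal dimension: {box_counting_dimension(burned):.2f}")
```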

  15. Game Theory: Trust Model for Common Criteria Certifications & Evaluations

    Directory of Open Access Journals (Sweden)

    Mohd Anuar Mat Isa

    2015-05-01

    Full Text Available International standards and certification play a major role in product distribution and marketing activities. To be well accepted in the global market, all IT products and services require international evaluation and certification such as Common Criteria (CC) certification. This paper discusses some of the security, trust and privacy issues in Common Criteria that may arise during evaluation and certification of IT products and services. Our main intention is to help interested stakeholders in choosing the finest authorizing member of CC certification for IT products and services using our new trust model. The proposed trust model takes into account the dynamically changing international relationships among nations, which produce an index value used during selection of the finest CC authorizing member. The trust model uses game theory to identify the finest CC authorizing member. We hope to contribute to this area of research by lessening the "cost to market" of IT products and related services. It is anticipated that this would have a positive impact on global business transactions by providing better and wider acceptability through the use of our model in the selection of the finest CC authorizing member, CC consumer, and vendor (manufacturer).

  16. Rainwater harvesting: model-based design evaluation.

    Science.gov (United States)

    Ward, S; Memon, F A; Butler, D

    2010-01-01

    The rate of uptake of rainwater harvesting (RWH) in the UK has been slow to date, but is expected to gain momentum in the near future. The designs of two different new-build rainwater harvesting systems, based on simple methods, are evaluated using three different design methods, including a continuous simulation modelling approach. The RWH systems are shown to fulfill 36% and 46% of WC demand. Financial analyses reveal that RWH systems within large commercial buildings may be more financially viable than smaller domestic systems. It is identified that design methods based on simple approaches generate tank sizes substantially larger than the continuous simulation. Comparison of the actual tank sizes and those calculated using continuous simulation established that the tanks installed are oversized for their associated demand level and catchment size. Oversizing tanks can lead to excessive system capital costs, which currently hinders the uptake of systems. Furthermore, it is demonstrated that the catchment area size is often overlooked when designing UK-based RWH systems. With respect to these findings, a recommendation for a transition from the use of simple tools to continuous simulation models is made.
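
    A minimal continuous-simulation sketch of the kind recommended above, using a yield-after-spillage tank balance; the rainfall series, areas and demands are invented and the paper's actual simulation may be configured differently.

```python
# Daily tank water balance: spill excess inflow first, then supply demand.
def simulate_rwh(rain_mm, roof_area_m2, runoff_coeff, tank_m3, daily_demand_m3):
    storage, supplied, demand_total = 0.0, 0.0, 0.0
    for r in rain_mm:
        inflow = r / 1000.0 * roof_area_m2 * runoff_coeff     # m3 captured today
        storage = min(storage + inflow, tank_m3)              # spillage
        yield_today = min(daily_demand_m3, storage)           # yield after spillage
        storage -= yield_today
        supplied += yield_today
        demand_total += daily_demand_m3
    return supplied / demand_total                            # fraction of demand met

rain = [0, 5, 0, 12, 3, 0, 0, 20, 1, 0] * 36                  # ~1 year of daily rain (mm)
print(f"WC demand met: {simulate_rwh(rain, 100, 0.85, 2.0, 0.3):.0%}")
```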

  17. Building a transnational biosurveillance network using semantic web technologies: requirements, design, and preliminary evaluation.

    Science.gov (United States)

    Teodoro, Douglas; Pasche, Emilie; Gobeill, Julien; Emonet, Stéphane; Ruch, Patrick; Lovis, Christian

    2012-05-29

    Antimicrobial resistance has reached globally alarming levels and is becoming a major public health threat. Lack of efficacious antimicrobial resistance surveillance systems was identified as one of the causes of increasing resistance, due to the lag time between new resistances and alerts to care providers. Several initiatives to track drug resistance evolution have been developed. However, no effective real-time and source-independent antimicrobial resistance monitoring system is available publicly. To design and implement an architecture that can provide real-time and source-independent antimicrobial resistance monitoring to support transnational resistance surveillance. In particular, we investigated the use of a Semantic Web-based model to foster integration and interoperability of interinstitutional and cross-border microbiology laboratory databases. Following the agile software development methodology, we derived the main requirements needed for effective antimicrobial resistance monitoring, from which we proposed a decentralized monitoring architecture based on the Semantic Web stack. The architecture uses an ontology-driven approach to promote the integration of a network of sentinel hospitals or laboratories. Local databases are wrapped into semantic data repositories that automatically expose local computing-formalized laboratory information in the Web. A central source mediator, based on local reasoning, coordinates the access to the semantic end points. On the user side, a user-friendly Web interface provides access and graphical visualization to the integrated views. We designed and implemented the online Antimicrobial Resistance Trend Monitoring System (ARTEMIS) in a pilot network of seven European health care institutions sharing 70+ million triples of information about drug resistance and consumption. Evaluation of the computing performance of the mediator demonstrated that, on average, query response time was a few seconds (mean 4.3, SD 0.1 × 10
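
    To make the ontology-driven integration idea concrete, the hedged sketch below shows a mediator-style query against one sentinel SPARQL endpoint. The endpoint URL, prefixes and ontology terms are hypothetical placeholders, not ARTEMIS's actual schema.

```python
# Query a (hypothetical) sentinel endpoint for resistant-isolate counts per antibiotic.
from SPARQLWrapper import SPARQLWrapper, JSON

def resistant_isolate_counts(endpoint_url, organism):
    sparql = SPARQLWrapper(endpoint_url)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX amr: <http://example.org/amr#>
        SELECT ?antibiotic (COUNT(?isolate) AS ?resistant)
        WHERE {{
            ?isolate amr:organism "{organism}" ;
                     amr:resistantTo ?antibiotic .
        }}
        GROUP BY ?antibiotic
    """)
    results = sparql.query().convert()
    return {b["antibiotic"]["value"]: int(b["resistant"]["value"])
            for b in results["results"]["bindings"]}

# Example call against a fictitious endpoint:
# counts = resistant_isolate_counts("https://sentinel.example.org/sparql", "Escherichia coli")
```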

  18. Quantitative evaluation of the requirements for the promotion as associate professor at German medical faculties.

    Science.gov (United States)

    Sorg, Heiko; Knobloch, Karsten

    2012-01-01

    First quantitative evaluation of the requirements for promotion as associate professor (AP) at German medical faculties. Analysis of the AP-regulations of German medical faculties according to a validated scoring system, which was adapted to this study. The overall scoring for the AP-requirements at 35 German medical faculties was 13.5±0.6 of 20 possible scoring points (95% confidence interval 12.2-14.7). More than 88% of the AP-regulations demand sufficient performance in teaching and research with adequate scientific publication. Furthermore, 83% of the faculties expect an expert review of the candidate's performance. Conference presentations required as an assistant professor, as well as a reduction of the minimum time as an assistant professor, play only minor roles. The requirements for assistant professors to be nominated as associate professors at German medical faculties are high, with only a small range. In detail, however, there is still large heterogeneity, which hinders equal opportunities and career possibilities. These data might be used for the ongoing objective discussion.

  19. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    Directory of Open Access Journals (Sweden)

    Zdenko Lončarić

    2009-12-01

    Full Text Available Evaluation of manure, compost and growing media quality should include enough properties to enable optimal use from productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present the model example by comparison of different manures, and to give an example of using a plant growth experiment for calculating the impact of pH and EC of growing media on lettuce plant growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses the inputs for calculation of derived quality indicators. The third step integrates the values into three new indexes: fertilizer, growing media, and environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. The possible range of index values is 0-10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures, and the highest fertilizer index was determined for composted pig manure (9.6) whereas the lowest was for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC resulted in very significant impacts on height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce

  20. Critical evaluation of factors required to terminate the postclosure monitoring period at solid waste landfills

    DEFF Research Database (Denmark)

    Barlaz, M.A.; Rooker, A.P.; Kjeldsen, Peter

    2002-01-01

    Regulations governing the disposal of solid waste in landfills specify that they must be monitored for 30 years after closure unless this period is extended by the governing regulatory authority. Given the wide range of conditions under which refuse is buried, technical criteria, rather than... water or groundwater. The acceptability of gaseous releases should be evaluated against criteria for odors, the potential for subsurface migration, and greenhouse gas and ozone precursor emissions. The approach presented here must be tested on a site-specific basis to identify additional data requirements and regulatory activity that might be required to prepare regulators for the large number of requests to terminate postclosure monitoring expected over the next 20 years. An approach in which the frequency and extent of postclosure monitoring is reduced as warranted by site-specific data...

  1. Review of Hydrologic Models for Evaluating Use of Remote Sensing Capabilities

    Science.gov (United States)

    Peck, E. L.; Mcquivey, R. S.; Keefer, T.; Johnson, E. R.; Erekson, J. L.

    1982-01-01

    Hydrologic models most commonly used by federal agencies for hydrologic forecasting are reviewed. Six catchment models and one snow accumulation and ablation model are reviewed. Information on the structure, parameters, states, and required inputs is presented in schematic diagrams and in tables. The primary and secondary roles of parameters and state variables with respect to their function in the models are identified. The information will be used to evaluate the usefulness of remote sensing capabilities in the operational use of hydrologic models.

  2. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among the participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed for achieving that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase of privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.

  3. Studies on Models,Patterns and Require-ments of Digestible Amino Acids for Layers by Nitrogen Metabolism

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Nitrogen (N) metabolism experiments were carried out to estimate the amino acid requirements of 43-48-week-old layers for maintenance and for protein accretion, and to establish models for estimating digestible amino acid requirements. The regression relationship of nitrogen retention vs. amino acid intake was estimated for each amino acid by feeding, at N intake rates of 0.91, 0.52, 0.15 and 0.007 g.kg-1 body weight (W0.75) per day, semi-synthetic diets made specifically deficient in one amino acid. From the regression coefficients, it was calculated that, for the accretion of 1 g protein, the dietary digestible amino acid requirements were (mg) Thr 63.1, Val 100.4, Met 39.9, Ile 88.6, Leu 114.3, Phe 63.2, Lys 87.0, His 20.5, Arg 87.9, Trp 21.4, Met+Cys 77.6, and Phe+Tyr 114.3. Daily amino acid requirements for N equilibrium were estimated to be (mg.kg-1 W0.75 per day) Thr 50.6, Val 74.7, Met 30.3, Ile 66.7, Leu 81.4, Phe 44.8, Lys 60.5, His 14.7, Arg 73.9, Trp 17.3, Met+Cys 58.6, and Phe+Tyr 83.9. The dietary digestible amino acid patterns for protein accretion and N equilibrium were also proposed. Models for estimating digestible amino acid requirements for the different levels of production were developed.
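
    An illustrative sketch of the regression logic: fit N retention against amino acid intake, read the intake at zero retention as the N-equilibrium (maintenance) requirement, and use the slope to express the cost of protein accretion (protein assumed to be 6.25 x N). The data points below are invented and do not reproduce the study's values.

```python
# Linear regression of N retention on amino acid intake (hypothetical data).
import numpy as np

intake = np.array([7, 17, 52, 95])                    # mg amino acid . kg^-0.75 . d^-1
n_retention = np.array([-0.45, -0.35, 0.02, 0.42])    # g N . kg^-0.75 . d^-1

slope, intercept = np.polyfit(intake, n_retention, 1)

maintenance_intake = -intercept / slope               # intake giving N equilibrium
mg_per_g_protein = 1.0 / (slope * 6.25)               # mg amino acid per g protein accreted

print(f"requirement for N equilibrium: {maintenance_intake:.1f} mg/kg^0.75/d")
print(f"requirement per g protein accretion: {mg_per_g_protein:.1f} mg")
```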

  4. Critical evaluation of paradigms for modelling integrated supply chains

    NARCIS (Netherlands)

    Van Dam, K.H.; Adhitya, A.; Srinivasan, R.; Lukszo, Z.

    2009-01-01

    Contemporary problems in process systems engineering often require model-based decision support tools. Among the various modelling paradigms, equation-based models and agent-based models are widely used to develop dynamic models of systems. Which is the most appropriate modelling paradigm for a suppl

  5. Non-parametric probabilistic forecasts of wind power: required properties and evaluation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Nielsen, Henrik Aalborg; Møller, Jan Kloppenborg;

    2007-01-01

    Predictions of wind power production for horizons up to 48-72 hours ahead comprise a highly valuable input to the methods for the daily management or trading of wind generation. Today, users of wind power predictions are not only provided with point predictions, which are estimates of the conditional expectation of future generation for each look-ahead time, but also with uncertainty estimates given by probabilistic forecasts. In order to avoid assumptions on the shape of predictive distributions, these probabilistic predictions are produced from nonparametric methods, and then take the form of a single or a set of quantile forecasts. The required and desirable properties of such probabilistic forecasts are defined and a framework for their evaluation is proposed. This framework is applied for evaluating the quality of two statistical methods producing full predictive distributions from point...
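
    One standard score for evaluating quantile forecasts is the pinball (quantile) loss; the sketch below is a generic illustration of that score on invented, normalized wind power data, not necessarily the exact framework proposed in the paper.

```python
# Average pinball loss of tau-quantile forecasts (lower is better).
import numpy as np

def pinball_loss(observations, quantile_forecasts, tau):
    y = np.asarray(observations, dtype=float)
    q = np.asarray(quantile_forecasts, dtype=float)
    diff = y - q
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

obs = [0.32, 0.45, 0.10, 0.58, 0.71]   # observed normalized wind power
q90 = [0.50, 0.60, 0.25, 0.70, 0.80]   # 90%-quantile forecasts for the same hours
print(f"Pinball loss (tau=0.9): {pinball_loss(obs, q90, 0.9):.4f}")
```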

  6. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  7. Establishment of CSR Matter-Element Evaluation Model in Perspective of Low Carbon Economy in China

    Directory of Open Access Journals (Sweden)

    Li Furong

    2013-06-01

    Full Text Available With the development of the CSR concept, more and more enterprises have begun to put CSR into practice. Especially as the low-carbon economy develops quickly, CSR practice is maturing. In order to make a scientific evaluation of CSR and promote the comprehensive development of CSR, this study develops a new CSR evaluation model that meets the requirements of low-carbon economy development. This CSR evaluation model combines the G1 weighting method with the matter-element model and is finally verified by an example. The results of this study provide good guidance for the development of CSR evaluation and practice.

  8. 33 CFR 154.1225 - Specific response plan development and evaluation criteria and other requirements for fixed...

    Science.gov (United States)

    2010-07-01

    ... development and evaluation criteria and other requirements for fixed facilities that handle, store, or... MATERIAL IN BULK Response Plans for Animal Fats and Vegetable Oils Facilities § 154.1225 Specific response plan development and evaluation criteria and other requirements for fixed facilities that handle,...

  9. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    Science.gov (United States)

    2015-01-05

    Keywords: task interruption; sequence errors; cognitive modeling; goodness-of-fit testing. We examined effects of adding brief (1 second) lags ... should decay faster than pred2, such that after a lag there is increased probability of an intrusion by pred2 and thus an error at offset 1. The second ... For example, language production requires that words be produced in the correct order, and research in this domain has examined sequence errors at the ...

  10. Model-based Type B uncertainty evaluations of measurement towards more objective evaluation strategies

    NARCIS (Netherlands)

    M. Boumans

    2013-01-01

    This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation which are appropriate for Type A evaluation.

  11. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is considerable evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. With the development of new technologies, credit and financial exchange facilities on internet websites were built to facilitate e-commerce. The study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and collected 130 questionnaires for final evaluation. Cronbach's alpha is used to measure reliability and to evaluate the validity of the measurement instruments (questionnaires), and confirmatory factor analysis is employed to assure construct validity. In addition, the research questions are analyzed with the path analysis method to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.

  12. Forecasting Model of Coal Requirement Quantity Based on Grey System Theory

    Institute of Scientific and Technical Information of China (English)

    孙继湖

    2001-01-01

    The generally used methods of forecasting coal requirement quantity include the analogy method, the extrapolation method and the cause-effect analysis method; however, the precision of forecasts obtained with these methods is low. This paper uses grey system theory and sets up the grey forecasting model GM(1,3) for coal requirement quantity. The forecast of the Chinese coal requirement quantity coincides with the actual values, which shows that the model is reliable. Finally, the model is used to forecast the Chinese coal requirement quantity over the next ten years.
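    The record above applies a multivariate grey model, GM(1,3), to coal demand. As a rough illustration of how grey forecasting works, the sketch below implements the simpler univariate GM(1,1) in Python; the coefficient names (`a`, `b`), the toy demand series and the least-squares fitting step are assumptions for illustration, not the paper's actual data or code.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Fit a univariate GM(1,1) grey model to series x0 and forecast n_ahead steps.

    Steps: accumulate the series (1-AGO), build background values z1,
    estimate the development coefficient a and grey input b by least squares,
    then restore forecasts by inverse accumulation.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # grey differential equation parameters

    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated predictions
    x0_hat = np.diff(x1_hat, prepend=0.0)              # inverse AGO restores the series
    return x0_hat

# Hypothetical annual coal requirement series (arbitrary units), for illustration only.
demand = [980, 1010, 1055, 1090, 1142, 1180]
print(gm11_forecast(demand, n_ahead=3).round(1))
```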

  13. Towards systematic evaluation of crop model outputs for global land-use models

    Science.gov (United States)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, however at the cost of large environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land-use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values and variability across space are weakly constrained: increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data is generated and how it articulates with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of crop model, input data and data processing steps to these performances. We will also compare obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include

  14. Digital Avionics Information System (DAIS): Training Requirements Analysis Model Users Guide. Final Report.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    This user's guide describes the functions, logical operations and subroutines, input data requirements, and available outputs of the Training Requirements Analysis Model (TRAMOD), a computerized analytical life cycle cost modeling system for use in the early stages of system design. Operable in a stand-alone mode, TRAMOD can be used for the…

  15. Evaluation of weather-based rice yield models in India

    Science.gov (United States)

    Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.

    2013-01-01

    The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web-based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulated values were within acceptable limits, further rice experiments in monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield by the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.

  16. Evaluating Performance of the DGM(2,1) Model and Its Modified Models

    Directory of Open Access Journals (Sweden)

    Ying-Fang Huang

    2016-03-01

    Full Text Available The direct grey model (DGM(2,1)) is considered for fluctuation characteristics of the sampling data in grey system theory. However, its applications are quite uncommon in the past literature, and improvement of the precision of the DGM(2,1) is presented in only a few previous studies. Moreover, the forecasting performance of the DGM(2,1) model and its applications was not evaluated in previous studies. As a result, this study aims to evaluate the forecasting performance of the DGM(2,1) and its three modified models, including the Markov direct grey model MDGM(2,1), the Fourier direct grey model FDGM(2,1), and the Fourier Markov direct grey model FMDGM(2,1), in order to determine the applicability of the DGM(2,1) model in practical applications and academic research. The results demonstrate that the DGM(2,1) model has lower precision than its modified models, while the forecasting precision of the FDGM(2,1) is better than that of the MDGM(2,1). Additionally, the FMDGM(2,1) model presents the best performance among all of the modified models of DGM(2,1); it can effectively overcome the fluctuation of the data sample and minimize the prediction error of the DGM(2,1) model. The findings indicate that the FMDGM(2,1) model not only has advantages with regard to the sample size requirement, but can also be flexibly applied to large-fluctuation and random sequences with a high quality of estimation.

  17. Evaluation of legal liability for technological risks in view of requirements for peaceful coexistence and progress.

    Science.gov (United States)

    Zandvoort, Henk

    2011-06-01

    Legal liability for risk-generating technological activities is evaluated in view of requirements that are necessary for peaceful human coexistence and progress in order to show possibilities for improvement. The requirements imply, given that political decision making about the activities proceeds on the basis of majority rule, that legal liability should be unconditional (absolute, strict) and unlimited (full). We analyze actual liability in international law for various risk-generating technological activities, to conclude that nowhere is the standard of unconditional and unlimited liability fully met. Apart from that there are enormous differences. Although significant international liability legislation is in place for some risk-generating technological activities, legislation is virtually absent for others. We discuss fundamental possibilities and limitations of liability and private insurance to secure credible and ethically sound risk assessment and risk management practices. The limitations stem from problems of establishing a causal link between an activity and a harm; compensating irreparable harm; financial warranty; moral hazard in insurance and in organizations; and discounting future damage to present value. As our requirements call for prior agreement among all who are subjected to the risks of an activity about the settlement of these difficult problems, precautionary ex ante regulation of risk-generating activities may be a more attractive option, either combined with liability stipulations or not. However, if ex ante regulation is not based on the consent of all subjected to the risks, it remains that the basis of liability in the law should be unconditional and unlimited liability.

  18. Evaluation of video quality models for multimedia

    Science.gov (United States)

    Brunnström, Kjell; Hands, David; Speranza, Filippo; Webster, Arthur

    2008-02-01

    The Video Quality Experts Group (VQEG) is a group of experts from industry, academia, government and standards organizations working in the field of video quality assessment. Over the last 10 years, VQEG has focused its efforts on the evaluation of objective video quality metrics for digital video. Objective video metrics are mathematical models that predict the picture quality as perceived by an average observer. VQEG has completed validation tests for full reference objective metrics for the Standard Definition Television (SDTV) format. From this testing, two ITU Recommendations were produced. This standardization effort is of great relevance to the video industries because objective metrics can be used for quality control of the video at various stages of the delivery chain. Currently, VQEG is undertaking several projects in parallel. The most mature project is concerned with objective measurement of multimedia content. This project is probably the largest coordinated set of video quality testing ever embarked upon. The project will involve the collection of a very large database of subjective quality data. About 40 subjective assessment experiments and more than 160,000 opinion scores will be collected. These will be used to validate the proposed objective metrics. This paper describes the test plan for the project, its current status, and one of the multimedia subjective tests.

  19. Communicating Sustainability: An Operational Model for Evaluating Corporate Websites

    Directory of Open Access Journals (Sweden)

    Alfonso Siano

    2016-09-01

    Full Text Available The interest in corporate sustainability has increased rapidly in recent years and has encouraged organizations to adopt appropriate digital communication strategies, in which the corporate website plays a key role. Despite this growing attention in both the academic and business communities, models for the analysis and evaluation of online sustainability communication have not been developed to date. This paper aims to develop an operational model to identify and assess the requirements of sustainability communication in corporate websites. It has been developed from a literature review on corporate sustainability and digital communication and the analysis of the websites of the organizations included in the “Global CSR RepTrak 2015” by the Reputation Institute. The model identifies the core dimensions of online sustainability communication (orientation, structure, ergonomics, content—OSEC), sub-dimensions, such as stakeholder engagement and governance tools, communication principles, and measurable items (e.g., presence of the materiality matrix, interactive graphs). A pilot study on the websites of the energy and utilities companies included in the Dow Jones Sustainability World Index 2015 confirms the applicability of the OSEC framework. Thus, the model can provide managers and digital communication consultants with an operational tool that is useful for developing an industry ranking and assessing the best practices. The model can also help practitioners to identify corrective actions in the critical areas of digital sustainability communication and avoid greenwashing.

  20. Scenario Evaluator for Electrical Resistivity survey pre-modeling tool

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D; Halford, Keith J.; Binley, Andrew; Lane, John; Werkema, Dale

    2017-01-01

    Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.

  1. Diverse secreted effectors are required for Salmonella persistence in a mouse infection model.

    Directory of Open Access Journals (Sweden)

    Afshan S Kidwai

    Full Text Available Salmonella enterica serovar Typhimurium causes typhoid-like disease in mice and is a model of typhoid fever in humans. One of the hallmarks of typhoid is persistence, the ability of the bacteria to survive in the host weeks after infection. Virulence factors called effectors facilitate this process by direct transfer to the cytoplasm of infected cells thereby subverting cellular processes. Secretion of effectors to the cell cytoplasm takes place through multiple routes, including two separate type III secretion (T3SS) apparati as well as outer membrane vesicles. The two T3SS are encoded on separate pathogenicity islands, SPI-1 and -2, with SPI-1 more strongly associated with the intestinal phase of infection, and SPI-2 with the systemic phase. Both T3SS are required for persistence, but the effectors required have not been systematically evaluated. In this study, mutations in 48 described effectors were tested for persistence. We replaced each effector with a specific DNA barcode sequence by allelic exchange and co-infected with a wild-type reference to calculate the ratio of wild-type parent to mutant at different times after infection. The competitive index (CI) was determined by quantitative PCR in which primers that correspond to the barcode were used for amplification. Mutations in all but seven effectors reduced persistence demonstrating that most effectors were required. One exception was CigR, a recently discovered effector that is widely conserved throughout enteric bacteria. Deletion of cigR increased lethality, suggesting that it may be an anti-virulence factor. The fact that almost all Salmonella effectors are required for persistence argues against redundant functions. This is different from effector repertoires in other intracellular pathogens such as Legionella.

  2. Diverse Secreted Effectors Are Required for Salmonella Persistence in a Mouse Infection Model

    Energy Technology Data Exchange (ETDEWEB)

    Kidwai, Afshan S.; Mushamiri, Ivy T.; Niemann, George; Brown, Roslyn N.; Adkins, Joshua N.; Heffron, Fred

    2013-08-12

    Salmonella enterica serovar Typhimurium causes typhoid-like disease in mice and is a model of typhoid fever in humans. One of the hallmarks of typhoid is persistence, the ability of the bacteria to survive in the host weeks after infection. Virulence factors called effectors facilitate this process by direct transfer to the cytoplasm of infected cells thereby subverting cellular processes. Secretion of effectors to the cell cytoplasm takes place through multiple routes, including two separate type III secretion (T3SS) apparati as well as outer membrane vesicles. The two T3SS are encoded on separate pathogenicity islands, SPI-1 and -2, with SPI-1 more strongly associated with the intestinal phase of infection, and SPI-2 with the systemic phase. Both T3SS are required for persistence, but the effectors required have not been systematically evaluated. In this study, mutations in 48 described effectors were tested for persistence. We replaced each effector with a specific DNA barcode sequence by allelic exchange and co-infected with a wild-type reference to calculate the ratio of wild-type parent to mutant at different times after infection. The competitive index (CI) was determined by quantitative PCR in which primers that correspond to the barcode were used for amplification. Mutations in all but seven effectors reduced persistence demonstrating that most effectors were required. One exception was CigR, a recently discovered effector that is widely conserved throughout enteric bacteria. Deletion of cigR increased lethality, suggesting that it may be an anti-virulence factor. The fact that almost all Salmonella effectors are required for persistence argues against redundant functions. This is different from effector repertoires in other intracellular pathogens such as Legionella.
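    Both records above score each effector mutant by a competitive index (CI) derived from barcode qPCR. A minimal sketch of that calculation, assuming hypothetical quantification values and a simple input/output normalization; the argument names and example numbers are invented for illustration and are not the authors' pipeline.

```python
def competitive_index(mutant_output, wildtype_output, mutant_input, wildtype_input):
    """Competitive index: the mutant/wild-type ratio recovered from the host,
    normalized by the same ratio in the inoculum. CI < 1 suggests the mutant
    persists less well than the wild-type parent."""
    return (mutant_output / wildtype_output) / (mutant_input / wildtype_input)

# Hypothetical qPCR quantities (arbitrary copy-number units) for one barcoded mutant.
ci = competitive_index(mutant_output=2.1e4, wildtype_output=9.5e4,
                       mutant_input=5.0e4, wildtype_input=5.2e4)
print(f"CI = {ci:.2f}")  # values well below 1 indicate reduced persistence
```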

  3. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
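    The models compared in this record are all parameterised by a thermal optimum (Topt) and a maximum photosynthetic rate (Pmax). As a hedged illustration of what fitting such a parameterisation looks like, the sketch below fits a generic bell-shaped photosynthesis-temperature curve to made-up data; the functional form, the width parameter `sigma` and the observations are assumptions, not one of the twelve published models from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def photosynthesis(T, Pmax, Topt, sigma):
    """Generic bell-shaped photosynthesis-temperature response:
    peaks at Pmax when T == Topt, declines with distance from Topt."""
    return Pmax * np.exp(-((T - Topt) ** 2) / (2 * sigma ** 2))

# Hypothetical temperature (°C) and net photosynthesis observations, for illustration only.
T_obs = np.array([18, 22, 26, 30, 34, 38, 42], dtype=float)
P_obs = np.array([2.1, 3.4, 4.6, 5.0, 4.5, 3.1, 1.2])

params, _ = curve_fit(photosynthesis, T_obs, P_obs, p0=[5.0, 30.0, 6.0])
Pmax_hat, Topt_hat, sigma_hat = params
print(f"Pmax ≈ {Pmax_hat:.2f}, Topt ≈ {Topt_hat:.1f} °C, sigma ≈ {sigma_hat:.1f} °C")
```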

  4. Modelling phosphorus intake, digestion, retention and excretion in growing and finishing pig: model evaluation.

    Science.gov (United States)

    Symeou, V; Leinonen, I; Kyriazakis, I

    2014-10-01

    A deterministic, dynamic model was developed to enable predictions of phosphorus (P) digested, retained and excreted for different pig genotypes and under different dietary conditions. Before confidence can be placed on the predictions of the model, its evaluation was required. A sensitivity analysis of model predictions to ±20% changes in the model parameters was undertaken using a basal UK industry standard diet and a pig genotype characterized by the British Society of Animal Science as being of 'intermediate growth'. Model outputs were most sensitive to the values of the efficiency of digestible P utilization for growth and the non-phytate P absorption coefficient from the small intestine into the bloodstream; all other model parameters influenced model outputs to a lesser extent. In general, the model predicted satisfactorily the quantitative pig responses, in terms of P digested, retained and excreted, to variation in dietary inorganic P supply, Ca and phytase supplementation. The model performed well with 'conventional' European feed ingredients and poorly with 'less conventional' ones, such as dried distillers grains with solubles and canola meal. Explanations for these inconsistencies in the predictions are offered in the paper and they are expected to lead to further model development and improvement. The latter would include the characterization of the origin of phytate in pig diets.

  5. Performance evaluation of quality monitor models in spot welding

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhongdian; Li Dongqing; Wang Kai

    2005-01-01

    The performance of quality monitor models in spot welding directly determines monitoring precision, so it is crucial to evaluate it. Previously, the mean square error (MSE) was often used to evaluate model performance, but it can only show the total error over a finite set of specimens and cannot show whether the quality information inferred from the models is accurate and reliable enough. For this reason, using measurement error theory, a new way to evaluate model performance according to the error distributions is developed: only if the error distribution of a model is sufficiently correct and precise is the quality information inferred from the model accurate and reliable.
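    A minimal sketch of the contrast drawn in this record: summarising residuals by MSE alone versus also inspecting their distribution (bias and spread). The synthetic residuals and the units are assumptions for illustration, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical residuals of a quality monitor model (predicted minus measured nugget size, mm).
errors = rng.normal(loc=0.08, scale=0.15, size=200)

mse = np.mean(errors ** 2)                         # single summary number
bias, spread = errors.mean(), errors.std(ddof=1)   # distributional view

print(f"MSE   = {mse:.4f}")
print(f"bias  = {bias:+.3f} mm, spread = {spread:.3f} mm")

# Two models can share the same MSE yet differ in how their errors are distributed:
# a systematic bias (non-zero mean error) is hidden inside MSE but visible here.
```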

  6. Usability Evaluation of Variability Modeling by means of Common Variability Language

    Directory of Open Access Journals (Sweden)

    Jorge Echeverria

    2015-12-01

    Full Text Available Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of Model Fragments. Usability is a widely recognized quality criterion essential to warrant the successful use of tools that put these ideas into practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of induction hobs. This evaluation addresses the configuration, scoping and visualization facets. The evaluation involved the end users of the tool, who are engineers of our induction hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. Results also enabled us to identify a list of usability problems whose resolution may help to alleviate scoping and visualization issues in CVL.

  7. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    Science.gov (United States)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCM) are potentially capable of integrating different aspects of decision support for enterprise management tasks. The aim of the paper is to propose a hybrid mathematical programming model for the optimization of production requirements resources planning. The preliminary model was conceived bottom-up from a real industrial case and is oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimization gave good results for the objective function.

  8. EVALUATION OF SUPPLEMENTAL PRE-TREATMENT DEVELOPMENT REQUIREMENTS TO MEET TRL 6 ROTARY MICROFILTRATION

    Energy Technology Data Exchange (ETDEWEB)

    HUBER HJ

    2011-10-03

    In spring 2011, the Technology Maturation Plan (TMP) for the Supplemental Treatment Project (RPP-PLAN-49827, Rev. 0), Technology Maturation Plan for the Treatment Project (T4S01), was developed. This plan contains all identified actions required to reach technical maturity for a field-deployable waste feed pretreatment system. The supplemental pretreatment system has a filtration and a Cs-removal component. Subsequent to issuance of the TMP, rotary microfiltration (RMF) has been identified as the prime filtration technology for this application. The prime Cs-removal technology is small column ion exchange (ScIX) using spherical resorcinol formaldehyde (sRF) as the exchange resin. During fiscal year 2011 (FY2011) some of the tasks identified in the TMP have been completed. As of September 2011, the conceptual design package has been submitted to DOE as part of the critical decision (CD-1) process. This document describes the remaining tasks identified in the TMP to reach technical maturity and evaluates the validity of the proposed tests to fill the gaps as previously identified in the TMP. The potential vulnerabilities are presented, and the completed list of criteria for the different technology readiness levels of DOE guide DOE G 413.3-4 is added in an attachment. This evaluation has been conducted from a technology development perspective - all programmatic and manufacturing aspects were excluded from this exercise. Compliance with the DOE G 413.3-4 programmatic and manufacturing requirements will be addressed directly by the Treatment Project during the course of engineering design. The results of this evaluation show that completion of the proposed development tasks in the TMP is sufficient to reach TRL 6 from a technological point of view. The tasks involve actual waste tests using the current baseline configuration (2nd generation disks, 40 psi differential pressure, 30 °C feed temperature) and three different simulants - the PEP, an AP-Farm and an S

  9. An integrated PMP model to assess the development of agro-energy crops and the effect on water requirements

    Directory of Open Access Journals (Sweden)

    Michele Donati

    2013-12-01

    Full Text Available This paper presents an integrated model for the economic and environmental assessment of the use of natural resources when new activities (i.e. biomass crops for energy production) are introduced into the farm production plan. The methodology is based on the integration of positive mathematical programming (PMP) with the AquaCrop model developed by FAO. PMP represents farmer decision processes and evaluates how farms react to the biomass-sorghum activity option at different price levels. AquaCrop evaluates the relationship between water needs and biomass production and assesses the effect of the land allocation on water requirements at regional level. The integration of these two models assists global policy evaluation at regional level as it makes it possible to identify the economic threshold for biomass crops, the change in land allocation and total water requirement. The model can help policy makers to evaluate the impacts of variations in crop profitability and market innovations on farm profitability, land use and water consumption and the sustainability of the market scenario.

  10. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
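    One of the reference techniques named in this record is brute-force Monte Carlo integration of Bayesian model evidence: BME is the likelihood averaged over the prior, BME = ∫ p(D|θ) p(θ) dθ. The sketch below shows that estimator for a deliberately tiny toy model (one parameter, Gaussian likelihood); the prior bounds, noise level and data are assumptions, not the hydrological models from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setting: observations assumed Gaussian around a single unknown mean theta.
data = np.array([1.9, 2.3, 2.1, 1.7, 2.4])
sigma = 0.3                                   # assumed observation noise

def likelihood(theta):
    """p(data | theta) for independent Gaussian observations, vectorised over theta."""
    resid = data[:, None] - theta[None, :]
    return np.prod(np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)), axis=0)

# Prior: uniform on [0, 4]. BME is then approximated by the mean likelihood over prior draws.
n_samples = 200_000
theta_prior = rng.uniform(0.0, 4.0, size=n_samples)
bme_estimate = likelihood(theta_prior).mean()
print(f"Monte Carlo BME estimate: {bme_estimate:.3e}")
```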

  11. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.

  12. Evaluating an Online Cognitive Training Platform for Older Adults: User Experience and Implementation Requirements.

    Science.gov (United States)

    Haesner, Marten; Steinert, Anika; O'Sullivan, Julie Lorraine; Weichenberger, Markus

    2015-08-01

    Decline of cognitive function is a part of aging. However, intensive cognitive training can improve important cognitive functions, such as attention and working memory. Because existing systems are not older adult-friendly and are usually not based on scientific evidence, an online platform was developed for cognitive training with information and communication features and evaluated in an 8-week field test. In a randomized clinical trial with 80 older adults, findings from log data analysis and questionnaires revealed a good use of the online platform. Communication or assistive features were not used often. Good usability ratings were given to the cognitive exercises. Subjective improvements of cognitive functions due to the training were reported. The current article presents concrete requirements and recommendations for deploying cognitive training software in older adult residential homes.

  13. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  14. Evaluation of Fabric Hand with Grey Element Model

    Institute of Scientific and Technical Information of China (English)

    CHEN Dong-sheng; GAN Ying-jin; BAI Yue

    2004-01-01

    A premium composite grey element model is established and used for objective evaluation of fabric hand. Fabric hand is regarded as a grey system and the model is composed of fabric mechanical properties, which are primary hand attributes. Based on comparison with a standard model, fabric hand can be objectively evaluated.

  15. A nonlabeled method to evaluate cortisol production rate by modeling plasma CBG-free cortisol disposition

    OpenAIRE

    Picard-Hagen, Nicole; Gayrard-Troy, Véronique,; Alvinerie, Michel; Smeyers, Hélène; Ricou, Raphael; Bousquet-Mélou, Alain; Toutain, Pierre-Louis

    2001-01-01

    This study aimed to develop a nonlabeled method for the measurement of cortisol production rate to evaluate adrenal function. The cortisol production rate determination requires that of cortisol clearance, which is not a parameter but a variable resulting from the saturable binding of cortisol to corticosteroid-binding globulin (CBG). Our method is based on evaluation of the plasma clearance of the CBG-free cortisol fraction. This parameter was evaluated from a pharmacokinetic model of total ...

  16. ¿Evaluating or patchworking? An Evaluand-oriented Responsive Evaluation Model

    Directory of Open Access Journals (Sweden)

    Iván Jorrín Abellán

    2009-12-01

    Full Text Available This article presents the CSCL Evaluand-Oriented Responsive Evaluation Model, an evolving evaluation model, conceived as a “boundary object”, to be used in the evaluation of a wide range of CSCL systems. The model relies on a responsive evaluation approach and tries to provide potential evaluators with a practical tool to evaluate CSCL systems. The article is driven by a needlework metaphor that tries to illustrate the complexity of the traditions, perspectives and practical issues that converge in this proposal.

  17. Evaluation of black carbon estimations in global aerosol models

    NARCIS (Netherlands)

    Koch, D.; Schulz, M.; McNaughton, C.; Spackman, J.R.; Balkanski, Y.; Bauer, S.; Krol, M.C.

    2009-01-01

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentr

  18. Issues in Value-at-Risk Modeling and Evaluation

    NARCIS (Netherlands)

    J. Danielsson; C.G. de Vries (Casper); B.N. Jorgensen (Bjørn); P.F. Christoffersen (Peter); F.X. Diebold (Francis); T. Schuermann (Til); J.A. Lopez (Jose); B. Hirtle (Beverly)

    1998-01-01

    Discusses the issues in value-at-risk modeling and evaluation: the value of value at risk; horizon problems and extreme events in financial risk management; and methods of evaluating value-at-risk estimates.

  19. Evaluation of environmental flow requirements using eco-hydrologic-hydraulic methods in perennial rivers.

    Science.gov (United States)

    Abdi, Reza; Yasi, Mehdi

    2015-01-01

    The assessment of environmental flows in rivers is of vital importance for preserving riverine ecosystem processes. This paper addresses the evaluation of environmental flow requirements in three reaches along a typical perennial river (the Zab transboundary river, in north-west Iran), using different hydraulic, hydrological and ecological methods. The main motivation for this study was the construction of three dams and the inter-basin transfer of water from the Zab River to Urmia Lake. Eight hydrological methods (i.e. Tennant, Tessman, flow duration curve analysis, range of variability approach, Smakhtin, flow duration curve shifting, desktop reserve and 7Q2&10 (7-day low flow with a 2- and 10-year return period)); two hydraulic methods (slope value and maximum curvature); and two habitat simulation methods (hydraulic-ecologic, and Q Equation based on water quality indices) were used. Ecological needs of the riverine key species (mainly Barbus capito fish), river geometries, the natural flow regime and the environmental status of river management were the main indices for determining the minimum flow requirements. The results indicate that on the order of 35%, 17% and 18% of the mean annual flow is to be maintained for the upper, middle and downstream river reaches, respectively. The allocated monthly flow rates in the three dams' steering program are not sufficient to preserve the Zab River life.
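    Two of the hydrological methods listed in this record reduce to simple statistics of the daily flow record: the Tennant method prescribes fixed percentages of the mean annual flow, and 7Q2/7Q10 are 7-day minimum flows with 2- and 10-year return periods. A rough sketch of both, assuming a synthetic daily discharge series and a plain empirical quantile in place of a fitted low-flow frequency distribution:

```python
import numpy as np
import pandas as pd

# Hypothetical daily discharge record (m^3/s) over 20 years, for illustration only.
rng = np.random.default_rng(1)
dates = pd.date_range("1995-01-01", "2014-12-31", freq="D")
flow = pd.Series(rng.gamma(shape=2.0, scale=15.0, size=len(dates)), index=dates)

# Tennant method: environmental flow as a fixed share of the mean annual flow (MAF).
maf = flow.mean()
tennant = {"poor (10% MAF)": 0.10 * maf, "fair (30% MAF)": 0.30 * maf}

# 7Q2 / 7Q10: annual minima of the 7-day moving-average flow, then empirical
# non-exceedance quantiles (1/2 and 1/10) instead of a fitted distribution.
seven_day = flow.rolling(7).mean()
annual_min7 = seven_day.groupby(seven_day.index.year).min().dropna()
q7_2 = annual_min7.quantile(0.50)
q7_10 = annual_min7.quantile(0.10)

print({k: round(v, 1) for k, v in tennant.items()})
print(f"7Q2 ≈ {q7_2:.1f} m^3/s, 7Q10 ≈ {q7_10:.1f} m^3/s")
```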

  20. Evaluating Vocational Programs: A Three Dimensional Model.

    Science.gov (United States)

    Rehman, Sharaf N.; Nejad, Mahmoud

    The traditional methods of assessing the academic programs in the liberal arts are inappropriate for evaluating vocational and technical programs. In traditional academic disciplines, assessment of instruction is conducted in two fashions: student evaluation at the end of a course and institutional assessment of its goals and mission. Because of…

  1. Instructor’s Performance: A Proposed Model for Online Evaluation

    Directory of Open Access Journals (Sweden)

    Salah Alkhafaji

    2013-10-01

    Full Text Available Currently, due to high awareness and quality audits, higher education institutions have had to keep track of various aspects of institutional performance. One of the most important activities to be analyzed and evaluated is the instructor's classroom performance. As students are the main stakeholders of the educational process, their views on the instructor, teaching pedagogies and methodologies, and assessment techniques need to be collected and analyzed for achieving the institution's goals and objectives. Students give their opinions on the various performance indicators of the instructor. In general, higher education institutions use various techniques to gather student evaluations of an instructor's classroom performance. The latest technological developments help in data collection using web technologies: an online system with the required questionnaire and attributes makes data collection easy, and students can give their opinions without fear, from any place and at any time. In this paper, we identify the major factors and users of an online instructor evaluation system, and we propose a model for such a system with a subsystem interface, an entity relationship diagram and a context diagram.

  2. Model-Based Requirements Analysis for Reactive Systems with UML Sequence Diagrams and Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon; Lassen, Kristian Bisgaard

    2008-01-01

    In this paper, we describe a formal foundation for a specialized approach to automatically checking traces against real-time requirements. The traces are obtained from simulation of Coloured Petri Net (CPN) models of reactive systems. The real-time requirements are expressed in terms of a derivative of UML 2.0 high-level Sequence Diagrams. The automated requirement checking is part of a bigger tool framework in which VDM++ is applied to automatically generate initial CPN models based on Problem Diagrams. These models are manually enhanced to provide behavioral descriptions of the environment...

  3. Model requirements for decision support under uncertainty in data scarce dynamic deltas

    NARCIS (Netherlands)

    Haasnoot, Marjolijn; van Deursen, W.P.A.; Kwakkel, J. H.; Middelkoop, H.

    2016-01-01

    There is a long tradition of model-based decision support in water management. The consideration of deep uncertainty, however, changes the requirements imposed on models. In the face of deep uncertainty, models are used to explore many uncertainties and the decision space across multiple outcomes...

  4. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...

  5. Modeling and Evaluating Emotions Impact on Cognition

    Science.gov (United States)

    2013-07-01

    Presentation fragments listed in the record: International Conference on Automatic Face and Gesture Recognition, Shanghai, China, April 2013; Wenji Mao and Jonathan Gratch, Modeling Social ...; ... Modeling, Lorentz Center, Leiden, August 2011; keynote speaker, IEEE International Conference on Automatic Face and Gesture Recognition, Santa ...

  6. Statistical models of shape optimisation and evaluation

    CERN Document Server

    Davies, Rhodri; Taylor, Chris

    2014-01-01

    Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.

  7. EVALUATION OF PHARMACIES IN ANKARA ABOUT SOME LEGAL REQUIREMENTS AND COLD CHAIN RULES

    Directory of Open Access Journals (Sweden)

    Cengiz Han ACIKEL

    Full Text Available We aimed to evaluate pharmacies in central Ankara with respect to some legal requirements and cold chain rules. We planned this study as cross-sectional research and performed it in Ankara on 15-20 June 2000. Among the 1400 registered pharmacies in Ankara, stratified sampling selected 200; 189 could be reached. A questionnaire was prepared in the GATA Department of Public Health to evaluate cold chain rules, and an additional observation form was filled in for each pharmacy. The most common faults were lack of name tags (89.4%), lack of white uniforms (70.4%), and absence of the pharmacist (42.3%). 95.8% mentioned that they received requests for unprescribed drugs, and 69.5% admitted that they dispensed unprescribed drugs. 42.1% said that they had food in the refrigerators in which they kept vaccines and biological materials. 55.8% of the responsible personnel had no knowledge about the suitable shelf for vaccines in the refrigerator. When comparing the pharmacies according to their location, we found considerable faults, especially in pharmacies in hospital districts. [TAF Prev Med Bull 2004; 3(7): 148-155]

  8. SIMPLEBOX: a generic multimedia fate evaluation model

    NARCIS (Netherlands)

    Meent D van de

    1993-01-01

    This document describes the technical details of the multimedia fate model SimpleBox, version 1.0 (930801). SimpleBox is a multimedia box model of what is commonly referred to as a "Mackay-type" model ; it assumes spatially homogeneous environmental compartments (air, water, suspended matter, aquati

  9. SIMPLEBOX: a generic multimedia fate evaluation model

    NARCIS (Netherlands)

    van de Meent D

    1993-01-01

    This document describes the technical details of the multimedia fate model SimpleBox, version 1.0 (930801). SimpleBox is a multimedia box model of what is commonly referred to as a "Mackay-type" model ; it assumes spatially homogeneous environmental compartments (air, water, suspended m

  10. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

    Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit very well into non-profit activities, policies and programmes, because these are much more complex than the environments from which quality models derive (for example, the assembly line). The quality of an academic study programme is very complex and is understood differently by various stakeholders, so we present dimensional evaluation in the article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by evaluation criteria. They are both joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. The main research questions are posed and a research method for each dimension is listed.

  11. Requirements Evolution Processes Modeling

    Institute of Scientific and Technical Information of China (English)

    张国生

    2012-01-01

    Requirements tasks, requirements activities, requirements engineering processes and the requirements engineering process system are formally defined. Requirements tasks are measured with information entropy, while requirements activities, requirements engineering processes and the requirements engineering process system are measured with joint entropy. From the point of view of requirements engineering processes, the microcosmic evolution of iteration and feedback in the requirements engineering processes is modeled with condition-event nets. From the point of view of systems engineering, the macro evolution of the whole software requirements engineering process system is modeled with dissipative structure theory.
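    The record above measures requirements tasks with information entropy and groups of activities with joint entropy. A small sketch of those two quantities, assuming hypothetical discrete probability distributions over task and activity outcomes; the numbers are illustrative, not taken from the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum p log2 p of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def joint_entropy(pxy):
    """Joint entropy H(X, Y) computed from a joint probability table."""
    return entropy(np.asarray(pxy).ravel())

# Hypothetical distribution over possible outcomes of a single requirements task.
task = [0.5, 0.3, 0.2]
# Hypothetical joint distribution over outcomes of two requirements activities.
activities = [[0.25, 0.10, 0.05],
              [0.05, 0.30, 0.25]]

print(f"H(task)       = {entropy(task):.3f} bits")
print(f"H(activities) = {joint_entropy(activities):.3f} bits")
```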

  12. Performance Requirements Modeling andAssessment for Active Power Ancillary Services

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; Thavlov, Anders; Tougaard, Janus Bundsgaard Mosbæk

    2017-01-01

    New sources of ancillary services are expected in the power system. For large and conventional generation units the dynamic response is well understood and detailed individual measurement is feasible, which factors into the straightforward performance requirements applied today. For secure power system operation, a reliable service delivery is required, yet it may not be appropriate to apply conventional performance requirements to new technologies and methods. The service performance requirements and assessment methods therefore need to be generalized and standardized in order to include future ancillary service sources. This paper develops a modeling method for ancillary services performance requirements, including performance and verification indices. The use of the modeling method and the indices is exemplified in two case studies.

  13. APPLICATION OF GRAY EVALUATION MODEL BASED ON AHP IN ATM SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Wu Zhijun; Pan Wen

    2008-01-01

    This paper presents a hierarchy model of Air Traffic Management (ATM) according to the security requirements in the ATM system, analyzes it by grey assessment and the Analytic Hierarchy Process (AHP), and evaluates it in detail. It also provides theoretical support for building an effective evaluation system. The basic idea is to use AHP and grey assessment to obtain the weights of the indicators, and to compute grey evaluation coefficients with whitening functions. The composite clustering coefficients are obtained by combining the weights and the grey evaluation coefficients, and the evaluation result is derived from the composite clustering coefficients.
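    A rough sketch of the pipeline this record describes: derive indicator weights from an AHP pairwise comparison matrix, turn indicator scores into grey evaluation coefficients with triangular whitening functions, and combine the two into clustering coefficients. The comparison matrix, the whitening-function breakpoints and the scores are all invented for illustration and are not the paper's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP weights via the normalized geometric mean of each row."""
    pairwise = np.asarray(pairwise, dtype=float)
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def triangular_whitening(x, lo, peak, hi):
    """Triangular whitening function: 1 at `peak`, falling to 0 at `lo` and `hi`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# Hypothetical 3-indicator pairwise comparison matrix (Saaty-style ratios).
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)

# Hypothetical indicator scores (0-10) and grey classes defined by whitening breakpoints.
scores = [6.5, 4.0, 8.0]
classes = {"low": (0, 2, 5), "medium": (2, 5, 8), "high": (5, 8, 11)}

# Composite clustering coefficient per grey class: weighted sum of whitening values.
for name, (lo, peak, hi) in classes.items():
    coeff = sum(wi * triangular_whitening(s, lo, peak, hi) for wi, s in zip(w, scores))
    print(f"{name:6s}: {coeff:.3f}")
```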

  14. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model

  15. Fraud Risk Modelling: Requirements Elicitation in the Case of Telecom Services

    DEFF Research Database (Denmark)

    Yesuf, Ahmed; Wolos, Lars Peter; Rannenberg, Kai

    2017-01-01

    In this paper, we highlight the important requirements for a usable and context-aware fraud risk modelling approach for Telecom services. To do so, we have conducted two workshops with experts from a Telecom provider and experts from multi-disciplinary areas. In order to show and document the requirements, we...

  16. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek Ray; Pippus Annalea; Hansen Lawrence A

    2012-01-01

    Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  17. Peltier Thermoelectric Modules Modeling and Evaluation

    OpenAIRE

    Chakib Alaoui

    2011-01-01

    The purpose of this work is to develop and experimentally test a model for the Peltier effect heat pump for the transient simulation in Spice software. The proposed model uses controlled sources and lumped components and its parameters can be directly calculated from the manufacturer’s data-sheets. In order to validate this model, a refrigeration chamber was designed and fabricated by using the Peltier modules. The overall system was experimentally tested and simulated with Spice. The simulat...

  18. A Regional Climate Model Evaluation System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...

  19. Evaluation of Turbulence Models in Gas Dispersion

    OpenAIRE

    Moen, Alexander

    2016-01-01

    Several earlier model validation studies for predicting gas dispersion scenarios have been conducted for the three RANS two-equation eddy viscosity turbulence models, the standard k-ε (SKE), Re-Normalisation group k-ε (RNG) and Realizable k-ε (Realizable). However, these studies have mainly validated one or two of the models, and have mostly used one simulation case as a basis for determining which model is the best suited for predicting such scenarios. In addition, the studies have shown co...

  20. Virtual Community Life Cycle: a Model to Develop Systems with Fluid Requirements

    OpenAIRE

    El Morr, Christo; Maret, Pierre de; Rioux, Marcia; Dinca-Panaitescu, Mihaela; Subercaze, Julien

    2011-01-01

    This paper reports the results of an investigation into the life cycle model needed to develop information systems for groups of people with fluid requirements. For this purpose, we developed a modified spiral model and applied it to the analysis, design and implementation of a virtual community for a group of researchers and organizations that collaborated in a research project and had changing system requirements. The virtual knowledge community was dedicated to supporting mobilization and dissemi...

  1. Robust Medical Test Evaluation Using Flexible Bayesian Semiparametric Regression Models

    Directory of Open Access Journals (Sweden)

    Adam J. Branscum

    2013-01-01

    Full Text Available The application of Bayesian methods is increasing in modern epidemiology. Although parametric Bayesian analysis has penetrated the population health sciences, flexible nonparametric Bayesian methods have received less attention. A goal in nonparametric Bayesian analysis is to estimate unknown functions (e.g., density or distribution functions) rather than scalar parameters (e.g., means or proportions). For instance, ROC curves are obtained from the distribution functions corresponding to continuous biomarker data taken from healthy and diseased populations. Standard parametric approaches to Bayesian analysis involve distributions with a small number of parameters, where the prior specification is relatively straightforward. In the nonparametric Bayesian case, the prior is placed on an infinite-dimensional space of all distributions, which requires special methods. A popular approach to nonparametric Bayesian analysis that involves Polya tree prior distributions is described. We provide example code to illustrate how models that contain Polya tree priors can be fit using SAS software. The methods are used to evaluate the covariate-specific accuracy of the biomarker, soluble epidermal growth factor receptor, for discerning lung cancer cases from controls using a flexible ROC regression modeling framework. The application highlights the usefulness of flexible models over a standard parametric method for estimating ROC curves.
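    The Polya tree machinery itself is not reproduced here, but the object being estimated is easy to illustrate: an ROC curve built from biomarker values in control and case samples. The sketch below uses simulated data and hypothetical variable names; it is not the SAS/Polya-tree analysis described in the record.

```python
import numpy as np

def empirical_roc(healthy, diseased, n_points=101):
    """Empirical ROC: sweep a threshold over biomarker values and record
    (false positive rate, true positive rate) pairs."""
    thresholds = np.linspace(min(healthy.min(), diseased.min()),
                             max(healthy.max(), diseased.max()), n_points)
    fpr = np.array([(healthy >= t).mean() for t in thresholds])
    tpr = np.array([(diseased >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.5, 200)    # hypothetical biomarker values, controls
diseased = rng.normal(1.6, 0.6, 200)   # hypothetical biomarker values, cases
fpr, tpr = empirical_roc(healthy, diseased)
auc = np.trapz(tpr[::-1], fpr[::-1])   # area under the empirical ROC curve
print(round(auc, 3))
```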

  2. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMM's and SCFG's regular and context-free languages...... for preprocessing or splitting them into submodels. An evaluation method for approximating models is suggested based on automatic generation of samples. These models and evaluation processes are illustrated in the PRISM system developed by other authors....

  3. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
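    The record refers to root-mean-square errors between fast-time predictions and lidar measurements; computing such an error is straightforward, as in the sketch below. The altitude values are purely hypothetical placeholders, not Denver data.

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error between model predictions and measurements."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical vortex descent altitudes (m) from a fast-time model vs. lidar
model = [190.0, 172.5, 158.0, 140.2]
lidar = [188.0, 175.0, 150.0, 137.0]
print(round(rmse(model, lidar), 2))
```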

  4. Model for Evaluating the Ecological Health of Municipal Solid Waste Treatment System

    Institute of Scientific and Technical Information of China (English)

    WANG Yan-ming; ZHANG Qian-fei

    2008-01-01

    Ecological municipal solid waste (MSW) treatment systems are complex engineered systems involving multiple objectives and hierarchical levels. By combining an extension method with fuzzy logic theory, this paper investigated key technologies required for the comprehensive evaluation of ecological health. The method includes the construction of an evaluation system, quantification of evaluation indices, development of a matter-element model, development of an extension evaluation method, and assignment of a blended weight that combines subjectively and objectively estimated weights. This approach was used to develop a comprehensive model for evaluating the ecological health of an ecological treatment system for MSW. The model was then applied to a case study, and the results demonstrated that the model is reasonable and effective.

  5. Modeling and Evaluation of Multimodal Perceptual Quality

    DEFF Research Database (Denmark)

    Petersen, Kim T; Hansen, Steffen Duus; Sørensen, John Aasted

    1997-01-01

    The increasing performance requirements of multimedia modalities, carrying speech, audio, video, image, and graphics emphasize the need for assessment methods of the total quality of a multimedia system and methods for simultaneous analysis of the system components. It is important to take into a...

  6. Model visualization for evaluation of biocatalytic processes

    DEFF Research Database (Denmark)

    Law, HEM; Lewis, DJ; McRobbie, I

    2008-01-01

    Biocatalysis offers great potential as an additional, and in some cases as an alternative, synthetic tool for organic chemists, especially as a route to introduce chirality. However, the implementation of scalable biocatalytic processes nearly always requires the introduction of process and/or bi...

  7. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  8. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...
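    One ingredient such a standardized protocol typically includes is an error measure normalized by installed capacity together with an improvement score over a naive reference such as persistence. The sketch below illustrates that idea with hypothetical hourly data; the function names and numbers are assumptions, not the protocol's exact definitions.

```python
import numpy as np

def nmae(forecast, observed, capacity):
    """Mean absolute error normalized by installed capacity."""
    return np.mean(np.abs(np.asarray(forecast) - np.asarray(observed))) / capacity

def improvement_over_reference(err_model, err_reference):
    """Relative improvement (skill) of a model over a reference forecast."""
    return 1.0 - err_model / err_reference

# Hypothetical hourly wind power (MW) for a 100 MW farm
obs      = np.array([40.0, 46.0, 55.0, 61.0, 58.0])
advanced = np.array([41.0, 44.0, 52.0, 63.0, 57.0])
persistence = obs[:-1]                  # naive reference: last observed value
print(improvement_over_reference(nmae(advanced[1:], obs[1:], 100.0),
                                 nmae(persistence, obs[1:], 100.0)))
```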

  9. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels)...

  10. Evaluation of spinal cord injury animal models

    Institute of Scientific and Technical Information of China (English)

    Ning Zhang; Marong Fang; Haohao Chen; Fangming Gou; Mingxing Ding

    2014-01-01

    Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies.

  11. Evaluating Econometric Models and Expert Intuition

    NARCIS (Netherlands)

    R. Legerstee (Rianne)

    2012-01-01

    textabstractThis thesis is about forecasting situations which involve econometric models and expert intuition. The first three chapters are about what it is that experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. It is investigated how expert for

  12. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling

    Science.gov (United States)

    2009-10-01

    Beattie-Bridgeman, virial expansion: the above equations are suitable for moderate pressures and are usually based on either empirical constants... (CR 2010-013, October 2009, Defence R&D Canada.)

  13. Estimates of nutritional requirements and use of Small Ruminant Nutrition System model for hair sheep in semiarid conditions

    Directory of Open Access Journals (Sweden)

    Alessandra Pinto de Oliveira

    2014-09-01

    Full Text Available The objective was to determine the efficiency of utilization of metabolizable energy for maintenance (km) and weight gain (kf), the dietary requirements of total digestible nutrients (TDN) and metabolizable protein (MP), as well as to evaluate the Small Ruminant Nutrition System (SRNS) model for predicting the dry matter intake (DMI) and the average daily gain (ADG) of Santa Ines lambs fed diets containing different levels of metabolizable energy (ME). Thirty-five non-castrated lambs, with initial body weight (BW) of 14.77 ± 1.26 kg at approximately two months of age, were used. At the beginning of the experiment, five animals were slaughtered to serve as a reference for estimating the empty body weight (EBW) and initial body composition of the 30 remaining animals, which were distributed in a randomized block design with five treatments (1.13, 1.40, 1.73, 2.22 and 2.60 Mcal/kg DM) and six replicates. The requirement of metabolizable energy for maintenance was 78.53 kcal/kg EBW^0.75/day, with a utilization efficiency of 66%. The average efficiency of metabolizable energy utilization for weight gain was 48%. The dietary requirements of TDN and MP increased with increasing BW and ADG of the animals. The SRNS model underestimated the DMI and ADG of the animals by 6.2% and 24.6%, respectively. It is concluded that the values of km and kf are consistent with those observed in several studies with lambs raised in the tropics. The dietary requirements of TDN and MP of Santa Ines lambs for different BW and ADG are approximately 42% and 24%, respectively, lower than those suggested by the American system for evaluation of food and nutrient requirements of small ruminants. The SRNS model adequately predicted the DMI of Santa Ines lambs; however, for ADG, more studies are needed, since the model underestimated the response of the animals in this study.
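    The maintenance requirement reported above scales with metabolic body size (EBW^0.75). Purely as a worked illustration of that relationship, the sketch below computes the daily maintenance ME for a hypothetical lamb using the reported coefficient; the 20 kg empty body weight is an assumed example, not a value from the study.

```python
def maintenance_me_kcal_per_day(empty_body_weight_kg, coef=78.53):
    """Daily metabolizable energy for maintenance (kcal/day), computed from
    metabolic weight EBW**0.75 and the reported coefficient."""
    return coef * empty_body_weight_kg ** 0.75

# Hypothetical lamb with 20 kg empty body weight
print(round(maintenance_me_kcal_per_day(20.0), 1))   # ~742.7 kcal/day
```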

  14. Moral development: a differential evaluation of dominant models.

    Science.gov (United States)

    Omery, A

    1983-10-01

    This article examines and evaluates the supporting evidence from the prevailing models of moral development. Using the criteria of empirical relevance, intersubjectivity, and usefulness, the classical model from psychoanalytic theory, Kohlberg's and Gilligan's models from cognitive developmental theory, and the social learning theory model are reviewed. Additional considerations such as the theoretical congruency and sex role bias of certain models are briefly discussed before concluding with the current use of the models by nursing.

  15. A qualitative readiness-requirements assessment model for enterprise big-data infrastructure investment

    Science.gov (United States)

    Olama, Mohammed M.; McNair, Allen W.; Sukumar, Sreenivas R.; Nutaro, James J.

    2014-05-01

    In the last three decades, there has been an exponential growth in the area of information technology providing the information processing needs of data-driven businesses in government, science, and private industry in the form of capturing, staging, integrating, conveying, analyzing, and transferring data that will help knowledge workers and decision makers make sound business decisions. Data integration across enterprise warehouses is one of the most challenging steps in the big data analytics strategy. Several levels of data integration have been identified across enterprise warehouses: data accessibility, common data platform, and consolidated data model. Each level of integration has its own set of complexities that requires a certain amount of time, budget, and resources to implement. Such levels of integration are designed to address the technical challenges inherent in consolidating the disparate data sources. In this paper, we present a methodology based on industry best practices to measure the readiness of an organization and its data sets against the different levels of data integration. We introduce a new Integration Level Model (ILM) tool, which is used for quantifying an organization and data system's readiness to share data at a certain level of data integration. It is based largely on the established and accepted framework provided in the Data Management Association (DAMA-DMBOK). It comprises several key data management functions and supporting activities, together with several environmental elements that describe and apply to each function. The proposed model scores the maturity of a system's data governance processes and provides a pragmatic methodology for evaluating integration risks. The higher the computed scores, the better managed the source data system and the greater the likelihood that the data system can be brought in at a higher level of integration.

  16. A Qualitative Readiness-Requirements Assessment Model for Enterprise Big-Data Infrastructure Investment

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; McNair, Wade [ORNL; Sukumar, Sreenivas R [ORNL; Nutaro, James J [ORNL

    2014-01-01

    In the last three decades, there has been an exponential growth in the area of information technology providing the information processing needs of data-driven businesses in government, science, and private industry in the form of capturing, staging, integrating, conveying, analyzing, and transferring data that will help knowledge workers and decision makers make sound business decisions. Data integration across enterprise warehouses is one of the most challenging steps in the big data analytics strategy. Several levels of data integration have been identified across enterprise warehouses: data accessibility, common data platform, and consolidated data model. Each level of integration has its own set of complexities that requires a certain amount of time, budget, and resources to implement. Such levels of integration are designed to address the technical challenges inherent in consolidating the disparate data sources. In this paper, we present a methodology based on industry best practices to measure the readiness of an organization and its data sets against the different levels of data integration. We introduce a new Integration Level Model (ILM) tool, which is used for quantifying an organization and data system's readiness to share data at a certain level of data integration. It is based largely on the established and accepted framework provided in the Data Management Association (DAMA-DMBOK). It comprises several key data management functions and supporting activities, together with several environmental elements that describe and apply to each function. The proposed model scores the maturity of a system's data governance processes and provides a pragmatic methodology for evaluating integration risks. The higher the computed scores, the better managed the source data system and the greater the likelihood that the data system can be brought in at a higher level of integration.
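    Neither record spells out the ILM scoring scheme in detail, so the sketch below is only a generic illustration of how per-function maturity ratings might be combined into a weighted readiness score; the function names, ratings and weights are hypothetical.

```python
def readiness_score(scores, weights):
    """Weighted average of per-function maturity ratings (e.g., on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical data-management functions and maturity ratings
scores  = {"data_governance": 4, "metadata": 3, "data_quality": 2, "security": 4}
weights = {"data_governance": 0.40, "metadata": 0.20, "data_quality": 0.25, "security": 0.15}
print(round(readiness_score(scores, weights), 2))
```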

  17. Designing and Evaluating Representations to Model Pedagogy

    Science.gov (United States)

    Masterman, Elizabeth; Craft, Brock

    2013-01-01

    This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…

  18. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  19. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  20. Modelling in Evaluating a Working Life Project in Higher Education

    Science.gov (United States)

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  1. The Development of Educational Evaluation Models in Indonesia.

    Science.gov (United States)

    Nasoetion, N.; And Others

    The primary purpose of this project was to develop model evaluation procedures that could be applied to large educational undertakings in Indonesia. Three programs underway in Indonesia were selected for the development of evaluation models: the Textbook-Teacher Upgrading Project, the Development School Project, and the Examinations (Item Bank)…

  2. Evaluating a Community-School Model of Social Work Practice

    Science.gov (United States)

    Diehl, Daniel; Frey, Andy

    2008-01-01

    While research has shown that social workers can have positive impacts on students' school adjustment, evaluations of overall practice models continue to be limited. This article evaluates a model of community-school social work practice by examining its effect on problem behaviors and concerns identified by teachers and parents at referral. As…

  3. Pilot evaluation in TENCompetence: a theory-driven model

    NARCIS (Netherlands)

    J. Schoonenboom; H. Sligte; A. Moghnieh; M. Specht; C. Glahn; K. Stefanov

    2008-01-01

    This paper describes a theory-driven evaluation model that is used in evaluating four pilots in which an infrastructure for lifelong competence development, which is currently being developed, is validated. The model makes visible the separate implementation steps that connect the envisaged infrastr

  4. Evaluating a Training Using the "Four Levels Model"

    Science.gov (United States)

    Steensma, Herman; Groeneveld, Karin

    2010-01-01

    Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to…

  5. Development of an efficient coupled model for soil–atmosphere modelling (FHAVeT): model evaluation and comparison

    Directory of Open Access Journals (Sweden)

    A.-J. Tinet

    2014-07-01

    Full Text Available In agricultural management, good timing of operations such as irrigation or sowing is essential to enhance both economic and environmental performance. To improve such timing, predictive software tools are of particular interest. An optimal decision-making tool would require process modules that provide robust, efficient and accurate predictions while being based on a minimal number of easily available parameters. This paper develops a coupled soil–atmosphere model based on the Ross fast solution for Richards' equation, heat transfer and a detailed surface energy balance. In this paper, the developed model, FHAVeT (Fast Hydro Atmosphere Vegetation Temperature), has been evaluated in bare soil conditions against the coupled model based on the De Vries description, TEC. The two models were compared for different climatic and soil conditions. Moreover, the model allows the use of various pedotransfer functions. The FHAVeT model showed better performance with regard to mass balance. In order to allow a more precise comparison, six time windows were selected. The study demonstrated that the FHAVeT behaviour is quite similar to the TEC behaviour except under some dry conditions. An evaluation of day detection with regard to moisture thresholds is performed.

  6. Hydrologic Evaluation of Landfill Performance (HELP) Model

    Science.gov (United States)

    The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.

  7. A Context-Adaptive Model for Program Evaluation.

    Science.gov (United States)

    Lynch, Brian K.

    1990-01-01

    Presents an adaptable, context-sensitive model for ESL/EFL program evaluation, consisting of seven steps that guide an evaluator through consideration of relevant issues, information, and design elements. Examples from an evaluation of the Reading for Science and Technology Project at the University of Guadalajara, Mexico are given. (31…

  8. Structural model requirements to describe microbial inactivation during a mild heat treatment.

    Science.gov (United States)

    Geeraerd, A H; Herremans, C H; Van Impe, J F

    2000-09-10

    The classical concept of D and z values, established for sterilisation processes, is unable to deal with the typical non-loglinear behaviour of survivor curves occurring during the mild heat treatment of sous vide or cook-chill food products. Structural model requirements are formulated, eliminating immediately some candidate model types. Promising modelling approaches are thoroughly analysed and, if applicable, adapted to the specific needs: two models developed by Casolari (1988), the inactivation model of Sapru et al. (1992), the model of Whiting (1993), the Baranyi and Roberts growth model (1994), the model of Chiruta et al. (1997), the model of Daughtry et al. (1997) and the model of Xiong et al. (1999). A range of experimental data of Bacillus cereus, Yersinia enterocolitica, Escherichia coli O157:H7, Listeria monocytogenes and Lactobacillus sake are used to illustrate the different models' performances. Moreover, a novel modelling approach is developed, fulfilling all formulated structural model requirements, and based on a careful analysis of literature knowledge of the shoulder and tailing phenomenon. Although a thorough insight in the occurrence of shoulders and tails is still lacking from a biochemical point of view, this newly developed model incorporates the possibility of a straightforward interpretation within this framework.
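    For context, the classical log-linear concept that this paper argues is insufficient can be stated compactly: a survivor curve log10(N/N0) = -t/D, and a decimal reduction time D that changes tenfold for every z degrees of temperature change. The sketch below only restates that textbook relationship with hypothetical numbers; it is not the shoulder/tail model developed in the paper.

```python
def surviving_fraction_loglinear(t, D):
    """Classical log-linear inactivation: log10(N/N0) = -t/D."""
    return 10.0 ** (-t / D)

def d_value_at_temperature(D_ref, T, T_ref, z):
    """z-value concept: D changes tenfold per z degrees change in temperature."""
    return D_ref * 10.0 ** ((T_ref - T) / z)

# Hypothetical mild-heat example: D = 2 min at 60 degC, z = 6 degC
print(surviving_fraction_loglinear(t=6.0, D=2.0))                     # 3-log reduction
print(d_value_at_temperature(D_ref=2.0, T=66.0, T_ref=60.0, z=6.0))   # 0.2 min
```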

  9. International evaluation of current and future requirements for environmental engineering education.

    Science.gov (United States)

    Morgenroth, E; Daigger, G T; Ledin, A; Keller, J

    2004-01-01

    The field of environmental engineering is developing as a result of changing environmental requirements. In response, environmental engineering education (E3) needs to ensure that it provides students with the necessary tools to address these challenges. In this paper the current status and future development of E3 is evaluated based on a questionnaire sent to universities and potential employers of E3 graduates. With increasing demands on environmental quality, the complexity of environmental engineering problems to be solved can be expected to increase. To find solutions environmental engineers will need to work in interdisciplinary teams. Based on the questionnaire there was a broad agreement that the best way to prepare students for these future challenges is to provide them with a fundamental education in basic sciences and related engineering fields. Many exciting developments in the environmental engineering profession will be located at the interface between engineering, science, and society. Aspects of all three areas need to be included in E3 and the student needs to be exposed to the tensions associated with linking the three.

  10. Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-12-20

    This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).

  11. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
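    The exact regression from the paper is not reproduced here; the sketch below only illustrates the general pattern of fusing EEG, eye-movement and interaction-log features in a regularized (ridge) regression that predicts a rated task complexity. The feature blocks, sample size and target are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 45                                        # hypothetical trials (participants x designs)
eeg  = rng.normal(size=(n, 4))                # e.g., band-power features
eye  = rng.normal(size=(n, 3))                # e.g., fixation/saccade features
logs = rng.normal(size=(n, 2))                # e.g., interaction-log features

X = np.hstack([eeg, eye, logs])               # data fusion: concatenate feature blocks
y = X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.5, size=n)  # synthetic complexity rating

model = Ridge(alpha=1.0)                      # regularized regression
print(cross_val_score(model, X, y, cv=5).mean())
```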

  12. Systematic evaluation of atmospheric chemistry-transport model CHIMERE

    Science.gov (United States)

    Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene

    2017-04-01

    Regional-scale atmospheric chemistry-transport models (CTM) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving the atmospheric composition. Model performance evaluation with measurement data is critical to understand their limits and the degree of confidence in model results. CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecast and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE CTM versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It makes it possible to compare the overall model performance of subsequent model versions (operational evaluation), to identify specific processes and/or model inputs that could be improved (diagnostic evaluation), and to test the model sensitivity to changes in air quality, such as emission reductions and meteorological events (dynamic evaluation). The observation datasets currently used for the evaluation are: EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.

  13. An Ontological Model of Evaluation: A Dynamic Model for Aiding Organizational Development.

    Science.gov (United States)

    Peper, John B.

    Evaluation models imply or assume theories of organization, behavior, and decision-making. Seldom does an evaluation model specify these assumptions. As a result, program evaluators often choose mechanistic models and their resultant information is either inadequate or inappropriate for most of the client's purposes. The Ontological Evaluation…

  14. Evaluation of methods to estimate the essential amino acids requirements of fish from the muscle amino acid profile

    Directory of Open Access Journals (Sweden)

    Álvaro José de Almeida Bicudo

    2014-03-01

    Full Text Available Many methods to estimate amino acid requirements based on the amino acid profile of fish have been proposed. This study evaluates the methodologies proposed by Meyer & Fracalossi (2005) and by Tacon (1989) to estimate the amino acid requirements of fish, which do not require prior knowledge of the nutritional requirement of a reference amino acid. Data on the amino acid requirements of pacu, Piaractus mesopotamicus, were used to validate the accuracy of these methods. The Meyer & Fracalossi and Tacon methodologies estimated the lysine requirement of pacu at 13% and 23%, respectively, above the requirement determined using the dose-response method. The values estimated by both methods lie within the range of requirements determined for other omnivorous fish species, with the Meyer & Fracalossi (2005) method showing better accuracy.

  15. Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

    NARCIS (Netherlands)

    Di Nicolo, G.; Gamba, A.; Lucchetta, M.

    2011-01-01

    This paper formulates a dynamic model of a bank exposed to both credit and liquidity risk, which can resolve financial distress in three costly forms: fire sales, bond issuance and equity issuance. We use the model to analyze the impact of capital regulation, liquidity requirements and taxation on ba

  16. Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

    NARCIS (Netherlands)

    Di Nicolo, G.; Gamba, A.; Lucchetta, M.

    2011-01-01

    This paper formulates a dynamic model of a bank exposed to both credit and liquidity risk, which can resolve financial distress in three costly forms: fire sales, bond issuance and equity issuance. We use the model to analyze the impact of capital regulation, liquidity requirements and taxation on b

  17. Evaluation of Model Fit in Cognitive Diagnosis Models

    Science.gov (United States)

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  18. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    Science.gov (United States)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  19. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    Science.gov (United States)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  20. Workshop on Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials

    Energy Technology Data Exchange (ETDEWEB)

    Giles, GE

    2005-02-03

    The purpose of this Workshop on "Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials" was to solicit functional requirements for tools that help Incident Managers plan for and deal with the consequences of industrial or terrorist releases of materials into the nation's waterways and public water utilities. Twenty representatives attended and several made presentations. Several hours of discussions elicited a set of requirements. These requirements were summarized in a form for the attendees to vote on their highest priority requirements. These votes were used to determine the prioritized requirements that are reported in this paper and can be used to direct future developments.

  1. Experimental evaluations of the microchannel flow model.

    Science.gov (United States)

    Parker, K J

    2015-06-07

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2 and 4 parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters), could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.

  2. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Science.gov (United States)

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  3. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  4. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
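    One verification measure commonly borrowed from the atmospheric sciences for ensembles of this kind is the continuous ranked probability score (CRPS). The sketch below uses the standard ensemble estimator with hypothetical streamflow values; the paper's own metric set is broader (distribution, correlation, accuracy, conditional and categorical statistics).

```python
import numpy as np

def ensemble_crps(members, obs):
    """CRPS estimated from an ensemble: E|X - y| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Hypothetical streamflow ensemble (m3/s) and the observed value
print(round(ensemble_crps([12.0, 15.5, 14.2, 13.8, 16.1], 14.0), 3))
```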

  5. Decision-oriented Usability Evaluation of an Operation Interface: Model and Application

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-jun; SUN Lin-yan; LI Min

    2012-01-01

    To support multi-factor decision problems in usability evaluation, especially when studies lack comparable objects, a fuzzy synthetic evaluation model is explored in this paper. Grey relational analysis (GRA) is incorporated into the model to calculate the weight vectors of the usability factors, and membership functions of the remark vector are constructed for the context of use of the operation interface. The method is applied to the usability evaluation of an operation interface and proves effective. The comprehensive usability grade of the operation interface is 0.6164, corresponding to 'good', which meets practical requirements.

  6. A review method for UML requirements analysis model employing system-side prototyping.

    Science.gov (United States)

    Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    User interface prototyping is an effective method for users to validate the requirements defined by analysts at an early stage of software development. However, a user interface prototype system offers weak support for analysts to verify the consistency of specifications about internal aspects of a system such as business logic. As a result, such inconsistency causes considerable rework costs, because it often makes it impossible for developers to realize the system from the specifications. For verifying such consistency, functional prototyping is an effective method for analysts, but it requires considerable cost and more detailed specifications. In this paper, we propose a review method so that analysts can efficiently verify the consistency among several different kinds of UML diagrams by employing system-side prototyping without the detailed model. The system-side prototype system does not have any functions to realize business logic, but visualizes the results of the integration among the UML diagrams as Web pages. The usefulness of our proposal was evaluated by applying it to the development of a Library Management System (LMS) for a laboratory. This development was conducted by a group. As a result, our proposal was useful for discovering serious inconsistencies caused by misunderstandings among the members of the group.

  7. Towards Model Diagnosis in Hydrologic Models: Evaluation of the abcd Water Balance Model Using the HCDN Dataset

    Science.gov (United States)

    Martinez Baquero, G. F.; Gupta, H. V.

    2006-12-01

    Increasing model complexity demands the development of new methods able to mine larger amounts of information from model results and available data. Measures commonly used to compare models with data typically lack diagnostic "power". This work therefore explores the design of more powerful strategies to identify the causes of discrepancy between models and hydrologic phenomena, as well as to increase the knowledge about the input-output relationship of the system. In this context, we evaluate how the abcd monthly water balance model, used to infer soil moisture conditions or groundwater recharge, performs in 764 watersheds of the conterminous United States. Work done under these guidelines required the integration of the Hydro-Climatic Data Network dataset with spatial information to summarize the results and relate the performance with the model assumptions and specific conditions of the basins. The diagnostic process is implemented by the definition of appropriate hydrologic signatures to measure the capability of watersheds to transform environmental inputs and propose equivalent modeling structures. Knowledge acquired during this process is used to test modifications of the model in hydrologic regions where the performance is poor.
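    For readers unfamiliar with the abcd model, a commonly cited formulation (after Thomas, 1981) advances soil moisture and groundwater storage one month at a time from precipitation and potential evapotranspiration. The sketch below follows that textbook formulation with hypothetical parameter and state values; it is not the calibrated setup used over the 764 HCDN watersheds.

```python
import math

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """One monthly step of the abcd water balance model (common formulation).
    P: precipitation, PET: potential ET, S: soil moisture, G: groundwater storage."""
    W = P + S_prev                                   # available water
    Y = (W + b) / (2 * a) - math.sqrt(((W + b) / (2 * a)) ** 2 - W * b / a)
    S = Y * math.exp(-PET / b)                       # end-of-month soil moisture
    E = Y - S                                        # actual evapotranspiration
    G = (G_prev + c * (W - Y)) / (1 + d)             # groundwater storage
    Q = (1 - c) * (W - Y) + d * G                    # direct runoff + baseflow
    return Q, S, G, E

print(abcd_step(P=100.0, PET=60.0, S_prev=150.0, G_prev=50.0,
                a=0.98, b=250.0, c=0.4, d=0.3))
```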

  8. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Full Text Available Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all of these models are designed to simulate the similarity judgement of the human mind.

  9. a Comparison of Semantic Similarity Models in Evaluating Concept Similarity

    Science.gov (United States)

    Xu, Q. X.; Shi, W. Z.

    2012-08-01

    Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, possibly because all of these models are designed to simulate the similarity judgement of the human mind.
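    Of the four model families compared, the feature model is perhaps the simplest to illustrate: similarity is computed from shared and distinctive features, as in Tversky's ratio model. The sketch below uses invented feature sets for two forest concepts; they are illustrative placeholders, not NLCD92 definitions.

```python
def tversky_similarity(features_a, features_b, alpha=0.5, beta=0.5):
    """Feature-model similarity: common features weighed against distinctive ones."""
    common = len(features_a & features_b)
    only_a = len(features_a - features_b)
    only_b = len(features_b - features_a)
    return common / (common + alpha * only_a + beta * only_b)

deciduous = {"vegetated", "woody", "seasonal_leaf_loss", "natural"}
evergreen = {"vegetated", "woody", "year_round_foliage", "natural"}
print(tversky_similarity(deciduous, evergreen))   # 0.75
```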

  10. An Overview of Westinghouse Realistic Large Break LOCA Evaluation Model

    Directory of Open Access Journals (Sweden)

    Cesare Frepoli

    2008-01-01

    Full Text Available Since the 1988 amendment of the 10 CFR 50.46 rule, Westinghouse has been developing and applying realistic or best-estimate methods to perform LOCA safety analyses. A realistic analysis requires the execution of various realistic LOCA transient simulations where the effects of both model and input uncertainties are ranged and propagated throughout the transients. The outcome is typically a range of results with associated probabilities. The thermal/hydraulic code is the engine of the methodology, but a procedure is developed to assess the code and determine its biases and uncertainties. In addition, inputs to the simulation are also affected by uncertainty, and these uncertainties are incorporated into the process. Several approaches have been proposed and applied in the industry in the framework of best-estimate methods. Most of the implementations, including Westinghouse's, follow the Code Scaling, Applicability and Uncertainty (CSAU) methodology. The Westinghouse methodology is based on the use of the WCOBRA/TRAC thermal-hydraulic code. The paper starts with an overview of the regulations and their interpretation in the context of realistic analysis. The CSAU roadmap is reviewed in the context of its implementation in the Westinghouse evaluation model. An overview of the code (WCOBRA/TRAC) and methodology is provided. Finally, the recent evolution to nonparametric statistics in the current edition of the Westinghouse methodology is discussed. Sample results of a typical large break LOCA analysis for a PWR are provided.
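    In best-estimate LOCA practice generally, the nonparametric statistics mentioned at the end are often associated with Wilks' order-statistics formula for choosing the number of code runs; whether this exact form is used in the Westinghouse methodology is not stated in the record. For a one-sided 95%-probability/95%-confidence bound taken as the maximum of n runs, the smallest n with 1 - 0.95^n >= 0.95 is 59, as the sketch below confirms.

```python
import math

def wilks_sample_size(prob=0.95, conf=0.95):
    """Smallest n such that the largest of n runs bounds the `prob`-quantile
    with confidence `conf` (first-order, one-sided Wilks formula)."""
    return math.ceil(math.log(1 - conf) / math.log(prob))

print(wilks_sample_size())   # 59
```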

  11. Application of pharmacokinetics local model to evaluate renal function

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The pharmacokinetics local model was used to evaluate renal function. Some typical kinds of renal function cases, normal or disordered, were selected to be imaged with SPECT, and the measured data were treated by the pharmacokinetics local model computer program (PLM). The results indicated that parameters including peak value, peak time, inflexion time, half-excretion time, and the kinetic equation played an important role in judging renal function. This confirms that the local model is very useful in evaluating renal function.

  12. Numerical models for the evaluation of geothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.

    1986-08-01

    We have carried out detailed simulations of various fields in the USA (Bada, New Mexico; Heber, California); Mexico (Cerro Prieto); Iceland (Krafla); and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling and how they can be applied in comprehensive evaluation work are discussed.

  13. Working with Teaching Assistants: Three Models Evaluated

    Science.gov (United States)

    Cremin, Hilary; Thomas, Gary; Vincett, Karen

    2005-01-01

    Questions about how best to deploy teaching assistants (TAs) are particularly apposite given the greatly increasing numbers of TAs in British schools and given findings about the difficulty of effecting adult teamwork in classrooms. In six classrooms, three models of team organisation and planning for the work of teaching assistants -- "room…

  14. Evaluating the Pedagogical Potential of Hybrid Models

    Science.gov (United States)

    Levin, Tzur; Levin, Ilya

    2013-01-01

    The paper examines how the use of hybrid models--that consist of the interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…

  15. Models for Evaluating and Improving Architecture Competence

    Science.gov (United States)

    2008-03-01

    to report on the second. We propose to use the duties from the DSK model to isolate the various aspects of the architect's job. If we are auditing... "for Scenario Analysis." Proceedings of the Fifth International Workshop on Product Family Engineering (PFE-5), Siena, Italy, 2003, Springer.

  16. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  17. A "three-plus-one" evaluation model for clinical research management.

    Science.gov (United States)

    Dilts, David M

    2013-12-01

    Clinical research management (CRM) is a critical resource for the management of clinical trials and it requires proper evaluation. This article advances a model of evaluation that has three local levels, plus one global level, for evaluating the value of CRM. The primary level for evaluation is the study or process level. The managerial or aggregate level concerns management of the portfolio of trials under the control of the CRM office. The third, often overlooked, level of evaluation is the strategic level, whose goal is encapsulated in the phrase, "doing the right trials, while doing trials right." The global ("plus one") evaluation level concerns the need to evaluate the ever-increasing number of multi-institutional and multinational studies. As there are a host of evaluation metrics, this article provides representative examples of metrics at each level and provides methods that can aid in selecting appropriate metrics for an organization.

  18. Test modelling single accidents with the basic evaluation model.

    NARCIS (Netherlands)

    Bijleveld, F.D. & Commandeur, J.J.F.

    2006-01-01

    This report elaborates and illustrates the proposed methodology for the evaluation and exploration of developments in Dutch road safety in the SWOV projects 'Road safety assessments' (Verkeersveiligheidsbalansen) and 'Road safety outlooks' (Verkeersveiligheidsverkenningen). Previous work in this

  19. Automated expert modeling for automated student evaluation.

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  20. A Comprehensive Model for Evaluating Coalbed Methane Reservoirs in China

    Institute of Scientific and Technical Information of China (English)

    YAO Yanbin; LIU Dameng; TANG Dazhen; HUANG Wenhui; TANG Shuheng; CHE Yao

    2008-01-01

    Coalbed methane reservoir (CBMR) evaluation is important for choosing the prospective target area for coalbed methane exploration and production. This study aims at identifying the characteristic parameters and methods to evaluate CBMR. Based on geological surveys, laboratory measurements and field work, a four-level analytic hierarchy process (AHP) model for CBMR evaluation is proposed. In this model, different weights are prioritized and assigned on the basis of three main criteria (reservoir physical property, storage capacity and geological characteristics), 15 sub-criteria, and 18 technical alternatives; the latter of which are discussed in detail. The model was applied to evaluate the CBMR of the Permo-Carboniferous coals in the Qinshui Basin, North China. This GIS-based fuzzy AHP comprehensive model can be used for the evaluation of CBMR of medium-high rank (mean maximum vitrinite reflectance >0.5%) coal districts in China.
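
    The weighting step of an AHP evaluation like the one described above can be illustrated with a short Python sketch. The pairwise comparison values, criteria and sub-criterion scores below are hypothetical placeholders, not figures from the paper; only the mechanics (principal-eigenvector weights, consistency check, weighted aggregation) follow the standard AHP procedure.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for the three main criteria:
    # reservoir physical property, storage capacity, geological characteristics.
    pairwise = np.array([
        [1.0, 2.0, 3.0],
        [1/2, 1.0, 2.0],
        [1/3, 1/2, 1.0],
    ])

    # AHP weights are the normalized principal eigenvector of the comparison matrix.
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()

    # Consistency ratio (random index RI = 0.58 for a 3 x 3 matrix); CR < 0.1 is acceptable.
    lambda_max = eigvals.real.max()
    cr = ((lambda_max - 3) / (3 - 1)) / 0.58
    print("criterion weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))

    # Weighted aggregation of normalized criterion scores for one hypothetical target area.
    scores = np.array([0.7, 0.5, 0.8])
    print("composite suitability:", round(float(weights @ scores), 3))
    ```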

  1. Preparation and Evaluation of Newly Developed Chitosan Salt Coating Dispersions for Colon Delivery without Requiring Overcoating.

    Science.gov (United States)

    Yamada, Kyohei; Iwao, Yasunori; Bani-Jaber, Ahmad; Noguchi, Shuji; Itai, Shigeru

    2015-01-01

    Although chitosan (CS) has been recognized as a good material for colon-specific drug delivery systems, an overcoating with an enteric coating polymer on the surface of CS is absolutely necessary because CS is soluble in acidic conditions before reaching the colon. In the present study, to improve its stability in the presence of acid, a newly developed CS-laurate (CS-LA) material was evaluated as a coating dispersion for the development of colon-specific drug delivery systems. Two types of CS with different molecular weights, CS250 and CS600, were used to prepare CS-LA films by the casting method. The CS250-LA films had smooth surfaces, whereas the surfaces of the CS600-LA films were rough, indicating that the CS250-LA dispersion could form a denser film than CS600-LA. Both of these CS-LA films maintained a constant shape over 22 h in a pH 1.2 HCl/NaCl buffer, where the corresponding CS films rapidly disintegrated. In addition, the CS250-LA film showed specific colon degradability in a pH 6.0 phosphate buffered solution containing 1.0% (w/v) β-glucosidase. Tensile strength and elongation-at-break measurements showed that both CS-LA films had flexible film properties. Finally, the release of acetaminophen from disks coated with CS250-LA dispersions was significantly suppressed in fluids at pH 1.2 and 6.8, whereas disks coated with CS solution rapidly released the drug in pH 1.2 fluids. Taken together, this study shows that LA modification could be a useful approach in preparing CS films with acid stability and colonic degradability properties without requiring overcoating.

  2. Evaluation protocol for the WIND system atmospheric models

    Energy Technology Data Exchange (ETDEWEB)

    Fast, J.D.

    1991-12-31

    Atmospheric transport and diffusion models have been developed for real-time calculations of the location and concentration of toxic or radioactive materials during an accidental release at the Savannah River Site (SRS). These models have been incorporated into an automated, menu-driven, computer-based system called the WIND (Weather INformation and Display) system. In an effort to establish more formal quality assurance procedures for the WIND system atmospheric codes, a software evaluation protocol is being developed. An evaluation protocol is necessary to determine how well the models may perform in emergency response (real-time) situations. The evaluation of high-impact software must be conducted in accordance with WSRC QA Manual, 1Q, QAP 20-1. This report describes the method that will be used to evaluate the atmospheric models. The evaluation will determine the effectiveness of the atmospheric models in emergency response situations, which is not necessarily the same procedure used for research purposes. The format of the evaluation plan will provide guidance for the evaluation of atmospheric models that may be added to the WIND system in the future. The evaluation plan is designed to provide the user with information about the WIND system atmospheric models that is necessary for emergency response situations.

  3. Evaluation protocol for the WIND system atmospheric models

    Energy Technology Data Exchange (ETDEWEB)

    Fast, J.D.

    1991-01-01

    Atmospheric transport and diffusion models have been developed for real-time calculations of the location and concentration of toxic or radioactive materials during an accidental release at the Savannah River Site (SRS). These models have been incorporated into an automated, menu-driven, computer-based system called the WIND (Weather INformation and Display) system. In an effort to establish more formal quality assurance procedures for the WIND system atmospheric codes, a software evaluation protocol is being developed. An evaluation protocol is necessary to determine how well the models may perform in emergency response (real-time) situations. The evaluation of high-impact software must be conducted in accordance with WSRC QA Manual, 1Q, QAP 20-1. This report describes the method that will be used to evaluate the atmospheric models. The evaluation will determine the effectiveness of the atmospheric models in emergency response situations, which is not necessarily the same procedure used for research purposes. The format of the evaluation plan will provide guidance for the evaluation of atmospheric models that may be added to the WIND system in the future. The evaluation plan is designed to provide the user with information about the WIND system atmospheric models that is necessary for emergency response situations.

  4. Evaluation of a hydrological model based on Bidirectional Reach (BReach)

    Science.gov (United States)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.

    2016-04-01

    Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point that is used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points, in the direction of the previous and the following observations respectively, beyond which none of the sampled parameter sets is both satisfactory and results in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results. The methodology is applied on a Probability Distributed Model (PDM) of the river
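
    A rough Python sketch of the reach computation described above is given below, under simplifying assumptions: acceptability of each parameter set at each observation is already known as a boolean matrix, and the left/right reach at an observation is taken as the furthest index any parameter set can extend to while the fraction of non-acceptable results in the window stays within the tolerance degree. Names and the acceptability data are illustrative, not from the paper.

    ```python
    import numpy as np

    def reaches(acceptable, i, tolerance):
        """Simplified left/right reach at observation i.

        acceptable : (n_sets, n_obs) boolean array; True where a parameter set's
                     deviation from the observation lies within observational uncertainty.
        tolerance  : allowed fraction of non-acceptable results within the window.
        """
        n_sets, n_obs = acceptable.shape
        left, right = i, i
        for s in range(n_sets):
            if not acceptable[s, i]:
                continue                       # only sets acceptable at i are considered
            j = i
            while j > 0:                       # extend towards previous observations
                window = acceptable[s, j - 1:i + 1]
                if (~window).mean() > tolerance:
                    break
                j -= 1
            left = min(left, j)
            k = i
            while k < n_obs - 1:               # extend towards following observations
                window = acceptable[s, i:k + 2]
                if (~window).mean() > tolerance:
                    break
                k += 1
            right = max(right, k)
        return left, right

    rng = np.random.default_rng(1)
    acc = rng.random((50, 200)) > 0.2          # synthetic acceptability matrix
    print(reaches(acc, i=100, tolerance=0.1))
    ```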

  5. Mesoscale Wind Predictions for Wave Model Evaluation

    Science.gov (United States)

    2016-06-07

    N0001400WX20041(B) http://www.nrlmry.navy.mil LONG TERM GOALS: The long-term goal is to demonstrate the significance and importance of high... ocean waves by an appropriate wave model. OBJECTIVES: The main objectives of this project are to: 1. Build the infrastructure to generate the... temperature for all COAMPS grids at the resolution of each of these grids. These analyses are important for the proper specification of the lower

  6. CREATING AND EVALUATING SUSTAINABLE BUSINESS MODELS – A CROSS-INDUSTRY CASE STUDY

    DEFF Research Database (Denmark)

    Aagaard, Annabeth

    Sustainability has become a new premise for doing business - moving it from political discourses into company boardrooms (Dryzek, 2005; Bisgaard, 2009; Aagaard, 2016). However, applying business model innovation as a way to create sustainable value requires several alterations of our ways of understanding, creating and evaluating businesses and their business models. The objective of this article is to explore how sustainable business models can be created and evaluated across different companies and industrial contexts. Although many authors have stressed the business potentials of sustainable...

  7. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    Science.gov (United States)

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

    The purposes of this study were 1) to study teachers' authentic assessment practices, teachers' comprehension of authentic assessment, and teachers' needs for authentic assessment development; 2) to create a teachers' development model; 3) to try out the teachers' development model; and 4) to evaluate the effectiveness of the teachers' development model. The research is divided into 4…

  8. THE SYSTEM OF EVALUATION MODELS UI WEB APPLICATION

    OpenAIRE

    Anna A. Pupykina; Anna E. Satunina

    2015-01-01

    Graph theory provides a means for a formalized description of the model of interaction and ensures precise mathematical relationships between the components. A scheme for modeling Web applications was proposed, together with a system of evaluation models for the Web application UI that takes into account such factors as understandability, predictability, and learning. For each indicator, a matrix is constructed for assignment to a compliance class.

  9. THE SYSTEM OF EVALUATION MODELS UI WEB APPLICATION

    Directory of Open Access Journals (Sweden)

    Anna A. Pupykina

    2015-01-01

    Graph theory provides a means for a formalized description of the model of interaction and ensures precise mathematical relationships between the components. A scheme for modeling Web applications was proposed, together with a system of evaluation models for the Web application UI that takes into account such factors as understandability, predictability, and learning. For each indicator, a matrix is constructed for assignment to a compliance class.

  10. EcoMark: Evaluating Models of Vehicular Environmental Impact

    DEFF Research Database (Denmark)

    Guo, Chenjuan; Ma, Mike; Yang, Bin

    2012-01-01

    the vehicle travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply...

  11. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means for validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  12. Applications of Social Science to Management Information Systems and Evaluation Process: A Peace Corps Model.

    Science.gov (United States)

    Lassey, William R.; And Others

    This study discusses some of the central concepts, assumptions and methods used in the development and design of a Management Information and Evaluation System for the Peace Corps in Colombia. Methodological problems encountered are reviewed. The model requires explicit project or program objectives, individual staff behavioral objectives, client…

  13. Modeling of Car-Following Required Safe Distance Based on Molecular Dynamics

    OpenAIRE

    Dayi Qu; Xiufeng Chen; Wansan Yang; Xiaohua Bian

    2014-01-01

    In the car-following procedure, some distance is reserved between vehicles, through which drivers can avoid collisions with vehicles before and after them in the same lane and keep a reasonable clearance from lateral vehicles. This paper investigates characteristics of vehicle operating safety in the car-following state based on required safe distance. To tackle this problem, we probe into required safe distance and the car-following model using molecular dynamics, covering longitudinal and lateral safe...

  14. Computer software requirements specification for the world model light duty utility arm system

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.E.

    1996-02-01

    This Computer Software Requirements Specification defines the software requirements for the world model of the Light Duty Utility Arm (LDUA) System. It is intended to be used to guide the design of the application software, to be a basis for assessing the application software design, and to establish what is to be tested in the finished application software product. (The LDUA deploys end effectors into underground storage tanks by means of a robotic arm on the end of a telescoping mast.)

  15. [Applying multilevel models in evaluation of bioequivalence (I)].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Chen, Feng; Li, Xiao-song; Yang, Min

    2009-12-01

    This study aims to explore the application value of multilevel models for bioequivalence evaluation. Using a real example of a 2 x 4 cross-over experimental design for evaluating the bioequivalence of an antihypertensive drug, this paper explores the complex variance components corresponding to the criterion statistics of the existing methods recommended by the FDA, but obtained from multilevel model analysis. Results are compared with those from the FDA's standard Method of Moments, specifically on the feasibility and applicability of multilevel models in directly assessing average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). When measuring ln(AUC), results for all variance components of the test and reference groups, such as total variance (σ²_TT and σ²_TR), between-subject variance (σ²_BT and σ²_BR) and within-subject variance (σ²_WT and σ²_WR), estimated by simple 2-level models are very close to those obtained using the FDA Method of Moments. In practice, bioequivalence evaluation can be carried out directly with multilevel models, or by FDA criteria based on variance components estimated from multilevel models. Both approaches produce consistent results. Multilevel models can be used to evaluate bioequivalence in cross-over test designs. Compared to the FDA methods, this approach is more flexible in decomposing total variance into sub-components in order to evaluate ABE, PBE and IBE. Multilevel models provide a new way into the practice of bioequivalence evaluation.
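
    As a rough illustration of the two-level idea (not the authors' full 2 x 4 replicate analysis), the sketch below fits a mixed model to synthetic ln(AUC) data with statsmodels, giving a between-subject and a pooled within-subject variance component; separating the test and reference within-subject variances, as needed for IBE, would require a more elaborate covariance structure.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic replicate cross-over data: 24 subjects, each dosed four times (T, R, T, R).
    rng = np.random.default_rng(0)
    subjects = np.repeat(np.arange(24), 4)
    treatment = np.tile(["T", "R", "T", "R"], 24)
    subject_effect = rng.normal(0.0, 0.25, 24)[subjects]          # between-subject variation
    lnAUC = 5.0 + 0.05 * (treatment == "T") + subject_effect + rng.normal(0.0, 0.15, 96)
    data = pd.DataFrame({"subject": subjects, "treatment": treatment, "lnAUC": lnAUC})

    # Two-level model: random intercept per subject, formulation as a fixed effect.
    fit = smf.mixedlm("lnAUC ~ treatment", data, groups=data["subject"]).fit()
    print(fit.summary())
    print("between-subject variance:", float(fit.cov_re.iloc[0, 0]))
    print("within-subject variance :", fit.scale)
    ```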

  16. Evaluation of an Infiltration Model with Microchannels

    Science.gov (United States)

    Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.

    2015-12-01

    The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned the appropriate volume reduction credits by infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, for most intensities water flow occurs in narrow and shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered with the water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments were conducted to characterize the spatial pattern of flow for stormwater runoff entering onto the sloped surface of a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slope of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates the stormwater infiltration efficiency of swales. The Green-Ampt-Mein-Larson assumptions were
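
    As background to the Green-Ampt-Mein-Larson (GAML) approach named at the end of the abstract, a minimal Python sketch of GAML-style infiltration under constant-intensity rainfall is shown below; the soil parameters and rainfall values are hypothetical, and the ponding-time handling is simplified to capping infiltration at the rainfall rate.

    ```python
    def gaml_cumulative_infiltration(rain, duration, dt, Ks, psi, dtheta):
        """Cumulative infiltration (cm) under constant-intensity rainfall.

        rain     : rainfall intensity (cm/h)
        duration : storm duration (h)
        Ks       : saturated hydraulic conductivity (cm/h)
        psi      : wetting-front suction head (cm)
        dtheta   : initial soil moisture deficit (-)
        """
        F, t = 1e-6, 0.0                               # tiny seed avoids division by zero
        while t < duration:
            capacity = Ks * (1.0 + psi * dtheta / F)   # Green-Ampt infiltration capacity
            F += min(rain, capacity) * dt              # before ponding, all rain infiltrates
            t += dt
        return F

    # Hypothetical sandy-loam parameters, 2 cm/h rain for 2 h.
    print(round(gaml_cumulative_infiltration(2.0, 2.0, 0.01, Ks=1.0, psi=11.0, dtheta=0.3), 2))
    ```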

  17. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained... the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results...

  18. A neural network evaluation model for individual thermal comfort

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Weiwei; Lian, Zhiwei; Zhao, Bo [Institute of Refrigeration and Cryogenics, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2007-10-15

    An evaluation model for individual thermal comfort is presented based on a BP neural network. The training data came from a thermal comfort survey. The evaluation results of the model showed a good match with the subjects' real thermal sensation, which indicated that the model can be used to correctly evaluate individual thermal comfort. Taking a room air conditioner as an example, the application of the neural network evaluation model (NNEM) in creating a microenvironment for an individual was discussed. The result showed that the NNEM can play an important role in connecting individual thermal comfort with the control of the air conditioner. (author)
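
    The abstract does not give the network architecture or the survey data, so the sketch below is only a generic stand-in: a small feed-forward (BP-style) regressor trained on synthetic indoor-climate features to predict a thermal sensation vote.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for survey data: air temperature (C), relative humidity (%),
    # air velocity (m/s); target is a thermal sensation vote on a continuous scale.
    rng = np.random.default_rng(42)
    X = np.column_stack([
        rng.uniform(18, 32, 500),
        rng.uniform(30, 70, 500),
        rng.uniform(0.0, 0.5, 500),
    ])
    y = 0.3 * (X[:, 0] - 25) + 0.01 * (X[:, 1] - 50) - 1.5 * X[:, 2] + rng.normal(0, 0.3, 500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out votes:", round(model.score(X_test, y_test), 3))
    ```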

  19. New Partnerships Require New Approaches to Participatory Program Evaluations: Planning for the Future.

    Science.gov (United States)

    McHardy, Joan

    2002-01-01

    Discusses this era of transition in program evaluation, which is marked by growing cooperation between government and civil society, and suggests participatory program evaluation as an approach to lessen undue influence by any one partner and promote a positive culture of evaluation. (SLD)

  20. [Effect evaluation of three cell culture models].

    Science.gov (United States)

    Wang, Aiguo; Xia, Tao; Yuan, Jing; Chen, Xuemin

    2003-11-01

    Primary rat hepatocytes were cultured using three kinds of models in vitro, and enzyme leakage, albumin secretion, and cytochrome P450 1A (CYP 1A) activity were observed. The results showed that the level of LDH in the medium decreased over time during the culture period. However, on day 5, LDH showed a significant increase in monolayer culture (MC), while after 8 days LDH was not detected in sandwich culture (SC). The levels of AST and ALT in the medium did not change significantly over the investigated time. The basal CYP 1A activity gradually decreased with time in MC and SC. The decline of CYP 1A in rat hepatocytes was faster in MC than in SC. This effect was partially reversed by using cytochrome P450 (CYP450) inducers such as omeprazole and 3-methylcholanthrene (3-MC), and the CYP 1A induction was always higher in MC than in SC. Basal CYP 1A activity in the bioreactor was maintained for over 2 weeks, and the highest albumin production was observed in the bioreactor, followed by SC and MC. In conclusion, our results clearly indicated that each of the models has advantages and disadvantages and that they can address different questions in the metabolism of toxicants and drugs.

  1. Evaluating indoor exposure modeling alternatives for LCA: A case study in the vehicle repair industry

    Energy Technology Data Exchange (ETDEWEB)

    Demou, Evangelia; Hellweg, Stefanie; Wilson, Michael P.; Hammond, S. Katharine; McKone, Thomas E.

    2009-05-01

    We evaluated three exposure models with data obtained from measurements among workers who use "aerosol" solvent products in the vehicle repair industry and with field experiments using these products to simulate the same exposure conditions. The three exposure models were: 1) the homogeneously-mixed one-box model, 2) the multi-zone model, and 3) the eddy-diffusion model. Temporally differentiated real-time breathing-zone volatile organic compound (VOC) concentration measurements, integrated far-field area samples, and simulated experiments were used in estimating parameters such as emission rates, diffusivity, and near-field dimensions. We assessed differences in model input requirements and their efficacy for predictive modeling. The one-box model was not able to reproduce the temporal profile of exposure concentrations, but it performed well concerning time-weighted exposure over extended time periods. However, this model required an adjustment for spatial concentration gradients. Multi-zone models and diffusion models may solve this problem. However, we found that the reliable use of both these models requires extensive field data to appropriately define pivotal parameters such as diffusivity or near-field dimensions. We conclude that it is difficult to apply these models for predicting VOC exposures in the workplace. However, for comparative exposure scenarios in life-cycle assessment they may be useful.
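
    For orientation, the first of the three models, the homogeneously mixed one-box model, has a simple closed-form concentration curve; the sketch below uses hypothetical emission, ventilation and room-volume values, not the study's field data.

    ```python
    import numpy as np

    def one_box_concentration(G, Q, V, t):
        """Well-mixed one-box model: C(t) = (G/Q) * (1 - exp(-Q*t/V)).

        G : emission rate (mg/min), Q : ventilation rate (m3/min), V : room volume (m3).
        """
        return (G / Q) * (1.0 - np.exp(-Q * t / V))

    t = np.linspace(0.0, 60.0, 601)                        # one hour, in minutes
    C = one_box_concentration(G=50.0, Q=2.0, V=80.0, t=t)  # hypothetical shop parameters
    print("steady-state concentration (mg/m3):", round(50.0 / 2.0, 1))
    print("time-weighted average (mg/m3):", round(float(C.mean()), 1))  # even spacing
    ```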

  2. Understanding and evaluating the levels of sustainability in business models across dimensions, time and contexts

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2016-01-01

    Over the last decade, research on sustainable businesses and sustainable innovations has increased rapidly. Sustainability has become a new premise for doing business - moving it from political discourses into business boardrooms. However, applying business model innovation as a way to create sustainable value requires several alterations of our ways of understanding and evaluating businesses and their business models. The aim of this article is to map, discuss and present a framework for how to identify and evaluate sustainable business model innovation and sustainability in business models - theoretically and empirically. Five case examples from the public sector, professional services and regulated industries are utilized to provide an understanding of the concept empirically and to derive a framework that enables evaluation and exploration of the levels of sustainability of business models across...

  3. Understanding and evaluating the levels of sustainability in business models over time and across dimensions

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2016-01-01

    Over the last decade, research on sustainable businesses and innovations has expanded rapidly. Sustainability has become a new premise for doing business - moving it from political discourses into company boardrooms. However, applying business model innovation as a way to create sustainable value requires several alterations of our ways of understanding and evaluating businesses and their business models. The aim of this article is to identify, discuss and present a framework for how to identify and evaluate sustainable business model innovation and sustainability in business models - theoretically and empirically. Case examples from the public sector, professional services and regulated industries are utilized to provide an understanding of the concept empirically and to derive a framework that enables evaluation and exploration of the levels of sustainability of business models. The findings...

  4. Evaluation of Cost Models and Needs & Gaps Analysis

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

    This report, ‘D3.1—Evaluation of Cost Models and Needs & Gaps Analysis’, provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders’ needs for calculating and comparing financial information. Based on this evaluation, it aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point for a more efficient use of resources for digital curation. To facilitate and clarify the model evaluation, the report first outlines a basic terminology and a general description of the characteristics of cost and benefit models. The report then describes how the ten current and emerging cost and benefit...

  5. On Early Conflict Identification by Requirements Modeling of Energy System Control Structures

    DEFF Research Database (Denmark)

    Heussen, Kai; Gehrke, Oliver; Niemann, Hans Henrik

    2015-01-01

    Control systems are purposeful systems involving goal-oriented information processing (cyber) and technical (physical) structures. Requirements modeling formalizes fundamental concepts and relations of a system architecture at a high-level design stage and can be used to identify potential design...... at later design stages. However, languages employed for requirements modeling today do not offer the expressiveness necessary to represent control purposes in relation to domain level interactions and therefore miss several types of interdependencies. This paper introduces the idea of control structure...

  6. Experimental evaluation of the power balance model of speed skating.

    Science.gov (United States)

    de Koning, Jos J; Foster, Carl; Lampen, Joanne; Hettinga, Floor; Bobbert, Maarten F

    2005-01-01

    Prediction of speed skating performance with a power balance model requires assumptions about the kinetics of energy production, skating efficiency, and skating technique. The purpose of this study was to evaluate these parameters during competitive imitations for the purpose of improving model predictions. Elite speed skaters (n = 8) performed races and submaximal efficiency tests. External power output (Po) was calculated from movement analysis, aerodynamic models and ice friction measurements. Aerobic kinetics was calculated from breath-by-breath oxygen uptake (VO2). Aerobic power (Paer) was calculated from measured skating efficiency. Anaerobic power (Pan) kinetics was determined by subtracting Paer from Po. We found gross skating efficiency to be 15.8% (1.8%). In the 1,500-m event, the kinetics of Pan was characterized by a first-order system as Pan = 88 + 556e^(-0.0494t) (in W, where t is time). The rate constant for the increase in Paer was -0.153 s^-1, the time delay was 8.7 s, and the peak Paer was 234 W; Paer was equal to 234[1 - e^(-0.153(t-8.7))] (in W). Skating position changed with pre-extension knee angle increasing and trunk angle decreasing throughout the event. We concluded the pattern of Paer to be quite similar to that reported during other competitive imitations, with the exception that the increase in Paer was more rapid. The pattern of Pan does not appear to fit an "all-out" pattern, with near zero values during the last portion of the event, as assumed in our previous model (De Koning JJ, de Groot G, and van Ingen Schenau GJ. J Biomech 25: 573-580, 1992). Skating position changed in ways different from those assumed in our previous model. In addition to allowing improved predictions, the results demonstrate the importance of observations in unique subjects to the process of model construction.
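
    The two fitted power curves reported in the abstract can be combined directly; the short sketch below evaluates Pan and Paer over an assumed ~110 s race duration (the duration itself and treating Paer as zero before the 8.7 s delay are assumptions, not statements from the paper).

    ```python
    import numpy as np

    dt = 0.1
    t = np.arange(0.0, 110.0 + dt, dt)                     # s, assumed 1,500-m race duration
    P_an = 88 + 556 * np.exp(-0.0494 * t)                  # anaerobic power (W), from the abstract
    P_aer = np.where(t > 8.7,                              # aerobic power (W), from the abstract
                     234 * (1 - np.exp(-0.153 * (t - 8.7))), 0.0)
    P_total = P_an + P_aer

    print("peak total power (W):", round(float(P_total.max()), 1))
    print("mean total power (W):", round(float(P_total.mean()), 1))
    print("anaerobic work (kJ):", round(float((P_an * dt).sum()) / 1000, 1))
    ```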

  7. Evaluations of an Experiential Gaming Model

    Directory of Open Access Journals (Sweden)

    Kristian Kiili

    2006-01-01

    This paper examines the experiences of players of a problem-solving game. The main purpose of the paper is to validate the flow antecedents included in an experiential gaming model and to study their influence on the flow experience. Additionally, the study aims to operationalize the flow construct in a game context and to start a scale development process for assessing the experience of flow in game settings. Results indicated that the flow antecedents studied—challenges matched to a player’s skill level, clear goals, unambiguous feedback, a sense of control, and playability—should be considered in game design because they contribute to the flow experience. Furthermore, the indicators of the actual flow experience were distinguished.

  8. Developing a Model for Evaluation and Accreditation of Hospitals

    Directory of Open Access Journals (Sweden)

    AM Mosadegh Rad

    2005-10-01

    Background: Evaluation and accreditation of health care organizations play an important role in increasing the effectiveness and efficiency of these organizations and developing the quality of services provided. The current system of evaluation and accreditation of hospitals in Iran is based on structure only, without considering context, process and output, and this causes many problems for both health care providers and customers. Therefore, there is a pressing need to develop a model for health care systems' evaluation and accreditation that helps these organizations to improve the quality of services. Method: The aim of this article is therefore "to identify the strengths and weaknesses of the current system of evaluation and accreditation of hospitals in Iran, and to determine the ideal system of evaluation and accreditation, in order to develop a model that helps health care organizations achieve effectiveness and efficiency". A descriptive study using a cross-sectional survey for data collection was performed. All hospital managers and hospital evaluators in Isfahan province were studied via a self-administered questionnaire (53 managers and 30 evaluators). This questionnaire specifies the respondents' opinions about the current and ideal systems of evaluation and accreditation of hospitals. Data were analyzed with SPSS 11 software. Findings: The mean scores of the current and ideal systems of evaluation and accreditation of hospitals were 3.12 ± 0.83 and 4.41 ± 0.34 (on a 5-point scale). The mean scores of structure, process and outcome in the current system of hospital evaluation were 3.1 ± 0.73, 3.12 ± 0.91 and 3.09 ± 1, respectively. From the viewpoints of hospital managers and evaluators, the differences between the current and ideal scores of the evaluation and accreditation system were statistically significant (P < 0.05). Therefore, there is a need to develop a model for hospital evaluation and accreditation. Based on these results an empirical model

  9. Improved Traceability of a Small Satellite Mission Concept to Requirements Using Model Based System Engineering

    Science.gov (United States)

    Reil, Robin L.

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the "traditional" document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g. time and cost saved, increased product quality). This paper presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to demonstrate the completeness and consistency of the requirements to the mission concept. Anecdotal information and process-duration metrics are presented for both the MBSE and original DBSE design efforts of SporeSat.

  10. Improved Traceability of Mission Concept to Requirements Using Model Based Systems Engineering

    Science.gov (United States)

    Reil, Robin

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the traditional document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g. time and cost saved, increased product quality). This thesis presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to analyze the completeness and consistency of the requirements to the mission concept. Overall experience and methodology are presented for both the MBSE and original DBSE design efforts of SporeSat.

  11. Automatically multi-paradigm requirements modeling and analyzing: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    There are several purposes for modeling and analyzing the problem domain before starting the software requirements analysis. First, it focuses on the problem domain, so that the domain users can be involved easily. Secondly, a comprehensive description of the problem domain helps in obtaining a comprehensive software requirements model. This paper proposes an ontology-based approach for modeling the problem domain. It interacts with the domain users by using terminology that they can understand and guides them to provide the relevant information. A multiple-paradigm analysis approach, based on the description of the problem domain, has also been presented. Three criteria, i.e. the rationality of the organization structure, the achievability of organization goals, and the feasibility of the organization process, have been proposed. The results of the analysis can be used as feedback for guiding the domain users to provide further information on the problem domain. The models of the problem domain can also serve as documentation for the pre-requirements analysis phase, and they will be the basis for further software requirements modeling.

  12. Fractured rock modeling in the National Waste Terminal Storage Program: a review of requirements and status

    Energy Technology Data Exchange (ETDEWEB)

    St. John, C.; Krug, A.; Key, S.; Monsees, J.

    1983-05-01

    Generalized computer codes capable of forming the basis for numerical models of fractured rock masses are being used within the NWTS program. Little additional development of these codes is considered justifiable, except in the area of representation of discrete fractures. On the other hand, model preparation requires definition of medium-specific constitutive descriptions and site characteristics and is therefore legitimately conducted by each of the media-oriented projects within the National Waste Terminal Storage program. However, it is essential that a uniform approach to the role of numerical modeling be adopted, including agreement upon the contribution of modeling to the design and licensing process and the need for, and means of, model qualification for particular purposes. This report discusses the role of numerical modeling, reviews the capabilities of several computer codes that are being used to support design or performance assessment, and proposes a framework for future numerical modeling activities within the NWTS program.

  13. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emissions Standards for Hazardous Air Pollutants: Reinforced Plastic Composites... emissions to an add-on control device that is a PTE Meet the requirements for a PTE EPA method 204...

  14. Ocean thermal energy conversion (OTEC) platform configuration and integration. Final report. Volume I. Systems requirements and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-06-01

    Studies leading to the development of two 400 MW Offshore Thermal Energy Conversion Commercial Plants are presented. This volume includes a summary of the following tasks: task IIA--systems evaluation and requirements; task IIB--evaluation plan; task III--technology review; and task IV--systems integration evaluation. Task IIA includes the definition of top level requirements and an assessment of factors critical to the selection of hull configuration and size, quantification of payload requirements and characteristics, and sensitivity of system characteristics to site selection. Task IIB includes development of a methodology for systematically evaluating the candidate hullforms, based on interrelationships and priorities developed during task IIA. Task III includes the assessment of current technology and identification of deficiencies in relation to OTEC requirements and the development of plans to correct such deficiencies. Task IV involves the formal evaluation of the six candidate hullforms in relation to site and plant capacity to quantify cost/size/capability relationships, leading to selection of an optimum commercial plant. (WHK)

  15. Use of the Aerosol Rabbitpox Virus Model for Evaluation of Anti-Poxvirus Agents

    Directory of Open Access Journals (Sweden)

    Thomas G. Voss

    2010-09-01

    Smallpox is an acute disease caused by infection with variola virus that has had historic effects on the human population due to its virulence and infectivity. Because variola remains a threat to humans, the discovery and development of novel pox therapeutics and vaccines has been an area of intense focus. As variola is a uniquely human virus lacking a robust animal model, the development of rational therapeutic or vaccine approaches for variola requires the use of model systems that reflect the clinical aspects of human infection. Many laboratory animal models of poxviral disease have been developed over the years to study host response and to evaluate new therapeutics and vaccines for the treatment or prevention of human smallpox. Rabbitpox (rabbitpox virus infection in rabbits) is a severe and often lethal infection that has been identified as an ideal disease model for the study of poxviruses in a non-rodent species. The aerosol infection model (aerosolized rabbitpox infection) embodies many of the desired aspects of the disease syndrome that involve the respiratory system and thus may serve as an appropriate model for evaluation of antivirals under development for the therapeutic treatment of human smallpox. In this review we summarize the aerosol model of rabbitpox, discuss the development efforts that have thus far used this model for antiviral testing, and comment on the prospects for its use in future evaluations requiring a poxviral model with a focus on respiratory infection.

  16. [Development of human embryonic stem cell model for toxicity evaluation].

    Science.gov (United States)

    Yu, Guang-yan; Cao, Tong; Ouyang, Hong-wei; Peng, Shuang-qing; Deng, Xu-liang; Li, Sheng-lin; Liu, He; Zou, Xiao-hui; Fu, Xin; Peng, Hui; Wang, Xiao-ying; Zhan, Yuan

    2013-02-18

    The current international standards for toxicity screening of biomedical devices and materials recommend the use of immortalized cell lines because of their homogeneous morphologies and infinite proliferation, which provide good reproducibility for in vitro cytotoxicity screening. However, most of the widely used immortalized cell lines are derived from animals and may not be representative of normal human cell behavior in vivo, in particular in terms of the cytotoxic and genotoxic response. Therefore, it is vital to develop a model for toxicity evaluation. In our studies, two Chinese human embryonic stem cell (hESC) lines were established as toxicity models. hESC-derived tissue/organ cell models for tissue/organ-specific toxicity evaluation were developed. The efficiency and accuracy of using hESC models for cytotoxicity, embryotoxicity and genotoxicity evaluation were confirmed. The results indicated that hESCs might be good tools for toxicity testing and biosafety evaluation in vitro.

  17. Industrial Waste Management Evaluation Model Version 3.1

    Science.gov (United States)

    IWEM is a screening level ground water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software, which includes additional tools to evaluate the beneficial use of industrial materials

  18. A Review on Evaluation Methods of Climate Modeling

    Institute of Scientific and Technical Information of China (English)

    ZHAO Zong-Ci; LUO Yong; HUANG Jian-Bin

    2013-01-01

    There is scientific progress in the evaluation methods for recent Earth system models (ESMs). Methods range from single-variable to multi-variable, multi-process, multi-phenomenon quantitative evaluations in five layers (spheres) of the Earth system; from climatic mean assessment to climate change (such as trends, periodicity, interdecadal variability), extreme values, abnormal characteristics and quantitative evaluations of phenomena; and from qualitative assessment to quantitative calculation of reliability and uncertainty for model simulations. Researchers have started considering independence and similarity between models in multi-model use, as well as the quantitative evaluation of climate prediction and projection effects and quantitative uncertainty contribution analysis. In this manuscript, the simulations and projections by both CMIP5 and CMIP3 that have been published after 2007 are reviewed and summarized.

  19. Spatial Economic-Hydroecological Modelling and Evaluation of Land Use Impacts in the Vecht Wetlands Area

    OpenAIRE

    van den Bergh, J.C.J.M.; Barendregt, A.; Gilbert, A.; Herwijnen, M. van; van Horssen, P.; P. Kandelaars

    2000-01-01

    Wetlands provide many important goods and services to human societies, and generate nonuse values as well. Wetlands are also very sensitive ecosystems that are subject to much stress from human activities. Reducing the stress on wetlands requires a spatial matching between physical planning, hydrological and ecological processes, and economic activities. Spatially integrated modelling and evaluation can support this. The present study has developed a triple layer model that integrates informa...

  20. Evaluation of a Physiologically Based Pharmacokinetic (PBPK) Model Used to Develop Health Protective Levels for Trichloroethylene

    Science.gov (United States)

    2017-02-28

    AFRL-RH-WP-TR-2017-0014. Evaluation of a Physiologically-Based Pharmacokinetic (PBPK) Model Used to Develop Health Protective Levels for Trichloroethylene. Contract number FA8650-15-2-6608. The authors acknowledge Anita Meyer, Army Corps of Engineers (CEHNC-EMS), and Shannon S. Garcia, AFCEC/CZTE, for their efforts to obtain the required funding.

  1. Regime-based evaluation of cloudiness in CMIP5 models

    Science.gov (United States)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud-top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [the long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations when the latter are evaluated against ISCCP like another model output. Lastly, contrasting cloud simulation performance against each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
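
    The core metrics named above (per-regime RFO, CF, and their product TCA, plus RFO map correlations) can be computed with a few lines of array code; the regime maps and cloud fractions below are random placeholders rather than ISCCP or CMIP5 data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_regimes, n_cells = 8, 1000

    # Placeholder relative frequency of occurrence (RFO) maps and per-regime cloud fractions.
    rfo_obs = rng.dirichlet(np.ones(n_regimes), size=n_cells).T    # shape (regime, cell)
    rfo_mod = rng.dirichlet(np.ones(n_regimes), size=n_cells).T
    cf = rng.uniform(0.2, 0.9, n_regimes)

    # Long-term total cloud amount per cell: sum over regimes of RFO * CF.
    tca_obs = cf @ rfo_obs
    tca_mod = cf @ rfo_mod
    print("global mean TCA bias:", round(float((tca_mod - tca_obs).mean()), 3))

    # Pattern agreement per regime: spatial correlation of model vs. observed RFO maps.
    for k in range(n_regimes):
        r = np.corrcoef(rfo_mod[k], rfo_obs[k])[0, 1]
        print(f"regime {k}: RFO map correlation = {r:.2f}")
    ```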

  2. A Formal Method to Model Early Requirement of Multi-Agent System

    Institute of Scientific and Technical Information of China (English)

    MAO Xin-jun; YU Eric

    2004-01-01

    A formal specification language iFL based on i* framework is presented in this paper to formally specify and analyze the early requirement of multi-agent system. It is a branching temporal logic which defines the concepts and models in i* framework in a rigorous way. The method to transform the i* models to iFL formal specification is also put forward.

  3. System Design Description and Requirements for Modeling the Off-Gas Systems for Fuel Recycling Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Daryl R. Haefner; Jack D. Law; Troy J. Tranter

    2010-08-01

    This document provides descriptions of the off-gases evolved during spent nuclear fuel processing and the systems used to capture the gases of concern. Two reprocessing techniques are discussed, namely aqueous separations and electrochemical (pyrochemical) processing. The unit operations associated with each process are described in enough detail so that computer models to mimic their behavior can be developed. The document also lists the general requirements for the desired computer models.

  4. A new model for comprehensive evaluation of a mine design

    Energy Technology Data Exchange (ETDEWEB)

    Chen, L.; Sun, B. [Liaoing Technical University (China)

    1997-02-01

    Based on the general procedure of evaluation, a model is established for the comprehensive evaluation of a mine design by applying the multi-goal method. In the evaluation, equivalent analysis and changeable position value analysis are introduced. Theoretical analysis shows that this method is technologically simple and easy to operate. It is a good method for optimizing a mine design with multiple indexes. 1 ref.

  5. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

    2008-09-01

    Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model

  6. An Emerging Model for Student Feedback: Electronic Distributed Evaluation

    Science.gov (United States)

    Brunk-Chavez, Beth; Arrigucci, Annette

    2012-01-01

    In this article we address several issues and challenges that the evaluation of writing presents individual instructors and composition programs as a whole. We present electronic distributed evaluation, or EDE, as an emerging model for feedback on student writing and describe how it was integrated into our program's course redesign. Because the…

  7. Evaluation of clinical model for deep vein thrombosis: a cheap ...

    African Journals Online (AJOL)

    Evaluation of clinical model for deep vein thrombosis: a cheap alternative for ... Results: Twelve (57.1%) of the 21 patients evaluated had a high pretest clinical ... There was a 100% correlation between the high-risk categories and the ...

  8. A Universal Model for the Normative Evaluation of Internet Information.

    NARCIS (Netherlands)

    Spence, E.H.

    2009-01-01

    Beginning with the initial premise that as the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska’s

  9. Information and complexity measures for hydrologic model evaluation

    Science.gov (United States)

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  10. A universal model for the normative evaluation of internet information

    NARCIS (Netherlands)

    Spence, Edward H.

    2009-01-01

    Beginning with the initial premise that as the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska’s

  11. Evaluating Interventions with Multimethod Data: A Structural Equation Modeling Approach

    Science.gov (United States)

    Crayen, Claudia; Geiser, Christian; Scheithauer, Herbert; Eid, Michael

    2011-01-01

    In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait-multimethod-multioccasion-multigroup design can be analyzed with structural equation models. In particular, we show how the…

  12. A Methodology for Writing High Quality Requirements Specification and Evaluating Existing Ones

    Science.gov (United States)

    Rosenberg, Linda; Hammer, Theodore

    1999-01-01

    Requirements development and management have always been critical in the implementation of software systems; engineers are unable to build what analysts can't define. It is generally accepted that the earlier in the life cycle potential risks are identified, the easier it is to eliminate or manage the conditions that introduce that risk. Problems that are not found until testing are approximately 14 times more costly to fix than if the problem was found in the requirements phase. The requirements specification, as the first tangible representation of the capability to be produced, establishes the basis for all of the project's engineering management and assurance functions. If the quality of the requirements specification is poor, it can give rise to risks in all areas of the project. Recently, automated tools have become available to support requirements management. The use of these tools not only provides support in the definition and tracing of requirements, but it also opens the door to effective use of metrics in characterizing and assessing the quality of the requirements specifications.
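
    A toy illustration of the kind of lexical metrics such tools compute is sketched below; the indicator phrase lists are invented for the example and are not the actual lists used by any particular requirements-analysis tool.

    ```python
    import re

    # Illustrative indicator lists; real tools use much larger, curated vocabularies.
    WEAK_PHRASES = ["as appropriate", "if practical", "adequate", "tbd", "etc."]
    IMPERATIVES = ["shall", "must", "will"]

    def requirement_metrics(statements):
        weak = sum(any(p in s.lower() for p in WEAK_PHRASES) for s in statements)
        imperative = sum(any(re.search(rf"\b{w}\b", s.lower()) for w in IMPERATIVES)
                         for s in statements)
        return {"requirements": len(statements),
                "with_imperative": imperative,
                "with_weak_phrase": weak}

    reqs = [
        "The system shall log all operator commands.",
        "The arm will retract as appropriate when power is lost.",
        "Telemetry format is TBD.",
    ]
    print(requirement_metrics(reqs))
    ```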

  13. Modelling and Evaluation of Spectra in Beam Aided Spectroscopy

    Science.gov (United States)

    von Hellermann, M. G.; Delabie, E.; Jaspers, R.; Lotte, P.; Summers, H. P.

    2008-10-01

    The evaluation of active beam-induced spectra requires advanced modelling of both active and passive features. Three types of line shapes are addressed in this paper: first, thermal spectra representing Maxwellian distribution functions, described by Gaussian-like line shapes; secondly, broad-band fast-ion spectra with energies well above local ion temperatures; and, finally, the narrow line shapes of the equi-spaced Motional Stark effect (MSE) multiplet of excited neutral beam particles travelling through the magnetic field confining the plasma. In each case, additional line-shape broadening caused by Gaussian-like instrument functions is taken into account. Further broadening effects are induced by collision-velocity-dependent effective atomic rates, where the observed spectral shape is the result of a convolution of the emission rate function and the velocity distribution function projected onto the direction of observation. In the case of Beam Emission Spectroscopy, which encompasses the Motional Stark features, line broadening is also caused by the finite angular spread of injected neutrals and secondly by a ripple in the acceleration voltage associated with high-energy neutral beams.
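
    The instrument-function broadening mentioned above amounts to a convolution; for two Gaussian profiles the widths add in quadrature. The sketch below demonstrates this numerically with made-up wavelengths and widths (not values from the paper).

    ```python
    import numpy as np

    def gaussian(x, center, fwhm):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    wavelength = np.linspace(528.0, 530.0, 4001)            # nm, made-up grid around a line
    thermal = gaussian(wavelength, 529.0, fwhm=0.12)        # Doppler-broadened thermal feature
    instrument = gaussian(wavelength, 529.0, fwhm=0.05)     # Gaussian-like instrument function

    # Observed line shape = emission profile convolved with the instrument function;
    # for two Gaussians the resulting FWHM is sqrt(0.12**2 + 0.05**2) ~ 0.13 nm.
    dx = wavelength[1] - wavelength[0]
    observed = np.convolve(thermal, instrument, mode="same") * dx

    above_half = wavelength[observed >= 0.5 * observed.max()]
    print("observed FWHM (nm):", round(above_half[-1] - above_half[0], 3))
    ```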

  14. De novo actin polymerization is required for model Hirano body formation in Dictyostelium

    Directory of Open Access Journals (Sweden)

    Yun Dong

    2016-06-01

    Full Text Available Hirano bodies are eosinophilic, actin-rich inclusions found in autopsied brains in numerous neurodegenerative diseases. The mechanism of Hirano body formation is unknown. Mass spectrometry analysis was performed to identify proteins from partially purified model Hirano bodies from Dictyostelium. This analysis identified proteins primarily belonging to ribosomes, proteasomes, mitochondria and the cytoskeleton. Profilin, Arp2/3 and WASH identified by mass spectrometry were found to colocalise with model Hirano bodies. Due to their roles in actin regulation, we selected these proteins for further investigation. Inhibition of the Arp2/3 complex by CK666 prevented formation of model Hirano bodies. Since Arp2/3 activation occurs via the WASH or WAVE complex, we next investigated how these proteins affect Hirano body formation. Whereas model Hirano bodies could form in WASH-deficient cells, they failed to form in cells lacking HSPC300, a member of the WAVE complex. We identified other proteins required for Hirano body formation, including profilin and VASP, an actin nucleation factor. In the case of VASP, both its G- and F-actin binding domains were required for model Hirano body formation. Collectively, our results indicate that de novo actin polymerization is required to form model Hirano bodies.

  15. Understanding the relationship between Kano model's customer satisfaction scores and self-stated requirements importance.

    Science.gov (United States)

    Mkpojiogu, Emmanuel O C; Hashim, Nor Laily

    2016-01-01

    Customer satisfaction is the result of product quality and viability. The perceived satisfaction of users/customers with a software product cannot be neglected, especially in today's competitive market environment, as it drives customer loyalty and promotes high profitability and return on investment. It is therefore worth understanding how the importance of requirements is associated with the satisfaction of users/customers when those requirements are met. Specifically, it is necessary to know the relationship between customer satisfaction when requirements are met (or dissatisfaction when they are unmet) and the importance of those requirements. Many studies have addressed customer satisfaction in connection with the importance of requirements, but the relationship between the customer satisfaction scores (coefficients) of the Kano model and users'/customers' self-stated requirements importance has not been sufficiently explored. In this study, an attempt is made to unravel the underlying relationship between the Kano model's customer satisfaction indexes and users'/customers' self-reported requirements importance. The results indicate some interesting associations between the considered variables. These bivariate associations reveal that the customer satisfaction index (SI) and the average satisfaction coefficient (ASC), and the customer dissatisfaction index (DI) and the average satisfaction coefficient (ASC), are highly correlated (r = 96%), and thus ASC can be used in place of either SI or DI in representing customer satisfaction scores. Also, these Kano model customer satisfaction variables (SI, DI, and ASC) are each associated with self-stated requirements importance (IMP). Further analysis indicates that the value customers or users place on requirements that are met, or on features that are incorporated into a product, influences the level of satisfaction such customers derive from the product.
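
    As an illustration of the coefficients discussed above, the sketch below computes the commonly used Kano satisfaction (SI) and dissatisfaction (DI) indexes from hypothetical classification counts and correlates them with self-stated importance. The ASC definition used here (mean of the coefficient magnitudes) and all numbers are assumptions, not the study's data.

```python
# Minimal sketch with hypothetical data: Kano satisfaction (SI) and
# dissatisfaction (DI) coefficients per requirement, plus their correlation
# with self-stated importance. A, O, M, I are counts of Attractive,
# One-dimensional, Must-be and Indifferent classifications.
import numpy as np

# rows: requirements; columns: A, O, M, I counts (hypothetical survey results)
counts = np.array([
    [30, 40, 10, 20],
    [10, 15, 50, 25],
    [45, 20,  5, 30],
])
importance = np.array([4.2, 4.6, 3.1])   # self-stated importance, hypothetical

A, O, M, I = counts.T
total = A + O + M + I
SI = (A + O) / total                     # satisfaction if the requirement is met
DI = -(O + M) / total                    # dissatisfaction if it is not met
ASC = (np.abs(SI) + np.abs(DI)) / 2      # one possible "average" coefficient (assumption)

for name, values in [("SI", SI), ("DI", DI), ("ASC", ASC)]:
    r = np.corrcoef(values, importance)[0, 1]
    print(f"corr({name}, importance) = {r:+.2f}")
```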

  16. Evaluation of global luminous efficacy models for Florianopolis, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    De Souza, Roberta G.; Pereira, Fernando O.R. [Universidade Federal de Santa Catarina, Florianopolis (Brazil). Laboratorio de Conforto Ambiental, Dpto. de Arquitetura; Robledo, Luis [Universidad Politecnica de Madrid, Madrid (Spain). E.P.E.S. Ciencias Ambientales; Soler, Alfonso [Universidad Politecnica de Madrid, Madrid (Spain). E.P.E.S. Ciencias Ambientales and Dpto. de Fisica e Instalaciones Aplicadas, E.T.S. de Arquitectura

    2006-10-15

    Several global luminous efficacy models have been tested with daylight data measured in Florianopolis, Southern Brazil. The models have been used with their original coefficients, as given by their authors, and also with local coefficients obtained when the models were optimized with the data measured in Florianopolis. The evaluation of the different models has been carried out considering three sky categories, according to a higher or lower presence of clouds. For clear sky, the models tested have been compared with a proposed polynomial model in solar altitude, obtained by the best fit of experimental points for Florianopolis. It has been shown that the model coefficients have a local character. If those models are used with local coefficients, no single model works better than the others for all sky types; rather, a different model could be recommended for each sky category. (author)
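
    As an illustration of fitting a polynomial luminous efficacy model in solar altitude with local coefficients, the sketch below uses hypothetical measurements (not the Florianopolis data) and numpy's polynomial fitting.

```python
# Minimal sketch (hypothetical data): fit a clear-sky global luminous
# efficacy model as a polynomial in solar altitude and report its RMSE,
# mirroring the "local coefficients" idea in the abstract.
import numpy as np

altitude_deg = np.array([10, 20, 30, 40, 50, 60, 70])       # solar altitude
efficacy = np.array([95, 103, 108, 111, 113, 114, 115])     # lm/W, hypothetical

coeffs = np.polyfit(altitude_deg, efficacy, deg=2)           # local coefficients
predicted = np.polyval(coeffs, altitude_deg)
rmse = np.sqrt(np.mean((predicted - efficacy) ** 2))

print("fitted coefficients:", np.round(coeffs, 4))
print(f"RMSE = {rmse:.2f} lm/W")
```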

  17. An Evaluation of Unsaturated Flow Models in an Arid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, J. [Univ. of Nevada, Las Vegas, NV (United States)

    1999-12-01

    The objective of this study was to evaluate the effectiveness of two unsaturated flow models in arid regions. The area selected for the study was the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada Test Site in Nye County, Nevada. The two models selected for this evaluation were HYDRUS-1D [Simunek et al., 1998] and the SHAW model [Flerchinger and Saxton, 1989]. Approximately 5 years of soil-water and atmospheric data collected from an instrumented weighing lysimeter site near the RWMS were used for building the models with actual initial and boundary conditions representative of the site. Physical processes affecting the site and model performance were explored. Model performance was based on a detailed sensitivity analysis and ultimately on storage comparisons. During the process of developing descriptive model input, procedures for converting hydraulic parameters for each model were explored. In addition, the compilation of atmospheric data collected at the site became a useful tool for developing predictive functions for future studies. The final model results were used to evaluate the capacities of the HYDRUS and SHAW models for predicting soil-moisture movement and variable surface phenomena for bare soil conditions in the arid vadose zone. The development of calibrated models along with the atmospheric and soil data collected at the site provide useful information for predicting future site performance at the RWMS.

  18. INTEGRATED DATA CAPTURING REQUIREMENTS FOR 3D SEMANTIC MODELLING OF CULTURAL HERITAGE: THE INCEPTION PROTOCOL

    Directory of Open Access Journals (Sweden)

    R. Di Giulio

    2017-02-01

    In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. The purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.

  19. Non-formal techniques for requirements elicitation, modeling, and early assessment for services

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Vyas, Dhaval; Dittmar, A.; Forbig, P.

    2011-01-01

    Designing systems for multiple stakeholders requires frequent collaboration with multiple stakeholders from the start. In many cases at least some stakeholders lack a professional habit of formal modeling. We report observations from two case studies of stakeholder involvement in early design where

  20. Staying in the Light: Evaluating Sustainability Models for Brokering Software

    Science.gov (United States)

    Powers, L. A.; Benedict, K. K.; Best, M.; Fyfe, S.; Jacobs, C. A.; Michener, W. K.; Pearlman, J.; Turner, A.; Nativi, S.

    2015-12-01

    The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding source, implementation frameworks and obstacles, and policy and legal considerations. The issue of sustainability is not unique to brokering software and these models may be relevant to many applications. Results of this comprehensive analysis highlight advantages and disadvantages of the various models in respect to the specific requirements for brokering services. We offer recommendations based on the outcomes of this analysis while recognizing that all software is part of an evolutionary process and has a lifespan.

  1. Evaluation of Roof Bolting Requirements Based on In-Mine Roof Bolter Drilling

    Energy Technology Data Exchange (ETDEWEB)

    Syd S. Peng

    2005-10-01

    Roof bolting is the most popular method for supporting underground openings in the mining industry, especially in bedded deposits such as coal. In fact, all U.S. underground coal mine entries are roof-bolted as required by law. However, roof falls still occur frequently in roof-bolted entries. The two possible reasons are: the lack of knowledge of, and technology to detect, the roof geological conditions in advance of mining, and the lack of roof bolting design criteria for modern roof bolting systems. This research is to develop a method for predicting the roof geology and stability condition in real time during the roof bolting operation. Based on this information, roof bolting design criteria for modern roof bolting systems will be developed for implementation in real time. For the prediction of roof geology and stability condition in real time, a microprocessor was used and a program developed to monitor and record the drilling parameters of the roof bolter. These parameters include feed pressure, feed flow (penetration rate), rotation pressure, rotation rate, vacuum pressure, oil temperature of the hydraulic circuit, and machine control signals. From the results of a series of laboratory and underground tests so far, feed pressure is found to be a good indicator for identifying voids/fractures and estimating the roof rock strength. A method for quantitatively determining the location and size of voids/fractures and estimating the roof rock strength from the drilling parameters of the roof bolter was developed. Also, a set of computational rules for characterizing the in-mine roof from measured roof drilling parameters has been developed and implemented in MRGIS (Mine Roof Geology Information System), a software package developed to allow mine engineers to make use of the large amount of roof drilling parameters for predicting roof geology properties automatically. For the development of roof bolting criteria, finite element models were developed for tensioned and fully grouted bolting
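
    As an illustration of using feed pressure to locate voids/fractures, the sketch below (hypothetical log format, threshold and data; not the MRGIS rules) flags depth intervals where feed pressure drops well below its running median.

```python
# Minimal sketch (hypothetical log format and threshold, not the MRGIS rules):
# flag likely voids/fractures where the bolter's feed pressure drops well
# below its running median while the bit keeps penetrating.
import numpy as np

depth_mm = np.arange(0, 1200, 10)                             # drill depth samples
feed_pressure = np.full_like(depth_mm, 180.0, dtype=float)    # kPa, hypothetical
feed_pressure[45:52] = 40.0                                   # pressure drop = possible void

def flag_voids(depth, pressure, window=21, drop_ratio=0.5):
    """Return depths where pressure < drop_ratio * running median."""
    half = window // 2
    flags = []
    for i in range(len(pressure)):
        lo, hi = max(0, i - half), min(len(pressure), i + half + 1)
        baseline = np.median(pressure[lo:hi])
        if pressure[i] < drop_ratio * baseline:
            flags.append(int(depth[i]))
    return flags

print("possible void/fracture at depths (mm):", flag_voids(depth_mm, feed_pressure))
```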

  2. The Dynamic Integrated Evaluation Model (DIEM): Achieving Sustainability in Organizational Intervention through a Participatory Evaluation Approach.

    Science.gov (United States)

    von Thiele Schwarz, Ulrica; Lundmark, Robert; Hasson, Henna

    2016-10-01

    Recently, there have been calls to develop ways of using a participatory approach when conducting interventions, including evaluating the process and context to improve and adapt the intervention as it evolves over time. The need to integrate interventions into daily organizational practices, thereby increasing the likelihood of successful implementation and sustainable changes, has also been highlighted. We propose an evaluation model-the Dynamic Integrated Evaluation Model (DIEM)-that takes this into consideration. In the model, evaluation is fitted into a co-created iterative intervention process, in which the intervention activities can be continuously adapted based on collected data. By explicitly integrating process and context factors, DIEM also considers the dynamic sustainability of the intervention over time. It emphasizes the practical value of these evaluations for organizations, as well as the importance of their rigorousness for research purposes. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Research on Maintainability Evaluation Model Based on Fuzzy Theory

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Maintainability influencing attributes are analyzed, their weight and value calculating methods are given, and a maintainability fuzzy evaluation method based on relative closeness is proposed. Based on maintenance task simulations operated in a virtual environment, the maintainability virtual evaluation model is built by analyzing the maintenance task for each replaceable unit of the product. Finally, a case study is given based on the main landing gear system of a certain type of civil aircraft, and the result indicates that the model is suitable for qualitative maintainability evaluation and can support maintainability concurrent design.
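
    As an illustration of a relative-closeness calculation of the kind mentioned above, the sketch below (hypothetical attributes, scores and weights) computes a TOPSIS-style closeness of each replaceable unit to the ideal maintainability profile.

```python
# Minimal sketch (hypothetical attribute scores and weights): a TOPSIS-style
# "relative closeness" of each replaceable unit to the ideal maintainability
# profile, in the spirit of the fuzzy evaluation described in the abstract.
import numpy as np

# rows: replaceable units; columns: attribute scores in [0, 1]
# (accessibility, modularity, tooling needs, ...), hypothetical
scores = np.array([
    [0.8, 0.7, 0.9],
    [0.5, 0.9, 0.6],
    [0.3, 0.4, 0.8],
])
weights = np.array([0.5, 0.3, 0.2])        # attribute weights, hypothetical

weighted = scores * weights
ideal = weighted.max(axis=0)               # best value per attribute
anti_ideal = weighted.min(axis=0)          # worst value per attribute

d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)   # 1 = ideal, 0 = worst

for unit, c in enumerate(closeness, start=1):
    print(f"unit {unit}: relative closeness = {c:.3f}")
```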

  4. Evaluation and Simulation of Black-box Arc Models for High-Voltage Circuit-Breakers

    OpenAIRE

    Gustavsson, Niklas

    2004-01-01

    The task of this Master's thesis was to evaluate different black-box arc models for circuit-breakers, with the purpose of finding criteria for the breaking ability. A black-box model is a model that requires no knowledge from the user of the underlying physical processes. Black-box arc models have been used in circuit-breaker development for many years. Arc voltages from tests made in the High Power Laboratory in Ludvika were used for validation, along with the resistance calculated at current ...
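
    The abstract does not list the specific models evaluated; as an illustration of a black-box arc model, the sketch below integrates a Mayr-type arc equation (a common example of this model class) with hypothetical parameters.

```python
# Minimal sketch (my example, not necessarily the models used in the thesis):
# a Mayr-type black-box arc, d(ln g)/dt = (u*i/P - 1)/tau, driven by a
# 50 Hz current and integrated with a simple Euler scheme.
import numpy as np

tau = 0.3e-6       # arc time constant [s], hypothetical
P = 30e3           # cooling power [W], hypothetical
I_peak = 10e3      # peak current [A]
f = 50.0           # supply frequency [Hz]

dt = 1e-7
t = np.arange(0.0, 0.02, dt)
i = I_peak * np.sin(2 * np.pi * f * t)

g = np.empty_like(t)
g[0] = 1e4                                   # initial arc conductance [S]
for k in range(1, len(t)):
    u = i[k - 1] / g[k - 1]                  # arc voltage from Ohm's law
    dlng = (u * i[k - 1] / P - 1.0) / tau    # Mayr equation
    g[k] = g[k - 1] * np.exp(dlng * dt)

print(f"minimum conductance near current zero: {g.min():.3e} S")
```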

  5. The IIR evaluation model: a framework for evaluation of interactive information retrieval systems

    Directory of Open Access Journals (Sweden)

    Borlund Pia

    2003-01-01

    Full Text Available An alternative approach to the evaluation of interactive information retrieval (IIR) systems is proposed. The model provides a framework for the collection and analysis of IR interaction data.

  6. Experiences in Using Practitioner’s Checklists to Evaluate the Relevance of Experiments Reported in Requirements Engineering

    NARCIS (Netherlands)

    Daneva, Maia; Sikkel, Nicolaas; Condori-Fernandez, Nelly; Herrmann, Andrea

    2011-01-01

    Background: Requirements Engineering (RE) researchers recognize that for RE methods to be adopted in industry, practitioners should be able to evaluate the relevance of a study to their practice. Kitchenham et al proposed a set of perspective-based checklists, which demonstrated to be a useful instr

  7. A comprehensive model to evaluate implementation of the world health organization framework convention of tobacco control.

    Science.gov (United States)

    Sarrafzadegan, Nizal; Kelishad, Roya; Rabiei, Katayoun; Abedi, Heidarali; Mohaseli, Khadijeh Fereydoun; Masooleh, Hasan Azaripour; Alavi, Mousa; Heidari, Gholamreza; Ghaffari, Mostafa; O'Loughlin, Jennifer

    2012-03-01

    Iran is one of the countries that have ratified the World Health Organization Framework Convention on Tobacco Control (WHO-FCTC), and it has implemented a series of tobacco control interventions including the Comprehensive Tobacco Control Law. Enforcement of this legislation and assessment of its outcomes require a dedicated evaluation system. This study aimed to develop a generic model, based on the WHO-FCTC articles, to evaluate the implementation of the Comprehensive Tobacco Control Law in Iran. Using a grounded theory approach, qualitative data were collected from 265 subjects in individual interviews and focus group discussions with the policymakers who designed the legislation, key stakeholders, and members of the target community. In addition, field observation data were collected in supermarkets/shops, restaurants, teahouses and coffee shops. Data were analyzed in two stages through conceptual theoretical coding. Overall, 617 open codes were extracted from the data into tables; 72 level-3 codes were retained from the level-2 code series. Using a paradigm model, the relationships between the components of each paradigm were depicted graphically. The evaluation model entailed three levels, namely short-term results, process evaluation and long-term results. The central concept of the evaluation process is that enforcing the law influences a variety of internal and environmental factors, including legislative changes. These factors will be examined during the process evaluation and context evaluation. The current model can be applicable for providing FCTC evaluation tools across other jurisdictions.

  8. Evaluating a novel resident role-modelling programme.

    Science.gov (United States)

    Sternszus, Robert; Steinert, Yvonne; Bhanji, Farhan; Andonian, Sero; Snell, Linda S

    2017-05-09

    Role modelling is a fundamental method by which students learn from residents. To our knowledge, however, resident-as-teacher curricula have not explicitly addressed resident role modelling. The purpose of this project was to design, implement and evaluate an innovative programme to teach residents about role modelling. The authors designed a resident role-modelling programme and incorporated it into the 2015 and 2016 McGill University resident-as-teacher curriculum. Influenced by experiential and social learning theories, the programme incorporated flipped-classroom and simulation approaches to teach residents to be aware and deliberate role models. Outcomes were assessed through a pre- and immediate post-programme questionnaire evaluating reaction and learning, a delayed post-programme questionnaire evaluating learning, and a retrospective pre-post questionnaire (1 month following the programme) evaluating self-reported behaviour changes. Thirty-three of 38 (87%) residents who participated in the programme completed the evaluation, with 25 residents (66%) completing all questionnaires. Participants rated the programme highly on a five-point Likert scale (where 1 = not helpful and 5 = very helpful; mean score, M = 4.57; standard deviation, SD = 0.50), and showed significant improvement in their perceptions of their importance as role models and their knowledge of deliberate role modelling. Residents also reported an increased use of deliberate role-modelling strategies 1 month after completing the programme. The incorporation of resident role modelling into our resident-as-teacher curriculum positively influenced the participants' perceptions of their role-modelling abilities. This programme responds to a gap in resident training and has the potential to guide further programme development in this important and often overlooked area. © 2017 John Wiley & Sons

  9. A random effects generalized linear model for reliability compositive evaluation

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper first proposes a random effects generalized linear model to evaluate the storage life of a kind of highly reliable, small-sample-sized product by combining multi-source information on products coming from the same population but stored in different environments. The relevant algorithms are also provided. Simulation results demonstrate the soundness and effectiveness of the proposed model.
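
    As a rough illustration of pooling multi-source life data with a random effect for the storage environment, the sketch below fits a linear mixed model on log storage life with statsmodels. This is a simplified stand-in for the paper's random effects generalized linear model; the data and variable names are hypothetical.

```python
# Minimal sketch (hypothetical data; a linear mixed model on log storage life
# as a simple stand-in for the paper's random effects GLM): combine lifetimes
# of the same product stored in different environments, treating the
# environment as a random effect.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "log_life": [6.2, 6.0, 5.1, 5.3, 6.8, 6.6, 5.7, 5.9, 6.4, 6.3, 5.4, 5.6],
    "stress":   [1.0, 1.0, 2.0, 2.0, 0.5, 0.5, 1.5, 1.5, 1.0, 1.0, 2.0, 2.0],
    "env":      ["A"] * 4 + ["B"] * 4 + ["C"] * 4,     # storage environment
})

# Fixed effect of stress level, random intercept per storage environment.
model = smf.mixedlm("log_life ~ stress", data=df, groups=df["env"])
result = model.fit()
print(result.summary())
```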

  10. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields

  11. Systematic evaluation of land use regression models for NO₂

    NARCIS (Netherlands)

    Wang, M.; Beelen, R.M.J.; Eeftens, M.R.; Meliefste, C.; Hoek, G.; Brunekreef, B.

    2012-01-01

    Land use regression (LUR) models have become popular to explain the spatial variation of air pollution concentrations. Independent evaluation is important. We developed LUR models for nitrogen dioxide (NO₂) using measurements conducted at 144 sampling sites in The Netherlands. Sites were randomly

  12. Evaluating choices in multi-process landscape evolution models

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Claessens, L.; Veldkamp, A.; Schoorl, J.M.

    2011-01-01

    The interest in landscape evolution models (LEMs) that simulate multiple landscape processes is growing. However, modelling multiple processes constitutes a new starting point for which some aspects of the set up of LEMs must be re-evaluated. The objective of this paper is to demonstrate the practic

  13. Outline and Preliminary Evaluation of the Classical Digital Library Model.

    Science.gov (United States)

    MacCall, Steven L.; Cleveland, Ana D.; Gibson, Ian E.

    1999-01-01

    Outlines the classical digital library model, which is derived from traditional practices of library and information science professionals, as an alternative to the database retrieval model. Reports preliminary results from an evaluation study of library and information professionals and end users involved with primary care medicine. (AEF)

  14. Evaluating energy efficiency policies with energy-economy models

    NARCIS (Netherlands)

    Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.

    2010-01-01

    The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyz

  15. A random effects generalized linear model for reliability compositive evaluation

    Institute of Scientific and Technical Information of China (English)

    ZHAO Hui; YU Dan

    2009-01-01

    This paper first proposes a random effects generalized linear model to evaluate the storage life of a kind of highly reliable, small-sample-sized product by combining multi-source information on products coming from the same population but stored in different environments. The relevant algorithms are also provided. Simulation results demonstrate the soundness and effectiveness of the proposed model.

  16. [Application of multilevel models in the evaluation of bioequivalence (II).].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Li, Xiao-song; Chen, Feng; Yang, Min

    2010-03-01

    The main purpose of this paper is to explore the applicability of multivariate multilevel models for bioequivalence evaluation. Using an example of a 4 x 4 cross-over design for evaluating the bioequivalence of homemade and imported rosiglitazone maleate tablets, this paper illustrates the multivariate-model-based method for partitioning the total variances of ln(AUC) and ln(C(max)) in the framework of multilevel models. It examines the feasibility of multivariate multilevel models in directly evaluating average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). Taking into account the correlation between ln(AUC) and ln(C(max)) of rosiglitazone maleate tablets, the proposed models suggested no statistical difference between the two effect measures in their ABE assessment via joint tests, whilst a contradictory conclusion was derived based on univariate multilevel models. Furthermore, the PBE and IBE for both ln(AUC) and ln(C(max)) of the two types of tablets were assessed with no statistical difference based on estimates of variance components from the proposed models. Multivariate multilevel models can be used to analyze the bioequivalence of multiple effect measures simultaneously, and they provide a new way of statistical analysis to evaluate bioequivalence.
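
    As a simplified illustration of the average bioequivalence (ABE) criterion mentioned above, the sketch below computes the 90% confidence interval of the test/reference geometric mean ratio of AUC from hypothetical paired data and checks it against the usual 0.80-1.25 limits; the paper's own analysis uses a 4 x 4 cross-over multilevel model instead.

```python
# Minimal sketch (hypothetical paired data; a simplified paired analysis, not
# the paper's 4x4 cross-over multilevel model): average bioequivalence of
# ln(AUC) via the 90% CI of the test/reference geometric mean ratio.
import numpy as np
from scipy import stats

auc_test = np.array([112.0, 98.0, 105.0, 120.0, 90.0, 101.0, 99.0, 108.0])
auc_ref  = np.array([110.0, 95.0, 108.0, 118.0, 94.0, 100.0, 103.0, 104.0])

diff = np.log(auc_test) - np.log(auc_ref)        # within-subject ln differences
n = len(diff)
mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)             # two one-sided tests at 5%

ci_low, ci_high = np.exp(mean - t_crit * se), np.exp(mean + t_crit * se)
print(f"GMR 90% CI: [{ci_low:.3f}, {ci_high:.3f}]")
print("ABE (0.80-1.25):", 0.80 <= ci_low and ci_high <= 1.25)
```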

  17. Evaluating Emulation-based Models of Distributed Computing Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T.

    2017-10-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  18. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields

  19. Improvement of procedures for evaluating photochemical models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Tesche, T.W.; Lurmann, F.R.; Roth, P.M.; Georgopoulos, P.; Seinfeld, J.H.

    1990-08-01

    The study establishes a set of procedures that should be used by all groups evaluating the performance of a photochemical model application. A set of ten numerical measures is recommended for evaluating a photochemical model's accuracy in predicting ozone concentrations. Nine graphical methods and six investigative simulations are also recommended to give additional insight into model performance. Standards are presented that each modeling study should try to meet. To complement the operational model evaluation procedures, several diagnostic procedures are suggested. The sensitivity of the model to uncertainties in hydrocarbon emission rates and speciation, and to other parameters, should be assessed. Uncertainty bounds of key input variables and parameters can be propagated through the model to provide estimated uncertainties in the ozone predictions. Comparisons between measurements and predictions of species other than ozone will help ensure that the model is predicting the right ozone for the right reasons. Plotting concentration residuals (differences) against a variety of variables may give insight into the reasons for poor model performance. Mass flux and balance calculations can identify the relative importance of emissions and transport. The study also identifies testing a model's response to emission changes as the most important research need. Another important area is testing the emissions inventory.
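
    As an illustration of simple numerical performance measures for ozone predictions, the sketch below (hypothetical paired values; my selection of three commonly used measures, not the report's full set of ten) computes mean normalized bias, mean normalized gross error and unpaired peak accuracy.

```python
# Minimal sketch (hypothetical hourly ozone pairs; three commonly used
# measures, my selection rather than the report's full set of ten):
# mean normalized bias, mean normalized gross error, unpaired peak accuracy.
import numpy as np

obs = np.array([62.0, 75.0, 88.0, 120.0, 105.0, 90.0])    # ppb, hypothetical
pred = np.array([55.0, 80.0, 95.0, 110.0, 112.0, 85.0])   # ppb, hypothetical

mnb = np.mean((pred - obs) / obs)                 # mean normalized bias
mnge = np.mean(np.abs(pred - obs) / obs)          # mean normalized gross error
upa = (pred.max() - obs.max()) / obs.max()        # unpaired peak accuracy

print(f"MNB  = {mnb:+.1%}")
print(f"MNGE = {mnge:.1%}")
print(f"UPA  = {upa:+.1%}")
```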

  20. Study of Purchase Process Modeling and Reengineering Effects Evaluation

    Institute of Scientific and Technical Information of China (English)

    Zhenhuan Jiang; Liang Quan; Peiwu Dong

    2004-01-01

    The purchase process is an important part of running an enterprise, and reengineering it can raise business efficiency and make the enterprise competitive in a complicated and changeable market environment. Based on the basic principles of business process reengineering, and using planning and evaluation techniques to model the process and to set up evaluation indexes, the methods of modeling and evaluating the reengineering of the purchase process are studied. A complete method for modeling the process is given, together with mathematical and graphical representations of the process. The paper proposes using the PT reduction rate to evaluate the effect of process reengineering, and analyses three kinds of situations that affect the PT reduction rate.